Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 5
112
| repo_url
stringlengths 34
141
| action
stringclasses 3
values | title
stringlengths 1
855
| labels
stringlengths 4
721
| body
stringlengths 1
261k
| index
stringclasses 13
values | text_combine
stringlengths 96
261k
| label
stringclasses 2
values | text
stringlengths 96
240k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
718,745
| 24,730,464,917
|
IssuesEvent
|
2022-10-20 17:07:39
|
Sphereserver/Source-X
|
https://api.github.com/repos/Sphereserver/Source-X
|
closed
|
Delete character issue
|
Status-Bug: Confirmed Priority: High
|
When deleting a character it appears that it does not update the character list on your screen till you reload it, this should automatically happen after deleting the char, I believe this is the cause for a few players accidently deleting chars they didn't mean to, in an attempt to delete the character that appears to stay on the char list.
|
1.0
|
Delete character issue - When deleting a character it appears that it does not update the character list on your screen till you reload it, this should automatically happen after deleting the char, I believe this is the cause for a few players accidently deleting chars they didn't mean to, in an attempt to delete the character that appears to stay on the char list.
|
priority
|
delete character issue when deleting a character it appears that it does not update the character list on your screen till you reload it this should automatically happen after deleting the char i believe this is the cause for a few players accidently deleting chars they didn t mean to in an attempt to delete the character that appears to stay on the char list
| 1
|
537,787
| 15,737,106,606
|
IssuesEvent
|
2021-03-30 02:10:29
|
ctm/mb2-doc
|
https://api.github.com/repos/ctm/mb2-doc
|
closed
|
clean up unwrap and [] from nick_mapper
|
chore easy high priority
|
Yesterday's crashes (#336) were from the same bug where nick_mapper was dereferencing a hash element that no longer existed.
There are other places where nick_mapper does a blind deref where it shouldn't. There are places where we can do something other than crash if an expected value is not there. I will fix those.
|
1.0
|
clean up unwrap and [] from nick_mapper - Yesterday's crashes (#336) were from the same bug where nick_mapper was dereferencing a hash element that no longer existed.
There are other places where nick_mapper does a blind deref where it shouldn't. There are places where we can do something other than crash if an expected value is not there. I will fix those.
|
priority
|
clean up unwrap and from nick mapper yesterday s crashes were from the same bug where nick mapper was dereferencing a hash element that no longer existed there are other places where nick mapper does a blind deref where it shouldn t there are places where we can do something other than crash if an expected value is not there i will fix those
| 1
|
678,371
| 23,195,463,815
|
IssuesEvent
|
2022-08-01 15:58:48
|
phetsims/joist
|
https://api.github.com/repos/phetsims/joist
|
closed
|
"Read me" buttons should never be interrupted by sim voicing alerts
|
priority:2-high status:ready-for-review dev:voicing
|
@terracoda reported that it is possible to have alerts from the sim interrupt read-me buttons. @jessegreenberg and I thought that we could use `priority` to accomplish this since it just needs to be done for voicing.
|
1.0
|
"Read me" buttons should never be interrupted by sim voicing alerts - @terracoda reported that it is possible to have alerts from the sim interrupt read-me buttons. @jessegreenberg and I thought that we could use `priority` to accomplish this since it just needs to be done for voicing.
|
priority
|
read me buttons should never be interrupted by sim voicing alerts terracoda reported that it is possible to have alerts from the sim interrupt read me buttons jessegreenberg and i thought that we could use priority to accomplish this since it just needs to be done for voicing
| 1
|
659,425
| 21,926,810,357
|
IssuesEvent
|
2022-05-23 05:39:57
|
factly/dega
|
https://api.github.com/repos/factly/dega
|
opened
|
Inconsistencies retrieving users in policies
|
priority:high
|
I am unable to retrieve a list of all the users to assign for policies. Harsha is assigned with same permissions is able to see all the users as expected.
|
1.0
|
Inconsistencies retrieving users in policies - I am unable to retrieve a list of all the users to assign for policies. Harsha is assigned with same permissions is able to see all the users as expected.
|
priority
|
inconsistencies retrieving users in policies i am unable to retrieve a list of all the users to assign for policies harsha is assigned with same permissions is able to see all the users as expected
| 1
|
458,833
| 13,182,624,665
|
IssuesEvent
|
2020-08-12 16:05:08
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Cannot update the detail card for certain maps
|
Priority: High bug
|
### Description
A map that had a thumbnail (removed) can not update detail card.
You have to apply some changes also to metadata to allow Save functionality to work.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
any
*Steps to reproduce*
- Create a new map, with a thumbnail and detail card
- Edit the map you created removing the thumbnail, then save again
- Edit again the map and try to edit detail
- Try to Save
*Expected Result*
- The map is saved again with changes to the detail card
*Current Result*
- Click on save button has no effect. You have to edit also title or description to make it work.
### Other useful information (optional):
**dev notes**
The real save effect is generated by this
https://github.com/geosolutions-it/MapStore2/blob/master/web/client/components/maps/modals/MetadataModal.jsx#L190
And than this
https://github.com/geosolutions-it/MapStore2/blob/master/web/client/components/maps/forms/Thumbnail.jsx#L159
When `this.props.map.newThumbnail` has "NODATA" instead of undefined, the if block is skipped and so the map is not saved.
That code is more complicated then needed and it may require some refactor. We should also evaluate that #2908 may solve this and other issues, finalizing the refactor of this part that can be cancelled in favor of the dashboard's one, that is more stable and tested.
|
1.0
|
Cannot update the detail card for certain maps - ### Description
A map that had a thumbnail (removed) can not update detail card.
You have to apply some changes also to metadata to allow Save functionality to work.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
any
*Steps to reproduce*
- Create a new map, with a thumbnail and detail card
- Edit the map you created removing the thumbnail, then save again
- Edit again the map and try to edit detail
- Try to Save
*Expected Result*
- The map is saved again with changes to the detail card
*Current Result*
- Click on save button has no effect. You have to edit also title or description to make it work.
### Other useful information (optional):
**dev notes**
The real save effect is generated by this
https://github.com/geosolutions-it/MapStore2/blob/master/web/client/components/maps/modals/MetadataModal.jsx#L190
And than this
https://github.com/geosolutions-it/MapStore2/blob/master/web/client/components/maps/forms/Thumbnail.jsx#L159
When `this.props.map.newThumbnail` has "NODATA" instead of undefined, the if block is skipped and so the map is not saved.
That code is more complicated then needed and it may require some refactor. We should also evaluate that #2908 may solve this and other issues, finalizing the refactor of this part that can be cancelled in favor of the dashboard's one, that is more stable and tested.
|
priority
|
cannot update the detail card for certain maps description a map that had a thumbnail removed can not update detail card you have to apply some changes also to metadata to allow save functionality to work in case of bug otherwise remove this paragraph browser affected any steps to reproduce create a new map with a thumbnail and detail card edit the map you created removing the thumbnail then save again edit again the map and try to edit detail try to save expected result the map is saved again with changes to the detail card current result click on save button has no effect you have to edit also title or description to make it work other useful information optional dev notes the real save effect is generated by this and than this when this props map newthumbnail has nodata instead of undefined the if block is skipped and so the map is not saved that code is more complicated then needed and it may require some refactor we should also evaluate that may solve this and other issues finalizing the refactor of this part that can be cancelled in favor of the dashboard s one that is more stable and tested
| 1
|
205,975
| 7,107,793,147
|
IssuesEvent
|
2018-01-16 21:18:05
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
storage: unexpected GC queue activity immediately after DROP
|
bug high priority
|
Experimentation notes. I'm running single-node release-1.1 with a tpch.lineitem
(SF 1) table restore. Without changing the TTL, I dropped this table last night.
The "live bytes" fell to ~zero within 30 minutes (i.e., it took 30 minutes for
all keys to be deleted, but not cleared yet) while on disk we're now using 1.7GB
instead of 1.3GB (makes sense since we wrote lots of MVCC tombstones).
What stuck out is that while this was going on, I saw lots of unexpected GC runs
that didn't get to delete data. I initially thought those must have been
triggered by the "intent age" (which spikes as the range deletion puts down many
many intents that are only cleaned up after commit; they're likely visible for
too long and get the replica queued). But what speaks against this theory is
that all night, GC was running in circles, apparently always triggered but never
successful at reducing the score. This strikes me as quite odd and needs more
investigation.
This morning, I changed the TTL to 100s and am seeing steady GC queue activity,
each run clearing out a whole range and making steady progress. Annoyingly, the
consistency checker is also running all the time, which can't help performance.
The GC queue took around 18 minutes to clean up ~1.3 on-disk-data worth of data,
which seems OK. After the run, the data directory stabilized at 200-300MB, which
after an offline-compaction drops to 8MB.
RocksDB seems to be running compactions, since the data directory (at the time
of writing) has dropped to 613MB and within a minute more to 419MB (with some
jittering). Logging output is quiet, memory usage is stable, though I'm sometimes
seeing 25 GC runs logged in the runtime stats which I think is higher than I am
used to seeing (the GC queue is not allocation efficient, so that makes some sense
to me).
Running the experiment again to look specifically into the first part.
|
1.0
|
storage: unexpected GC queue activity immediately after DROP - Experimentation notes. I'm running single-node release-1.1 with a tpch.lineitem
(SF 1) table restore. Without changing the TTL, I dropped this table last night.
The "live bytes" fell to ~zero within 30 minutes (i.e., it took 30 minutes for
all keys to be deleted, but not cleared yet) while on disk we're now using 1.7GB
instead of 1.3GB (makes sense since we wrote lots of MVCC tombstones).
What stuck out is that while this was going on, I saw lots of unexpected GC runs
that didn't get to delete data. I initially thought those must have been
triggered by the "intent age" (which spikes as the range deletion puts down many
many intents that are only cleaned up after commit; they're likely visible for
too long and get the replica queued). But what speaks against this theory is
that all night, GC was running in circles, apparently always triggered but never
successful at reducing the score. This strikes me as quite odd and needs more
investigation.
This morning, I changed the TTL to 100s and am seeing steady GC queue activity,
each run clearing out a whole range and making steady progress. Annoyingly, the
consistency checker is also running all the time, which can't help performance.
The GC queue took around 18 minutes to clean up ~1.3 on-disk-data worth of data,
which seems OK. After the run, the data directory stabilized at 200-300MB, which
after an offline-compaction drops to 8MB.
RocksDB seems to be running compactions, since the data directory (at the time
of writing) has dropped to 613MB and within a minute more to 419MB (with some
jittering). Logging output is quiet, memory usage is stable, though I'm sometimes
seeing 25 GC runs logged in the runtime stats which I think is higher than I am
used to seeing (the GC queue is not allocation efficient, so that makes some sense
to me).
Running the experiment again to look specifically into the first part.
|
priority
|
storage unexpected gc queue activity immediately after drop experimentation notes i m running single node release with a tpch lineitem sf table restore without changing the ttl i dropped this table last night the live bytes fell to zero within minutes i e it took minutes for all keys to be deleted but not cleared yet while on disk we re now using instead of makes sense since we wrote lots of mvcc tombstones what stuck out is that while this was going on i saw lots of unexpected gc runs that didn t get to delete data i initially thought those must have been triggered by the intent age which spikes as the range deletion puts down many many intents that are only cleaned up after commit they re likely visible for too long and get the replica queued but what speaks against this theory is that all night gc was running in circles apparently always triggered but never successful at reducing the score this strikes me as quite odd and needs more investigation this morning i changed the ttl to and am seeing steady gc queue activity each run clearing out a whole range and making steady progress annoyingly the consistency checker is also running all the time which can t help performance the gc queue took around minutes to clean up on disk data worth of data which seems ok after the run the data directory stabilized at which after an offline compaction drops to rocksdb seems to be running compactions since the data directory at the time of writing has dropped to and within a minute more to with some jittering logging output is quiet memory usage is stable though i m sometimes seeing gc runs logged in the runtime stats which i think is higher than i am used to seeing the gc queue is not allocation efficient so that makes some sense to me running the experiment again to look specifically into the first part
| 1
|
616,185
| 19,295,844,816
|
IssuesEvent
|
2021-12-12 15:21:48
|
fvh-P/assaultlily-rdf
|
https://api.github.com/repos/fvh-P/assaultlily-rdf
|
closed
|
データ形式をTurtleへ移行
|
Priority-High in progress
|
現状はスキーマ定義・制約定義ファイルを除いてXML形式で記述しているが、XMLは可読性が低くさらに将来的にRDF-starという拡張構文を採用しようとしたときにはXMLは非対応になるので、少しずつ移行します。
|
1.0
|
データ形式をTurtleへ移行 - 現状はスキーマ定義・制約定義ファイルを除いてXML形式で記述しているが、XMLは可読性が低くさらに将来的にRDF-starという拡張構文を採用しようとしたときにはXMLは非対応になるので、少しずつ移行します。
|
priority
|
データ形式をturtleへ移行 現状はスキーマ定義・制約定義ファイルを除いてxml形式で記述しているが、xmlは可読性が低くさらに将来的にrdf starという拡張構文を採用しようとしたときにはxmlは非対応になるので、少しずつ移行します。
| 1
|
141,199
| 5,431,675,911
|
IssuesEvent
|
2017-03-04 02:25:48
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
closed
|
amp-social-share fails in PWA
|
P1: High Priority
|
```
<amp-social-share
type="twitter"
layout="container"
data-param-text="..."
class="...">
</amp-social-share>
```
```
Uncaught Error: No ampdoc found for [object HTMLElement]
at qa (log.js:438)
at oa.f.createError (log.js:239)
at rf.getAmpDoc (ampdoc-impl.js:153)
at HTMLElement.Sf.b.connectedCallback (custom-element.js:928)
at A.define (document-register-element.node.js:1209)
at Nf (custom-element.js:1662)
at Pf (custom-element.js:183)
at $i.f.loadExtension (custom-element.js:180)
at Dj (runtime.js:757)
at zj.attachShadowDoc (runtime.js:640)
```
|
1.0
|
amp-social-share fails in PWA - ```
<amp-social-share
type="twitter"
layout="container"
data-param-text="..."
class="...">
</amp-social-share>
```
```
Uncaught Error: No ampdoc found for [object HTMLElement]
at qa (log.js:438)
at oa.f.createError (log.js:239)
at rf.getAmpDoc (ampdoc-impl.js:153)
at HTMLElement.Sf.b.connectedCallback (custom-element.js:928)
at A.define (document-register-element.node.js:1209)
at Nf (custom-element.js:1662)
at Pf (custom-element.js:183)
at $i.f.loadExtension (custom-element.js:180)
at Dj (runtime.js:757)
at zj.attachShadowDoc (runtime.js:640)
```
|
priority
|
amp social share fails in pwa amp social share type twitter layout container data param text class uncaught error no ampdoc found for at qa log js at oa f createerror log js at rf getampdoc ampdoc impl js at htmlelement sf b connectedcallback custom element js at a define document register element node js at nf custom element js at pf custom element js at i f loadextension custom element js at dj runtime js at zj attachshadowdoc runtime js
| 1
|
536,097
| 15,703,939,666
|
IssuesEvent
|
2021-03-26 14:26:45
|
epiphany-platform/epiphany
|
https://api.github.com/repos/epiphany-platform/epiphany
|
opened
|
[BUG] Epicli apply fails on updating in-cluster configuration after upgrading from older version
|
area/kubernetes priority/high status/grooming-needed type/bug
|
**Describe the bug**
Re-applying configuration after upgrading from version 0.6 to develop fails on TASK [kubernetes_common : Update in-cluster configuration].
**How to reproduce**
Steps to reproduce the behavior:
1. Deploy a 0.6 cluster with kubernetes master and node components enabled (at least 1 vm) - execute `epicli apply` from v0.6 branch
2. Upgrade the cluster to the develop branch - execute `epicli upgrade` from develop branch
3. Adjust config yaml to be compatible with the develop version by adding/enabling the repository vm
4. Execute `epicli apply` from develop branch
**Expected behavior**
The configuration has been successfully applied.
**Environment**
- Cloud provider: [all]
- OS: [all]
**epicli version**: [`epicli --version`]
v0.6 -> develop
**Additional context**
```
2021-03-26T14:04:11.6836483Z [38;21m14:04:11 INFO cli.engine.ansible.AnsibleCommand - TASK [kubernetes_common : Update in-cluster configuration] *********************[0m
2021-03-26T14:04:42.3211765Z [31;21m14:04:42 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (30 retries left).[0m
2021-03-26T14:05:22.8164666Z [31;21m14:05:22 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (29 retries left).[0m
2021-03-26T14:05:43.4044389Z [31;21m14:05:43 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (28 retries left).[0m
2021-03-26T14:06:19.4157736Z [31;21m14:06:19 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (27 retries left).[0m
2021-03-26T14:06:40.0400051Z [31;21m14:06:40 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (26 retries left).[0m
2021-03-26T14:07:20.5517770Z [31;21m14:07:20 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (25 retries left).[0m
2021-03-26T14:07:41.2678359Z [31;21m14:07:41 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (24 retries left).[0m
2021-03-26T14:08:02.7200544Z [31;21m14:08:01 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (23 retries left).[0m
2021-03-26T14:08:22.4268007Z [31;21m14:08:22 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (22 retries left).[0m
2021-03-26T14:08:53.0409911Z [31;21m14:08:53 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (21 retries left).[0m
2021-03-26T14:09:13.6231207Z [31;21m14:09:13 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (20 retries left).[0m
2021-03-26T14:09:34.2449222Z [31;21m14:09:34 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (19 retries left).[0m
2021-03-26T14:09:54.8584134Z [31;21m14:09:54 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (18 retries left).[0m
2021-03-26T14:10:15.6237479Z [31;21m14:10:15 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (17 retries left).[0m
2021-03-26T14:10:36.2497736Z [31;21m14:10:36 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (16 retries left).[0m
2021-03-26T14:10:56.8413822Z [31;21m14:10:56 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (15 retries left).[0m
2021-03-26T14:11:17.4373229Z [31;21m14:11:17 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (14 retries left).[0m
2021-03-26T14:11:38.0264108Z [31;21m14:11:38 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (13 retries left).[0m
2021-03-26T14:12:18.1581608Z [31;21m14:12:18 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (12 retries left).[0m
2021-03-26T14:12:38.7525803Z [31;21m14:12:38 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (11 retries left).[0m
2021-03-26T14:12:59.3522341Z [31;21m14:12:59 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (10 retries left).[0m
2021-03-26T14:13:19.9730043Z [31;21m14:13:19 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (9 retries left).[0m
2021-03-26T14:13:40.5855881Z [31;21m14:13:40 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (8 retries left).[0m
2021-03-26T14:14:02.0216303Z [31;21m14:14:02 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (7 retries left).[0m
2021-03-26T14:14:22.6657500Z [31;21m14:14:22 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (6 retries left).[0m
2021-03-26T14:14:43.3398206Z [31;21m14:14:43 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (5 retries left).[0m
2021-03-26T14:15:03.9312202Z [31;21m14:15:03 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (4 retries left).[0m
2021-03-26T14:15:24.5243491Z [31;21m14:15:24 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (3 retries left).[0m
2021-03-26T14:15:45.1148318Z [31;21m14:15:45 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (2 retries left).[0m
2021-03-26T14:16:05.7873626Z [31;21m14:16:05 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (1 retries left).[0m
2021-03-26T14:16:26.3828175Z [31;21m14:16:26 ERROR cli.engine.ansible.AnsibleCommand - fatal: [ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com]: FAILED! => {"attempts": 30, "changed": true, "cmd": "kubeadm init phase upload-config kubeadm --config /etc/kubeadm/kubeadm-config.yml\n", "delta": "0:00:09.978625", "end": "2021-03-26 14:16:26.296787", "msg": "non-zero return code", "rc": 1, "start": "2021-03-26 14:16:16.318162", "stderr": "W0326 14:16:16.361302 11488 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]\nerror execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: Post https://10.1.2.49:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s: dial tcp 10.1.2.49:6443: connect: connection refused\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["W0326 14:16:16.361302 11488 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]", "error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: Post https://10.1.2.49:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s: dial tcp 10.1.2.49:6443: connect: connection refused", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace", "stdout_lines": ["[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace"]}[0m
```
---
**DoD checklist**
* [ ] Changelog updated (if affected version was released)
* [ ] COMPONENTS.md updated / doesn't need to be updated
* [ ] Automated tests passed (QA pipelines)
* [ ] apply
* [ ] upgrade
* [ ] Case covered by automated test (if possible)
* [ ] Idempotency tested
* [ ] Documentation updated / doesn't need to be updated
* [ ] All conversations in PR resolved
|
1.0
|
[BUG] Epicli apply fails on updating in-cluster configuration after upgrading from older version - **Describe the bug**
Re-applying configuration after upgrading from version 0.6 to develop fails on TASK [kubernetes_common : Update in-cluster configuration].
**How to reproduce**
Steps to reproduce the behavior:
1. Deploy a 0.6 cluster with kubernetes master and node components enabled (at least 1 vm) - execute `epicli apply` from v0.6 branch
2. Upgrade the cluster to the develop branch - execute `epicli upgrade` from develop branch
3. Adjust config yaml to be compatible with the develop version by adding/enabling the repository vm
4. Execute `epicli apply` from develop branch
**Expected behavior**
The configuration has been successfully applied.
**Environment**
- Cloud provider: [all]
- OS: [all]
**epicli version**: [`epicli --version`]
v0.6 -> develop
**Additional context**
```
2021-03-26T14:04:11.6836483Z [38;21m14:04:11 INFO cli.engine.ansible.AnsibleCommand - TASK [kubernetes_common : Update in-cluster configuration] *********************[0m
2021-03-26T14:04:42.3211765Z [31;21m14:04:42 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (30 retries left).[0m
2021-03-26T14:05:22.8164666Z [31;21m14:05:22 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (29 retries left).[0m
2021-03-26T14:05:43.4044389Z [31;21m14:05:43 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (28 retries left).[0m
2021-03-26T14:06:19.4157736Z [31;21m14:06:19 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (27 retries left).[0m
2021-03-26T14:06:40.0400051Z [31;21m14:06:40 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (26 retries left).[0m
2021-03-26T14:07:20.5517770Z [31;21m14:07:20 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (25 retries left).[0m
2021-03-26T14:07:41.2678359Z [31;21m14:07:41 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (24 retries left).[0m
2021-03-26T14:08:02.7200544Z [31;21m14:08:01 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (23 retries left).[0m
2021-03-26T14:08:22.4268007Z [31;21m14:08:22 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (22 retries left).[0m
2021-03-26T14:08:53.0409911Z [31;21m14:08:53 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (21 retries left).[0m
2021-03-26T14:09:13.6231207Z [31;21m14:09:13 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (20 retries left).[0m
2021-03-26T14:09:34.2449222Z [31;21m14:09:34 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (19 retries left).[0m
2021-03-26T14:09:54.8584134Z [31;21m14:09:54 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (18 retries left).[0m
2021-03-26T14:10:15.6237479Z [31;21m14:10:15 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (17 retries left).[0m
2021-03-26T14:10:36.2497736Z [31;21m14:10:36 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (16 retries left).[0m
2021-03-26T14:10:56.8413822Z [31;21m14:10:56 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (15 retries left).[0m
2021-03-26T14:11:17.4373229Z [31;21m14:11:17 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (14 retries left).[0m
2021-03-26T14:11:38.0264108Z [31;21m14:11:38 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (13 retries left).[0m
2021-03-26T14:12:18.1581608Z [31;21m14:12:18 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (12 retries left).[0m
2021-03-26T14:12:38.7525803Z [31;21m14:12:38 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (11 retries left).[0m
2021-03-26T14:12:59.3522341Z [31;21m14:12:59 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (10 retries left).[0m
2021-03-26T14:13:19.9730043Z [31;21m14:13:19 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (9 retries left).[0m
2021-03-26T14:13:40.5855881Z [31;21m14:13:40 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (8 retries left).[0m
2021-03-26T14:14:02.0216303Z [31;21m14:14:02 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (7 retries left).[0m
2021-03-26T14:14:22.6657500Z [31;21m14:14:22 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (6 retries left).[0m
2021-03-26T14:14:43.3398206Z [31;21m14:14:43 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (5 retries left).[0m
2021-03-26T14:15:03.9312202Z [31;21m14:15:03 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (4 retries left).[0m
2021-03-26T14:15:24.5243491Z [31;21m14:15:24 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (3 retries left).[0m
2021-03-26T14:15:45.1148318Z [31;21m14:15:45 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (2 retries left).[0m
2021-03-26T14:16:05.7873626Z [31;21m14:16:05 ERROR cli.engine.ansible.AnsibleCommand - FAILED - RETRYING: Update in-cluster configuration (1 retries left).[0m
2021-03-26T14:16:26.3828175Z [31;21m14:16:26 ERROR cli.engine.ansible.AnsibleCommand - fatal: [ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com]: FAILED! => {"attempts": 30, "changed": true, "cmd": "kubeadm init phase upload-config kubeadm --config /etc/kubeadm/kubeadm-config.yml\n", "delta": "0:00:09.978625", "end": "2021-03-26 14:16:26.296787", "msg": "non-zero return code", "rc": 1, "start": "2021-03-26 14:16:16.318162", "stderr": "W0326 14:16:16.361302 11488 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]\nerror execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: Post https://10.1.2.49:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s: dial tcp 10.1.2.49:6443: connect: connection refused\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["W0326 14:16:16.361302 11488 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]", "error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: Post https://10.1.2.49:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s: dial tcp 10.1.2.49:6443: connect: connection refused", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace", "stdout_lines": ["[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace"]}[0m
```
---
**DoD checklist**
* [ ] Changelog updated (if affected version was released)
* [ ] COMPONENTS.md updated / doesn't need to be updated
* [ ] Automated tests passed (QA pipelines)
* [ ] apply
* [ ] upgrade
* [ ] Case covered by automated test (if possible)
* [ ] Idempotency tested
* [ ] Documentation updated / doesn't need to be updated
* [ ] All conversations in PR resolved
|
priority
|
epicli apply fails on updating in cluster configuration after upgrading from older version describe the bug re applying configuration after upgrading from version to develop fails on task how to reproduce steps to reproduce the behavior deploy a cluster with kubernetes master and node components enabled at least vm execute epicli apply from branch upgrade the cluster to the develop branch execute epicli upgrade from develop branch adjust config yaml to be compatible with the develop version by adding enabling the repository vm execute epicli apply from develop branch expected behavior the configuration has been successfully applied environment cloud provider os epicli version develop additional context error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left error cli engine ansible ansiblecommand failed retrying update in cluster configuration retries left failed attempts changed true cmd kubeadm init phase upload config kubeadm config etc kubeadm kubeadm config yml n delta end msg non zero return code rc start stderr configset go warning kubeadm cannot validate component configs for api groups nerror execution phase upload config kubeadm error uploading the kubeadm clusterconfiguration post dial tcp connect connection refused nto see the stack trace of this error execute with v or higher stderr lines warning kubeadm cannot validate component configs for api groups error execution phase upload config kubeadm error uploading the kubeadm clusterconfiguration post dial tcp connect connection refused to see the stack trace of this error execute with v or higher stdout storing the configuration used in configmap kubeadm config in the kube system namespace stdout lines storing the configuration used in configmap kubeadm config in the kube system namespace dod checklist changelog updated if affected version was released components md updated doesn t need to be updated automated tests passed qa pipelines apply upgrade case covered by automated test if possible idempotency tested documentation updated doesn t need to be updated all conversations in pr resolved
| 1
|
184,122
| 6,705,779,008
|
IssuesEvent
|
2017-10-12 02:35:12
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
closed
|
Strange UI for new integration when no connection is available
|
bug Priority - High
|
Just reported by Lars:

IMO there should be only the lower 'create connection', or even having 'Create integration' disabled when there are no connections defined.
|
1.0
|
Strange UI for new integration when no connection is available - Just reported by Lars:

IMO there should be only the lower 'create connection', or even having 'Create integration' disabled when there are no connections defined.
|
priority
|
strange ui for new integration when no connection is available just reported by lars imo there should be only the lower create connection or even having create integration disabled when there are no connections defined
| 1
|
308,410
| 9,438,919,973
|
IssuesEvent
|
2019-04-14 05:23:53
|
CS2103-AY1819S2-W15-4/main
|
https://api.github.com/repos/CS2103-AY1819S2-W15-4/main
|
closed
|
Handle exception when list of people are cleared and schedule is called.
|
priority.High
|
How to get error: clear list of people. Call "Schedule".
|
1.0
|
Handle exception when list of people are cleared and schedule is called. - How to get error: clear list of people. Call "Schedule".
|
priority
|
handle exception when list of people are cleared and schedule is called how to get error clear list of people call schedule
| 1
|
64,422
| 3,211,550,484
|
IssuesEvent
|
2015-10-06 11:26:38
|
CoderDojo/community-platform
|
https://api.github.com/repos/CoderDojo/community-platform
|
closed
|
Verified Dojo missing from new database
|
bug high priority question
|
This Dojo is missing:
`Brecht @ de bib Brecht`
It is not in the new db. It is owned by the same user as this Dojo: http://zen.coderdojo.com/dashboard/dojo/be/kerklei-2-brecht/dojo-brecht
Was there an issue with the migration where one user owned more than one Dojo?
|
1.0
|
Verified Dojo missing from new database - This Dojo is missing:
`Brecht @ de bib Brecht`
It is not in the new db. It is owned by the same user as this Dojo: http://zen.coderdojo.com/dashboard/dojo/be/kerklei-2-brecht/dojo-brecht
Was there an issue with the migration where one user owned more than one Dojo?
|
priority
|
verified dojo missing from new database this dojo is missing brecht de bib brecht it is not in the new db it is owned by the same user as this dojo was there an issue with the migration where one user owned more than one dojo
| 1
|
735,569
| 25,403,908,969
|
IssuesEvent
|
2022-11-22 14:03:39
|
netdata/netdata-cloud
|
https://api.github.com/repos/netdata/netdata-cloud
|
opened
|
[Bug]: Chart only present on child agent isn't synced to cloud
|
bug FOSS agent priority/high visualizations-team
|
### Bug description
In a parent/child setup where child is streaming data in cloud through a claimed parent (only parent is claimed to netdata cloud, NOT the child), a chart X that lives on child but doesn't exist on parent **doesn't sync in cloud although it exists on agent dashboard**
### Expected behavior
All charts should sync to cloud eventually.
### Steps to reproduce
1. Setup parent and child agents
2. Claim parent to netdata cloud
3. Create a chart in child agent (e.g connect a usb flashdrive on the child host which doesn't exist on parent)
4. Chart should sync to cloud and appear in charts list
### Screenshots
_No response_
### Error Logs
Logs on child's side
```
2022-11-22 09:33:30: STREAM: 377 from 'parentd3tqypgcj2v:19999' for host 'child_one_d3phxg4jzio': REPLAY_CHART "mock_test.mock-area" "true" 0 0
2022-11-22 09:33:30: STREAM: 377 from 'parentd3tqypgcj2v:19999' for host 'child_one_d3phxg4jzio': REPLAY_CHART "netdata.runtime_mock_test" "true" 0 0
```
Parent is full of
`sending empty replication because first entry of the child is invalid (0)`
even though both have been claimed and streaming for more than 2 mins
### Desktop
OS: ubuntu
Browser Chrome
Browser Version 106
### Additional context
This seems to be an agent issue that affects syncing of charts in cloud. Agent dashboard seems to work as expected.
agent version is the latest nightly. Issues started appearing on 22/11/2022
|
1.0
|
[Bug]: Chart only present on child agent isn't synced to cloud - ### Bug description
In a parent/child setup where child is streaming data in cloud through a claimed parent (only parent is claimed to netdata cloud, NOT the child), a chart X that lives on child but doesn't exist on parent **doesn't sync in cloud although it exists on agent dashboard**
### Expected behavior
All charts should sync to cloud eventually.
### Steps to reproduce
1. Setup parent and child agents
2. Claim parent to netdata cloud
3. Create a chart in child agent (e.g connect a usb flashdrive on the child host which doesn't exist on parent)
4. Chart should sync to cloud and appear in charts list
### Screenshots
_No response_
### Error Logs
Logs on child's side
```
2022-11-22 09:33:30: STREAM: 377 from 'parentd3tqypgcj2v:19999' for host 'child_one_d3phxg4jzio': REPLAY_CHART "mock_test.mock-area" "true" 0 0
2022-11-22 09:33:30: STREAM: 377 from 'parentd3tqypgcj2v:19999' for host 'child_one_d3phxg4jzio': REPLAY_CHART "netdata.runtime_mock_test" "true" 0 0
```
Parent is full of
`sending empty replication because first entry of the child is invalid (0)`
even though both have been claimed and streaming for more than 2 mins
### Desktop
OS: ubuntu
Browser Chrome
Browser Version 106
### Additional context
This seems to be an agent issue that affects syncing of charts in cloud. Agent dashboard seems to work as expected.
agent version is the latest nightly. Issues started appearing on 22/11/2022
|
priority
|
chart only present on child agent isn t synced to cloud bug description in a parent child setup where child is streaming data in cloud through a claimed parent only parent is claimed to netdata cloud not the child a chart x that lives on child but doesn t exist on parent doesn t sync in cloud although it exists on agent dashboard expected behavior all charts should sync to cloud eventually steps to reproduce setup parent and child agents claim parent to netdata cloud create a chart in child agent e g connect a usb flashdrive on the child host which doesn t exist on parent chart should sync to cloud and appear in charts list screenshots no response error logs logs on child s side stream from for host child one replay chart mock test mock area true stream from for host child one replay chart netdata runtime mock test true parent is full of sending empty replication because first entry of the child is invalid even though both have been claimed and streaming for more than mins desktop os ubuntu browser chrome browser version additional context this seems to be an agent issue that affects syncing of charts in cloud agent dashboard seems to work as expected agent version is the latest nightly issues started appearing on
| 1
|
402,622
| 11,812,163,222
|
IssuesEvent
|
2020-03-19 19:36:27
|
ClinGen/clincoded
|
https://api.github.com/repos/ClinGen/clincoded
|
closed
|
Delete the GDM - Wrong MONDO ID
|
EP request GCI curation edit priority: high
|
https://curation.clinicalgenome.org/curation-central/?gdm=cdd07931-9249-49a5-b918-9f363cceda58&pmid=10700180
New MONDO ID hasn't yet been published, however it was due for a January release.
|
1.0
|
Delete the GDM - Wrong MONDO ID - https://curation.clinicalgenome.org/curation-central/?gdm=cdd07931-9249-49a5-b918-9f363cceda58&pmid=10700180
New MONDO ID hasn't yet been published, however it was due for a January release.
|
priority
|
delete the gdm wrong mondo id new mondo id hasn t yet been published however it was due for a january release
| 1
|
442,282
| 12,743,051,236
|
IssuesEvent
|
2020-06-26 09:38:31
|
wso2/micro-integrator
|
https://api.github.com/repos/wso2/micro-integrator
|
closed
|
[External user store][Ldap][Management API]Get Users return 404
|
Priority/High Severity/Blocker
|
**Description:**
When it is connected to an external user store (ldap), get users return 404.
Doc: https://ei.docs.wso2.com/en/7.1.0/micro-integrator/administer-and-observe/working-with-management-api/#get-users
**Steps to reproduce:**
1. Create the following ldap user store.
[userstore-ldif.zip](https://github.com/wso2/micro-integrator/files/4829475/userstore-ldif.zip)
2. Add the below configurations to the deployment.toml file.
```
[internal_apis.file_user_store]
enable = false
[user_store]
type = "read_only_ldap"
class = "org.wso2.micro.integrator.security.user.core.ldap.ReadOnlyLDAPUserStoreManager"
connection_url = "ldap://localhost:10389"
connection_name = "uid=admin,ou=system"
connection_password = "secret"
user_search_base = "ou=system"
```
3. Start the micro integrator as below.
`./micro-integrator.sh -DenableManagementApi`
4. Login to the server as below by obtaining the access token.
```
curl -X GET "https://localhost:9164/management/login" -H "accept: application/json" -H "Authorization: Basic YWRtaW46c2VjcmV0" -k -i
```
5. Now try to invoke the users and add users as below.
```
curl -X GET "https://localhost:9164/management/users?pattern=”*us*”&role=”role”" -H "accept: application/json" -H "Authorization: Bearer %AccessToken%" -k -i
```
```
curl -X POST -d @user "https://localhost:9164/management/users" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer %AccessToken% " -k -i
```
https://ei.docs.wso2.com/en/7.1.0/micro-integrator/administer-and-observe/working-with-management-api/#add-users
**Expected**: List the users or add users
**Actual**: Getting the following error.
```
HTTP/1.1 404 Not Found
Authorization: Bearer eyJraWQiOiI0M2QyYzhiZi1mOTM0LTRhY2MtOWFkYS0xY2IxODJkZTdmZjYiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzcyI6Imh0dHBzOlwvXC8xMjcuMC4wLjE6OTE2NFwvIiwiZXhwIjoxNTkzMDcxNjAyfQ.WPza3dOJH5WAd0m9-oYSK1eysg72_YaBBk7L0XxPKA4cjLP2L0O08E0wHkqwW6CaJFbeX0rpodtq5YQSkFMRgLjhanrObT4ZMn0L5oGWcOQhvqb95goGn_WOqHZDt5yVLaXcaCRzyE-N_i1CK-5Y2uLAiW3hvMzlwqYpSu1dczoFGcvMkKxx_F3IFqeOu__zsOvag6QvX395SJ-Ll0-iKDXMej9OQKIycagtBtGJr5M68uKJ8XADLfJxb0YCvShh14p91bsp-a48y7Q0WA8cCgI2AW7BRRuk6JctfRMzRjx2XnuNk3dYUzBoNoGDePRciDTi1UhmW3LMuT_HrEbPyg
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE,OPTIONS, PATCH
Host: localhost:9164
Access-Control-Allow-Headers: Authorization, Content-Type
accept: application/json
Date: Thu, 25 Jun 2020 06:54:12 GMT
Transfer-Encoding: chunked
```
Carbon logs
```
[2020-06-25 12:24:12,488] ERROR {AbstractUserStoreManager} - Error occurred while accessing Java Security Manager Privilege Block when called by method getRoleListOfUser with 1 length of Objects and argTypes [class java.lang.String]
[2020-06-25 12:24:12,489] ERROR {AuthorizationHandler} - Error initializing the user store org.wso2.micro.integrator.security.user.core.UserStoreException: Error occurred while accessing Java Security Manager Privilege Block when called by method getRoleListOfUser with 1 length of Objects and argTypes [class java.lang.String]
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:192)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.getRoleListOfUser(AbstractUserStoreManager.java:4271)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.authorize(AuthorizationHandler.java:109)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.processAuthorizationWithCarbonUserStore(AuthorizationHandler.java:99)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.authorize(AuthorizationHandler.java:79)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandlerAdapter.handle(AuthorizationHandlerAdapter.java:50)
at org.wso2.micro.integrator.management.apis.security.handler.SecurityHandlerAdapter.invoke(SecurityHandlerAdapter.java:120)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.invoke(AuthorizationHandler.java:59)
at org.wso2.carbon.inbound.endpoint.internal.http.api.InternalAPIDispatcher.dispatch(InternalAPIDispatcher.java:75)
at org.wso2.carbon.inbound.endpoint.protocol.http.InboundHttpServerWorker.run(InboundHttpServerWorker.java:109)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.security.PrivilegedActionException: java.lang.reflect.InvocationTargetException
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:171)
... 13 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager$2.run(AbstractUserStoreManager.java:174)
... 15 more
Caused by: java.lang.NullPointerException
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.doGetInternalRoleListOfUser(AbstractUserStoreManager.java:386)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.doGetRoleListOfUser(AbstractUserStoreManager.java:5719)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.getRoleListOfUser(AbstractUserStoreManager.java:4295)
... 20 more
[2020-06-25 12:24:12,490] ERROR {AuthorizationHandler} - User admin cannot be authorized
```
**Note**:
But using the same above access token can access the resources such as apis, carbon applications, and etc.
```
HTTP/1.1 200 OK
Authorization: Bearer eyJraWQiOiI0M2QyYzhiZi1mOTM0LTRhY2MtOWFkYS0xY2IxODJkZTdmZjYiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzcyI6Imh0dHBzOlwvXC8xMjcuMC4wLjE6OTE2NFwvIiwiZXhwIjoxNTkzMDcxNjAyfQ.WPza3dOJH5WAd0m9-oYSK1eysg72_YaBBk7L0XxPKA4cjLP2L0O08E0wHkqwW6CaJFbeX0rpodtq5YQSkFMRgLjhanrObT4ZMn0L5oGWcOQhvqb95goGn_WOqHZDt5yVLaXcaCRzyE-N_i1CK-5Y2uLAiW3hvMzlwqYpSu1dczoFGcvMkKxx_F3IFqeOu__zsOvag6QvX395SJ-Ll0-iKDXMej9OQKIycagtBtGJr5M68uKJ8XADLfJxb0YCvShh14p91bsp-a48y7Q0WA8cCgI2AW7BRRuk6JctfRMzRjx2XnuNk3dYUzBoNoGDePRciDTi1UhmW3LMuT_HrEbPyg
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE,OPTIONS, PATCH
Host: localhost:9164
Access-Control-Allow-Headers: Authorization, Content-Type
accept: application/json
Content-Type: application/json; charset=UTF-8
Date: Thu, 25 Jun 2020 06:55:40 GMT
Transfer-Encoding: chunked
{"count":2,"list":[{"name":"Extract","url":"http://localhost:8290/extract"},{"name":"TestGoogle","url":"http://localhost:8290/search"}]}%
````
Further, invoking secured rest APIs too work with the existing users in the external user store.
|
1.0
|
[External user store][Ldap][Management API]Get Users return 404 - **Description:**
When it is connected to an external user store (ldap), get users return 404.
Doc: https://ei.docs.wso2.com/en/7.1.0/micro-integrator/administer-and-observe/working-with-management-api/#get-users
**Steps to reproduce:**
1. Create the following ldap user store.
[userstore-ldif.zip](https://github.com/wso2/micro-integrator/files/4829475/userstore-ldif.zip)
2. Add the below configurations to the deployment.toml file.
```
[internal_apis.file_user_store]
enable = false
[user_store]
type = "read_only_ldap"
class = "org.wso2.micro.integrator.security.user.core.ldap.ReadOnlyLDAPUserStoreManager"
connection_url = "ldap://localhost:10389"
connection_name = "uid=admin,ou=system"
connection_password = "secret"
user_search_base = "ou=system"
```
3. Start the micro integrator as below.
`./micro-integrator.sh -DenableManagementApi`
4. Login to the server as below by obtaining the access token.
```
curl -X GET "https://localhost:9164/management/login" -H "accept: application/json" -H "Authorization: Basic YWRtaW46c2VjcmV0" -k -i
```
5. Now try to invoke the users and add users as below.
```
curl -X GET "https://localhost:9164/management/users?pattern=”*us*”&role=”role”" -H "accept: application/json" -H "Authorization: Bearer %AccessToken%" -k -i
```
```
curl -X POST -d @user "https://localhost:9164/management/users" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer %AccessToken% " -k -i
```
https://ei.docs.wso2.com/en/7.1.0/micro-integrator/administer-and-observe/working-with-management-api/#add-users
**Expected**: List the users or add users
**Actual**: Getting the following error.
```
HTTP/1.1 404 Not Found
Authorization: Bearer eyJraWQiOiI0M2QyYzhiZi1mOTM0LTRhY2MtOWFkYS0xY2IxODJkZTdmZjYiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzcyI6Imh0dHBzOlwvXC8xMjcuMC4wLjE6OTE2NFwvIiwiZXhwIjoxNTkzMDcxNjAyfQ.WPza3dOJH5WAd0m9-oYSK1eysg72_YaBBk7L0XxPKA4cjLP2L0O08E0wHkqwW6CaJFbeX0rpodtq5YQSkFMRgLjhanrObT4ZMn0L5oGWcOQhvqb95goGn_WOqHZDt5yVLaXcaCRzyE-N_i1CK-5Y2uLAiW3hvMzlwqYpSu1dczoFGcvMkKxx_F3IFqeOu__zsOvag6QvX395SJ-Ll0-iKDXMej9OQKIycagtBtGJr5M68uKJ8XADLfJxb0YCvShh14p91bsp-a48y7Q0WA8cCgI2AW7BRRuk6JctfRMzRjx2XnuNk3dYUzBoNoGDePRciDTi1UhmW3LMuT_HrEbPyg
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE,OPTIONS, PATCH
Host: localhost:9164
Access-Control-Allow-Headers: Authorization, Content-Type
accept: application/json
Date: Thu, 25 Jun 2020 06:54:12 GMT
Transfer-Encoding: chunked
```
Carbon logs
```
[2020-06-25 12:24:12,488] ERROR {AbstractUserStoreManager} - Error occurred while accessing Java Security Manager Privilege Block when called by method getRoleListOfUser with 1 length of Objects and argTypes [class java.lang.String]
[2020-06-25 12:24:12,489] ERROR {AuthorizationHandler} - Error initializing the user store org.wso2.micro.integrator.security.user.core.UserStoreException: Error occurred while accessing Java Security Manager Privilege Block when called by method getRoleListOfUser with 1 length of Objects and argTypes [class java.lang.String]
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:192)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.getRoleListOfUser(AbstractUserStoreManager.java:4271)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.authorize(AuthorizationHandler.java:109)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.processAuthorizationWithCarbonUserStore(AuthorizationHandler.java:99)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.authorize(AuthorizationHandler.java:79)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandlerAdapter.handle(AuthorizationHandlerAdapter.java:50)
at org.wso2.micro.integrator.management.apis.security.handler.SecurityHandlerAdapter.invoke(SecurityHandlerAdapter.java:120)
at org.wso2.micro.integrator.management.apis.security.handler.AuthorizationHandler.invoke(AuthorizationHandler.java:59)
at org.wso2.carbon.inbound.endpoint.internal.http.api.InternalAPIDispatcher.dispatch(InternalAPIDispatcher.java:75)
at org.wso2.carbon.inbound.endpoint.protocol.http.InboundHttpServerWorker.run(InboundHttpServerWorker.java:109)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.security.PrivilegedActionException: java.lang.reflect.InvocationTargetException
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:171)
... 13 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager$2.run(AbstractUserStoreManager.java:174)
... 15 more
Caused by: java.lang.NullPointerException
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.doGetInternalRoleListOfUser(AbstractUserStoreManager.java:386)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.doGetRoleListOfUser(AbstractUserStoreManager.java:5719)
at org.wso2.micro.integrator.security.user.core.common.AbstractUserStoreManager.getRoleListOfUser(AbstractUserStoreManager.java:4295)
... 20 more
[2020-06-25 12:24:12,490] ERROR {AuthorizationHandler} - User admin cannot be authorized
```
**Note**:
But using the same above access token can access the resources such as apis, carbon applications, and etc.
```
HTTP/1.1 200 OK
Authorization: Bearer eyJraWQiOiI0M2QyYzhiZi1mOTM0LTRhY2MtOWFkYS0xY2IxODJkZTdmZjYiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzcyI6Imh0dHBzOlwvXC8xMjcuMC4wLjE6OTE2NFwvIiwiZXhwIjoxNTkzMDcxNjAyfQ.WPza3dOJH5WAd0m9-oYSK1eysg72_YaBBk7L0XxPKA4cjLP2L0O08E0wHkqwW6CaJFbeX0rpodtq5YQSkFMRgLjhanrObT4ZMn0L5oGWcOQhvqb95goGn_WOqHZDt5yVLaXcaCRzyE-N_i1CK-5Y2uLAiW3hvMzlwqYpSu1dczoFGcvMkKxx_F3IFqeOu__zsOvag6QvX395SJ-Ll0-iKDXMej9OQKIycagtBtGJr5M68uKJ8XADLfJxb0YCvShh14p91bsp-a48y7Q0WA8cCgI2AW7BRRuk6JctfRMzRjx2XnuNk3dYUzBoNoGDePRciDTi1UhmW3LMuT_HrEbPyg
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE,OPTIONS, PATCH
Host: localhost:9164
Access-Control-Allow-Headers: Authorization, Content-Type
accept: application/json
Content-Type: application/json; charset=UTF-8
Date: Thu, 25 Jun 2020 06:55:40 GMT
Transfer-Encoding: chunked
{"count":2,"list":[{"name":"Extract","url":"http://localhost:8290/extract"},{"name":"TestGoogle","url":"http://localhost:8290/search"}]}%
````
Further, invoking secured rest APIs too work with the existing users in the external user store.
|
priority
|
get users return description when it is connected to an external user store ldap get users return doc steps to reproduce create the following ldap user store add the below configurations to the deployment toml file enable false type read only ldap class org micro integrator security user core ldap readonlyldapuserstoremanager connection url ldap localhost connection name uid admin ou system connection password secret user search base ou system start the micro integrator as below micro integrator sh denablemanagementapi login to the server as below by obtaining the access token curl x get h accept application json h authorization basic k i now try to invoke the users and add users as below curl x get h accept application json h authorization bearer accesstoken k i curl x post d user h accept application json h content type application json h authorization bearer accesstoken k i expected list the users or add users actual getting the following error http not found authorization bearer n hrebpyg access control allow origin access control allow methods get post put delete options patch host localhost access control allow headers authorization content type accept application json date thu jun gmt transfer encoding chunked carbon logs error abstractuserstoremanager error occurred while accessing java security manager privilege block when called by method getrolelistofuser with length of objects and argtypes error authorizationhandler error initializing the user store org micro integrator security user core userstoreexception error occurred while accessing java security manager privilege block when called by method getrolelistofuser with length of objects and argtypes at org micro integrator security user core common abstractuserstoremanager callsecure abstractuserstoremanager java at org micro integrator security user core common abstractuserstoremanager getrolelistofuser abstractuserstoremanager java at org micro integrator management apis security handler authorizationhandler authorize authorizationhandler java at org micro integrator management apis security handler authorizationhandler processauthorizationwithcarbonuserstore authorizationhandler java at org micro integrator management apis security handler authorizationhandler authorize authorizationhandler java at org micro integrator management apis security handler authorizationhandleradapter handle authorizationhandleradapter java at org micro integrator management apis security handler securityhandleradapter invoke securityhandleradapter java at org micro integrator management apis security handler authorizationhandler invoke authorizationhandler java at org carbon inbound endpoint internal http api internalapidispatcher dispatch internalapidispatcher java at org carbon inbound endpoint protocol http inboundhttpserverworker run inboundhttpserverworker java at org apache transport base threads nativeworkerpool run nativeworkerpool java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java security privilegedactionexception java lang reflect invocationtargetexception at java security accesscontroller doprivileged native method at org micro integrator security user core common abstractuserstoremanager callsecure abstractuserstoremanager java more caused by java lang reflect invocationtargetexception at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org micro integrator security user core common abstractuserstoremanager run abstractuserstoremanager java more caused by java lang nullpointerexception at org micro integrator security user core common abstractuserstoremanager dogetinternalrolelistofuser abstractuserstoremanager java at org micro integrator security user core common abstractuserstoremanager dogetrolelistofuser abstractuserstoremanager java at org micro integrator security user core common abstractuserstoremanager getrolelistofuser abstractuserstoremanager java more error authorizationhandler user admin cannot be authorized note but using the same above access token can access the resources such as apis carbon applications and etc http ok authorization bearer n hrebpyg access control allow origin access control allow methods get post put delete options patch host localhost access control allow headers authorization content type accept application json content type application json charset utf date thu jun gmt transfer encoding chunked count list further invoking secured rest apis too work with the existing users in the external user store
| 1
|
370,105
| 10,925,415,318
|
IssuesEvent
|
2019-11-22 12:27:50
|
ubtue/DatenProbleme
|
https://api.github.com/repos/ubtue/DatenProbleme
|
opened
|
ISSN 2159-6808 Journal of Religion and Violence Rezensionen
|
high priority
|
Rezensionen werden nicht mit 655 getagt.
Sie stehen in der Sektion Book Reviews
|
1.0
|
ISSN 2159-6808 Journal of Religion and Violence Rezensionen - Rezensionen werden nicht mit 655 getagt.
Sie stehen in der Sektion Book Reviews
|
priority
|
issn journal of religion and violence rezensionen rezensionen werden nicht mit getagt sie stehen in der sektion book reviews
| 1
|
75,214
| 3,460,290,365
|
IssuesEvent
|
2015-12-19 02:26:34
|
notsecure/uTox
|
https://api.github.com/repos/notsecure/uTox
|
closed
|
uTox automatically accepts group invites without asking
|
bug groups high_priority Security
|
When being invited into a group chat, uTox automatically accepts an invite. It should ask the user if they want to join the groupchat first.
|
1.0
|
uTox automatically accepts group invites without asking - When being invited into a group chat, uTox automatically accepts an invite. It should ask the user if they want to join the groupchat first.
|
priority
|
utox automatically accepts group invites without asking when being invited into a group chat utox automatically accepts an invite it should ask the user if they want to join the groupchat first
| 1
|
380,397
| 11,259,998,519
|
IssuesEvent
|
2020-01-13 09:38:28
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Calculated fields form plugin is generating validation error on AMP pages.
|
[Priority: HIGH] bug
|
Missing URL for attribute 'src' in tag 'amp-img'.
Ref;-https://secure.helpscout.net/conversation/1049505523/105584?folderId=1060556
|
1.0
|
Calculated fields form plugin is generating validation error on AMP pages. - Missing URL for attribute 'src' in tag 'amp-img'.
Ref;-https://secure.helpscout.net/conversation/1049505523/105584?folderId=1060556
|
priority
|
calculated fields form plugin is generating validation error on amp pages missing url for attribute src in tag amp img ref
| 1
|
195,566
| 6,913,136,118
|
IssuesEvent
|
2017-11-28 14:25:40
|
smartchicago/chicago-early-learning
|
https://api.github.com/repos/smartchicago/chicago-early-learning
|
closed
|
Add map images to Family Resource Centers page
|
High Priority Waiting on Merge
|
We still need to add the map image assets to the Family Resource Centers page as seen in the original design - #828.
|
1.0
|
Add map images to Family Resource Centers page - We still need to add the map image assets to the Family Resource Centers page as seen in the original design - #828.
|
priority
|
add map images to family resource centers page we still need to add the map image assets to the family resource centers page as seen in the original design
| 1
|
337,263
| 10,212,850,371
|
IssuesEvent
|
2019-08-14 20:31:23
|
hydroshare/hydroshare
|
https://api.github.com/repos/hydroshare/hydroshare
|
opened
|
"Edit" message for keywords shows in "view" mode.
|
High Priority page state
|
Take a look at just about any public resource to reproduce. I was able to reproduce this issue on this resource without even being logged in.
https://www.hydroshare.org/resource/5586e9524b114c30a1a29d58f4c98355/

|
1.0
|
"Edit" message for keywords shows in "view" mode. - Take a look at just about any public resource to reproduce. I was able to reproduce this issue on this resource without even being logged in.
https://www.hydroshare.org/resource/5586e9524b114c30a1a29d58f4c98355/

|
priority
|
edit message for keywords shows in view mode take a look at just about any public resource to reproduce i was able to reproduce this issue on this resource without even being logged in
| 1
|
750,106
| 26,188,976,650
|
IssuesEvent
|
2023-01-03 06:35:41
|
factly/kavach
|
https://api.github.com/repos/factly/kavach
|
opened
|
Posthog not working on Kavach
|
priority:high
|
Environment variable passed to kavach-web image is not being taken up using process.env ( example - for kavach-web I am passing posthog_api_url and posthog_api_key and its not being taken because of which it is not showing the authentication events. )
|
1.0
|
Posthog not working on Kavach - Environment variable passed to kavach-web image is not being taken up using process.env ( example - for kavach-web I am passing posthog_api_url and posthog_api_key and its not being taken because of which it is not showing the authentication events. )
|
priority
|
posthog not working on kavach environment variable passed to kavach web image is not being taken up using process env example for kavach web i am passing posthog api url and posthog api key and its not being taken because of which it is not showing the authentication events
| 1
|
337,435
| 10,218,007,628
|
IssuesEvent
|
2019-08-15 14:57:37
|
wso2/streaming-integrator-tooling
|
https://api.github.com/repos/wso2/streaming-integrator-tooling
|
closed
|
change startup log to streaming integrator
|
Priority/Highest Severity/Critical
|
**Description:**
Currently startup log prints as 'WSO2 Stream Processor started in 3.979 sec' We need to change this.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
|
1.0
|
change startup log to streaming integrator - **Description:**
Currently startup log prints as 'WSO2 Stream Processor started in 3.979 sec' We need to change this.
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
|
priority
|
change startup log to streaming integrator description currently startup log prints as stream processor started in sec we need to change this suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce related issues
| 1
|
587,663
| 17,628,424,134
|
IssuesEvent
|
2021-08-19 03:07:00
|
annagabriel-hash/stock-transfer-app
|
https://api.github.com/repos/annagabriel-hash/stock-transfer-app
|
closed
|
Given that I'm a user, when I create an account, the system automatically sets my account as a buyer role.
|
App: Backend Priority: High State: In Progress Type: Feature
|
### Tasks
- [x] Specify user method to add default role
- [x] Update user model to add method to assign buyer role if role is not provided
|
1.0
|
Given that I'm a user, when I create an account, the system automatically sets my account as a buyer role. - ### Tasks
- [x] Specify user method to add default role
- [x] Update user model to add method to assign buyer role if role is not provided
|
priority
|
given that i m a user when i create an account the system automatically sets my account as a buyer role tasks specify user method to add default role update user model to add method to assign buyer role if role is not provided
| 1
|
88,582
| 3,779,416,906
|
IssuesEvent
|
2016-03-18 08:10:36
|
sci-visus/visus-issues
|
https://api.github.com/repos/sci-visus/visus-issues
|
closed
|
rendering is slower than it should be
|
Bug microscopy Priority High
|
Now it appears that every mouse wheel update is passed to the app, resulting in a very unpleasant experience while zooming, especially when getting closer to objects.
Zoom once relied on a timer associated with the scroll wheel input in order to behave properly. This appears to have been removed.
Assigning to Duong since he was the last to work on the camera, but it may well be something for Giorgio.
|
1.0
|
rendering is slower than it should be - Now it appears that every mouse wheel update is passed to the app, resulting in a very unpleasant experience while zooming, especially when getting closer to objects.
Zoom once relied on a timer associated with the scroll wheel input in order to behave properly. This appears to have been removed.
Assigning to Duong since he was the last to work on the camera, but it may well be something for Giorgio.
|
priority
|
rendering is slower than it should be now it appears that every mouse wheel update is passed to the app resulting in a very unpleasant experience while zooming especially when getting closer to objects zoom once relied on a timer associated with the scroll wheel input in order to behave properly this appears to have been removed assigning to duong since he was the last to work on the camera but it may well be something for giorgio
| 1
|
258,346
| 8,169,499,814
|
IssuesEvent
|
2018-08-27 01:53:50
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Null Pointer Exception Printed in Back end when user authentication failed
|
5.7.0 Priority/High Severity/Minor Type/Bug
|
Null Pointer Exception printed in the back end logs when user authentication fails. However this exception does not harm the flow.
|
1.0
|
Null Pointer Exception Printed in Back end when user authentication failed - Null Pointer Exception printed in the back end logs when user authentication fails. However this exception does not harm the flow.
|
priority
|
null pointer exception printed in back end when user authentication failed null pointer exception printed in the back end logs when user authentication fails however this exception does not harm the flow
| 1
|
556,514
| 16,485,205,668
|
IssuesEvent
|
2021-05-24 16:55:06
|
sopra-fs21-group-11/sopra-client
|
https://api.github.com/repos/sopra-fs21-group-11/sopra-client
|
closed
|
S14: As a player I want to extend or edit the stored dataset of locations with fetched locations from a geo-location api in order to randomize and customize the game experience.
|
high priority user story
|
- [ ] logged in players can search for locations, they want to add.
- [ ] By clicking on a button, the player gets redirected to a geo-service. (eg. Google Maps, map.geo.admin.ch, wikipedia or similar).
- [ ] If the provided information is not sufficient, the player can add the missing information by hand (S13).
- [ ] only complete entries (coordinates, area, population etc…) are accepted.
|
1.0
|
S14: As a player I want to extend or edit the stored dataset of locations with fetched locations from a geo-location api in order to randomize and customize the game experience. - - [ ] logged in players can search for locations, they want to add.
- [ ] By clicking on a button, the player gets redirected to a geo-service. (eg. Google Maps, map.geo.admin.ch, wikipedia or similar).
- [ ] If the provided information is not sufficient, the player can add the missing information by hand (S13).
- [ ] only complete entries (coordinates, area, population etc…) are accepted.
|
priority
|
as a player i want to extend or edit the stored dataset of locations with fetched locations from a geo location api in order to randomize and customize the game experience logged in players can search for locations they want to add by clicking on a button the player gets redirected to a geo service eg google maps map geo admin ch wikipedia or similar if the provided information is not sufficient the player can add the missing information by hand only complete entries coordinates area population etc… are accepted
| 1
|
585,927
| 17,538,480,814
|
IssuesEvent
|
2021-08-12 09:13:01
|
Haivision/srt
|
https://api.github.com/repos/Haivision/srt
|
closed
|
[BUG] sender/listener with passphrase crashes (SIGSEGV) on incoming call with no passphrase
|
Priority: High Type: Bug [core]
|
Makito X4 encoder (SRT 1.4.2) caller/sender w/passphrase -> HMG listener/receiver no passphrase
SRTO_ENFORCEDENCRYPTION set (default) on caller/sender.
caller fails (REJECT from HS) and reconnect every 40ms multiple times (> 30 secs) before crashing (SIGSEGV).
Here a backtrace extracted from the attached log:
`[30/04/2021 10:09:59.843,sessmgrd:24257-619,CRITI,LIBMXTOOLS] _SIGSEVSignalHandler(threadcheck.c:172): SEGMENTATION FAULT (11) in thread "SRT:RcvQ:w13667"!!!
[30/04/2021 10:09:59.851,sessmgrd:24257-620,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:149): BACKTRACE INFO:
[30/04/2021 10:09:59.851,sessmgrd:24257-621,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#1/9: /usr/lib/libmxtools.so(DumpBackTraceInfo+0x18) [0x7f909c8aec]
[30/04/2021 10:09:59.851,sessmgrd:24257-622,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#2/9: /usr/lib/libmxtools.so(_SIGSEVSignalHandler+0xdc) [0x7f909c8d28]
[30/04/2021 10:09:59.851,sessmgrd:24257-623,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#3/9: linux-vdso.so.1(__kernel_rt_sigreturn+0) [0x7f90ad266c]
[30/04/2021 10:09:59.851,sessmgrd:24257-624,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#4/9: /usr/lib/libsrt.so.1(_ZNK14CCryptoControl17getKmMsg_needSendEmb+0x98) [0x7f907efd08]
[30/04/2021 10:09:59.851,sessmgrd:24257-625,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#5/9: /usr/lib/libsrt.so.1(_ZN4CUDT18createSrtHandshakeEiiPKjmR7CPacketR10CHandShake+0xb4c) [0x7f907d57bc]
[30/04/2021 10:09:59.851,sessmgrd:24257-626,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#6/9: /usr/lib/libsrt.so.1(_ZN4CUDT26processAsyncConnectRequestE11EReadStatus14EConnectStatusRK7CPacketRK12sockaddr_any+0x2bc) [0x7f907dc228]
[30/04/2021 10:09:59.851,sessmgrd:24257-627,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#7/9: /usr/lib/libsrt.so.1(_ZN16CRendezvousQueue16updateConnStatusE11EReadStatus14EConnectStatusRK7CPacket+0x450) [0x7f90828f2c]
[30/04/2021 10:09:59.851,sessmgrd:24257-628,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#8/9: /usr/lib/libsrt.so.1(_ZN9CRcvQueue6workerEPv+0x3d4) [0x7f90829c0c]
[30/04/2021 10:09:59.851,sessmgrd:24257-629,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#9/9: /lib/libpthread.so.0(+0x6fd8) [0x7f90a82fd8]
`
[sys-snapshot-4.txt](https://github.com/Haivision/srt/files/6414746/sys-snapshot-4.txt)
|
1.0
|
[BUG] sender/listener with passphrase crashes (SIGSEGV) on incoming call with no passphrase - Makito X4 encoder (SRT 1.4.2) caller/sender w/passphrase -> HMG listener/receiver no passphrase
SRTO_ENFORCEDENCRYPTION set (default) on caller/sender.
caller fails (REJECT from HS) and reconnect every 40ms multiple times (> 30 secs) before crashing (SIGSEGV).
Here a backtrace extracted from the attached log:
`[30/04/2021 10:09:59.843,sessmgrd:24257-619,CRITI,LIBMXTOOLS] _SIGSEVSignalHandler(threadcheck.c:172): SEGMENTATION FAULT (11) in thread "SRT:RcvQ:w13667"!!!
[30/04/2021 10:09:59.851,sessmgrd:24257-620,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:149): BACKTRACE INFO:
[30/04/2021 10:09:59.851,sessmgrd:24257-621,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#1/9: /usr/lib/libmxtools.so(DumpBackTraceInfo+0x18) [0x7f909c8aec]
[30/04/2021 10:09:59.851,sessmgrd:24257-622,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#2/9: /usr/lib/libmxtools.so(_SIGSEVSignalHandler+0xdc) [0x7f909c8d28]
[30/04/2021 10:09:59.851,sessmgrd:24257-623,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#3/9: linux-vdso.so.1(__kernel_rt_sigreturn+0) [0x7f90ad266c]
[30/04/2021 10:09:59.851,sessmgrd:24257-624,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#4/9: /usr/lib/libsrt.so.1(_ZNK14CCryptoControl17getKmMsg_needSendEmb+0x98) [0x7f907efd08]
[30/04/2021 10:09:59.851,sessmgrd:24257-625,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#5/9: /usr/lib/libsrt.so.1(_ZN4CUDT18createSrtHandshakeEiiPKjmR7CPacketR10CHandShake+0xb4c) [0x7f907d57bc]
[30/04/2021 10:09:59.851,sessmgrd:24257-626,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#6/9: /usr/lib/libsrt.so.1(_ZN4CUDT26processAsyncConnectRequestE11EReadStatus14EConnectStatusRK7CPacketRK12sockaddr_any+0x2bc) [0x7f907dc228]
[30/04/2021 10:09:59.851,sessmgrd:24257-627,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#7/9: /usr/lib/libsrt.so.1(_ZN16CRendezvousQueue16updateConnStatusE11EReadStatus14EConnectStatusRK7CPacket+0x450) [0x7f90828f2c]
[30/04/2021 10:09:59.851,sessmgrd:24257-628,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#8/9: /usr/lib/libsrt.so.1(_ZN9CRcvQueue6workerEPv+0x3d4) [0x7f90829c0c]
[30/04/2021 10:09:59.851,sessmgrd:24257-629,CRITI,LIBMXTOOLS] DumpBackTraceInfo(threadcheck.c:153): BT#9/9: /lib/libpthread.so.0(+0x6fd8) [0x7f90a82fd8]
`
[sys-snapshot-4.txt](https://github.com/Haivision/srt/files/6414746/sys-snapshot-4.txt)
|
priority
|
sender listener with passphrase crashes sigsegv on incoming call with no passphrase makito encoder srt caller sender w passphrase hmg listener receiver no passphrase srto enforcedencryption set default on caller sender caller fails reject from hs and reconnect every multiple times secs before crashing sigsegv here a backtrace extracted from the attached log sigsevsignalhandler threadcheck c segmentation fault in thread srt rcvq dumpbacktraceinfo threadcheck c backtrace info dumpbacktraceinfo threadcheck c bt usr lib libmxtools so dumpbacktraceinfo dumpbacktraceinfo threadcheck c bt usr lib libmxtools so sigsevsignalhandler dumpbacktraceinfo threadcheck c bt linux vdso so kernel rt sigreturn dumpbacktraceinfo threadcheck c bt usr lib libsrt so needsendemb dumpbacktraceinfo threadcheck c bt usr lib libsrt so dumpbacktraceinfo threadcheck c bt usr lib libsrt so any dumpbacktraceinfo threadcheck c bt usr lib libsrt so dumpbacktraceinfo threadcheck c bt usr lib libsrt so dumpbacktraceinfo threadcheck c bt lib libpthread so
| 1
|
387,886
| 11,471,441,085
|
IssuesEvent
|
2020-02-09 11:15:46
|
apexcharts/apexcharts.js
|
https://api.github.com/repos/apexcharts/apexcharts.js
|
closed
|
Panning is broken in most cases
|
bug high-priority
|
# Bug report
## Codepen
- Case 1 (series with numbers on x axis): https://codepen.io/jaksim/pen/QWbLoMQ
- Case 2 (series with datetime on x axis): https://codepen.io/jaksim/pen/GRJKeNq
## Some of the official examples that are broken
- https://apexcharts.com/javascript-chart-demos/line-charts/zoomable-timeseries/ (case 2)
- https://apexcharts.com/javascript-chart-demos/line-charts/line-chart-annotations/ (case 2)
- https://apexcharts.com/javascript-chart-demos/line-charts/syncing-charts/ (case 2)
- https://apexcharts.com/javascript-chart-demos/line-charts/stepline/ (case 1)
- https://apexcharts.com/javascript-chart-demos/line-charts/gradient/ (case 2)
- https://apexcharts.com/javascript-chart-demos/area-charts/spline/ (case 2)
- https://apexcharts.com/javascript-chart-demos/area-charts/datetime-x-axis/ (case 2)
- https://apexcharts.com/javascript-chart-demos/area-charts/negative/ (case 2)
- https://apexcharts.com/javascript-chart-demos/column-charts/column-with-rotated-xaxis-labels/ (case 1)
## Explanation
### Case 1
- Click on zoom in button once.
- Switch to panning tool
- Now try to drag your view to the right.
- You will not be able to get the right most point into view again.
### Case 2
- Click on zoom in button once.
- Switch to panning tool.
- Try to drag your to the right.
- You will be teleported to 1970.
Through bisect, I've found that Case 2 was broken by commit 0849cc3c805a64205e64ded892bbf734f5e613cd
Case 1 doesn't seem to behave very well before that commit either, though I can at least see the last data point. Just the x axis labels show ugly fractions, but that seems to be solvable from client code.
|
1.0
|
Panning is broken in most cases - # Bug report
## Codepen
- Case 1 (series with numbers on x axis): https://codepen.io/jaksim/pen/QWbLoMQ
- Case 2 (series with datetime on x axis): https://codepen.io/jaksim/pen/GRJKeNq
## Some of the official examples that are broken
- https://apexcharts.com/javascript-chart-demos/line-charts/zoomable-timeseries/ (case 2)
- https://apexcharts.com/javascript-chart-demos/line-charts/line-chart-annotations/ (case 2)
- https://apexcharts.com/javascript-chart-demos/line-charts/syncing-charts/ (case 2)
- https://apexcharts.com/javascript-chart-demos/line-charts/stepline/ (case 1)
- https://apexcharts.com/javascript-chart-demos/line-charts/gradient/ (case 2)
- https://apexcharts.com/javascript-chart-demos/area-charts/spline/ (case 2)
- https://apexcharts.com/javascript-chart-demos/area-charts/datetime-x-axis/ (case 2)
- https://apexcharts.com/javascript-chart-demos/area-charts/negative/ (case 2)
- https://apexcharts.com/javascript-chart-demos/column-charts/column-with-rotated-xaxis-labels/ (case 1)
## Explanation
### Case 1
- Click on zoom in button once.
- Switch to panning tool
- Now try to drag your view to the right.
- You will not be able to get the right most point into view again.
### Case 2
- Click on zoom in button once.
- Switch to panning tool.
- Try to drag your to the right.
- You will be teleported to 1970.
Through bisect, I've found that Case 2 was broken by commit 0849cc3c805a64205e64ded892bbf734f5e613cd
Case 1 doesn't seem to behave very well before that commit either, though I can at least see the last data point. Just the x axis labels show ugly fractions, but that seems to be solvable from client code.
|
priority
|
panning is broken in most cases bug report codepen case series with numbers on x axis case series with datetime on x axis some of the official examples that are broken case case case case case case case case case explanation case click on zoom in button once switch to panning tool now try to drag your view to the right you will not be able to get the right most point into view again case click on zoom in button once switch to panning tool try to drag your to the right you will be teleported to through bisect i ve found that case was broken by commit case doesn t seem to behave very well before that commit either though i can at least see the last data point just the x axis labels show ugly fractions but that seems to be solvable from client code
| 1
|
180,035
| 6,642,533,515
|
IssuesEvent
|
2017-09-27 07:50:35
|
crowdAI/crowdai
|
https://api.github.com/repos/crowdAI/crowdai
|
closed
|
1 Days
|
high priority
|
Thanks for fixing today. Small detail but sends subtle signals about committment to quality.
http://api.rubyonrails.org/classes/ActionView/Helpers/TextHelper.html#method-i-pluralize
<img width="206" alt="screen shot 2017-09-27 at 6 31 47 am" src="https://user-images.githubusercontent.com/215057/30895921-af955586-a34d-11e7-94a7-064d470e72c9.png">
<img width="199" alt="screen shot 2017-09-27 at 6 31 53 am" src="https://user-images.githubusercontent.com/215057/30895923-b4372ba0-a34d-11e7-9088-5f4986cd3969.png">
|
1.0
|
1 Days - Thanks for fixing today. Small detail but sends subtle signals about committment to quality.
http://api.rubyonrails.org/classes/ActionView/Helpers/TextHelper.html#method-i-pluralize
<img width="206" alt="screen shot 2017-09-27 at 6 31 47 am" src="https://user-images.githubusercontent.com/215057/30895921-af955586-a34d-11e7-94a7-064d470e72c9.png">
<img width="199" alt="screen shot 2017-09-27 at 6 31 53 am" src="https://user-images.githubusercontent.com/215057/30895923-b4372ba0-a34d-11e7-9088-5f4986cd3969.png">
|
priority
|
days thanks for fixing today small detail but sends subtle signals about committment to quality img width alt screen shot at am src img width alt screen shot at am src
| 1
|
477,364
| 13,760,634,451
|
IssuesEvent
|
2020-10-07 06:16:43
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
nn.Embedding with max_norm shows unstable behavior and causes sometimes runtime error.
|
high priority module: nn triaged
|
## 🐛 Bug
An `nn.Embedding` object with `max_norm` set to `True` causes a RuntimeError that is hard to track.
## To Reproduce
The following code causes a RuntimeError. The error can be avoided by **removing the max_norm** feature or by **swapping Line a and Line b** in the code.
```
import torch
import torch.nn as nn
n, d, m = 3, 5, 7
batch_size = 11
embedding = nn.Embedding(n, d, max_norm=True)
W = torch.randn((m, d), requires_grad=True)
optimizer = torch.optim.Adam(list(embedding.parameters()) + [W], lr=1e-3)
optimizer.zero_grad()
idx = torch.tensor([1, 2])
a = embedding.weight @ W.t() # Line a
b = embedding(idx) @ W.t() # Line b
out = (a.unsqueeze(0) + b.unsqueeze(1))
loss = out.sigmoid().prod()
loss.backward()
optimizer.step()
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-472-103ef18503d8> in <module>
17 out = (a.unsqueeze(0) + b.unsqueeze(1))
18 loss = out.sigmoid().prod()
---> 19 loss.backward()
20 optimizer.step()
~/miniconda3/envs/kg/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
116 products. Defaults to ``False``.
117 """
--> 118 torch.autograd.backward(self, gradient, retain_graph, create_graph)
119
120 def register_hook(self, hook):
~/miniconda3/envs/kg/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 5]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
## Expected behavior
There shouldn't be any error when running the code above.
Strangely, there is no RuntimeError when **Line a** and **Line b** are swapped. This is something that has to be investigated.
## Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.3 LTS
GCC version: (Homebrew gcc 5.5.0_4) 5.5.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 430.26
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
Versions of relevant libraries:
[pip] botorch==0.1.3
[pip] gpytorch==0.3.5
[pip] numpy==1.17.2
[pip] torch==1.2.0
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] botorch 0.1.3 pypi_0 pypi
[conda] gpytorch 0.3.5 pypi_0 pypi
[conda] libblas 3.8.0 12_mkl conda-forge
[conda] libcblas 3.8.0 12_mkl conda-forge
[conda] liblapack 3.8.0 12_mkl conda-forge
[conda] mkl 2019.4 243
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchvision 0.4.0 py37_cu100 pytorch
## Additional context
<!-- Add any other context about the problem here. -->
cc @ezyang @gchanan @zou3519 @jlin27 @albanD @mruberry
|
1.0
|
nn.Embedding with max_norm shows unstable behavior and causes sometimes runtime error. - ## 🐛 Bug
An `nn.Embedding` object with `max_norm` set to `True` causes a RuntimeError that is hard to track.
## To Reproduce
The following code causes a RuntimeError. The error can be avoided by **removing the max_norm** feature or by **swapping Line a and Line b** in the code.
```
import torch
import torch.nn as nn
n, d, m = 3, 5, 7
batch_size = 11
embedding = nn.Embedding(n, d, max_norm=True)
W = torch.randn((m, d), requires_grad=True)
optimizer = torch.optim.Adam(list(embedding.parameters()) + [W], lr=1e-3)
optimizer.zero_grad()
idx = torch.tensor([1, 2])
a = embedding.weight @ W.t() # Line a
b = embedding(idx) @ W.t() # Line b
out = (a.unsqueeze(0) + b.unsqueeze(1))
loss = out.sigmoid().prod()
loss.backward()
optimizer.step()
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-472-103ef18503d8> in <module>
17 out = (a.unsqueeze(0) + b.unsqueeze(1))
18 loss = out.sigmoid().prod()
---> 19 loss.backward()
20 optimizer.step()
~/miniconda3/envs/kg/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
116 products. Defaults to ``False``.
117 """
--> 118 torch.autograd.backward(self, gradient, retain_graph, create_graph)
119
120 def register_hook(self, hook):
~/miniconda3/envs/kg/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 5]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
## Expected behavior
There shouldn't be any error when running the code above.
Strangely, there is no RuntimeError when **Line a** and **Line b** are swapped. This is something that has to be investigated.
## Environment
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.3 LTS
GCC version: (Homebrew gcc 5.5.0_4) 5.5.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 430.26
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
Versions of relevant libraries:
[pip] botorch==0.1.3
[pip] gpytorch==0.3.5
[pip] numpy==1.17.2
[pip] torch==1.2.0
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] botorch 0.1.3 pypi_0 pypi
[conda] gpytorch 0.3.5 pypi_0 pypi
[conda] libblas 3.8.0 12_mkl conda-forge
[conda] libcblas 3.8.0 12_mkl conda-forge
[conda] liblapack 3.8.0 12_mkl conda-forge
[conda] mkl 2019.4 243
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchvision 0.4.0 py37_cu100 pytorch
## Additional context
<!-- Add any other context about the problem here. -->
cc @ezyang @gchanan @zou3519 @jlin27 @albanD @mruberry
|
priority
|
nn embedding with max norm shows unstable behavior and causes sometimes runtime error 🐛 bug an nn embedding object with max norm set to true causes a runtimeerror that is hard to track to reproduce the following code causes a runtimeerror the error can be avoided by removing the max norm feature or by swapping line a and line b in the code import torch import torch nn as nn n d m batch size embedding nn embedding n d max norm true w torch randn m d requires grad true optimizer torch optim adam list embedding parameters lr optimizer zero grad idx torch tensor a embedding weight w t line a b embedding idx w t line b out a unsqueeze b unsqueeze loss out sigmoid prod loss backward optimizer step runtimeerror traceback most recent call last in out a unsqueeze b unsqueeze loss out sigmoid prod loss backward optimizer step envs kg lib site packages torch tensor py in backward self gradient retain graph create graph products defaults to false torch autograd backward self gradient retain graph create graph def register hook self hook envs kg lib site packages torch autograd init py in backward tensors grad tensors retain graph create graph grad variables variable execution engine run backward tensors grad tensors retain graph create graph allow unreachable true allow unreachable flag runtimeerror one of the variables needed for gradient computation has been modified by an inplace operation is at version expected version instead hint enable anomaly detection to find the operation that failed to compute its gradient with torch autograd set detect anomaly true expected behavior there shouldn t be any error when running the code above strangely there is no runtimeerror when line a and line b are swapped this is something that has to be investigated environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version homebrew gcc cmake version could not collect python version is cuda available yes cuda runtime version gpu models and configuration gpu geforce gtx ti nvidia driver version cudnn version usr lib linux gnu libcudnn so versions of relevant libraries botorch gpytorch numpy torch torchvision blas mkl botorch pypi pypi gpytorch pypi pypi libblas mkl conda forge libcblas mkl conda forge liblapack mkl conda forge mkl pytorch pytorch torchvision pytorch additional context cc ezyang gchanan alband mruberry
| 1
|
601,670
| 18,425,882,707
|
IssuesEvent
|
2021-10-13 21:59:00
|
xldenis/creusot
|
https://api.github.com/repos/xldenis/creusot
|
closed
|
Fix incompatibility between trait implementations
|
bug high-priority
|
As observed in the latest comments of #130, it turned out that `model`s for `T` and `&mut T` can be incompatible.
The following is the relevant parts.
```ocaml
module C01_Impl2_Model_Interface
type t
use Type
use seq.Seq
function model (self : Type.c01_myvec t) : Seq.seq t
end
module C01_Impl2_Model
type t
use Type
use seq.Seq
function model (self : Type.c01_myvec t) : Seq.seq t
end
module CreusotContracts_Builtins_Model_Model
type self
type modelty
function model (self : self) : modelty
end
module C01_Impl2_Interface
type t
use Type
use seq.Seq
clone export C01_Impl2_Model_Interface with type t = t
type modelty =
Seq.seq t
clone export CreusotContracts_Builtins_Model_Model with type self = Type.c01_myvec t, type modelty = modelty,
function model = model
end
module C01_Impl2
type t
use Type
use seq.Seq
clone export C01_Impl2_Model with type t = t
type modelty =
Seq.seq t
clone export CreusotContracts_Builtins_Model_Model with type self = Type.c01_myvec t, type modelty = modelty,
function model = model
end
module CreusotContracts_Builtins_Model_Impl1_Model_Interface
type t
use prelude.Prelude
clone CreusotContracts_Builtins_Model_Model as Model0 with type self = t
function model (self : borrowed t) : Model0.modelty
end
module CreusotContracts_Builtins_Model_Impl1_Model
type t
use prelude.Prelude
clone CreusotContracts_Builtins_Model_Model as Model0 with type self = t
function model (self : borrowed t) : Model0.modelty =
Model0.model ( * self)
end
module C01_AllZero
...
clone C01_Impl2_Model as Model2 with type t = uint32
clone C01_Impl2 as Model3 with type t = uint32
clone CreusotContracts_Builtins_Model_Impl1_Model as Model1 with type t = Type.c01_myvec uint32,
type Model0.modelty = Model3.modelty, function Model0.model = Model3.model
...
let rec cfg all_zero (v : borrowed (Type.c01_myvec uint32)) : ()
ensures { Seq.length (Model1.model v) = Seq.length (Model2.model ( ^ v)) }
...
end
```
So the problem is that `Model3.model` (coming from `C01_Impl2`) and `Model2.model` (coming from `C01_Impl2_Model`) are incompatible.
|
1.0
|
Fix incompatibility between trait implementations - As observed in the latest comments of #130, it turned out that `model`s for `T` and `&mut T` can be incompatible.
The following is the relevant parts.
```ocaml
module C01_Impl2_Model_Interface
type t
use Type
use seq.Seq
function model (self : Type.c01_myvec t) : Seq.seq t
end
module C01_Impl2_Model
type t
use Type
use seq.Seq
function model (self : Type.c01_myvec t) : Seq.seq t
end
module CreusotContracts_Builtins_Model_Model
type self
type modelty
function model (self : self) : modelty
end
module C01_Impl2_Interface
type t
use Type
use seq.Seq
clone export C01_Impl2_Model_Interface with type t = t
type modelty =
Seq.seq t
clone export CreusotContracts_Builtins_Model_Model with type self = Type.c01_myvec t, type modelty = modelty,
function model = model
end
module C01_Impl2
type t
use Type
use seq.Seq
clone export C01_Impl2_Model with type t = t
type modelty =
Seq.seq t
clone export CreusotContracts_Builtins_Model_Model with type self = Type.c01_myvec t, type modelty = modelty,
function model = model
end
module CreusotContracts_Builtins_Model_Impl1_Model_Interface
type t
use prelude.Prelude
clone CreusotContracts_Builtins_Model_Model as Model0 with type self = t
function model (self : borrowed t) : Model0.modelty
end
module CreusotContracts_Builtins_Model_Impl1_Model
type t
use prelude.Prelude
clone CreusotContracts_Builtins_Model_Model as Model0 with type self = t
function model (self : borrowed t) : Model0.modelty =
Model0.model ( * self)
end
module C01_AllZero
...
clone C01_Impl2_Model as Model2 with type t = uint32
clone C01_Impl2 as Model3 with type t = uint32
clone CreusotContracts_Builtins_Model_Impl1_Model as Model1 with type t = Type.c01_myvec uint32,
type Model0.modelty = Model3.modelty, function Model0.model = Model3.model
...
let rec cfg all_zero (v : borrowed (Type.c01_myvec uint32)) : ()
ensures { Seq.length (Model1.model v) = Seq.length (Model2.model ( ^ v)) }
...
end
```
So the problem is that `Model3.model` (coming from `C01_Impl2`) and `Model2.model` (coming from `C01_Impl2_Model`) are incompatible.
|
priority
|
fix incompatibility between trait implementations as observed in the latest comments of it turned out that model s for t and mut t can be incompatible the following is the relevant parts ocaml module model interface type t use type use seq seq function model self type myvec t seq seq t end module model type t use type use seq seq function model self type myvec t seq seq t end module creusotcontracts builtins model model type self type modelty function model self self modelty end module interface type t use type use seq seq clone export model interface with type t t type modelty seq seq t clone export creusotcontracts builtins model model with type self type myvec t type modelty modelty function model model end module type t use type use seq seq clone export model with type t t type modelty seq seq t clone export creusotcontracts builtins model model with type self type myvec t type modelty modelty function model model end module creusotcontracts builtins model model interface type t use prelude prelude clone creusotcontracts builtins model model as with type self t function model self borrowed t modelty end module creusotcontracts builtins model model type t use prelude prelude clone creusotcontracts builtins model model as with type self t function model self borrowed t modelty model self end module allzero clone model as with type t clone as with type t clone creusotcontracts builtins model model as with type t type myvec type modelty modelty function model model let rec cfg all zero v borrowed type myvec ensures seq length model v seq length model v end so the problem is that model coming from and model coming from model are incompatible
| 1
|
119,896
| 4,777,472,728
|
IssuesEvent
|
2016-10-27 16:23:21
|
CS2103AUG2016-T11-C2/main
|
https://api.github.com/repos/CS2103AUG2016-T11-C2/main
|
opened
|
Fix done - undo - recurring bug
|
priority.high type.bug
|
bug occured when **done** an **overdue** **recurring** tasks.
Task remained overdue and in its original position despite the date changing
|
1.0
|
Fix done - undo - recurring bug - bug occured when **done** an **overdue** **recurring** tasks.
Task remained overdue and in its original position despite the date changing
|
priority
|
fix done undo recurring bug bug occured when done an overdue recurring tasks task remained overdue and in its original position despite the date changing
| 1
|
480,831
| 13,876,924,341
|
IssuesEvent
|
2020-10-17 01:22:04
|
giampaolo/psutil
|
https://api.github.com/repos/giampaolo/psutil
|
closed
|
[FreeBSD] segfault on Cirrus CI
|
bug priority-high
|
I've seen it a couple of times:
https://github.com/giampaolo/psutil/runs/456380368
It should not be because of changes introduced in 5.7.0 (I didn't touch that part). Also this is the second time I've seen it happening on Python 2.
|
1.0
|
[FreeBSD] segfault on Cirrus CI - I've seen it a couple of times:
https://github.com/giampaolo/psutil/runs/456380368
It should not be because of changes introduced in 5.7.0 (I didn't touch that part). Also this is the second time I've seen it happening on Python 2.
|
priority
|
segfault on cirrus ci i ve seen it a couple of times it should not be because of changes introduced in i didn t touch that part also this is the second time i ve seen it happening on python
| 1
|
792,152
| 27,948,026,686
|
IssuesEvent
|
2023-03-24 06:01:12
|
curiouslearning/FeedTheMonsterJS
|
https://api.github.com/repos/curiouslearning/FeedTheMonsterJS
|
closed
|
Create a DEVELOPMENT universal APK of Curious Reader container app for all the team to install
|
High Priority
|
**User Story**
As a Product Owner and Stakeholder,
I want a universal APK of the Curious Reader container app that points to the DEVELOPMENT URLs of the PWAs,
So that we have a development version of the Curious Reader container app that anyone with an Android device can install and be able to test any development PWA the way we want them played-- within the Curious Reader container app.
**Acceptance Criteria**
Given I have installed the development universal APK of the Curious Reader container app,
When I look at my device homescreen,
Then I should be able to differentiate the DEVELOPMENT app from the production app.
Given I have installed the development universal APK of the Curious Reader container app and launched it,
When I select any PWA contained inside,
Then I should be able to determine if I am looking at the latest development version of the PWA that I launched.
|
1.0
|
Create a DEVELOPMENT universal APK of Curious Reader container app for all the team to install - **User Story**
As a Product Owner and Stakeholder,
I want a universal APK of the Curious Reader container app that points to the DEVELOPMENT URLs of the PWAs,
So that we have a development version of the Curious Reader container app that anyone with an Android device can install and be able to test any development PWA the way we want them played-- within the Curious Reader container app.
**Acceptance Criteria**
Given I have installed the development universal APK of the Curious Reader container app,
When I look at my device homescreen,
Then I should be able to differentiate the DEVELOPMENT app from the production app.
Given I have installed the development universal APK of the Curious Reader container app and launched it,
When I select any PWA contained inside,
Then I should be able to determine if I am looking at the latest development version of the PWA that I launched.
|
priority
|
create a development universal apk of curious reader container app for all the team to install user story as a product owner and stakeholder i want a universal apk of the curious reader container app that points to the development urls of the pwas so that we have a development version of the curious reader container app that anyone with an android device can install and be able to test any development pwa the way we want them played within the curious reader container app acceptance criteria given i have installed the development universal apk of the curious reader container app when i look at my device homescreen then i should be able to differentiate the development app from the production app given i have installed the development universal apk of the curious reader container app and launched it when i select any pwa contained inside then i should be able to determine if i am looking at the latest development version of the pwa that i launched
| 1
|
289,546
| 8,872,301,039
|
IssuesEvent
|
2019-01-11 15:05:29
|
iSosnitsky/DschinghisKhan
|
https://api.github.com/repos/iSosnitsky/DschinghisKhan
|
opened
|
Все заявки. Изменение.
|
High priority
|
1) Изменить статус - кнопка активна, хотя заявка не выбрана! При нажатии выдает ошибку
2) Убираем вкладку "Заявки в работе". Кнопки с неё переносим на вкладку "Все заявки". Саму вкладку переименовываем в "Заявки".
3) Добавляем кнопку "Настроить таблицу" (убрать/показать столбцы)
4) Добавить фильтры в колонки
5) Кнопка "Показать все" - зачем нужна? (я забыла:))
6) Закрепить заголовки таблицы!
7) Саму таблицу оформить как у Энергомикса (стиль)
|
1.0
|
Все заявки. Изменение. - 1) Изменить статус - кнопка активна, хотя заявка не выбрана! При нажатии выдает ошибку
2) Убираем вкладку "Заявки в работе". Кнопки с неё переносим на вкладку "Все заявки". Саму вкладку переименовываем в "Заявки".
3) Добавляем кнопку "Настроить таблицу" (убрать/показать столбцы)
4) Добавить фильтры в колонки
5) Кнопка "Показать все" - зачем нужна? (я забыла:))
6) Закрепить заголовки таблицы!
7) Саму таблицу оформить как у Энергомикса (стиль)
|
priority
|
все заявки изменение изменить статус кнопка активна хотя заявка не выбрана при нажатии выдает ошибку убираем вкладку заявки в работе кнопки с неё переносим на вкладку все заявки саму вкладку переименовываем в заявки добавляем кнопку настроить таблицу убрать показать столбцы добавить фильтры в колонки кнопка показать все зачем нужна я забыла закрепить заголовки таблицы саму таблицу оформить как у энергомикса стиль
| 1
|
320,753
| 9,785,857,232
|
IssuesEvent
|
2019-06-09 11:41:30
|
janosg/estimagic
|
https://api.github.com/repos/janosg/estimagic
|
opened
|
Make functions that use constraints fool-proof
|
enhancement good first issue priority-high size-s volunteers-welcome
|
## Current situation
Some functions have to be called with processed constraints, others with user written constraints. This can be confusing for users.
## Goal and Implementation
All public functions that take constraints as an argument get an additional boolean argument called `processed` which is by default `False`. If `processed` is `False`, the function calls `process_constraints` first.
This requires to change `process_constraints` such that it can be called with already processed constraints and in that case does nothing.
## Remarks
- We cannot simply omit the new argument and always call `process_constraints` first since `process_constraints` is a rather expensive function and I don't want to be forced to make it faster.
- This is a good issue for someone who wants to develop a deeper understanding of how constraints are implemented in estimagic.
|
1.0
|
Make functions that use constraints fool-proof - ## Current situation
Some functions have to be called with processed constraints, others with user written constraints. This can be confusing for users.
## Goal and Implementation
All public functions that take constraints as an argument get an additional boolean argument called `processed` which is by default `False`. If `processed` is `False`, the function calls `process_constraints` first.
This requires to change `process_constraints` such that it can be called with already processed constraints and in that case does nothing.
## Remarks
- We cannot simply omit the new argument and always call `process_constraints` first since `process_constraints` is a rather expensive function and I don't want to be forced to make it faster.
- This is a good issue for someone who wants to develop a deeper understanding of how constraints are implemented in estimagic.
|
priority
|
make functions that use constraints fool proof current situation some functions have to be called with processed constraints others with user written constraints this can be confusing for users goal and implementation all public functions that take constraints as an argument get an additional boolean argument called processed which is by default false if processed is false the function calls process constraints first this requires to change process constraints such that it can be called with already processed constraints and in that case does nothing remarks we cannot simply omit the new argument and always call process constraints first since process constraints is a rather expensive function and i don t want to be forced to make it faster this is a good issue for someone who wants to develop a deeper understanding of how constraints are implemented in estimagic
| 1
|
410,986
| 12,004,765,724
|
IssuesEvent
|
2020-04-09 12:13:53
|
AY1920S2-CS2103T-W13-1/main
|
https://api.github.com/repos/AY1920S2-CS2103T-W13-1/main
|
closed
|
[PE-D] Edit command bug for Diet Tracker
|
priority.High task.DietTracker
|
Edit command will quit the app if index is invalid.

-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: tingalinga/ped#1
|
1.0
|
[PE-D] Edit command bug for Diet Tracker - Edit command will quit the app if index is invalid.

-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: tingalinga/ped#1
|
priority
|
edit command bug for diet tracker edit command will quit the app if index is invalid labels severity medium type functionalitybug original tingalinga ped
| 1
|
321,114
| 9,793,708,249
|
IssuesEvent
|
2019-06-10 20:39:06
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
reopened
|
Weird sampling from multinomial_alias_draw
|
high priority module: random triaged
|
## 🐛 Bug
Which distribution is `torch._multinomial_alias_draw` sampling from?
Related to #4115 and #18906.
## To Reproduce
Sampling 1 element at a time, hence, here we ignore whether it is sampling with or without replacement. In this case, the distribution to draw samples from equals the normalized weights.
```python
import torch
import scipy.stats
import numpy as np
import torch.nn.functional as F
n = 10000
replace = True
device = 'cpu'
multinomial_alias_samples = []
multinomial_samples = []
weights = torch.Tensor([0.1, 0.6, 0.2, 0.1], device=device)
J, q = torch._multinomial_alias_setup(weights)
for _ in range(n):
multinomial_alias_samples += torch._multinomial_alias_draw(
q,
J,
1
).cpu().numpy().tolist()
multinomial_samples += torch.multinomial(
weights,
1,
replace
).cpu().numpy().tolist()
correct_dist = weights / weights.sum()
correct_dist = correct_dist.to('cpu')
_, multinomial_alias_dist = np.unique(multinomial_alias_samples, return_counts=True)
_, p = scipy.stats.chisquare(multinomial_alias_dist, correct_dist.numpy() * n)
print("[ALIAS] Chi-Squared Test p-value {:.3f}".format(p))
multinomial_alias_dist = torch.Tensor(multinomial_alias_dist) / n
print("[ALIAS] KL Divergence {:.3f}".format(
F.kl_div(
multinomial_alias_dist.log(),
correct_dist,
reduction='sum')
)
)
_, multinomial_dist = np.unique(multinomial_samples, return_counts=True)
_, p = scipy.stats.chisquare(multinomial_dist, correct_dist.numpy() * n)
print("[NO ALIAS] Chi-Squared Test p-value {:.3f}".format(p))
multinomial_dist = torch.Tensor(multinomial_dist) / n
print("[NO ALIAS] KL Divergence {:.3f}".format(
F.kl_div(
multinomial_dist.log(),
correct_dist,
reduction='sum')
)
)
```
Results found:
- [ALIAS] p-values < 0.001 and kl_div > 0.3
- [NO ALIAS] p-values > 0.1 and kl_div < 0.01
## Expected behavior
Sample from a multinomial distribution.
## Environment
Today's nightly version of PyTorch.
|
1.0
|
Weird sampling from multinomial_alias_draw - ## 🐛 Bug
Which distribution is `torch._multinomial_alias_draw` sampling from?
Related to #4115 and #18906.
## To Reproduce
Sampling 1 element at a time, hence, here we ignore whether it is sampling with or without replacement. In this case, the distribution to draw samples from equals the normalized weights.
```python
import torch
import scipy.stats
import numpy as np
import torch.nn.functional as F
n = 10000
replace = True
device = 'cpu'
multinomial_alias_samples = []
multinomial_samples = []
weights = torch.Tensor([0.1, 0.6, 0.2, 0.1], device=device)
J, q = torch._multinomial_alias_setup(weights)
for _ in range(n):
multinomial_alias_samples += torch._multinomial_alias_draw(
q,
J,
1
).cpu().numpy().tolist()
multinomial_samples += torch.multinomial(
weights,
1,
replace
).cpu().numpy().tolist()
correct_dist = weights / weights.sum()
correct_dist = correct_dist.to('cpu')
_, multinomial_alias_dist = np.unique(multinomial_alias_samples, return_counts=True)
_, p = scipy.stats.chisquare(multinomial_alias_dist, correct_dist.numpy() * n)
print("[ALIAS] Chi-Squared Test p-value {:.3f}".format(p))
multinomial_alias_dist = torch.Tensor(multinomial_alias_dist) / n
print("[ALIAS] KL Divergence {:.3f}".format(
F.kl_div(
multinomial_alias_dist.log(),
correct_dist,
reduction='sum')
)
)
_, multinomial_dist = np.unique(multinomial_samples, return_counts=True)
_, p = scipy.stats.chisquare(multinomial_dist, correct_dist.numpy() * n)
print("[NO ALIAS] Chi-Squared Test p-value {:.3f}".format(p))
multinomial_dist = torch.Tensor(multinomial_dist) / n
print("[NO ALIAS] KL Divergence {:.3f}".format(
F.kl_div(
multinomial_dist.log(),
correct_dist,
reduction='sum')
)
)
```
Results found:
- [ALIAS] p-values < 0.001 and kl_div > 0.3
- [NO ALIAS] p-values > 0.1 and kl_div < 0.01
## Expected behavior
Sample from a multinomial distribution.
## Environment
Today's nightly version of PyTorch.
|
priority
|
weird sampling from multinomial alias draw 🐛 bug which distribution is torch multinomial alias draw sampling from related to and to reproduce sampling element at a time hence here we ignore whether it is sampling with or without replacement in this case the distribution to draw samples from equals the normalized weights python import torch import scipy stats import numpy as np import torch nn functional as f n replace true device cpu multinomial alias samples multinomial samples weights torch tensor device device j q torch multinomial alias setup weights for in range n multinomial alias samples torch multinomial alias draw q j cpu numpy tolist multinomial samples torch multinomial weights replace cpu numpy tolist correct dist weights weights sum correct dist correct dist to cpu multinomial alias dist np unique multinomial alias samples return counts true p scipy stats chisquare multinomial alias dist correct dist numpy n print chi squared test p value format p multinomial alias dist torch tensor multinomial alias dist n print kl divergence format f kl div multinomial alias dist log correct dist reduction sum multinomial dist np unique multinomial samples return counts true p scipy stats chisquare multinomial dist correct dist numpy n print chi squared test p value format p multinomial dist torch tensor multinomial dist n print kl divergence format f kl div multinomial dist log correct dist reduction sum results found p values p values and kl div expected behavior sample from a multinomial distribution environment today s nightly version of pytorch
| 1
|
784,658
| 27,580,900,392
|
IssuesEvent
|
2023-03-08 16:09:04
|
Zapit-Optostim/zapit
|
https://api.github.com/repos/Zapit-Optostim/zapit
|
closed
|
moveBeamXY is great but we want this in mm too
|
enhancement high priority
|
The moveBeamXY function would be more handy if it could accept mm. That would simplify a lot of other code.
|
1.0
|
moveBeamXY is great but we want this in mm too - The moveBeamXY function would be more handy if it could accept mm. That would simplify a lot of other code.
|
priority
|
movebeamxy is great but we want this in mm too the movebeamxy function would be more handy if it could accept mm that would simplify a lot of other code
| 1
|
306,191
| 9,381,954,577
|
IssuesEvent
|
2019-04-04 20:58:46
|
jdereus/labman
|
https://api.github.com/repos/jdereus/labman
|
closed
|
AT SOFT LAUNCH manually reset working primer plate dates
|
kludge priority:high scope:small
|
Once labman goes into use, working primer plate records can be created in labman at the time they are actually made. However, for working primer plates that already exist at the time labman use begins, I think we will need to create those plates through the interface and then afterwards run a sql statement directly on the db to set the creation date to the correct one (in the past), rather than the date the records were created. This should happen during the initial steps of soft launch (when the wet lab sets up their working primer plates and equipment) but will have to be performed by a technical person with direct database access.
|
1.0
|
AT SOFT LAUNCH manually reset working primer plate dates - Once labman goes into use, working primer plate records can be created in labman at the time they are actually made. However, for working primer plates that already exist at the time labman use begins, I think we will need to create those plates through the interface and then afterwards run a sql statement directly on the db to set the creation date to the correct one (in the past), rather than the date the records were created. This should happen during the initial steps of soft launch (when the wet lab sets up their working primer plates and equipment) but will have to be performed by a technical person with direct database access.
|
priority
|
at soft launch manually reset working primer plate dates once labman goes into use working primer plate records can be created in labman at the time they are actually made however for working primer plates that already exist at the time labman use begins i think we will need to create those plates through the interface and then afterwards run a sql statement directly on the db to set the creation date to the correct one in the past rather than the date the records were created this should happen during the initial steps of soft launch when the wet lab sets up their working primer plates and equipment but will have to be performed by a technical person with direct database access
| 1
|
206,614
| 7,114,109,807
|
IssuesEvent
|
2018-01-17 23:00:06
|
StratoDem/sd-material-ui
|
https://api.github.com/repos/StratoDem/sd-material-ui
|
opened
|
Add a new prop to menu items in SDDropDownMenu to hold values
|
Priority: High Tech: JS Tech: Single Component Type: Enhancement
|
Add a new prop to menu items in SDDropDownMenu to hold values
Sometimes data will be sent as part of the options, but it may not be in a human-readable form. Use a new prop field to hold that data for passing around inside an app.
|
1.0
|
Add a new prop to menu items in SDDropDownMenu to hold values - Add a new prop to menu items in SDDropDownMenu to hold values
Sometimes data will be sent as part of the options, but it may not be in a human-readable form. Use a new prop field to hold that data for passing around inside an app.
|
priority
|
add a new prop to menu items in sddropdownmenu to hold values add a new prop to menu items in sddropdownmenu to hold values sometimes data will be sent as part of the options but it may not be in a human readable form use a new prop field to hold that data for passing around inside an app
| 1
|
178,223
| 6,601,449,242
|
IssuesEvent
|
2017-09-18 00:44:13
|
RoboJackets/robocup-firmware
|
https://api.github.com/repos/RoboJackets/robocup-firmware
|
closed
|
Update base station code.
|
priority / high type / bug
|
Base station code is currently broken on jon/stuff which will soon be in master.
|
1.0
|
Update base station code. - Base station code is currently broken on jon/stuff which will soon be in master.
|
priority
|
update base station code base station code is currently broken on jon stuff which will soon be in master
| 1
|
516,095
| 14,975,138,567
|
IssuesEvent
|
2021-01-28 05:25:41
|
TerriaJS/RaPPMap
|
https://api.github.com/repos/TerriaJS/RaPPMap
|
closed
|
Geoglam - v7 Jan release
|
high priority
|
Based on this ticket https://github.com/TerriaJS/devops/issues/30, this urgent release is due to expiration of Bing maps usage on 31 Jan which affects Bing maps and Bing geocoder.
- update the Cesium ION key with premium account one
- include updated geocoder: https://github.com/TerriaJS/terriajs/issues/5107
|
1.0
|
Geoglam - v7 Jan release - Based on this ticket https://github.com/TerriaJS/devops/issues/30, this urgent release is due to expiration of Bing maps usage on 31 Jan which affects Bing maps and Bing geocoder.
- update the Cesium ION key with premium account one
- include updated geocoder: https://github.com/TerriaJS/terriajs/issues/5107
|
priority
|
geoglam jan release based on this ticket this urgent release is due to expiration of bing maps usage on jan which affects bing maps and bing geocoder update the cesium ion key with premium account one include updated geocoder
| 1
|
224,661
| 7,471,951,221
|
IssuesEvent
|
2018-04-03 10:58:24
|
ballerina-lang/composer
|
https://api.github.com/repos/ballerina-lang/composer
|
closed
|
Descriptions are overlapping in swagger definition
|
0.95.1 Imported Priority/High Severity/Major component/Composer
|
**Description:**

**Affected Product Version:**
0.95.1
**OS, DB, other environment details and versions:**
FireFox
|
1.0
|
Descriptions are overlapping in swagger definition - **Description:**

**Affected Product Version:**
0.95.1
**OS, DB, other environment details and versions:**
FireFox
|
priority
|
descriptions are overlapping in swagger definition description affected product version os db other environment details and versions firefox
| 1
|
259,979
| 8,202,275,630
|
IssuesEvent
|
2018-09-02 06:59:15
|
DiscordDungeons/Bugs
|
https://api.github.com/repos/DiscordDungeons/Bugs
|
closed
|
Error on #!gban add
|
Bot Bug High Priority
|
**Describe the bug**
Adding people into the guild blacklist results in an error. Both by ID and mention
**Screenshots**
https://puu.sh/Bo5Fj/7a401f92b0.png
https://puu.sh/Bo5IQ/3dccbe6480.png
**Version**
4.3.16
|
1.0
|
Error on #!gban add - **Describe the bug**
Adding people into the guild blacklist results in an error. Both by ID and mention
**Screenshots**
https://puu.sh/Bo5Fj/7a401f92b0.png
https://puu.sh/Bo5IQ/3dccbe6480.png
**Version**
4.3.16
|
priority
|
error on gban add describe the bug adding people into the guild blacklist results in an error both by id and mention screenshots version
| 1
|
120,672
| 4,792,815,107
|
IssuesEvent
|
2016-10-31 16:29:00
|
Tour-de-Force/btc-models
|
https://api.github.com/repos/Tour-de-Force/btc-models
|
closed
|
When hours are not inputted for a service it displays as closed.
|
needs review Priority: High
|
Do the models component of https://github.com/Tour-de-Force/btc-app/issues/69
|
1.0
|
When hours are not inputted for a service it displays as closed. - Do the models component of https://github.com/Tour-de-Force/btc-app/issues/69
|
priority
|
when hours are not inputted for a service it displays as closed do the models component of
| 1
|
1,779
| 2,519,644,985
|
IssuesEvent
|
2015-01-18 05:06:13
|
jdmack/tlb
|
https://api.github.com/repos/jdmack/tlb
|
closed
|
Renderer::render_in_frame()
|
Coding priority:high Task
|
Add function to Renderer that will render a texture inside a frame. It will take a Frame object as a parameter and then create a new SDL_Rect that renders the object as an offset of the frame.
|
1.0
|
Renderer::render_in_frame() - Add function to Renderer that will render a texture inside a frame. It will take a Frame object as a parameter and then create a new SDL_Rect that renders the object as an offset of the frame.
|
priority
|
renderer render in frame add function to renderer that will render a texture inside a frame it will take a frame object as a parameter and then create a new sdl rect that renders the object as an offset of the frame
| 1
|
562,984
| 16,674,279,093
|
IssuesEvent
|
2021-06-07 14:28:58
|
georchestra/mapstore2-georchestra
|
https://api.github.com/repos/georchestra/mapstore2-georchestra
|
closed
|
Regression on map templates from WMC
|
Priority: High
|
Map template imported from a WMC from Mapfishapp are no longer compatible with mapstore.
It is no longer possible to **display the attribute table of the layers present in these** Map Templates.
**URL:** https://portail-test.sig.rennesmetropole.fr/mapstore or
https://portail.sig.rennesmetropole.fr
**Version mapstore2 :** tag v1.2.x from https://hub.docker.com/r/geosolutionsit/mapstore2-georchestra/tags?page=1&ordering=last_updated
**To reproduce the issue:**
- [ ] 1/ open mapstore
- [ ] 2/ add a map template from "cartes modèles"
- [ ] 3/ select a layer
- [ ] 4/ TOC toolbar for this layer is reduced , **it's not OK - we are expecting for a full TOC toolbar**

- [ ] 5/ Load from Ajouter des données/Géoservices (/geoserver) the layer "Localisation des stations de métro"
- [ ] 6/ TOC toolbar for the layer is full , **it's OK**

All our MapTemplates (46) were created from mapfishapp WMC and exported in json format before loading it in context manager as map templates.
This problem has appeared since last week (new mapstore deployment)
**What could be the reason?**
Can we fix this quickly as we are training users next week ?
|
1.0
|
Regression on map templates from WMC - Map template imported from a WMC from Mapfishapp are no longer compatible with mapstore.
It is no longer possible to **display the attribute table of the layers present in these** Map Templates.
**URL:** https://portail-test.sig.rennesmetropole.fr/mapstore or
https://portail.sig.rennesmetropole.fr
**Version mapstore2 :** tag v1.2.x from https://hub.docker.com/r/geosolutionsit/mapstore2-georchestra/tags?page=1&ordering=last_updated
**To reproduce the issue:**
- [ ] 1/ open mapstore
- [ ] 2/ add a map template from "cartes modèles"
- [ ] 3/ select a layer
- [ ] 4/ TOC toolbar for this layer is reduced , **it's not OK - we are expecting for a full TOC toolbar**

- [ ] 5/ Load from Ajouter des données/Géoservices (/geoserver) the layer "Localisation des stations de métro"
- [ ] 6/ TOC toolbar for the layer is full , **it's OK**

All our MapTemplates (46) were created from mapfishapp WMC and exported in json format before loading it in context manager as map templates.
This problem has appeared since last week (new mapstore deployment)
**What could be the reason?**
Can we fix this quickly as we are training users next week ?
|
priority
|
regression on map templates from wmc map template imported from a wmc from mapfishapp are no longer compatible with mapstore it is no longer possible to display the attribute table of the layers present in these map templates url or version tag x from to reproduce the issue open mapstore add a map template from cartes modèles select a layer toc toolbar for this layer is reduced it s not ok we are expecting for a full toc toolbar load from ajouter des données géoservices geoserver the layer localisation des stations de métro toc toolbar for the layer is full it s ok all our maptemplates were created from mapfishapp wmc and exported in json format before loading it in context manager as map templates this problem has appeared since last week new mapstore deployment what could be the reason can we fix this quickly as we are training users next week
| 1
|
261,152
| 8,224,934,220
|
IssuesEvent
|
2018-09-06 14:55:52
|
muflihun/easyloggingpp
|
https://api.github.com/repos/muflihun/easyloggingpp
|
opened
|
Lever not assigned in Writer via custom LogMessage
|
accepted bug high-priority
|
This issue was originally noticed in residue logging server.
Basically, in Writer construction with custom LogMessage, m_level is not assigned, as a result level's enabled/disabled state is dependent upon undefined behaviour.
We need to assign anything that is available in msg as long as they're valid values, especially m_level as it's checked at `construct`
|
1.0
|
Lever not assigned in Writer via custom LogMessage - This issue was originally noticed in residue logging server.
Basically, in Writer construction with custom LogMessage, m_level is not assigned, as a result level's enabled/disabled state is dependent upon undefined behaviour.
We need to assign anything that is available in msg as long as they're valid values, especially m_level as it's checked at `construct`
|
priority
|
lever not assigned in writer via custom logmessage this issue was originally noticed in residue logging server basically in writer construction with custom logmessage m level is not assigned as a result level s enabled disabled state is dependent upon undefined behaviour we need to assign anything that is available in msg as long as they re valid values especially m level as it s checked at construct
| 1
|
553,738
| 16,381,412,919
|
IssuesEvent
|
2021-05-17 03:42:46
|
AstroHuntsman/huntsman-pocs
|
https://api.github.com/repos/AstroHuntsman/huntsman-pocs
|
closed
|
Meridian flip problem for long observations
|
high priority
|
`Duration` constraint is vetoing CenA observations:
`Observation minimum can't be met before meridian flip`
Hack is to remove the constraint:
```
def create_huntsman_scheduler(**kwargs):
""" Create scheduler, including configurable moon avoidance.
TODO: Implement this in panoptes-pocs.
"""
scheduler = create_scheduler_from_config(**kwargs)
constraints = [c for c in scheduler.constraints if not isinstance(c, MoonAvoidance)]
constraints = [c for c in scheduler.constraints if not isinstance(c, Duration)]
constraints.append(HuntsmanMoonAvoidance())
scheduler.constraints = constraints
return scheduler
```
But we need to figure out why this is happening and stop it, since meridian is not for another 1.5 hours.
|
1.0
|
Meridian flip problem for long observations - `Duration` constraint is vetoing CenA observations:
`Observation minimum can't be met before meridian flip`
Hack is to remove the constraint:
```
def create_huntsman_scheduler(**kwargs):
""" Create scheduler, including configurable moon avoidance.
TODO: Implement this in panoptes-pocs.
"""
scheduler = create_scheduler_from_config(**kwargs)
constraints = [c for c in scheduler.constraints if not isinstance(c, MoonAvoidance)]
constraints = [c for c in scheduler.constraints if not isinstance(c, Duration)]
constraints.append(HuntsmanMoonAvoidance())
scheduler.constraints = constraints
return scheduler
```
But we need to figure out why this is happening and stop it, since meridian is not for another 1.5 hours.
|
priority
|
meridian flip problem for long observations duration constraint is vetoing cena observations observation minimum can t be met before meridian flip hack is to remove the constraint def create huntsman scheduler kwargs create scheduler including configurable moon avoidance todo implement this in panoptes pocs scheduler create scheduler from config kwargs constraints constraints constraints append huntsmanmoonavoidance scheduler constraints constraints return scheduler but we need to figure out why this is happening and stop it since meridian is not for another hours
| 1
|
609,365
| 18,871,222,768
|
IssuesEvent
|
2021-11-13 07:40:44
|
ut-code/utmap-times
|
https://api.github.com/repos/ut-code/utmap-times
|
closed
|
インターンstructured contentが表示されない
|
bug Priority: High
|
インターンstructured content
に内容を入力しても表示されないので、修正お願いいたします。
大変恐縮なのですが、月曜日にはクライアントに見せたいので、ASAPでお願いいたします。
|
1.0
|
インターンstructured contentが表示されない - インターンstructured content
に内容を入力しても表示されないので、修正お願いいたします。
大変恐縮なのですが、月曜日にはクライアントに見せたいので、ASAPでお願いいたします。
|
priority
|
インターンstructured contentが表示されない インターンstructured content に内容を入力しても表示されないので、修正お願いいたします。 大変恐縮なのですが、月曜日にはクライアントに見せたいので、asapでお願いいたします。
| 1
|
170,218
| 6,426,524,890
|
IssuesEvent
|
2017-08-09 17:39:02
|
sr320/LabDocs
|
https://api.github.com/repos/sr320/LabDocs
|
opened
|
NO NEW ISSUES! USE NEW REPO!
|
high priority
|
We're migrating to a different GitHub repository for all things Roberts Lab.
Please update your bookmarks and use [RobertsLab/resources Issues page!!!](https://github.com/RobertsLab/resources/issues)
Previously open issues have been transferred over to the the new repo.
|
1.0
|
NO NEW ISSUES! USE NEW REPO! - We're migrating to a different GitHub repository for all things Roberts Lab.
Please update your bookmarks and use [RobertsLab/resources Issues page!!!](https://github.com/RobertsLab/resources/issues)
Previously open issues have been transferred over to the the new repo.
|
priority
|
no new issues use new repo we re migrating to a different github repository for all things roberts lab please update your bookmarks and use previously open issues have been transferred over to the the new repo
| 1
|
585,199
| 17,482,623,011
|
IssuesEvent
|
2021-08-09 06:27:12
|
jalavosus/solvm
|
https://api.github.com/repos/jalavosus/solvm
|
opened
|
Better `ls remote` output
|
enhancement high priority
|
Right now, it's pretty fugly.
Necessary things:
- Colorized output
- Output should show if versions are installed, as well as their aliases if so
|
1.0
|
Better `ls remote` output - Right now, it's pretty fugly.
Necessary things:
- Colorized output
- Output should show if versions are installed, as well as their aliases if so
|
priority
|
better ls remote output right now it s pretty fugly necessary things colorized output output should show if versions are installed as well as their aliases if so
| 1
|
548,950
| 16,082,147,375
|
IssuesEvent
|
2021-04-26 06:46:15
|
uioz/mfe-proxy-cli
|
https://api.github.com/repos/uioz/mfe-proxy-cli
|
closed
|
实现对 `manifest.publicPath` 的支持
|
enhancement high priority
|
# 实现对 `manifest.publicPath` 的支持
https://github.com/uioz/mfe-proxy-server/issues/18
_Originally posted by @uioz in https://github.com/uioz/mfe-proxy-cli/issues/17#issuecomment-826479645_
|
1.0
|
实现对 `manifest.publicPath` 的支持 - # 实现对 `manifest.publicPath` 的支持
https://github.com/uioz/mfe-proxy-server/issues/18
_Originally posted by @uioz in https://github.com/uioz/mfe-proxy-cli/issues/17#issuecomment-826479645_
|
priority
|
实现对 manifest publicpath 的支持 实现对 manifest publicpath 的支持 originally posted by uioz in
| 1
|
173,040
| 6,519,297,012
|
IssuesEvent
|
2017-08-28 12:11:44
|
huridocs/uwazi
|
https://api.github.com/repos/huridocs/uwazi
|
opened
|
Unable to upload new documents
|
NEW Priority: High Type: Bug
|
When trying to upload documents to https://afchpr-commentary.uwazi.io, the document is stuck in "processing" and never stops.

|
1.0
|
Unable to upload new documents - When trying to upload documents to https://afchpr-commentary.uwazi.io, the document is stuck in "processing" and never stops.

|
priority
|
unable to upload new documents when trying to upload documents to the document is stuck in processing and never stops
| 1
|
532,959
| 15,574,266,942
|
IssuesEvent
|
2021-03-17 09:38:17
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
opened
|
"hierarchical tags" crash export
|
bug: pending priority: high
|
**Describe the bug/issue**
select "hierarchical tags" in "edit metadata exportation" dialog. double free.
**To Reproduce**
1. select "hierarchical tags" in "edit metadata exportation" dialog
2. Click on 'export'
**Expected behavior**
No crash obviously!
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : e.g. 3.5.0+250~gee17c5dcc
* OS : Linux
* Linux - Distro : Debian.
|
1.0
|
"hierarchical tags" crash export -
**Describe the bug/issue**
select "hierarchical tags" in "edit metadata exportation" dialog. double free.
**To Reproduce**
1. select "hierarchical tags" in "edit metadata exportation" dialog
2. Click on 'export'
**Expected behavior**
No crash obviously!
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : e.g. 3.5.0+250~gee17c5dcc
* OS : Linux
* Linux - Distro : Debian.
|
priority
|
hierarchical tags crash export describe the bug issue select hierarchical tags in edit metadata exportation dialog double free to reproduce select hierarchical tags in edit metadata exportation dialog click on export expected behavior no crash obviously platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version e g os linux linux distro debian
| 1
|
626,782
| 19,843,117,891
|
IssuesEvent
|
2022-01-21 00:57:22
|
Tedeapolis/development
|
https://api.github.com/repos/Tedeapolis/development
|
closed
|
Wapens bij gangs kunnen labelen/sorteren
|
enhancement accepted high priority
|
**Beschrijf zo duidelijk mogelijk de feature**
Momenteel kun je wapens in een gang huis leggen, maar als je bijvoorbeeld 4 ak's erin legt met allerlei verschillende attachments, is het niet meer terug te zoeken je ziet dan letterlijk 4 keer
- AK
- AK
- AK
- AK
Graag zouden we hebben dat je wapens kunt labelen net zoals je dat nu ook kunt met voertuigen
dus dat je schrijft AK met flashlight of AK van pietje puk.
**Wat lost deze feature op?**
Het misbruiken van auto opslag als wapen kluis en het bieden van een handigheidje voor criminelen.
|
1.0
|
Wapens bij gangs kunnen labelen/sorteren - **Beschrijf zo duidelijk mogelijk de feature**
Momenteel kun je wapens in een gang huis leggen, maar als je bijvoorbeeld 4 ak's erin legt met allerlei verschillende attachments, is het niet meer terug te zoeken je ziet dan letterlijk 4 keer
- AK
- AK
- AK
- AK
Graag zouden we hebben dat je wapens kunt labelen net zoals je dat nu ook kunt met voertuigen
dus dat je schrijft AK met flashlight of AK van pietje puk.
**Wat lost deze feature op?**
Het misbruiken van auto opslag als wapen kluis en het bieden van een handigheidje voor criminelen.
|
priority
|
wapens bij gangs kunnen labelen sorteren beschrijf zo duidelijk mogelijk de feature momenteel kun je wapens in een gang huis leggen maar als je bijvoorbeeld ak s erin legt met allerlei verschillende attachments is het niet meer terug te zoeken je ziet dan letterlijk keer ak ak ak ak graag zouden we hebben dat je wapens kunt labelen net zoals je dat nu ook kunt met voertuigen dus dat je schrijft ak met flashlight of ak van pietje puk wat lost deze feature op het misbruiken van auto opslag als wapen kluis en het bieden van een handigheidje voor criminelen
| 1
|
293,475
| 8,996,248,489
|
IssuesEvent
|
2019-02-02 00:18:20
|
delaford/game
|
https://api.github.com/repos/delaford/game
|
opened
|
Game won't run on Windows
|
bug core engine good first issue high priority
|
<!-- Please don't delete this template or we'll close your issue -->
<!-- Before creating an issue please make sure you are using the latest version of the game. -->
**What is the current behavior?**
When you follow the instructions to install on any Windows machine, you go to the webpage after `npm run serve` and you get `Uncaught SyntaxError: Unexpected token _ in JSON at position 0`
**If the current behavior is a bug, please provide the exact steps to reproduce.**
1. git clone
2. cd game
3. npm install
4. npm run serve
**What is the expected behavior?**
For the game to run without errors on a Windows machine
**Additional context**
Add any other context about the problem here.
|
1.0
|
Game won't run on Windows - <!-- Please don't delete this template or we'll close your issue -->
<!-- Before creating an issue please make sure you are using the latest version of the game. -->
**What is the current behavior?**
When you follow the instructions to install on any Windows machine, you go to the webpage after `npm run serve` and you get `Uncaught SyntaxError: Unexpected token _ in JSON at position 0`
**If the current behavior is a bug, please provide the exact steps to reproduce.**
1. git clone
2. cd game
3. npm install
4. npm run serve
**What is the expected behavior?**
For the game to run without errors on a Windows machine
**Additional context**
Add any other context about the problem here.
|
priority
|
game won t run on windows what is the current behavior when you follow the instructions to install on any windows machine you go to the webpage after npm run serve and you get uncaught syntaxerror unexpected token in json at position if the current behavior is a bug please provide the exact steps to reproduce git clone cd game npm install npm run serve what is the expected behavior for the game to run without errors on a windows machine additional context add any other context about the problem here
| 1
|
38,229
| 2,842,425,785
|
IssuesEvent
|
2015-05-28 09:15:26
|
hydrosolutions/imomo-hydromet-client
|
https://api.github.com/repos/hydrosolutions/imomo-hydromet-client
|
opened
|
Allow editing water level values using the operational journal
|
enhancement Priority high
|
To correct for errors after submission, tricky part is recalculating the discharge with the right model.
|
1.0
|
Allow editing water level values using the operational journal - To correct for errors after submission, tricky part is recalculating the discharge with the right model.
|
priority
|
allow editing water level values using the operational journal to correct for errors after submission tricky part is recalculating the discharge with the right model
| 1
|
47,655
| 2,982,833,728
|
IssuesEvent
|
2015-07-17 14:02:18
|
pombase/curation
|
https://api.github.com/repos/pombase/curation
|
closed
|
query mitotic DNA repliction checkpoint pim1
|
auto-migrated high priority sourceforge
|
pim1 overexpression suppreses DNA replication checkpoint defects. Does that necessarily mean it is involved in replication checkpoint?
http://www.ncbi.nlm.nih.gov/pubmed?term=12135745
I think it could be just be that it is slowing the cell cycle instead of the replication checkpoint?
Thoughts?
Original comment by: ValWood
|
1.0
|
query mitotic DNA repliction checkpoint pim1 -
pim1 overexpression suppreses DNA replication checkpoint defects. Does that necessarily mean it is involved in replication checkpoint?
http://www.ncbi.nlm.nih.gov/pubmed?term=12135745
I think it could be just be that it is slowing the cell cycle instead of the replication checkpoint?
Thoughts?
Original comment by: ValWood
|
priority
|
query mitotic dna repliction checkpoint overexpression suppreses dna replication checkpoint defects does that necessarily mean it is involved in replication checkpoint i think it could be just be that it is slowing the cell cycle instead of the replication checkpoint thoughts original comment by valwood
| 1
|
408,760
| 11,951,681,698
|
IssuesEvent
|
2020-04-03 17:20:11
|
scality/metalk8s
|
https://api.github.com/repos/scality/metalk8s
|
closed
|
CVE-2020-8552: Kube-apiserver vulnerable to Denial of service(DoS)
|
complexity:easy priority:high severity:medium topic:security
|
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately to moonshot-platform@scality.com
-->
**Component**:
<!-- E.g. 'salt', 'containers', 'kubernetes', 'build', 'tests'... -->
'kubernetes'
**What happened**:
Source: https://github.com/kubernetes/kubernetes/issues/89378
The Kubernetes API server has been found to be vulnerable to a denial of service attack via authorized API requests.
CVSS Rating: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L (Medium)
Affected Versions
kube-apiserver v1.17.0 - v1.17.2
kube-apiserver v1.16.0 - v1.16.6
kube-apiserver < v1.15.10
Fixed Versions
v1.17.3
v1.16.7
v1.15.10
**Resolution proposal** (optional):
Bump the Kube-apiserver version for release and to be released branches.
- For branch 2.5 we use kube-apiserver 1.16.2(vulnerable)
- For branch 2.4 we use kube-apiserver 1.15.5(vulnerable)
|
1.0
|
CVE-2020-8552: Kube-apiserver vulnerable to Denial of service(DoS) - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately to moonshot-platform@scality.com
-->
**Component**:
<!-- E.g. 'salt', 'containers', 'kubernetes', 'build', 'tests'... -->
'kubernetes'
**What happened**:
Source: https://github.com/kubernetes/kubernetes/issues/89378
The Kubernetes API server has been found to be vulnerable to a denial of service attack via authorized API requests.
CVSS Rating: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L (Medium)
Affected Versions
kube-apiserver v1.17.0 - v1.17.2
kube-apiserver v1.16.0 - v1.16.6
kube-apiserver < v1.15.10
Fixed Versions
v1.17.3
v1.16.7
v1.15.10
**Resolution proposal** (optional):
Bump the Kube-apiserver version for release and to be released branches.
- For branch 2.5 we use kube-apiserver 1.16.2(vulnerable)
- For branch 2.4 we use kube-apiserver 1.15.5(vulnerable)
|
priority
|
cve kube apiserver vulnerable to denial of service dos please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks if the matter is security related please disclose it privately to moonshot platform scality com component kubernetes what happened source the kubernetes api server has been found to be vulnerable to a denial of service attack via authorized api requests cvss rating cvss av n ac l pr n ui n s u c n i n a l medium affected versions kube apiserver kube apiserver kube apiserver fixed versions resolution proposal optional bump the kube apiserver version for release and to be released branches for branch we use kube apiserver vulnerable for branch we use kube apiserver vulnerable
| 1
|
590,745
| 17,786,719,614
|
IssuesEvent
|
2021-08-31 12:00:25
|
airqo-platform/AirQo-frontend
|
https://api.github.com/repos/airqo-platform/AirQo-frontend
|
closed
|
review the device registry table's search functionality
|
bug priority-high
|
**What were you trying to achieve?**
search device by ID
**What are the expected results?**
filtered and correct results of the search
**What are the received results?**
review the device search functionality, sometimes just putting the number like 19 just returns many details.
**What are the steps to reproduce the issue?**
Just head over to Netmanager and search for devices 19, 20
**In what environment did you encounter the issue?**
Google chrome
**Additional context**
None
|
1.0
|
review the device registry table's search functionality - **What were you trying to achieve?**
search device by ID
**What are the expected results?**
filtered and correct results of the search
**What are the received results?**
review the device search functionality, sometimes just putting the number like 19 just returns many details.
**What are the steps to reproduce the issue?**
Just head over to Netmanager and search for devices 19, 20
**In what environment did you encounter the issue?**
Google chrome
**Additional context**
None
|
priority
|
review the device registry table s search functionality what were you trying to achieve search device by id what are the expected results filtered and correct results of the search what are the received results review the device search functionality sometimes just putting the number like just returns many details what are the steps to reproduce the issue just head over to netmanager and search for devices in what environment did you encounter the issue google chrome additional context none
| 1
|
631,437
| 20,151,672,765
|
IssuesEvent
|
2022-02-09 13:01:14
|
dice-group/LIMES
|
https://api.github.com/repos/dice-group/LIMES
|
closed
|
Trigrams similarity strange results
|
Priority: High Status: Available Type: Bug
|
The string pair `"Impasto Pizza & Cafe", "Mpoukia & Sychorio"` has a high similarity value for trigrams but it should not. Need to investigate.
|
1.0
|
Trigrams similarity strange results - The string pair `"Impasto Pizza & Cafe", "Mpoukia & Sychorio"` has a high similarity value for trigrams but it should not. Need to investigate.
|
priority
|
trigrams similarity strange results the string pair impasto pizza cafe mpoukia sychorio has a high similarity value for trigrams but it should not need to investigate
| 1
|
327,126
| 9,966,794,847
|
IssuesEvent
|
2019-07-08 12:08:40
|
LycheeOrg/Lychee-Laravel
|
https://api.github.com/repos/LycheeOrg/Lychee-Laravel
|
closed
|
Search doesn't work: Server error or API not found.
|
High Priority bug
|
### Detailed description of the problem
If I want to use the search I get the message `Server error or API not found`. I think the problem only occurs when something is found. If there is no result the search works (`No results` page).
### Steps to reproduce the issue
**Steps to reproduce the behavior:**
1. Go to the *Dashboard* (Album overview)
2. Click on `Search`
3. Enter something for which a picture should be found.
### Output of the diagnostics
```text
Diagnostics
-----------
Warning: Dropbox import not working. No property for dropboxKey.
System Information
------------------
Lychee-front Version: 3.2.16
Lychee Version (git): 64d9436 (master) - Up to date.
DB Version: 040000
System: Linux
PHP Version: 7.3
PostgreSQL Version: PostgreSQL 11.4 (Debian 11.4-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
Lychee total space: 213.57 MB
Upload folder space: 156.46 MB
System total space: 55.27 GB
System free space: 48.36 GB (87%)
Imagick: 1
Imagick Active: 1
Imagick Version: 1687
GD Version: 2.2.5
Config Information
------------------
version: 040000
checkForUpdates: 1
layout: 1
image_overlay: 1
image_overlay_type: exif
full_photo: 1
Mod_Frame: 0
Mod_Frame_refresh: 30000
landing_twitter:
landing_instagram:
landing_youtube:
landing_page_enable: 0
compression_quality: 94
site_title: Lychee v4
landing_owner: Test
sortingPhotos_col: takestamp
sortingPhotos_order: ASC
sortingAlbums_col: description
sortingAlbums_order: DESC
imagick: 1
skipDuplicates: 0
small_max_width: 0
small_max_height: 360
medium_max_width: 1920
medium_max_height: 1080
default_license: none
deleteImported: 1
landing_subtitle: Cat, Dogs & Humans Photography
landing_title: Test
landing_background: dist/cat.jpg
thumb_2x: 1
small_2x: 0
medium_2x: 0
site_copyright_begin: 2019
site_copyright_end: 2019
additional_footer_text:
display_social_in_gallery: 0
public_recent: 0
recent_age: 1
public_starred: 0
site_copyright_enable: 0
public_search: 0
landing_facebook:
landing_flickr:
lang: en
```
### Browser and system
* Firefox 67.0.4 (64-Bit)
* Arch Linux / Kernel 5.1.16
### Response
* Action: /api/search
* Status-Code: 500 / Internal Server Error
* Method: POST
Content:
```json
{
"message": "Call to undefined method App\\ModelFunctions\\PhotoFunctions::getUrl()",
"exception": "Symfony\\Component\\Debug\\Exception\\FatalThrowableError",
"file": "/var/www/lychee/app/Http/Controllers/SearchController.php",
"line": 207,
"trace": [
{
"function": "search",
"class": "App\\Http\\Controllers\\SearchController",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Controller.php",
"line": 54,
"function": "call_user_func_array"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/ControllerDispatcher.php",
"line": 45,
"function": "callAction",
"class": "Illuminate\\Routing\\Controller",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Route.php",
"line": 219,
"function": "dispatch",
"class": "Illuminate\\Routing\\ControllerDispatcher",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Route.php",
"line": 176,
"function": "runController",
"class": "Illuminate\\Routing\\Route",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 680,
"function": "run",
"class": "Illuminate\\Routing\\Route",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 30,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Middleware/SubstituteBindings.php",
"line": 41,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Routing\\Middleware\\SubstituteBindings",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/VerifyCsrfToken.php",
"line": 75,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/app/Http/Middleware/VerifyCsrfToken.php",
"line": 60,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\VerifyCsrfToken",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "App\\Http\\Middleware\\VerifyCsrfToken",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Session/Middleware/AuthenticateSession.php",
"line": 39,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Session\\Middleware\\AuthenticateSession",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/View/Middleware/ShareErrorsFromSession.php",
"line": 49,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\View\\Middleware\\ShareErrorsFromSession",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Session/Middleware/StartSession.php",
"line": 56,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Session\\Middleware\\StartSession",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Cookie/Middleware/AddQueuedCookiesToResponse.php",
"line": 37,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Cookie\\Middleware\\AddQueuedCookiesToResponse",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Cookie/Middleware/EncryptCookies.php",
"line": 66,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Cookie\\Middleware\\EncryptCookies",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 104,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 682,
"function": "then",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 657,
"function": "runRouteWithinStack",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 623,
"function": "runRoute",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 612,
"function": "dispatchToRoute",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 176,
"function": "dispatch",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 30,
"function": "Illuminate\\Foundation\\Http\\{closure}",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/bepsvpt/secure-headers/src/SecureHeadersMiddleware.php",
"line": 21,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Bepsvpt\\SecureHeaders\\SecureHeadersMiddleware",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/fideloper/proxy/src/TrustProxies.php",
"line": 57,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Fideloper\\Proxy\\TrustProxies",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php",
"line": 21,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\TransformsRequest",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php",
"line": 21,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\TransformsRequest",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ValidatePostSize.php",
"line": 27,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\ValidatePostSize",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/CheckForMaintenanceMode.php",
"line": 62,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\CheckForMaintenanceMode",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 104,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 151,
"function": "then",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 116,
"function": "sendRequestThroughRouter",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
},
{
"file": "/var/www/lychee/public/index.php",
"line": 54,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
}
]
}
```
|
1.0
|
Search doesn't work: Server error or API not found. - ### Detailed description of the problem
If I want to use the search I get the message `Server error or API not found`. I think the problem only occurs when something is found. If there is no result the search works (`No results` page).
### Steps to reproduce the issue
**Steps to reproduce the behavior:**
1. Go to the *Dashboard* (Album overview)
2. Click on `Search`
3. Enter something for which a picture should be found.
### Output of the diagnostics
```text
Diagnostics
-----------
Warning: Dropbox import not working. No property for dropboxKey.
System Information
------------------
Lychee-front Version: 3.2.16
Lychee Version (git): 64d9436 (master) - Up to date.
DB Version: 040000
System: Linux
PHP Version: 7.3
PostgreSQL Version: PostgreSQL 11.4 (Debian 11.4-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
Lychee total space: 213.57 MB
Upload folder space: 156.46 MB
System total space: 55.27 GB
System free space: 48.36 GB (87%)
Imagick: 1
Imagick Active: 1
Imagick Version: 1687
GD Version: 2.2.5
Config Information
------------------
version: 040000
checkForUpdates: 1
layout: 1
image_overlay: 1
image_overlay_type: exif
full_photo: 1
Mod_Frame: 0
Mod_Frame_refresh: 30000
landing_twitter:
landing_instagram:
landing_youtube:
landing_page_enable: 0
compression_quality: 94
site_title: Lychee v4
landing_owner: Test
sortingPhotos_col: takestamp
sortingPhotos_order: ASC
sortingAlbums_col: description
sortingAlbums_order: DESC
imagick: 1
skipDuplicates: 0
small_max_width: 0
small_max_height: 360
medium_max_width: 1920
medium_max_height: 1080
default_license: none
deleteImported: 1
landing_subtitle: Cat, Dogs & Humans Photography
landing_title: Test
landing_background: dist/cat.jpg
thumb_2x: 1
small_2x: 0
medium_2x: 0
site_copyright_begin: 2019
site_copyright_end: 2019
additional_footer_text:
display_social_in_gallery: 0
public_recent: 0
recent_age: 1
public_starred: 0
site_copyright_enable: 0
public_search: 0
landing_facebook:
landing_flickr:
lang: en
```
### Browser and system
* Firefox 67.0.4 (64-Bit)
* Arch Linux / Kernel 5.1.16
### Response
* Action: /api/search
* Status-Code: 500 / Internal Server Error
* Method: POST
Content:
```json
{
"message": "Call to undefined method App\\ModelFunctions\\PhotoFunctions::getUrl()",
"exception": "Symfony\\Component\\Debug\\Exception\\FatalThrowableError",
"file": "/var/www/lychee/app/Http/Controllers/SearchController.php",
"line": 207,
"trace": [
{
"function": "search",
"class": "App\\Http\\Controllers\\SearchController",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Controller.php",
"line": 54,
"function": "call_user_func_array"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/ControllerDispatcher.php",
"line": 45,
"function": "callAction",
"class": "Illuminate\\Routing\\Controller",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Route.php",
"line": 219,
"function": "dispatch",
"class": "Illuminate\\Routing\\ControllerDispatcher",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Route.php",
"line": 176,
"function": "runController",
"class": "Illuminate\\Routing\\Route",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 680,
"function": "run",
"class": "Illuminate\\Routing\\Route",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 30,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Middleware/SubstituteBindings.php",
"line": 41,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Routing\\Middleware\\SubstituteBindings",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/VerifyCsrfToken.php",
"line": 75,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/app/Http/Middleware/VerifyCsrfToken.php",
"line": 60,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\VerifyCsrfToken",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "App\\Http\\Middleware\\VerifyCsrfToken",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Session/Middleware/AuthenticateSession.php",
"line": 39,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Session\\Middleware\\AuthenticateSession",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/View/Middleware/ShareErrorsFromSession.php",
"line": 49,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\View\\Middleware\\ShareErrorsFromSession",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Session/Middleware/StartSession.php",
"line": 56,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Session\\Middleware\\StartSession",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Cookie/Middleware/AddQueuedCookiesToResponse.php",
"line": 37,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Cookie\\Middleware\\AddQueuedCookiesToResponse",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Cookie/Middleware/EncryptCookies.php",
"line": 66,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Cookie\\Middleware\\EncryptCookies",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 104,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 682,
"function": "then",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 657,
"function": "runRouteWithinStack",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 623,
"function": "runRoute",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Router.php",
"line": 612,
"function": "dispatchToRoute",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 176,
"function": "dispatch",
"class": "Illuminate\\Routing\\Router",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 30,
"function": "Illuminate\\Foundation\\Http\\{closure}",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/bepsvpt/secure-headers/src/SecureHeadersMiddleware.php",
"line": 21,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Bepsvpt\\SecureHeaders\\SecureHeadersMiddleware",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/fideloper/proxy/src/TrustProxies.php",
"line": 57,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Fideloper\\Proxy\\TrustProxies",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php",
"line": 21,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\TransformsRequest",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php",
"line": 21,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\TransformsRequest",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ValidatePostSize.php",
"line": 27,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\ValidatePostSize",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/CheckForMaintenanceMode.php",
"line": 62,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 163,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\CheckForMaintenanceMode",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php",
"line": 53,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 104,
"function": "Illuminate\\Routing\\{closure}",
"class": "Illuminate\\Routing\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 151,
"function": "then",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/var/www/lychee/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 116,
"function": "sendRequestThroughRouter",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
},
{
"file": "/var/www/lychee/public/index.php",
"line": 54,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
}
]
}
```
|
priority
|
search doesn t work server error or api not found detailed description of the problem if i want to use the search i get the message server error or api not found i think the problem only occurs when something is found if there is no result the search works no results page steps to reproduce the issue steps to reproduce the behavior go to the dashboard album overview click on search enter something for which a picture should be found output of the diagnostics text diagnostics warning dropbox import not working no property for dropboxkey system information lychee front version lychee version git master up to date db version system linux php version postgresql version postgresql debian on pc linux gnu compiled by gcc debian bit lychee total space mb upload folder space mb system total space gb system free space gb imagick imagick active imagick version gd version config information version checkforupdates layout image overlay image overlay type exif full photo mod frame mod frame refresh landing twitter landing instagram landing youtube landing page enable compression quality site title lychee landing owner test sortingphotos col takestamp sortingphotos order asc sortingalbums col description sortingalbums order desc imagick skipduplicates small max width small max height medium max width medium max height default license none deleteimported landing subtitle cat dogs humans photography landing title test landing background dist cat jpg thumb small medium site copyright begin site copyright end additional footer text display social in gallery public recent recent age public starred site copyright enable public search landing facebook landing flickr lang en browser and system firefox bit arch linux kernel response action api search status code internal server error method post content json message call to undefined method app modelfunctions photofunctions geturl exception symfony component debug exception fatalthrowableerror file var www lychee app http controllers searchcontroller php line trace function search class app http controllers searchcontroller type file var www lychee vendor laravel framework src illuminate routing controller php line function call user func array file var www lychee vendor laravel framework src illuminate routing controllerdispatcher php line function callaction class illuminate routing controller type file var www lychee vendor laravel framework src illuminate routing route php line function dispatch class illuminate routing controllerdispatcher type file var www lychee vendor laravel framework src illuminate routing route php line function runcontroller class illuminate routing route type file var www lychee vendor laravel framework src illuminate routing router php line function run class illuminate routing route type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate routing closure class illuminate routing router type file var www lychee vendor laravel framework src illuminate routing middleware substitutebindings php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate routing middleware substitutebindings type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate foundation http middleware verifycsrftoken php line function illuminate routing closure class illuminate routing pipeline type file var www lychee app http middleware verifycsrftoken php line function handle class illuminate foundation http middleware verifycsrftoken type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class app http middleware verifycsrftoken type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate session middleware authenticatesession php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate session middleware authenticatesession type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate view middleware shareerrorsfromsession php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate view middleware shareerrorsfromsession type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate session middleware startsession php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate session middleware startsession type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate cookie middleware addqueuedcookiestoresponse php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate cookie middleware addqueuedcookiestoresponse type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate cookie middleware encryptcookies php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate cookie middleware encryptcookies type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate routing router php line function then class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate routing router php line function runroutewithinstack class illuminate routing router type file var www lychee vendor laravel framework src illuminate routing router php line function runroute class illuminate routing router type file var www lychee vendor laravel framework src illuminate routing router php line function dispatchtoroute class illuminate routing router type file var www lychee vendor laravel framework src illuminate foundation http kernel php line function dispatch class illuminate routing router type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate foundation http closure class illuminate foundation http kernel type file var www lychee vendor bepsvpt secure headers src secureheadersmiddleware php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class bepsvpt secureheaders secureheadersmiddleware type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor fideloper proxy src trustproxies php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class fideloper proxy trustproxies type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate foundation http middleware transformsrequest php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate foundation http middleware transformsrequest type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate foundation http middleware transformsrequest php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate foundation http middleware transformsrequest type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate foundation http middleware validatepostsize php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate foundation http middleware validatepostsize type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate foundation http middleware checkformaintenancemode php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function handle class illuminate foundation http middleware checkformaintenancemode type file var www lychee vendor laravel framework src illuminate routing pipeline php line function illuminate pipeline closure class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate pipeline pipeline php line function illuminate routing closure class illuminate routing pipeline type file var www lychee vendor laravel framework src illuminate foundation http kernel php line function then class illuminate pipeline pipeline type file var www lychee vendor laravel framework src illuminate foundation http kernel php line function sendrequestthroughrouter class illuminate foundation http kernel type file var www lychee public index php line function handle class illuminate foundation http kernel type
| 1
|
369,405
| 10,904,923,216
|
IssuesEvent
|
2019-11-20 09:44:06
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
Implicit grant doesn't regenerate the JWT
|
3.1.0 Priority/Highest WUM
|
APIMTokenIssuer needs to override the newly introduced method in OauthTokenIssuer. Please refer [1]
[1] https://github.com/wso2-extensions/identity-inbound-auth-oauth/pull/1193
|
1.0
|
Implicit grant doesn't regenerate the JWT - APIMTokenIssuer needs to override the newly introduced method in OauthTokenIssuer. Please refer [1]
[1] https://github.com/wso2-extensions/identity-inbound-auth-oauth/pull/1193
|
priority
|
implicit grant doesn t regenerate the jwt apimtokenissuer needs to override the newly introduced method in oauthtokenissuer please refer
| 1
|
164,144
| 6,219,828,195
|
IssuesEvent
|
2017-07-09 17:10:25
|
CS2103JUN2017-T01-T1/main
|
https://api.github.com/repos/CS2103JUN2017-T01-T1/main
|
closed
|
Support for event task
|
priority.high type.enhancement
|
Need to implement an event task capable of supporting start date/time and end date/time
|
1.0
|
Support for event task - Need to implement an event task capable of supporting start date/time and end date/time
|
priority
|
support for event task need to implement an event task capable of supporting start date time and end date time
| 1
|
253,258
| 8,053,480,815
|
IssuesEvent
|
2018-08-01 23:15:23
|
ohni-us/android-app
|
https://api.github.com/repos/ohni-us/android-app
|
closed
|
Pin code functionality
|
high priority security
|
- [x] Functional pin entry
- [x] Lock the app until unlocked
- [ ] Include pin set up
- [ ] Redesign screen
- [ ] Lock app on app close
4 or 6 digits?
|
1.0
|
Pin code functionality - - [x] Functional pin entry
- [x] Lock the app until unlocked
- [ ] Include pin set up
- [ ] Redesign screen
- [ ] Lock app on app close
4 or 6 digits?
|
priority
|
pin code functionality functional pin entry lock the app until unlocked include pin set up redesign screen lock app on app close or digits
| 1
|
812,678
| 30,347,687,969
|
IssuesEvent
|
2023-07-11 16:31:18
|
Automattic/woocommerce-payments
|
https://api.github.com/repos/Automattic/woocommerce-payments
|
opened
|
Subscription currency can be changed when manually renewing with Multi-Currency
|
type: bug priority: high component: customer multi-currency category: core
|
### Describe the bug
When manually renewing a subscription, the currency is able to be changed through the url parameter:`?currency=XXX`.
As far as I can tell, if the currency is changed and the customer tries to check out, they receive an error, however, this creates a larger problem. Once the checkout is submitted, the order is created and the order currency is then set as the currency on checkout, and from there it cannot be reverted without editing the meta data of the order itself. The bright side is that this means that the check for paying for a manual order is working correctly.
### To Reproduce
<!-- Describe the steps to reproduce the behavior. -->
Note: You may need to use WooCommerce Subscriptions directly, and you will need to have manual renewals enabled under WooCommerce > Settings > Subscriptions.
1. Create a simple subscription product.
2. Add one additional currency under WooCommerce > Settings > Multi-Currency.
3. Add the subscription product to your cart, and check out using a currency that is not the store default currency.
4. On the order confirmation page, click into the subscription.
5. Choose to _Renew now_.
6. On the pay for order/checkout page, update the url to include `?currency=XXX` replacing `XXX` with your default currency code.
7. You will now see the currency change on the page, which it shouldn't be doing.
### Expected behavior
The currency on the checkout page shouldn't change if there is a subscription renewal in the cart.
### Additional context
Brought up in testing in:
https://github.com/Automattic/woocommerce-payments/pull/5489
Previously brought up and fixed:
https://github.com/Automattic/woocommerce-payments/issues/5356
https://github.com/Automattic/woocommerce-payments/pull/5404
|
1.0
|
Subscription currency can be changed when manually renewing with Multi-Currency - ### Describe the bug
When manually renewing a subscription, the currency is able to be changed through the url parameter:`?currency=XXX`.
As far as I can tell, if the currency is changed and the customer tries to check out, they receive an error, however, this creates a larger problem. Once the checkout is submitted, the order is created and the order currency is then set as the currency on checkout, and from there it cannot be reverted without editing the meta data of the order itself. The bright side is that this means that the check for paying for a manual order is working correctly.
### To Reproduce
<!-- Describe the steps to reproduce the behavior. -->
Note: You may need to use WooCommerce Subscriptions directly, and you will need to have manual renewals enabled under WooCommerce > Settings > Subscriptions.
1. Create a simple subscription product.
2. Add one additional currency under WooCommerce > Settings > Multi-Currency.
3. Add the subscription product to your cart, and check out using a currency that is not the store default currency.
4. On the order confirmation page, click into the subscription.
5. Choose to _Renew now_.
6. On the pay for order/checkout page, update the url to include `?currency=XXX` replacing `XXX` with your default currency code.
7. You will now see the currency change on the page, which it shouldn't be doing.
### Expected behavior
The currency on the checkout page shouldn't change if there is a subscription renewal in the cart.
### Additional context
Brought up in testing in:
https://github.com/Automattic/woocommerce-payments/pull/5489
Previously brought up and fixed:
https://github.com/Automattic/woocommerce-payments/issues/5356
https://github.com/Automattic/woocommerce-payments/pull/5404
|
priority
|
subscription currency can be changed when manually renewing with multi currency describe the bug when manually renewing a subscription the currency is able to be changed through the url parameter currency xxx as far as i can tell if the currency is changed and the customer tries to check out they receive an error however this creates a larger problem once the checkout is submitted the order is created and the order currency is then set as the currency on checkout and from there it cannot be reverted without editing the meta data of the order itself the bright side is that this means that the check for paying for a manual order is working correctly to reproduce note you may need to use woocommerce subscriptions directly and you will need to have manual renewals enabled under woocommerce settings subscriptions create a simple subscription product add one additional currency under woocommerce settings multi currency add the subscription product to your cart and check out using a currency that is not the store default currency on the order confirmation page click into the subscription choose to renew now on the pay for order checkout page update the url to include currency xxx replacing xxx with your default currency code you will now see the currency change on the page which it shouldn t be doing expected behavior the currency on the checkout page shouldn t change if there is a subscription renewal in the cart additional context brought up in testing in previously brought up and fixed
| 1
|
441,146
| 12,708,771,927
|
IssuesEvent
|
2020-06-23 11:08:58
|
vitreo12/omni
|
https://api.github.com/repos/vitreo12/omni
|
closed
|
new blocks always introduce new scope
|
high priority
|
Here `value` are two different variable declarations (due to `declaredInScope`)... This forces the user to declare any re-usable variable in `init`.
```nim
sample:
value = 0.0
for i in 0..10:
value = value + i
```
```nim
init:
value sig
sample:
value = 0.0
for i in 0..10:
value = value + i
```
|
1.0
|
new blocks always introduce new scope - Here `value` are two different variable declarations (due to `declaredInScope`)... This forces the user to declare any re-usable variable in `init`.
```nim
sample:
value = 0.0
for i in 0..10:
value = value + i
```
```nim
init:
value sig
sample:
value = 0.0
for i in 0..10:
value = value + i
```
|
priority
|
new blocks always introduce new scope here value are two different variable declarations due to declaredinscope this forces the user to declare any re usable variable in init nim sample value for i in value value i nim init value sig sample value for i in value value i
| 1
|
809,401
| 30,191,032,279
|
IssuesEvent
|
2023-07-04 15:23:55
|
giantswarm/roadmap
|
https://api.github.com/repos/giantswarm/roadmap
|
closed
|
PSP deprecation and kyverno-PSS compliance
|
priority/high team/honeybadger effort/l
|
Dashboard link - non-compliant apps: https://giantswarm.grafana.net/d/e5SZJRo4z/security-teams-overview?orgId=1&var-team=honeybadger&var-category=All&var-policy=All&var-app=All&var-deployment=All&var-daemonset=All
Also, info from Zach:
Questions
1. when we should be ready with our apps to pass kyverno-pss validation?
2. Can we entirely remove PSPs from our apps (and not just make them conditional on the API version) and assume it's OK to use admin level PSP by definition everywhere, as bad stuff will be blocked by kyverno anyway?
Answers:
1. The "hard" deadline is v20 because that's when PSPs will be gone and we'll have only Kyverno. The beta will be this month https://github.com/giantswarm/giantswarm/issues/26694, but kyverno won't be enforcing in beta
2. Kyverno is only auditing prior to v20, so PSPs should still be there. If you don't include one, you'll get the default restricted PSP, so it depends on your applications. If you don't actually need any privileges not granted by restricted then sure, you don't need to ship a PSP at all and can just accept the default one.
## TODO
- app-admission-controller: https://github.com/giantswarm/app-admission-controller/pull/318
- app-exporter: https://github.com/giantswarm/app-exporter/pull/334
- app-operator: done in https://github.com/giantswarm/app-operator/pull/1023
- cluster-apps-operator: https://github.com/giantswarm/cluster-apps-operator/pull/362
- config-controller: https://github.com/giantswarm/config-controller/pull/278
- flux-app:
- https://github.com/giantswarm/flux-app/pull/206
- https://github.com/giantswarm/management-cluster-bases/pull/19
|
1.0
|
PSP deprecation and kyverno-PSS compliance - Dashboard link - non-compliant apps: https://giantswarm.grafana.net/d/e5SZJRo4z/security-teams-overview?orgId=1&var-team=honeybadger&var-category=All&var-policy=All&var-app=All&var-deployment=All&var-daemonset=All
Also, info from Zach:
Questions
1. when we should be ready with our apps to pass kyverno-pss validation?
2. Can we entirely remove PSPs from our apps (and not just make them conditional on the API version) and assume it's OK to use admin level PSP by definition everywhere, as bad stuff will be blocked by kyverno anyway?
Answers:
1. The "hard" deadline is v20 because that's when PSPs will be gone and we'll have only Kyverno. The beta will be this month https://github.com/giantswarm/giantswarm/issues/26694, but kyverno won't be enforcing in beta
2. Kyverno is only auditing prior to v20, so PSPs should still be there. If you don't include one, you'll get the default restricted PSP, so it depends on your applications. If you don't actually need any privileges not granted by restricted then sure, you don't need to ship a PSP at all and can just accept the default one.
## TODO
- app-admission-controller: https://github.com/giantswarm/app-admission-controller/pull/318
- app-exporter: https://github.com/giantswarm/app-exporter/pull/334
- app-operator: done in https://github.com/giantswarm/app-operator/pull/1023
- cluster-apps-operator: https://github.com/giantswarm/cluster-apps-operator/pull/362
- config-controller: https://github.com/giantswarm/config-controller/pull/278
- flux-app:
- https://github.com/giantswarm/flux-app/pull/206
- https://github.com/giantswarm/management-cluster-bases/pull/19
|
priority
|
psp deprecation and kyverno pss compliance dashboard link non compliant apps also info from zach questions when we should be ready with our apps to pass kyverno pss validation can we entirely remove psps from our apps and not just make them conditional on the api version and assume it s ok to use admin level psp by definition everywhere as bad stuff will be blocked by kyverno anyway answers the hard deadline is because that s when psps will be gone and we ll have only kyverno the beta will be this month but kyverno won t be enforcing in beta kyverno is only auditing prior to so psps should still be there if you don t include one you ll get the default restricted psp so it depends on your applications if you don t actually need any privileges not granted by restricted then sure you don t need to ship a psp at all and can just accept the default one todo app admission controller app exporter app operator done in cluster apps operator config controller flux app
| 1
|
749,113
| 26,149,917,132
|
IssuesEvent
|
2022-12-30 11:58:52
|
FunnyGuilds/FunnyGuilds
|
https://api.github.com/repos/FunnyGuilds/FunnyGuilds
|
closed
|
/zapros all nie zaprasza ludzi w okolicy
|
Bug unreproducable Priority: HIGH
|
w zapros all pojawia sie mały bład mianowicie jak wyslemy zapros all to zadnem gracz w promieniu x kratek nie otrzymuje zaproszenia do gildiii mimo wyslania zapro
|
1.0
|
/zapros all nie zaprasza ludzi w okolicy - w zapros all pojawia sie mały bład mianowicie jak wyslemy zapros all to zadnem gracz w promieniu x kratek nie otrzymuje zaproszenia do gildiii mimo wyslania zapro
|
priority
|
zapros all nie zaprasza ludzi w okolicy w zapros all pojawia sie mały bład mianowicie jak wyslemy zapros all to zadnem gracz w promieniu x kratek nie otrzymuje zaproszenia do gildiii mimo wyslania zapro
| 1
|
446,661
| 12,875,914,066
|
IssuesEvent
|
2020-07-11 01:25:10
|
kubesphere/kubesphere
|
https://api.github.com/repos/kubesphere/kubesphere
|
opened
|
self-provisioner can't create pv
|
area/iam kind/bug priority/high
|
**Describe the Bug**
log in as the self-provisioner, and enter its project. Then create a PV. Error msg shows as below
<img width="1280" alt="Screen Shot 2020-07-11 at 9 21 05 AM" src="https://user-images.githubusercontent.com/28859385/87213515-2bf16480-c358-11ea-87c4-4da28529a077.png">
**Versions Used**
KubeSphere: 3.0.0-dev
**Environment**
allinone
**Expected behavior**
self-provisioner is able to create pv
|
1.0
|
self-provisioner can't create pv - **Describe the Bug**
log in as the self-provisioner, and enter its project. Then create a PV. Error msg shows as below
<img width="1280" alt="Screen Shot 2020-07-11 at 9 21 05 AM" src="https://user-images.githubusercontent.com/28859385/87213515-2bf16480-c358-11ea-87c4-4da28529a077.png">
**Versions Used**
KubeSphere: 3.0.0-dev
**Environment**
allinone
**Expected behavior**
self-provisioner is able to create pv
|
priority
|
self provisioner can t create pv describe the bug log in as the self provisioner and enter its project then create a pv error msg shows as below img width alt screen shot at am src versions used kubesphere dev environment allinone expected behavior self provisioner is able to create pv
| 1
|
746,640
| 26,039,641,065
|
IssuesEvent
|
2022-12-22 09:16:58
|
Edirom/Edirom-Online
|
https://api.github.com/repos/Edirom/Edirom-Online
|
reopened
|
fix imageZoom levels in MeasureBasedView (IIIF)
|
Type: bug report Priority: High Status: needs evaluation View: measureBasedView
|
Relates to #262
When changing the zoom level in measureBasedView the image will be reloaded with the wrong dimensions. See here
<img width="740" alt="Bildschirmfoto 2022-12-21 um 16 37 16" src="https://user-images.githubusercontent.com/27779797/208945691-5be5180a-b9d1-409f-8839-8a8b98e728f3.png">
<img width="740" alt="Bildschirmfoto 2022-12-21 um 16 37 21" src="https://user-images.githubusercontent.com/27779797/208945704-a6d939b9-78c2-4301-9cbb-4612aafe8687.png">
Here also the footer is missing. This is problematic, because the dimensions are different then and the measures are moving caused by this.
<img width="740" alt="Bildschirmfoto 2022-12-21 um 16 37 27" src="https://user-images.githubusercontent.com/27779797/208945715-84e72ec3-414c-4cc8-82d6-7555e4226a26.png">
@roewenstrunk I thought we had fixed this for 1.0.0-beta.4, can you imagine, why this occurs here again? Could this problem be caused by the IIIF-providing institution?
|
1.0
|
fix imageZoom levels in MeasureBasedView (IIIF) - Relates to #262
When changing the zoom level in measureBasedView the image will be reloaded with the wrong dimensions. See here
<img width="740" alt="Bildschirmfoto 2022-12-21 um 16 37 16" src="https://user-images.githubusercontent.com/27779797/208945691-5be5180a-b9d1-409f-8839-8a8b98e728f3.png">
<img width="740" alt="Bildschirmfoto 2022-12-21 um 16 37 21" src="https://user-images.githubusercontent.com/27779797/208945704-a6d939b9-78c2-4301-9cbb-4612aafe8687.png">
Here also the footer is missing. This is problematic, because the dimensions are different then and the measures are moving caused by this.
<img width="740" alt="Bildschirmfoto 2022-12-21 um 16 37 27" src="https://user-images.githubusercontent.com/27779797/208945715-84e72ec3-414c-4cc8-82d6-7555e4226a26.png">
@roewenstrunk I thought we had fixed this for 1.0.0-beta.4, can you imagine, why this occurs here again? Could this problem be caused by the IIIF-providing institution?
|
priority
|
fix imagezoom levels in measurebasedview iiif relates to when changing the zoom level in measurebasedview the image will be reloaded with the wrong dimensions see here img width alt bildschirmfoto um src img width alt bildschirmfoto um src here also the footer is missing this is problematic because the dimensions are different then and the measures are moving caused by this img width alt bildschirmfoto um src roewenstrunk i thought we had fixed this for beta can you imagine why this occurs here again could this problem be caused by the iiif providing institution
| 1
|
533,366
| 15,589,687,382
|
IssuesEvent
|
2021-03-18 08:23:52
|
sopra-fs21-group-02/server
|
https://api.github.com/repos/sopra-fs21-group-02/server
|
opened
|
Create map view
|
area:map priority:high task
|
Create a view that shows an externally embedded map.
## Estimate
4h
## User Story
This task belongs to user story #1
|
1.0
|
Create map view - Create a view that shows an externally embedded map.
## Estimate
4h
## User Story
This task belongs to user story #1
|
priority
|
create map view create a view that shows an externally embedded map estimate user story this task belongs to user story
| 1
|
383,650
| 11,360,515,328
|
IssuesEvent
|
2020-01-26 07:41:29
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
right sidebar: Don't show invite more users if you can't invite.
|
area: right-sidebar help wanted priority: high
|
Currently we show "Invite more users" in the right sidebar, to everyone (Edit: not quite, see below, it's an LDAP only issue). This can be disconcerting to folks that expect to be managing users through LDAP.
* If an invitation is required to join, and only admins can invite, we should not show "Invite more users" to non-admins.
* If the only auth method is LDAP, we should not show "Invite more users" to anyone.
We should make sure we also remove "Invite users" from the gear menu as appropriate.
|
1.0
|
right sidebar: Don't show invite more users if you can't invite. - Currently we show "Invite more users" in the right sidebar, to everyone (Edit: not quite, see below, it's an LDAP only issue). This can be disconcerting to folks that expect to be managing users through LDAP.
* If an invitation is required to join, and only admins can invite, we should not show "Invite more users" to non-admins.
* If the only auth method is LDAP, we should not show "Invite more users" to anyone.
We should make sure we also remove "Invite users" from the gear menu as appropriate.
|
priority
|
right sidebar don t show invite more users if you can t invite currently we show invite more users in the right sidebar to everyone edit not quite see below it s an ldap only issue this can be disconcerting to folks that expect to be managing users through ldap if an invitation is required to join and only admins can invite we should not show invite more users to non admins if the only auth method is ldap we should not show invite more users to anyone we should make sure we also remove invite users from the gear menu as appropriate
| 1
|
742,947
| 25,879,423,523
|
IssuesEvent
|
2022-12-14 10:13:47
|
Australian-Genomics/CTRL
|
https://api.github.com/repos/Australian-Genomics/CTRL
|
closed
|
Participant email validation
|
priority: high
|
Ensure the participant email entered into CTRL matches the email address entered into REDCap to verify participant.
|
1.0
|
Participant email validation - Ensure the participant email entered into CTRL matches the email address entered into REDCap to verify participant.
|
priority
|
participant email validation ensure the participant email entered into ctrl matches the email address entered into redcap to verify participant
| 1
|
335,979
| 10,169,367,788
|
IssuesEvent
|
2019-08-08 00:09:48
|
BCcampus/edehr
|
https://api.github.com/repos/BCcampus/edehr
|
opened
|
Display associated documents on the assignment page
|
Effort - Medium Priority - High ~Feature
|
## User story
As a instructor and seed editor user,
I want to be able to see what documents have been linked to each seed from the assignment page
So that I can confirm they are setup correctly at a glance instead of needing to go into each assignment and check the respective pages.
## Description
Display seed documents in assignment table next to the pages header
|
1.0
|
Display associated documents on the assignment page - ## User story
As a instructor and seed editor user,
I want to be able to see what documents have been linked to each seed from the assignment page
So that I can confirm they are setup correctly at a glance instead of needing to go into each assignment and check the respective pages.
## Description
Display seed documents in assignment table next to the pages header
|
priority
|
display associated documents on the assignment page user story as a instructor and seed editor user i want to be able to see what documents have been linked to each seed from the assignment page so that i can confirm they are setup correctly at a glance instead of needing to go into each assignment and check the respective pages description display seed documents in assignment table next to the pages header
| 1
|
356,326
| 10,591,582,088
|
IssuesEvent
|
2019-10-09 11:13:33
|
materna-se/declab
|
https://api.github.com/repos/materna-se/declab
|
closed
|
Add support for date, time and dateTime
|
Priority: High Status: Completed Type: Enhancement
|
We need to add support for date, time and dateTime into the builder, client and server must also recognize the ISO 8601 format and do the conversion automatically
|
1.0
|
Add support for date, time and dateTime - We need to add support for date, time and dateTime into the builder, client and server must also recognize the ISO 8601 format and do the conversion automatically
|
priority
|
add support for date time and datetime we need to add support for date time and datetime into the builder client and server must also recognize the iso format and do the conversion automatically
| 1
|
109,510
| 4,388,646,148
|
IssuesEvent
|
2016-08-08 19:33:28
|
SuperTux/flexlay
|
https://api.github.com/repos/SuperTux/flexlay
|
closed
|
Fix Tile Selector Layout
|
priority:high
|
The tile selector layout on the mono/c# editor is much cleaner and faster to select, because you can just hold down the right mouse button and select the group of tiles to put on your level. But in the flexlay level editor, the tiles are mixed up making it so that you have to select each tile separately.
See in images below:
<b><h4>Flexlay Level Editor:</h4></b>

<b><h4>Mono/C# Editor:</h4></b>

This would make a big difference in the speed of level design with flexlay if you could add it in.
|
1.0
|
Fix Tile Selector Layout - The tile selector layout on the mono/c# editor is much cleaner and faster to select, because you can just hold down the right mouse button and select the group of tiles to put on your level. But in the flexlay level editor, the tiles are mixed up making it so that you have to select each tile separately.
See in images below:
<b><h4>Flexlay Level Editor:</h4></b>

<b><h4>Mono/C# Editor:</h4></b>

This would make a big difference in the speed of level design with flexlay if you could add it in.
|
priority
|
fix tile selector layout the tile selector layout on the mono c editor is much cleaner and faster to select because you can just hold down the right mouse button and select the group of tiles to put on your level but in the flexlay level editor the tiles are mixed up making it so that you have to select each tile separately see in images below flexlay level editor mono c editor this would make a big difference in the speed of level design with flexlay if you could add it in
| 1
|
726,165
| 24,990,467,201
|
IssuesEvent
|
2022-11-02 18:18:22
|
MystenLabs/sui
|
https://api.github.com/repos/MystenLabs/sui
|
closed
|
[observability] Track RocksDB Memory Usage
|
Priority: High sui-node observability storage
|
Step 1: (in progress)
Assisted Ade with this PR: https://github.com/MystenLabs/mysten-infra/pull/116#pullrequestreview-1072935884
Step 2:
Add metrics for each DB along with size, number of items etc
|
1.0
|
[observability] Track RocksDB Memory Usage - Step 1: (in progress)
Assisted Ade with this PR: https://github.com/MystenLabs/mysten-infra/pull/116#pullrequestreview-1072935884
Step 2:
Add metrics for each DB along with size, number of items etc
|
priority
|
track rocksdb memory usage step in progress assisted ade with this pr step add metrics for each db along with size number of items etc
| 1
|
731,939
| 25,237,833,338
|
IssuesEvent
|
2022-11-15 03:28:14
|
apache/incubator-devlake
|
https://api.github.com/repos/apache/incubator-devlake
|
closed
|
[Bug][migration] Pipeline ids changed after upgrade to v0.14
|
type/bug priority/high
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/incubator-devlake/issues?q=is%3Aissue) and found no similar issues.
### What happened
When users upgrade existing Apache DevLake deploy to v0.14
The value of `pipeline.id` and `blueprint.id` got reordered.
Beside, the pipeline stop working and report the following error:

And the old table got deleted:


### What you expected to happen
`pipeline.id` and `blueprint.id` should stay the same after upgrade.
### How to reproduce
deploy v0.12/v0.13 , launch some pipelines, upgrade to v0.14
### Anything else
_No response_
### Version
v0.14.1
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
1.0
|
[Bug][migration] Pipeline ids changed after upgrade to v0.14 - ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/incubator-devlake/issues?q=is%3Aissue) and found no similar issues.
### What happened
When users upgrade existing Apache DevLake deploy to v0.14
The value of `pipeline.id` and `blueprint.id` got reordered.
Beside, the pipeline stop working and report the following error:

And the old table got deleted:


### What you expected to happen
`pipeline.id` and `blueprint.id` should stay the same after upgrade.
### How to reproduce
deploy v0.12/v0.13 , launch some pipelines, upgrade to v0.14
### Anything else
_No response_
### Version
v0.14.1
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
priority
|
pipeline ids changed after upgrade to search before asking i had searched in the and found no similar issues what happened when users upgrade existing apache devlake deploy to the value of pipeline id and blueprint id got reordered beside the pipeline stop working and report the following error and the old table got deleted what you expected to happen pipeline id and blueprint id should stay the same after upgrade how to reproduce deploy launch some pipelines upgrade to anything else no response version are you willing to submit pr yes i am willing to submit a pr code of conduct i agree to follow this project s
| 1
|
375,477
| 11,104,673,372
|
IssuesEvent
|
2019-12-17 08:11:26
|
wso2/ballerina-message-broker
|
https://api.github.com/repos/wso2/ballerina-message-broker
|
closed
|
Change the parent pom to point to the Ballerina parent pom
|
Complexity/Moderate Priority/Highest Severity/Major Type/Task
|
**Description:**
The current parent pom is the C5 pom, and this needs to be changed to the Ballerina parent pom.
Currently there are a couple of issues including MSF4J using v6.0.64 of transport-http (for org.wso2.transport.http.netty) while in Ballerina v6.0.102 is used.
|
1.0
|
Change the parent pom to point to the Ballerina parent pom - **Description:**
The current parent pom is the C5 pom, and this needs to be changed to the Ballerina parent pom.
Currently there are a couple of issues including MSF4J using v6.0.64 of transport-http (for org.wso2.transport.http.netty) while in Ballerina v6.0.102 is used.
|
priority
|
change the parent pom to point to the ballerina parent pom description the current parent pom is the pom and this needs to be changed to the ballerina parent pom currently there are a couple of issues including using of transport http for org transport http netty while in ballerina is used
| 1
|
302,350
| 9,257,244,557
|
IssuesEvent
|
2019-03-17 03:52:22
|
cs2103-ay1819s2-w10-2/main
|
https://api.github.com/repos/cs2103-ay1819s2-w10-2/main
|
closed
|
Set up classes for components of project
|
priority.High type.Task
|
- [x] Description
- [x] Milestones
- [x] employeeList (can just use UniqueEmployeeList)
|
1.0
|
Set up classes for components of project - - [x] Description
- [x] Milestones
- [x] employeeList (can just use UniqueEmployeeList)
|
priority
|
set up classes for components of project description milestones employeelist can just use uniqueemployeelist
| 1
|
93,176
| 3,886,607,804
|
IssuesEvent
|
2016-04-14 02:11:35
|
lale-help/lale-help
|
https://api.github.com/repos/lale-help/lale-help
|
closed
|
'Invite circle' button available to non-admins
|
bug priority:high
|
It seems that possibly in the work on #205 we dropped a check to see if a user is actually entitled to see the 'invite circle' button. This used to be enforced, but now no longer is.
Ex: The user2@lale.help (Helmut Helfer) is not an admin or organizer but has the invite button visible.

This is pervasive for tasks and supplies and should be fixed asap.
|
1.0
|
'Invite circle' button available to non-admins - It seems that possibly in the work on #205 we dropped a check to see if a user is actually entitled to see the 'invite circle' button. This used to be enforced, but now no longer is.
Ex: The user2@lale.help (Helmut Helfer) is not an admin or organizer but has the invite button visible.

This is pervasive for tasks and supplies and should be fixed asap.
|
priority
|
invite circle button available to non admins it seems that possibly in the work on we dropped a check to see if a user is actually entitled to see the invite circle button this used to be enforced but now no longer is ex the lale help helmut helfer is not an admin or organizer but has the invite button visible this is pervasive for tasks and supplies and should be fixed asap
| 1
|
53,172
| 3,036,388,718
|
IssuesEvent
|
2015-08-06 11:30:02
|
pombase/pombase-chado
|
https://api.github.com/repos/pombase/pombase-chado
|
closed
|
Quick/ vesion 31 datacheck
|
auto-migrated high priority sourceforge
|
Does GO:0070824 - SHREC complex have any annotations in Chado V 31?
Thanks
Val
Original comment by: ValWood
|
1.0
|
Quick/ vesion 31 datacheck - Does GO:0070824 - SHREC complex have any annotations in Chado V 31?
Thanks
Val
Original comment by: ValWood
|
priority
|
quick vesion datacheck does go shrec complex have any annotations in chado v thanks val original comment by valwood
| 1
|
822,460
| 30,873,278,824
|
IssuesEvent
|
2023-08-03 12:49:12
|
opendatahub-io/odh-dashboard
|
https://api.github.com/repos/opendatahub-io/odh-dashboard
|
closed
|
[Bug]: Pipelines: Parameters from triggered runs not copied when run is duplicated
|
kind/bug priority/high feature/ds-pipelines field-priority
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Deploy type
Directly installing the Dashboard at the latest (eg. `odh-dashboard/main`)
### Version
1.7.0
### Current Behavior
When I attempt to re-create a pipeline run using Actions -> Duplicate run, I get a new run form with parameters from the initial run not populated.
### Expected Behavior
I would expect that when a run is duplicated its parameter values are pre-filled in the new run launch screen.
### Steps To Reproduce
- Upload pipeline.
- Create a new run.
- After the run completes, go to Triggered tab and select the initial run.
- Select Actions -> Duplicate run and scroll down to Pipeline input parameters.
### Workaround (if any)
None found.
### What browsers are you seeing the problem on?
Chrome
### Anything else
_No response_
|
2.0
|
[Bug]: Pipelines: Parameters from triggered runs not copied when run is duplicated - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Deploy type
Directly installing the Dashboard at the latest (eg. `odh-dashboard/main`)
### Version
1.7.0
### Current Behavior
When I attempt to re-create a pipeline run using Actions -> Duplicate run, I get a new run form with parameters from the initial run not populated.
### Expected Behavior
I would expect that when a run is duplicated its parameter values are pre-filled in the new run launch screen.
### Steps To Reproduce
- Upload pipeline.
- Create a new run.
- After the run completes, go to Triggered tab and select the initial run.
- Select Actions -> Duplicate run and scroll down to Pipeline input parameters.
### Workaround (if any)
None found.
### What browsers are you seeing the problem on?
Chrome
### Anything else
_No response_
|
priority
|
pipelines parameters from triggered runs not copied when run is duplicated is there an existing issue for this i have searched the existing issues deploy type directly installing the dashboard at the latest eg odh dashboard main version current behavior when i attempt to re create a pipeline run using actions duplicate run i get a new run form with parameters from the initial run not populated expected behavior i would expect that when a run is duplicated its parameter values are pre filled in the new run launch screen steps to reproduce upload pipeline create a new run after the run completes go to triggered tab and select the initial run select actions duplicate run and scroll down to pipeline input parameters workaround if any none found what browsers are you seeing the problem on chrome anything else no response
| 1
|
158,977
| 6,038,505,897
|
IssuesEvent
|
2017-06-09 21:38:47
|
DCLP/dclpxsltbox
|
https://api.github.com/repos/DCLP/dclpxsltbox
|
closed
|
Regularization including diaeresis
|
enhancement priority: high review tweak XSLT
|
P.Oxy. 71.4804 (TM 112359)
https://github.com/DCLP/idp.data/blob/dclp/DCLP/113/112359.xml#l88
Leiden+
<:εἱ4.- στήκει|reg| ι̣(¨)4.- [στηκει]:>
EpiDoc
```
<choice><reg>εἱ<lb n="4" break="no"/>στήκει</reg><orig><hi rend="diaeresis><unclear>ι</unclear</hi><lb n="4" break="no"/><supplied reason="lost">στηκει</supplied></orig></choice>
```
Apparatus:
v.3-4. l. εἱ|στήκει : Ϊ|[ΣΤΗΚΕΙ]ϊ papyrus
What we would like to see is:
v.3-4. l. εἱ|στήκει : ϊ[στηκει] papyrus
or, perhaps,
v.3-4. l. εἱ|στήκει : ϊστηκει papyrus
|
1.0
|
Regularization including diaeresis - P.Oxy. 71.4804 (TM 112359)
https://github.com/DCLP/idp.data/blob/dclp/DCLP/113/112359.xml#l88
Leiden+
<:εἱ4.- στήκει|reg| ι̣(¨)4.- [στηκει]:>
EpiDoc
```
<choice><reg>εἱ<lb n="4" break="no"/>στήκει</reg><orig><hi rend="diaeresis><unclear>ι</unclear</hi><lb n="4" break="no"/><supplied reason="lost">στηκει</supplied></orig></choice>
```
Apparatus:
v.3-4. l. εἱ|στήκει : Ϊ|[ΣΤΗΚΕΙ]ϊ papyrus
What we would like to see is:
v.3-4. l. εἱ|στήκει : ϊ[στηκει] papyrus
or, perhaps,
v.3-4. l. εἱ|στήκει : ϊστηκει papyrus
|
priority
|
regularization including diaeresis p oxy tm leiden epidoc εἱ στήκει ι στηκει apparatus v l εἱ στήκει ϊ ϊ papyrus what we would like to see is v l εἱ στήκει ϊ papyrus or perhaps v l εἱ στήκει ϊστηκει papyrus
| 1
|
630,656
| 20,115,939,329
|
IssuesEvent
|
2022-02-07 19:31:20
|
kubermatic/kubeone
|
https://api.github.com/repos/kubermatic/kubeone
|
closed
|
Test upgrading cluster created by KubeOne 1.3 - Ubuntu -- Test Release 1.4
|
priority/high sig/cluster-management
|
This is a subtask of #1796 for testing upgrading Ubuntu clusters created by KubeOne 1.3. The cloud provider doesn't matter for this issue.
Check #1796 for instructions.
The following upgrade paths should be tested:
* [x] 1.22.6 -> 1.23.3
* [x] 1.21.9 -> 1.22.6
* [x] 1.20.15 -> 1.21.9
* [x] 1.19.16 -> 1.20.15
|
1.0
|
Test upgrading cluster created by KubeOne 1.3 - Ubuntu -- Test Release 1.4 - This is a subtask of #1796 for testing upgrading Ubuntu clusters created by KubeOne 1.3. The cloud provider doesn't matter for this issue.
Check #1796 for instructions.
The following upgrade paths should be tested:
* [x] 1.22.6 -> 1.23.3
* [x] 1.21.9 -> 1.22.6
* [x] 1.20.15 -> 1.21.9
* [x] 1.19.16 -> 1.20.15
|
priority
|
test upgrading cluster created by kubeone ubuntu test release this is a subtask of for testing upgrading ubuntu clusters created by kubeone the cloud provider doesn t matter for this issue check for instructions the following upgrade paths should be tested
| 1
|
466,014
| 13,395,879,331
|
IssuesEvent
|
2020-09-03 09:05:26
|
wso2/kubernetes-open-banking
|
https://api.github.com/repos/wso2/kubernetes-open-banking
|
closed
|
[1.5.0][Berlin] Introduce Evaluatory MySQL Data Source Helm Chart for Open Banking
|
Priority/Highest Type/Task
|
**Description:**
The $subject needs to be introduced for the purpose of evaluation of Open Banking product Helm charts.
|
1.0
|
[1.5.0][Berlin] Introduce Evaluatory MySQL Data Source Helm Chart for Open Banking - **Description:**
The $subject needs to be introduced for the purpose of evaluation of Open Banking product Helm charts.
|
priority
|
introduce evaluatory mysql data source helm chart for open banking description the subject needs to be introduced for the purpose of evaluation of open banking product helm charts
| 1
|
210,135
| 7,183,631,705
|
IssuesEvent
|
2018-02-01 14:04:13
|
DrylandEcology/rSOILWAT2
|
https://api.github.com/repos/DrylandEcology/rSOILWAT2
|
closed
|
No slot of name "MonthlyProductionValues_grass"
|
bug high priority
|
All rSFSW2 simulations are failing due to recent commits to rSOILWAT2's master branch.
```
[1] "Datafile 'sw_input_climscen_values' contains zero rows. 'Label's of the master input file 'SWRunInformation' are used to populate rows and 'Label's of the datafile."
Error in slot(prod_default, paste0("MonthlyProductionValues_", tolower(fg))) :
no slot of name "MonthlyProductionValues_grass" for this object of class "swProd"
```
The rSFSW2 automated builds did not catch this, because the last run on master was before the commits that caused this failure. I restarted rSFSW2's automated build on master and replicated the error, along with the other two pull requests open on rSFSW2.
Commit `e0fa1acd62c2d17c961e58ff2a1eed982a347937` does not have this error; have not tried identifying the specific commit causing this failure.
Assigning @dschlaep because the recent commits belong to him. This could also just be an issue in rSFSW2.
|
1.0
|
No slot of name "MonthlyProductionValues_grass" - All rSFSW2 simulations are failing due to recent commits to rSOILWAT2's master branch.
```
[1] "Datafile 'sw_input_climscen_values' contains zero rows. 'Label's of the master input file 'SWRunInformation' are used to populate rows and 'Label's of the datafile."
Error in slot(prod_default, paste0("MonthlyProductionValues_", tolower(fg))) :
no slot of name "MonthlyProductionValues_grass" for this object of class "swProd"
```
The rSFSW2 automated builds did not catch this, because the last run on master was before the commits that caused this failure. I restarted rSFSW2's automated build on master and replicated the error, along with the other two pull requests open on rSFSW2.
Commit `e0fa1acd62c2d17c961e58ff2a1eed982a347937` does not have this error; have not tried identifying the specific commit causing this failure.
Assigning @dschlaep because the recent commits belong to him. This could also just be an issue in rSFSW2.
|
priority
|
no slot of name monthlyproductionvalues grass all simulations are failing due to recent commits to s master branch datafile sw input climscen values contains zero rows label s of the master input file swruninformation are used to populate rows and label s of the datafile error in slot prod default monthlyproductionvalues tolower fg no slot of name monthlyproductionvalues grass for this object of class swprod the automated builds did not catch this because the last run on master was before the commits that caused this failure i restarted s automated build on master and replicated the error along with the other two pull requests open on commit does not have this error have not tried identifying the specific commit causing this failure assigning dschlaep because the recent commits belong to him this could also just be an issue in
| 1
|
486,267
| 14,006,695,369
|
IssuesEvent
|
2020-10-28 20:22:04
|
rstudio/shiny
|
https://api.github.com/repos/rstudio/shiny
|
closed
|
Setting shinyOptions() inside a module results in error
|
Priority: High Type: Bug :bug:
|
Here's a minimal example:
```r
library(shiny)
my_ui <- function(id) {
ns <- NS(id)
div(id = ns(id))
}
my_server <- function(id) {
moduleServer(id, function(input, output, session) {
shinyOptions("my_option" = 2)
})
}
shinyApp(
fluidPage(my_ui("foo")),
function(input, output, session) {
my_server("foo")
}
)
```
```r
Listening on http://127.0.0.1:4969
Warning: Error in $<-.session_proxy: Attempted to assign value on session proxy.
66: stop
65: $<-.session_proxy [/Users/cpsievert/github/shiny/R/modules.R#34]
63: shinyOptions [/Users/cpsievert/github/shiny/R/shiny-options.R#181]
62: module [#3]
57: callModule [/Users/cpsievert/github/shiny/R/modules.R#167]
56: moduleServer [/Users/cpsievert/github/shiny/R/modules.R#140]
55: my_server [#2]
54: server [#4]
Error in `$<-.session_proxy`(`*tmp*`, "options", value = list(appToken = "3519025a595dc742", :
Attempted to assign value on session proxy.
```
|
1.0
|
Setting shinyOptions() inside a module results in error - Here's a minimal example:
```r
library(shiny)
my_ui <- function(id) {
ns <- NS(id)
div(id = ns(id))
}
my_server <- function(id) {
moduleServer(id, function(input, output, session) {
shinyOptions("my_option" = 2)
})
}
shinyApp(
fluidPage(my_ui("foo")),
function(input, output, session) {
my_server("foo")
}
)
```
```r
Listening on http://127.0.0.1:4969
Warning: Error in $<-.session_proxy: Attempted to assign value on session proxy.
66: stop
65: $<-.session_proxy [/Users/cpsievert/github/shiny/R/modules.R#34]
63: shinyOptions [/Users/cpsievert/github/shiny/R/shiny-options.R#181]
62: module [#3]
57: callModule [/Users/cpsievert/github/shiny/R/modules.R#167]
56: moduleServer [/Users/cpsievert/github/shiny/R/modules.R#140]
55: my_server [#2]
54: server [#4]
Error in `$<-.session_proxy`(`*tmp*`, "options", value = list(appToken = "3519025a595dc742", :
Attempted to assign value on session proxy.
```
|
priority
|
setting shinyoptions inside a module results in error here s a minimal example r library shiny my ui function id ns ns id div id ns id my server function id moduleserver id function input output session shinyoptions my option shinyapp fluidpage my ui foo function input output session my server foo r listening on warning error in session proxy attempted to assign value on session proxy stop session proxy shinyoptions module callmodule moduleserver my server server error in session proxy tmp options value list apptoken attempted to assign value on session proxy
| 1
|
600,546
| 18,344,226,080
|
IssuesEvent
|
2021-10-08 02:35:00
|
pietervdvn/MapComplete
|
https://api.github.com/repos/pietervdvn/MapComplete
|
closed
|
Surveillance: non-camera directions showed
|
bug high-priority
|
For example around here: https://mapcomplete.osm.be/surveillance.html?z=19&lat=47.46478&lon=-0.55712&language=fr&tab=4, the directions that are shown match the `highway={stop,give_way}` that have `direction={forward,backward}`.
|
1.0
|
Surveillance: non-camera directions showed - For example around here: https://mapcomplete.osm.be/surveillance.html?z=19&lat=47.46478&lon=-0.55712&language=fr&tab=4, the directions that are shown match the `highway={stop,give_way}` that have `direction={forward,backward}`.
|
priority
|
surveillance non camera directions showed for example around here the directions that are shown match the highway stop give way that have direction forward backward
| 1
|
767,338
| 26,919,944,131
|
IssuesEvent
|
2023-02-07 09:45:03
|
fkie-cad/dewolf
|
https://api.github.com/repos/fkie-cad/dewolf
|
closed
|
IndexError: list index out of range in remove_stack_canary
|
bug priority-high
|
### What happened?
The decompiler crashes with an IndexError in remove_stack_canary during preprocessing.
```python
Traceback (most recent call last):
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py"", line 80, in <module>
main(Decompiler)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/util/commandline.py"", line 65, in main
task = decompiler.decompile(function_name, options)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py"", line 55, in decompile
pipeline.run(task)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/pipeline.py"", line 97, in run
instance.run(task)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/preprocessing/remove_stack_canary.py"", line 24, in run
self._patch_canary(fail_node)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/preprocessing/remove_stack_canary.py"", line 49, in _patch_canary
self._patch_branch_condition(pred)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/preprocessing/remove_stack_canary.py"", line 56, in _patch_branch_condition
branch_instruction = node.instructions[-1]
IndexError: list index out of range
```
### How to reproduce?
Decompile print_dir in ls, get_dev in df or main in one of the other samples given below.
[remove_stack_canary_index_error.zip](https://github.com/fkie-cad/dewolf/files/9979831/remove_stack_canary_index_error.zip)
### Affected Binary Ninja Version(s)
3.2.3814
|
1.0
|
IndexError: list index out of range in remove_stack_canary - ### What happened?
The decompiler crashes with an IndexError in remove_stack_canary during preprocessing.
```python
Traceback (most recent call last):
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py"", line 80, in <module>
main(Decompiler)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/util/commandline.py"", line 65, in main
task = decompiler.decompile(function_name, options)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompile.py"", line 55, in decompile
pipeline.run(task)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/pipeline.py"", line 97, in run
instance.run(task)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/preprocessing/remove_stack_canary.py"", line 24, in run
self._patch_canary(fail_node)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/preprocessing/remove_stack_canary.py"", line 49, in _patch_canary
self._patch_branch_condition(pred)
File ""/home/ubuntu/.binaryninja/plugins/dewolf/decompiler/pipeline/preprocessing/remove_stack_canary.py"", line 56, in _patch_branch_condition
branch_instruction = node.instructions[-1]
IndexError: list index out of range
```
### How to reproduce?
Decompile print_dir in ls, get_dev in df or main in one of the other samples given below.
[remove_stack_canary_index_error.zip](https://github.com/fkie-cad/dewolf/files/9979831/remove_stack_canary_index_error.zip)
### Affected Binary Ninja Version(s)
3.2.3814
|
priority
|
indexerror list index out of range in remove stack canary what happened the decompiler crashes with an indexerror in remove stack canary during preprocessing python traceback most recent call last file home ubuntu binaryninja plugins dewolf decompile py line in main decompiler file home ubuntu binaryninja plugins dewolf decompiler util commandline py line in main task decompiler decompile function name options file home ubuntu binaryninja plugins dewolf decompile py line in decompile pipeline run task file home ubuntu binaryninja plugins dewolf decompiler pipeline pipeline py line in run instance run task file home ubuntu binaryninja plugins dewolf decompiler pipeline preprocessing remove stack canary py line in run self patch canary fail node file home ubuntu binaryninja plugins dewolf decompiler pipeline preprocessing remove stack canary py line in patch canary self patch branch condition pred file home ubuntu binaryninja plugins dewolf decompiler pipeline preprocessing remove stack canary py line in patch branch condition branch instruction node instructions indexerror list index out of range how to reproduce decompile print dir in ls get dev in df or main in one of the other samples given below affected binary ninja version s
| 1
|
34,012
| 2,774,377,201
|
IssuesEvent
|
2015-05-04 08:21:29
|
punongbayan-araullo/tickets
|
https://api.github.com/repos/punongbayan-araullo/tickets
|
opened
|
Projects dropped more that 6 months ago should not be allowed to be reinstated.
|
other priority - high status - accepted system - projects
|
Projects dropped more that 6 months ago should not be allowed to be reinstated. A new pursuit ,instead, should be created.
|
1.0
|
Projects dropped more that 6 months ago should not be allowed to be reinstated. - Projects dropped more that 6 months ago should not be allowed to be reinstated. A new pursuit ,instead, should be created.
|
priority
|
projects dropped more that months ago should not be allowed to be reinstated projects dropped more that months ago should not be allowed to be reinstated a new pursuit instead should be created
| 1
|
364,681
| 10,772,102,065
|
IssuesEvent
|
2019-11-02 12:46:08
|
windchime-yk/novel-support.js
|
https://api.github.com/repos/windchime-yk/novel-support.js
|
opened
|
v1.1.1として更新
|
Priority: High Type: Release
|
リファクタリングがメイン
## 今回の変更
- CHANGELOG.mdのリスト記号を変更する
- `{ content: 'xxx' }`の処理をリファクタリング
- Jestに`beforeEach`を使うなどしてリファクタリング
|
1.0
|
v1.1.1として更新 - リファクタリングがメイン
## 今回の変更
- CHANGELOG.mdのリスト記号を変更する
- `{ content: 'xxx' }`の処理をリファクタリング
- Jestに`beforeEach`を使うなどしてリファクタリング
|
priority
|
リファクタリングがメイン 今回の変更 changelog mdのリスト記号を変更する content xxx の処理をリファクタリング jestに beforeeach を使うなどしてリファクタリング
| 1
|
349,809
| 10,473,849,032
|
IssuesEvent
|
2019-09-23 13:27:23
|
infor-design/enterprise
|
https://api.github.com/repos/infor-design/enterprise
|
closed
|
Contextual Action Panel and Modal: Take up full width+height on mobile breakpoint
|
[3] focus: mobile priority: high team: inforGO team: landmark type: enhancement :sparkles:
|
**Is your feature request related to a problem? Please describe.**
On mobile breakpoints, the contextual action panel has a constrained size, which cramps the content and introduces issues with zoom and scroll.
**Describe the solution you'd like**
- [x] At a mobile breakpoint, the contextual action panel should take the full height and width of the viewport so it behaves like a "sheet".
- [x] Overall, modals should contain less padding around the edges.
- [x] Contextual Action Panels should be setup to allow horizontal scrolling, if applicable.
This also requires converting some components to using the contextual action panel instead of the simple modal. (See #2432)
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**

|
1.0
|
Contextual Action Panel and Modal: Take up full width+height on mobile breakpoint - **Is your feature request related to a problem? Please describe.**
On mobile breakpoints, the contextual action panel has a constrained size, which cramps the content and introduces issues with zoom and scroll.
**Describe the solution you'd like**
- [x] At a mobile breakpoint, the contextual action panel should take the full height and width of the viewport so it behaves like a "sheet".
- [x] Overall, modals should contain less padding around the edges.
- [x] Contextual Action Panels should be setup to allow horizontal scrolling, if applicable.
This also requires converting some components to using the contextual action panel instead of the simple modal. (See #2432)
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**

|
priority
|
contextual action panel and modal take up full width height on mobile breakpoint is your feature request related to a problem please describe on mobile breakpoints the contextual action panel has a constrained size which cramps the content and introduces issues with zoom and scroll describe the solution you d like at a mobile breakpoint the contextual action panel should take the full height and width of the viewport so it behaves like a sheet overall modals should contain less padding around the edges contextual action panels should be setup to allow horizontal scrolling if applicable this also requires converting some components to using the contextual action panel instead of the simple modal see describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context
| 1
|
542,325
| 15,858,620,332
|
IssuesEvent
|
2021-04-08 06:58:29
|
geolonia/estate-id-api
|
https://api.github.com/repos/geolonia/estate-id-api
|
closed
|
ステージ分け(本番デプロイ)
|
Priority: High
|
そろそろ心配なのでステージングを作りましょうか。
`main` へのマージ => ステージングへのデプロイ
タギング => 本番へデプロイ
するようにしましょう。
|
1.0
|
ステージ分け(本番デプロイ) - そろそろ心配なのでステージングを作りましょうか。
`main` へのマージ => ステージングへのデプロイ
タギング => 本番へデプロイ
するようにしましょう。
|
priority
|
ステージ分け(本番デプロイ) そろそろ心配なのでステージングを作りましょうか。 main へのマージ ステージングへのデプロイ タギング 本番へデプロイ するようにしましょう。
| 1
|
109,758
| 4,408,451,443
|
IssuesEvent
|
2016-08-12 01:49:38
|
Murkantilism/skylabgame
|
https://api.github.com/repos/Murkantilism/skylabgame
|
opened
|
Add New Stratosphere Prefab to Trampoline
|
bug High Priority
|
On the last stage of the tutorial if the player bounces on the trampoline, the old stratosphere appears and throws a null reference exception.
|
1.0
|
Add New Stratosphere Prefab to Trampoline - On the last stage of the tutorial if the player bounces on the trampoline, the old stratosphere appears and throws a null reference exception.
|
priority
|
add new stratosphere prefab to trampoline on the last stage of the tutorial if the player bounces on the trampoline the old stratosphere appears and throws a null reference exception
| 1
|
240,278
| 7,800,796,909
|
IssuesEvent
|
2018-06-09 13:45:58
|
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
closed
|
0011382:
Attachments in templates converted to rfc-822 attachment
|
Bug Felamimail Mantis high priority
|
**Reported by pschuele on 16 Oct 2015 10:43**
**Version:** Collin (2013.10.8)
email template with attachment problem
**Steps to reproduce:** 1.: Neue Email öffnen, HTML-Code eintragen, PDF anhängen -> als
Vorlage speichern
2.: In der Email-"Vorschau" wird noch alles schick angezeigt
3.: Die Vorlage mit Doppelklick öffnen -> im Anhang befindet sich nun
... was? Der Text der Email. Und wo ist das PDF?
|
1.0
|
0011382:
Attachments in templates converted to rfc-822 attachment - **Reported by pschuele on 16 Oct 2015 10:43**
**Version:** Collin (2013.10.8)
email template with attachment problem
**Steps to reproduce:** 1.: Neue Email öffnen, HTML-Code eintragen, PDF anhängen -> als
Vorlage speichern
2.: In der Email-"Vorschau" wird noch alles schick angezeigt
3.: Die Vorlage mit Doppelklick öffnen -> im Anhang befindet sich nun
... was? Der Text der Email. Und wo ist das PDF?
|
priority
|
attachments in templates converted to rfc attachment reported by pschuele on oct version collin email template with attachment problem steps to reproduce neue email öffnen html code eintragen pdf anhängen gt als vorlage speichern in der email quot vorschau quot wird noch alles schick angezeigt die vorlage mit doppelklick öffnen gt im anhang befindet sich nun was der text der email und wo ist das pdf
| 1
|
105,032
| 4,229,034,434
|
IssuesEvent
|
2016-07-04 04:57:07
|
PhonologicalCorpusTools/CorpusTools
|
https://api.github.com/repos/PhonologicalCorpusTools/CorpusTools
|
closed
|
error loading ILG corpus
|
bug High priority
|
Trying to create an ILG corpus from this file:
[ilg_sample.txt](https://github.com/PhonologicalCorpusTools/CorpusTools/files/342624/ilg_sample.txt)
Traceback (most recent call last):
File "/Users/KCH/Desktop/CorpusTools/corpustools/gui/iogui.py", line 85, in run
corpus = load_discourse_ilg(**self.kwargs)
File "/Users/KCH/Desktop/CorpusTools/corpustools/corpus/io/text_ilg.py", line 235, in load_discourse_ilg
discourse = data_to_discourse(data, lexicon, call_back=call_back, stop_check=stop_check)
File "/Users/KCH/Desktop/CorpusTools/corpustools/corpus/io/helper.py", line 428, in data_to_discourse
wordtoken = WordToken(**word_token_kwargs)
File "/Users/KCH/Desktop/CorpusTools/corpustools/corpus/classes/spontaneous.py", line 446, in __init__
if att.is_default:
UnboundLocalError: local variable 'att' referenced before assignment
|
1.0
|
error loading ILG corpus - Trying to create an ILG corpus from this file:
[ilg_sample.txt](https://github.com/PhonologicalCorpusTools/CorpusTools/files/342624/ilg_sample.txt)
Traceback (most recent call last):
File "/Users/KCH/Desktop/CorpusTools/corpustools/gui/iogui.py", line 85, in run
corpus = load_discourse_ilg(**self.kwargs)
File "/Users/KCH/Desktop/CorpusTools/corpustools/corpus/io/text_ilg.py", line 235, in load_discourse_ilg
discourse = data_to_discourse(data, lexicon, call_back=call_back, stop_check=stop_check)
File "/Users/KCH/Desktop/CorpusTools/corpustools/corpus/io/helper.py", line 428, in data_to_discourse
wordtoken = WordToken(**word_token_kwargs)
File "/Users/KCH/Desktop/CorpusTools/corpustools/corpus/classes/spontaneous.py", line 446, in __init__
if att.is_default:
UnboundLocalError: local variable 'att' referenced before assignment
|
priority
|
error loading ilg corpus trying to create an ilg corpus from this file traceback most recent call last file users kch desktop corpustools corpustools gui iogui py line in run corpus load discourse ilg self kwargs file users kch desktop corpustools corpustools corpus io text ilg py line in load discourse ilg discourse data to discourse data lexicon call back call back stop check stop check file users kch desktop corpustools corpustools corpus io helper py line in data to discourse wordtoken wordtoken word token kwargs file users kch desktop corpustools corpustools corpus classes spontaneous py line in init if att is default unboundlocalerror local variable att referenced before assignment
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.