Dataset schema (per-column dtype and value statistics):

| column | dtype | values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 value |
| created_at | string | length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 values |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 values |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 values |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |

---

Unnamed: 0: 20,573
id: 27,233,067,994
type: IssuesEvent
created_at: 2023-02-21 14:33:14
repo: bazelbuild/bazel
repo_url: https://api.github.com/repos/bazelbuild/bazel
action: closed
title: Build API Roadmap: discussion
labels: P3 type: support / not a bug (process) untriaged team-Rules-API
body:
This is a non-technical issue just meant as a connection point for discussion, thoughts, questions, concerns, etc. about Bazel's Build API work as prioritized in https://www.bazel.build/roadmaps/build-api.html.
It's also a step toward integrating our project workflow deeper into the Bazel community.
My current thinking is to maintain one of these issues and roadmaps for each year. But I'm open to suggestions for other venues like bazel-dev, etc.
I also want to have a dedicated GitHub issue for each individual roadmap item. As of this writing we're
about halfway there. So individual priorities can be discussed on their own threads and this thread can cover big picture stuff.
index: 1.0
text_combine:
Build API Roadmap: discussion - This is a non-technical issue just meant as a connection point for discussion, thoughts, questions, concerns, etc. about Bazel's Build API work as prioritized in https://www.bazel.build/roadmaps/build-api.html.
It's also a step toward integrating our project workflow deeper into the Bazel community.
My current thinking is to maintain one of these issues and roadmaps for each year. But I'm open to suggestions for other venues like bazel-dev, etc.
I also want to have a dedicated GitHub issue for each individual roadmap item. As of this writing we're
about halfway there. So individual priorities can be discussed on their own threads and this thread can cover big picture stuff.
label: process
text:
build api roadmap discussion this is a non technical issue just meant as a connection point for discussion thoughts questions concerns etc about bazel s build api work as prioritized in it s also a step toward integrating our project workflow deeper into the bazel community my current thinking is to maintain one of these issues and roadmaps for each year but i m open to suggestions for other venues like bazel dev etc i also want to have a dedicated github issue for each individual roadmap item as of this writing we re about halfway there so individual priorities can be discussed on their own threads and this thread can cover big picture stuff
binary_label: 1

---

Unnamed: 0: 2,167
id: 5,018,592,633
type: IssuesEvent
created_at: 2016-12-14 08:56:22
repo: paulkornikov/Pragonas
repo_url: https://api.github.com/repos/paulkornikov/Pragonas
action: closed
title: General blockage of archiving and other processes
labels: a-enhancement processus
body:
From the web config. See whether an app config is possible.
index: 1.0
text_combine:
General blockage of archiving and other processes - From the web config. See whether an app config is possible.
label: process
text:
general blockage of archiving and other processes from the web config see whether an app config is possible
binary_label: 1

---

Unnamed: 0: 37,600
id: 6,623,267,022
type: IssuesEvent
created_at: 2017-09-22 06:10:43
repo: openebs/openebs
repo_url: https://api.github.com/repos/openebs/openebs
action: opened
title: Add Glossary section under Getting Started
labels: documentation
body:
Add a Glossary section where terms used in documentation are described and how they are used in the documentation.
index: 1.0
text_combine:
Add Glossary section under Getting Started - Add a Glossary section where terms used in documentation are described and how they are used in the documentation.
label: non_process
text:
add glossary section under getting started add a glossary section where terms used in documentation are described and how they are used in the documentation
binary_label: 0

---

Unnamed: 0: 245,065
id: 18,771,715,993
type: IssuesEvent
created_at: 2021-11-07 00:02:22
repo: matheus-srego/DroneArara
repo_url: https://api.github.com/repos/matheus-srego/DroneArara
action: closed
title: Research other forms of recognition in AIs
labels: documentation
body:
1. To recreate the Artificial Intelligence, it is necessary to research other ways to create and train the recognition of hand gestures.
2. A decision is pending on whether to continue recognizing only Libras gestures or to use other gestures.
index: 1.0
text_combine:
Research other forms of recognition in AIs - 1. To recreate the Artificial Intelligence, it is necessary to research other ways to create and train the recognition of hand gestures.
2. A decision is pending on whether to continue recognizing only Libras gestures or to use other gestures.
label: non_process
text:
research other forms of recognition in ais to recreate the artificial intelligence it is necessary to research other ways to create and train the recognition of hand gestures a decision is pending on whether to continue recognizing only libras gestures or to use other gestures
binary_label: 0

---

Unnamed: 0: 234,537
id: 7,722,942,487
type: IssuesEvent
created_at: 2018-05-24 10:49:04
repo: geosolutions-it/MapStore2
repo_url: https://api.github.com/repos/geosolutions-it/MapStore2
action: closed
title: Wrong geometry update and center of geodesic circle in query panel
labels: Priority: High Project: C028 bug pending review review
body:
### Description
Geodesic circle geometry is not updated correctly
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [x] Chrome
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
- Chrome v66
*Steps to reproduce*
- open map with vector layer
- select layer in TOC
- open feature grid
- open query panel
- draw a spatial filter Circle
- edit spatial filter and confirm
*Expected Result*
- circle center and radius are updated correctly
*Current Result*
- circle isn't updated correctly
### Other useful information (optional):
It needs the following configuration
```
{
"name": "FeatureEditor",
"cfg": {
"showFilteredObject": true
}
}
```
```
{
"name": "QueryPanel",
"cfg": {
"activateQueryTool": true,
"spatialOperations": [
{"id": "INTERSECTS", "name": "queryform.spatialfilter.operations.intersects"},
{"id": "BBOX", "name": "queryform.spatialfilter.operations.bbox"},
{"id": "CONTAINS", "name": "queryform.spatialfilter.operations.contains"},
{"id": "WITHIN", "name": "queryform.spatialfilter.operations.within"}
],
"spatialMethodOptions": [
{"id": "Viewport", "name": "queryform.spatialfilter.methods.viewport"},
{"id": "BBOX", "name": "queryform.spatialfilter.methods.box"},
{"id": "Circle", "name": "queryform.spatialfilter.methods.circle", "geodesic": true},
{"id": "Polygon", "name": "queryform.spatialfilter.methods.poly"}
]
}
}
```
index: 1.0
text_combine:
Wrong geometry update and center of geodesic circle in query panel - ### Description
Geodesic circle geometry is not updated correctly
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [x] Chrome
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
- Chrome v66
*Steps to reproduce*
- open map with vector layer
- select layer in TOC
- open feature grid
- open query panel
- draw a spatial filter Circle
- edit spatial filter and confirm
*Expected Result*
- circle center and radius are updated correctly
*Current Result*
- circle isn't updated correctly
### Other useful information (optional):
It needs the following configuration
```
{
"name": "FeatureEditor",
"cfg": {
"showFilteredObject": true
}
}
```
```
{
"name": "QueryPanel",
"cfg": {
"activateQueryTool": true,
"spatialOperations": [
{"id": "INTERSECTS", "name": "queryform.spatialfilter.operations.intersects"},
{"id": "BBOX", "name": "queryform.spatialfilter.operations.bbox"},
{"id": "CONTAINS", "name": "queryform.spatialfilter.operations.contains"},
{"id": "WITHIN", "name": "queryform.spatialfilter.operations.within"}
],
"spatialMethodOptions": [
{"id": "Viewport", "name": "queryform.spatialfilter.methods.viewport"},
{"id": "BBOX", "name": "queryform.spatialfilter.methods.box"},
{"id": "Circle", "name": "queryform.spatialfilter.methods.circle", "geodesic": true},
{"id": "Polygon", "name": "queryform.spatialfilter.methods.poly"}
]
}
}
```
label: non_process
text:
wrong geometry update and center of geodesic circle in query panel description geodesic circle geometry is not updated correctly in case of bug otherwise remove this paragraph browser affected use this site for non expert users internet explorer chrome firefox safari browser version affected chrome steps to reproduce open map with vector layer select layer in toc open feature grid open query panel draw a spatial filter circle edit spatial filter and confirm expected result circle is updated correctly center and radius current result circle isn t update correctly other useful information optional it needed the follow configuration name featureeditor cfg showfilteredobject true name querypanel cfg activatequerytool true spatialoperations id intersects name queryform spatialfilter operations intersects id bbox name queryform spatialfilter operations bbox id contains name queryform spatialfilter operations contains id within name queryform spatialfilter operations within spatialmethodoptions id viewport name queryform spatialfilter methods viewport id bbox name queryform spatialfilter methods box id circle name queryform spatialfilter methods circle geodesic true id polygon name queryform spatialfilter methods poly
binary_label: 0

---

Unnamed: 0: 525,612
id: 15,257,114,539
type: IssuesEvent
created_at: 2021-02-20 23:27:20
repo: marbl/MetagenomeScope
repo_url: https://api.github.com/repos/marbl/MetagenomeScope
action: opened
title: Detect paths encoded in GFA files and pass to user interface
labels: highpriorityfeature
body:
Offshoot of #147. Will depend on #205 being merged in.
The idea here is taking these paths (called "paths" in [GFA1](https://github.com/GFA-spec/GFA-spec/blob/master/GFA1.md), still called "paths" but encoded as O-lines (ordered "groups") in [GFA2](https://github.com/GFA-spec/GFA-spec/blob/master/GFA2.md#group)) from an input GFA file and storing them so that they can be viewed in the interface analogously to how paths in AGP files can be viewed. Fortunately, getting paths from GfaPy [seems pretty simple](https://gfapy.readthedocs.io/en/latest/tutorial/gfa.html) (just using `g.paths`), so the main challenge will just be passing this data to the JS cleanly (and figuring out how to represent it -- should this be independent of the AGP visualization? an alternate set of widgets?)
U-lines (unordered "groups") in GFA2 files might also be usable in the same way, but I suspect that might not be worth the trouble -- the interpretation seems different. Can reevaluate if there is demand.
It would be great to make this very simple for users, but in the worst case scenario (passing the path info is challenging) it should be pretty doable to write some code that converts the paths in a GFA file to AGP, which users can then upload in the interface.
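The path extraction this issue describes can be sketched in a few lines of plain Python; this is an illustrative sketch only (the function name `parse_gfa1_paths` and the sample records are hypothetical, and the project itself would use GfaPy's `g.paths` as noted above):

```python
# Hypothetical sketch: pull GFA1 P-lines ("paths") out of a GFA file so they
# could be handed to the UI layer. Not the project's actual code.
def parse_gfa1_paths(lines):
    """Return {path_name: [(segment, orientation), ...]} from GFA1 P-lines."""
    paths = {}
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if not fields or fields[0] != "P":
            continue  # skip segments (S), links (L), headers (H), etc.
        name, segment_field = fields[1], fields[2]
        # each entry looks like "A+" or "B-": segment name plus orientation
        paths[name] = [(s[:-1], s[-1]) for s in segment_field.split(",")]
    return paths

gfa = [
    "S\tA\t*",
    "S\tB\t*",
    "P\tpath1\tA+,B-\t*",
]
print(parse_gfa1_paths(gfa))  # {'path1': [('A', '+'), ('B', '-')]}
```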
index: 1.0
text_combine:
Detect paths encoded in GFA files and pass to user interface - Offshoot of #147. Will depend on #205 being merged in.
The idea here is taking these paths (called "paths" in [GFA1](https://github.com/GFA-spec/GFA-spec/blob/master/GFA1.md), still called "paths" but encoded as O-lines (ordered "groups") in [GFA2](https://github.com/GFA-spec/GFA-spec/blob/master/GFA2.md#group)) from an input GFA file and storing them so that they can be viewed in the interface analogously to how paths in AGP files can be viewed. Fortunately, getting paths from GfaPy [seems pretty simple](https://gfapy.readthedocs.io/en/latest/tutorial/gfa.html) (just using `g.paths`), so the main challenge will just be passing this data to the JS cleanly (and figuring out how to represent it -- should this be independent of the AGP visualization? an alternate set of widgets?)
U-lines (unordered "groups") in GFA2 files might also be usable in the same way, but I suspect that might not be worth the trouble -- the interpretation seems different. Can reevaluate if there is demand.
It would be great to make this very simple for users, but in the worst case scenario (passing the path info is challenging) it should be pretty doable to write some code that converts the paths in a GFA file to AGP, which users can then upload in the interface.
label: non_process
text:
detect paths encoded in gfa files and pass to user interface offshoot of will depend on being merged in the idea here is taking these paths called paths in still called paths but encoded as o lines ordered groups in from an input gfa file and storing them so that they can be viewed in the interface analogously to how paths in agp files can be viewed fortunately getting paths from gfapy just using g paths so the main challenge will just be passing this data to the js cleanly and figuring out how to represent it should this be independent of the agp visualization an alternate set of widgets u lines unordered groups in files might also be usable in the same way but i suspect that might not be worth the trouble the interpretation seems different can reevaluate if there is demand it would be great to make this very simple for users but in the worst case scenario passing the path info is challenging it should be pretty doable to write some code that converts the paths in a gfa file to agp which users can then upload in the interface
binary_label: 0

---

Unnamed: 0: 1,542
id: 4,152,284,672
type: IssuesEvent
created_at: 2016-06-16 00:02:52
repo: onyx-platform/onyx
repo_url: https://api.github.com/repos/onyx-platform/onyx
action: closed
title: Add dependent projects to release process
labels: release-process
body:
- [x] onyx-starter
- [x] onyx-template
- [x] onyx-examples
- [x] onyx-website
- [x] onyx-cheat-sheet
- [x] onyx-plugin-template
- [x] learn-onyx
There may be others, also.
index: 1.0
text_combine:
Add dependent projects to release process - - [x] onyx-starter
- [x] onyx-template
- [x] onyx-examples
- [x] onyx-website
- [x] onyx-cheat-sheet
- [x] onyx-plugin-template
- [x] learn-onyx
There may be others, also.
label: process
text:
add dependent projects to release process onyx starter onyx template onyx examples onyx website onyx cheat sheet onyx plugin template learn onyx there may be others also
binary_label: 1

---

Unnamed: 0: 5,892
id: 8,709,663,990
type: IssuesEvent
created_at: 2018-12-06 14:34:20
repo: Open-EO/openeo-api
repo_url: https://api.github.com/repos/Open-EO/openeo-api
action: closed
title: NoData handling in custom scripts
labels: data processing extension udfs
body:
Given that a user can provide his own scripts, how should he implement proper nodata handling?
The problem is that actual nodata values can differ, depending on the data layer and the endpoint against which the script is executed.
Options:
1. Nodata value is part of band metadata, developer should get it from there and propagate into his script
2. Nodata values passed along to the user function
3. Nodata 'macros' such as isNoData(x) are provided by the backend, that do the right thing?
This is linked to the question of which datatype will be used for pixel operations. We can standardize on always using doubles (even if the raw data is, for instance, stored as a byte), which would help in solving these things.
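Option 3 could look something like the following minimal sketch (illustrative only; `make_is_nodata` and the sentinel value are hypothetical names, not part of any openEO API):

```python
import math

def make_is_nodata(nodata_value):
    """Build a per-layer nodata predicate; NaN needs special handling
    because NaN != NaN under ordinary comparison."""
    if nodata_value is not None and math.isnan(nodata_value):
        return math.isnan
    return lambda x: x == nodata_value

# A backend would construct this from the layer's band metadata (option 1)
# and hand it to the user script ready-made (option 3).
is_nodata = make_is_nodata(-9999.0)
values = [10.0, -9999.0, 12.5]
cleaned = [v for v in values if not is_nodata(v)]
print(cleaned)  # [10.0, 12.5]
```

The user script never sees the layer-specific sentinel, which is the point of the macro approach.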
index: 1.0
text_combine:
NoData handling in custom scripts - Given that a user can provide his own scripts, how should he implement proper nodata handling?
The problem is that actual nodata values can differ, depending on the data layer and the endpoint against which the script is executed.
Options:
1. Nodata value is part of band metadata, developer should get it from there and propagate into his script
2. Nodata values passed along to the user function
3. Nodata 'macros' such as isNoData(x) are provided by the backend, that do the right thing?
This is linked to the question of which datatype will be used for pixel operations. We can standardize on always using doubles (even if the raw data is, for instance, stored as a byte), which would help in solving these things.
label: process
text:
nodata handling in custom scripts given that a user can provide his own scripts how should he implement proper nodata handling the problem is that actual nodata values can differ depending on the data layer and the endpoint against which the script is executed options nodata value is part of band metadata developer should get it from there and propagate into his script nodata values passed along to the user function nodata macros such as isnodata x are provided by the backend that do the right thing this is linked to the question of what datatype will be used for pixel operations we can standardize on always using doubles even if the raw data is for instance stored as a byte which would help in solving these things
binary_label: 1

---

Unnamed: 0: 16,625
id: 21,684,651,644
type: IssuesEvent
created_at: 2022-05-09 10:03:52
repo: NixOS/nixpkgs
repo_url: https://api.github.com/repos/NixOS/nixpkgs
action: opened
title: ZERO Hydra Failures 22.05
labels: 6.topic: release process
body:
## Mission
Every time we branch off a release we stabilize the release branch.
Our goal here is to get as few jobs as possible failing on the trunk/master jobsets.
We call this effort "Zero Hydra Failure".
I'd like to highlight that while it's great to focus on zero as our goal, it's essential to
have all deliverables that worked in the previous release also work here.
Please note the changes included in [RFC 85](https://github.com/NixOS/rfcs/blob/master/rfcs/0085-nixos-release-stablization.md).
Most significantly, branch off will occur on 2022 May 22; prior to that date, ZHF will be conducted
on master; after that date, ZHF will be conducted on the release channel using a backport
workflow similar to previous ZHFs.
## Jobsets
[trunk Jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk) (includes linux, darwin, and aarch64-linux builds)
[nixos/combined Jobset](https://hydra.nixos.org/jobset/nixos/trunk-combined) (includes many nixos tests)
<!--[nixos:release-22.05 Jobset](https://hydra.nixos.org/jobset/nixos/release-22.05)
[nixpkgs:nixpkgs-22.05-darwin Jobset](https://hydra.nixos.org/jobset/nixpkgs/nixpkgs-22.05-darwin)-->
## How to help (textual)
1. Select an evaluation of the [trunk jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk)

2. Find a failed job ❌️ , you can use the filter field to scope packages to your platform, or search for packages that are relevant to you.

Note: you can filter by architecture, e.g.: https://hydra.nixos.org/eval/1719540?filter=x86_64-linux&compare=1719463&full=#tabs-still-fail
3. Search to see whether a PR is already open for the package. If there is one, please help review it.
4. If there is no open PR, troubleshoot why it's failing and fix it.
5. Create a Pull Request with the fix targeting master, wait for it to be merged.
If your PR causes around 500+ rebuilds, it's preferred to target `staging` to avoid compute and storage churn.
6. (after 2022 May 22) Please follow [backporting steps](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#backporting-changes) and target the `release-22.05` branch if the original PR landed in `master` or `staging-22.05` if the PR landed in `staging`. Be sure to do `git cherry-pick -x <rev>` on the commits that landed in unstable. @jonringer created [a video covering the backport process](https://www.youtube.com/watch?v=4Zb3GpIc6vk&t=520s).
Always reference this issue in the body of your PR:
```
ZHF: #172160
```
Please ping @NixOS/nixos-release-managers on the PR.
If you're unable to because you're not a member of the NixOS org please ping @dasJ, @tomberek, @jonringer, @Mic92
## How can I easily check packages that I maintain?
I have created an experimental website that automatically crawls Hydra, lists packages by maintainer, and highlights the most important dependencies (failing packages with the most dependants).
You can reach it here: https://zh.fail
If you prefer the command-line way, you can also check failing packages that you maintain by running:
```
# from root of nixpkgs
nix-build maintainers/scripts/build.nix --argstr maintainer <name>
```
## New to nixpkgs?
- [Packaging a basic C application](https://www.youtube.com/watch?v=LiEqN8r-BRw)
- [Python nix packaging](https://www.youtube.com/watch?v=jXd-hkP4xnU)
- [Adding a package to nixpkgs](https://www.youtube.com/watch?v=fvj8H5yUKu8)
- other resources at: https://github.com/nix-community/awesome-nix
- https://nix.dev/tutorials/
## Packages that don't get fixed
The remaining packages will be marked as broken before the release (on the failing platforms).
You can do this like:
```nix
meta = {
# ref to issue/explanation
# `true` is for everything
broken = stdenv.isDarwin;
};
```
## Closing
This is a great way to help NixOS, and it is a great time for new contributors to start their nixpkgs adventure. :partying_face:
cc @NixOS/nixpkgs-committers @NixOS/nixpkgs-maintainers @NixOS/release-engineers
## Related Issues
- Timeline: #165792
- Feature Freeze Items: #167025
index: 1.0
text_combine:
ZERO Hydra Failures 22.05 - ## Mission
Every time we branch off a release we stabilize the release branch.
Our goal here is to get as few jobs as possible failing on the trunk/master jobsets.
We call this effort "Zero Hydra Failure".
I'd like to highlight that while it's great to focus on zero as our goal, it's essential to
have all deliverables that worked in the previous release also work here.
Please note the changes included in [RFC 85](https://github.com/NixOS/rfcs/blob/master/rfcs/0085-nixos-release-stablization.md).
Most significantly, branch off will occur on 2022 May 22; prior to that date, ZHF will be conducted
on master; after that date, ZHF will be conducted on the release channel using a backport
workflow similar to previous ZHFs.
## Jobsets
[trunk Jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk) (includes linux, darwin, and aarch64-linux builds)
[nixos/combined Jobset](https://hydra.nixos.org/jobset/nixos/trunk-combined) (includes many nixos tests)
<!--[nixos:release-22.05 Jobset](https://hydra.nixos.org/jobset/nixos/release-22.05)
[nixpkgs:nixpkgs-22.05-darwin Jobset](https://hydra.nixos.org/jobset/nixpkgs/nixpkgs-22.05-darwin)-->
## How to help (textual)
1. Select an evaluation of the [trunk jobset](https://hydra.nixos.org/jobset/nixpkgs/trunk)

2. Find a failed job ❌️ , you can use the filter field to scope packages to your platform, or search for packages that are relevant to you.

Note: you can filter by architecture, e.g.: https://hydra.nixos.org/eval/1719540?filter=x86_64-linux&compare=1719463&full=#tabs-still-fail
3. Search to see whether a PR is already open for the package. If there is one, please help review it.
4. If there is no open PR, troubleshoot why it's failing and fix it.
5. Create a Pull Request with the fix targeting master, wait for it to be merged.
If your PR causes around 500+ rebuilds, it's preferred to target `staging` to avoid compute and storage churn.
6. (after 2022 May 22) Please follow [backporting steps](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#backporting-changes) and target the `release-22.05` branch if the original PR landed in `master` or `staging-22.05` if the PR landed in `staging`. Be sure to do `git cherry-pick -x <rev>` on the commits that landed in unstable. @jonringer created [a video covering the backport process](https://www.youtube.com/watch?v=4Zb3GpIc6vk&t=520s).
Always reference this issue in the body of your PR:
```
ZHF: #172160
```
Please ping @NixOS/nixos-release-managers on the PR.
If you're unable to because you're not a member of the NixOS org please ping @dasJ, @tomberek, @jonringer, @Mic92
## How can I easily check packages that I maintain?
I have created an experimental website that automatically crawls Hydra, lists packages by maintainer, and highlights the most important dependencies (failing packages with the most dependants).
You can reach it here: https://zh.fail
If you prefer the command-line way, you can also check failing packages that you maintain by running:
```
# from root of nixpkgs
nix-build maintainers/scripts/build.nix --argstr maintainer <name>
```
## New to nixpkgs?
- [Packaging a basic C application](https://www.youtube.com/watch?v=LiEqN8r-BRw)
- [Python nix packaging](https://www.youtube.com/watch?v=jXd-hkP4xnU)
- [Adding a package to nixpkgs](https://www.youtube.com/watch?v=fvj8H5yUKu8)
- other resources at: https://github.com/nix-community/awesome-nix
- https://nix.dev/tutorials/
## Packages that don't get fixed
The remaining packages will be marked as broken before the release (on the failing platforms).
You can do this like:
```nix
meta = {
# ref to issue/explanation
# `true` is for everything
broken = stdenv.isDarwin;
};
```
## Closing
This is a great way to help NixOS, and it is a great time for new contributors to start their nixpkgs adventure. :partying_face:
cc @NixOS/nixpkgs-committers @NixOS/nixpkgs-maintainers @NixOS/release-engineers
## Related Issues
- Timeline: #165792
- Feature Freeze Items: #167025
label: process
text:
zero hydra failures mission every time we branch off a release we stabilize the release branch our goal here is to get as little as possible jobs failing on the trunk master jobsets we call this effort zero hydra failure i d like to heighten while it s great to focus on zero as our goal it s essentially to have all deliverables that worked in the previous release work here also please note the changes included in most significantly branch off will occur on may prior to that date zhf will be conducted on master after that date zhf will be conducted on the release channel using a backport workflow similar to previous zhfs jobsets includes linux darwin and linux builds includes many nixos tests how to help textual select an evaluation of the find a failed job ❌️ you can use the filter field to scope packages to your platform or search for packages that are relevant to you note you can filter for architecture by filtering for it eg search to see if a pr is not already open for the package it there is one please help review it if there is no open pr troubleshoot why it s failing and fix it create a pull request with the fix targeting master wait for it to be merged if your pr causes around rebuilds it s preferred to target staging to avoid compute and storage churn after may please follow and target the release branch if the original pr landed in master or staging if the pr landed in staging be sure to do git cherry pick x on the commits that landed in unstable jonringer created always reference this issue in the body of your pr zhf please ping nixos nixos release managers on the pr if you re unable to because you re not a member of the nixos org please ping dasj tomberek jonringer how can i easily check packages that i maintain i have created an experimental website that automatically crawls hydra and lists packages by maintainer and lists the most important dependencies failing packages with the most dependants you can reach it here if you prefer the command line way 
you can also check failing packages that you maintain by running from root of nixpkgs nix build maintainers scripts build nix argstr maintainer new to nixpkgs other resources at packages that don t get fixed the remaining packages will be marked as broken before the release on the failing platforms you can do this like nix meta ref to issue explanation true is for everything broken stdenv isdarwin closing this is a great way to help nixos and it is a great time for new contributors to start their nixpkgs adventure partying face cc nixos nixpkgs committers nixos nixpkgs maintainers nixos release engineers related issues timeline feature freeze items
binary_label: 1

---

Unnamed: 0: 757,779
id: 26,528,024,141
type: IssuesEvent
created_at: 2023-01-19 10:15:56
repo: pytorch/pytorch
repo_url: https://api.github.com/repos/pytorch/pytorch
action: closed
title: RuntimeError: could not construct a memory descriptor using a format tag
labels: high priority triage review oncall: jit module: mkldnn
body:
### 🐛 Describe the bug
Hi! When optimizing the model by `optimize_for_inference`, I encountered this `RuntimeError`. The model works fine in eager mode, and also could be traced. But when using `optimize_for_inference` after tracing, it leads to an error.
```python
import torch
import torch.nn as nn
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(
            1, 2, kernel_size=(509, 2), stride=3, padding=255, dilation=(1, 1014),
        )

    def forward(self, i0, i1):
        x = torch.max(i0, i1)
        y = self.conv1(x)
        return y
i0 = torch.zeros((1,1,2,505), dtype=torch.float32)
i1 = torch.zeros((1,2,505), dtype=torch.float32)
mod = MyModule()
out = mod(i0, i1)
print(f'eager: out = {out}') # <-- works fine
exported = torch.jit.trace(mod, [i0, i1])
exported = torch.jit.optimize_for_inference(exported) # <-- RuntimeError: could not construct a memory descriptor using a format tag
eout = exported(i0, i1)
print(f'JIT: eout = {eout}')
assert torch.allclose(out, eout)
```
Logs:
```python
eager: out = tensor([[[[-0.0269],
[-0.0269]],
[[-0.0094],
[-0.0094]]]], grad_fn=<ConvolutionBackward0>)
Traceback (most recent call last):
File "/home/colin/code/bug.py", line 25, in <module>
exported = torch.jit.optimize_for_inference(exported) # <-- RuntimeError: could not construct a memory descriptor using a format tag
File "/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/jit/_freeze.py", line 218, in optimize_for_inference
torch._C._jit_pass_optimize_for_inference(mod._c, other_methods)
RuntimeError: could not construct a memory descriptor using a format tag
```
### Versions
```python
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.12.1+cu116 pypi_0 pypi
[conda] torchaudio 0.12.1+cu116 pypi_0 pypi
[conda] torchvision 0.13.1+cu116 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin
index: 1.0
text_combine:
RuntimeError: could not construct a memory descriptor using a format tag - ### 🐛 Describe the bug
Hi! When optimizing the model by `optimize_for_inference`, I encountered this `RuntimeError`. The model works fine in eager mode, and also could be traced. But when using `optimize_for_inference` after tracing, it leads to an error.
```python
import torch
import torch.nn as nn
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(
            1, 2, kernel_size=(509, 2), stride=3, padding=255, dilation=(1, 1014),
        )

    def forward(self, i0, i1):
        x = torch.max(i0, i1)
        y = self.conv1(x)
        return y
i0 = torch.zeros((1,1,2,505), dtype=torch.float32)
i1 = torch.zeros((1,2,505), dtype=torch.float32)
mod = MyModule()
out = mod(i0, i1)
print(f'eager: out = {out}') # <-- works fine
exported = torch.jit.trace(mod, [i0, i1])
exported = torch.jit.optimize_for_inference(exported) # <-- RuntimeError: could not construct a memory descriptor using a format tag
eout = exported(i0, i1)
print(f'JIT: eout = {eout}')
assert torch.allclose(out, eout)
```
Logs:
```python
eager: out = tensor([[[[-0.0269],
[-0.0269]],
[[-0.0094],
[-0.0094]]]], grad_fn=<ConvolutionBackward0>)
Traceback (most recent call last):
File "/home/colin/code/bug.py", line 25, in <module>
exported = torch.jit.optimize_for_inference(exported) # <-- RuntimeError: could not construct a memory descriptor using a format tag
File "/home/colin/miniconda3/envs/py39/lib/python3.9/site-packages/torch/jit/_freeze.py", line 218, in optimize_for_inference
torch._C._jit_pass_optimize_for_inference(mod._c, other_methods)
RuntimeError: could not construct a memory descriptor using a format tag
```
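For reference, the shapes involved can be checked without PyTorch. The sketch below is plain-Python shape arithmetic only (not PyTorch's implementation): it shows that `torch.max(i0, i1)` broadcasts to `(1, 1, 2, 505)` and that the unusual kernel/dilation still yields a valid `(2, 1)` spatial output, which suggests the failure lies in the oneDNN freezing pass's memory-descriptor construction rather than in the shapes themselves.

```python
# Plain-Python shape arithmetic (not PyTorch code): checks that the graph's
# shapes are themselves valid.

def broadcast_shape(a, b):
    """Right-aligned NumPy/PyTorch-style broadcasting of two shapes."""
    n = max(len(a), len(b))
    a = (1,) * (n - len(a)) + tuple(a)
    b = (1,) * (n - len(b)) + tuple(b)
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise ValueError(f"incompatible dims {x} vs {y}")
        out.append(max(x, y))
    return tuple(out)

def conv2d_out(hw, kernel, stride, padding, dilation):
    """Conv2d output spatial size, per the torch.nn.Conv2d shape formula."""
    return tuple(
        (size + 2 * padding - dilation[i] * (kernel[i] - 1) - 1) // stride + 1
        for i, size in enumerate(hw)
    )

x_shape = broadcast_shape((1, 1, 2, 505), (1, 2, 505))  # torch.max(i0, i1)
y_hw = conv2d_out(x_shape[2:], (509, 2), 3, 255, (1, 1014))
print(x_shape, y_hw)  # (1, 1, 2, 505) (2, 1)
```

The `(2, 1)` output matches the `(1, 2, 2, 1)` eager-mode result above.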
### Versions
```python
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] numpy 1.23.3 pypi_0 pypi
[conda] torch 1.12.1+cu116 pypi_0 pypi
[conda] torchaudio 0.12.1+cu116 pypi_0 pypi
[conda] torchvision 0.13.1+cu116 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin
|
non_process
|
runtimeerror could not construct a memory descriptor using a format tag 🐛 describe the bug hi when optimizing the model by optimize for inference i encountered this runtimeerror the model works fine in eager mode and also could be traced but when using optimize for inference after tracing it leads to an error python import torch import torch nn as nn class mymodule nn module def init self super init self nn kernel size stride padding dilation def forward self x torch max y self x return y torch zeros dtype torch torch zeros dtype torch mod mymodule out mod print f eager out out works fine exported torch jit trace mod exported torch jit optimize for inference exported runtimeerror could not construct a memory descriptor using a format tag eout exported print f jit eout eout assert torch allclose out eout logs python eager out tensor grad fn traceback most recent call last file home colin code bug py line in exported torch jit optimize for inference exported runtimeerror could not construct a memory descriptor using a format tag file home colin envs lib site packages torch jit freeze py line in optimize for inference torch c jit pass optimize for inference mod c other methods runtimeerror could not construct a memory descriptor using a format tag versions python pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version main aug bit runtime python platform linux generic with is cuda available true cuda runtime version gpu models and configuration gpu nvidia geforce rtx gpu nvidia geforce rtx gpu nvidia geforce rtx nvidia driver version cudnn version could not collect hip runtime version n a miopen runtime version n a is xnnpack available true versions of relevant libraries numpy torch torchaudio torchvision numpy pypi pypi torch pypi pypi torchaudio pypi pypi torchvision pypi pypi cc ezyang gchanan gujinghui 
penghuicheng xiaobingsuper jianyuh vitalyfedyunin
| 0
|
825,453
| 31,390,725,302
|
IssuesEvent
|
2023-08-26 09:59:40
|
sosyal-app/frontend
|
https://api.github.com/repos/sosyal-app/frontend
|
closed
|
Login screen > Code
|
priority feature
|
Login function; confirm that it works.
Process and store the JWT tokens returned during login.
|
1.0
|
Login screen > Code - Login function; confirm that it works.
Process and store the JWT tokens returned during login.
|
non_process
|
login screen code login function confirming that it works processing and storing the jwt tokens returned during login
| 0
|
79,262
| 22,685,177,724
|
IssuesEvent
|
2022-07-04 13:29:13
|
hannobraun/Fornjot
|
https://api.github.com/repos/hannobraun/Fornjot
|
closed
|
Check documentation in CI build
|
good first issue type: development topic: build
|
The rustdoc documentation had accumulated a few warnings that I fixed (https://github.com/hannobraun/Fornjot/pull/270). This should be checked in the CI build. A check can be added to [`workflows/test.yml`](https://github.com/hannobraun/Fornjot/blob/main/.github/workflows/test.yml).
Labeling https://github.com/hannobraun/Fornjot/labels/good%20first%20issue, as this is a self-contained change that doesn't require knowledge of Fornjot.
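A possible shape for that check (illustrative only — the step name and flags are assumptions, not the merged workflow):

```yaml
# Hypothetical addition to .github/workflows/test.yml — promotes rustdoc
# warnings to errors so the CI build fails on them.
- name: Check documentation
  run: cargo doc --no-deps --document-private-items
  env:
    RUSTDOCFLAGS: -D warnings
```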
|
1.0
|
Check documentation in CI build - The rustdoc documentation had accumulated a few warnings that I fixed (https://github.com/hannobraun/Fornjot/pull/270). This should be checked in the CI build. A check can be added to [`workflows/test.yml`](https://github.com/hannobraun/Fornjot/blob/main/.github/workflows/test.yml).
Labeling https://github.com/hannobraun/Fornjot/labels/good%20first%20issue, as this is a self-contained change that doesn't require knowledge of Fornjot.
|
non_process
|
check documentation in ci build the rustdoc documentation had accumulated a few warnings that i fixed this should be checked in the ci build a check can be added to labeling as this is a self contained change that doesn t require knowledge of fornjot
| 0
|
228,880
| 17,483,539,505
|
IssuesEvent
|
2021-08-09 07:55:48
|
LimeChain/hedera-services
|
https://api.github.com/repos/LimeChain/hedera-services
|
closed
|
Matrix of APIs pros/cons whether we should implement it
|
documentation research
|
Document the pros & cons of the APIs and decide whether to implement it.
|
1.0
|
Matrix of APIs pros/cons whether we should implement it - Document the pros & cons of the APIs and decide whether to implement it.
|
non_process
|
matrix of apis pros cons whether we should implement it document the pros cons of the apis and decide whether to implement it
| 0
|
418,921
| 28,132,466,892
|
IssuesEvent
|
2023-04-01 02:15:16
|
harrisont/fastbuild-vscode
|
https://api.github.com/repos/harrisont/fastbuild-vscode
|
closed
|
Improve documentation
|
documentation
|
* Update the contributor-docs with a high-level architecture (lexer, parser, evaluator).
* Update the contributor-docs with a file-layout overview.
|
1.0
|
Improve documentation - * Update the contributor-docs with a high-level architecture (lexer, parser, evaluator).
* Update the contributor-docs with a file-layout overview.
|
non_process
|
improve documentation update the contributor docs with a high level architecture lexer parser evaluator update the contributor docs with a file layout overview
| 0
|
21,188
| 28,180,221,327
|
IssuesEvent
|
2023-04-04 01:25:26
|
ssytnt/papers
|
https://api.github.com/repos/ssytnt/papers
|
opened
|
Differential Angular Imaging for Material Recognition[Xue+(Rutgers Univ.), CVPR2017]
|
ImageProcessing Imaging
|
## Overview
Material estimation from (local) RGB images captured from multiple viewpoints.
## Method
- Uses a Two-Stream model architecture, which has produced strong results in action recognition.
- Images are captured outdoors in 5° steps using a robot equipped with a robotic arm.
## Results
- Adding a differential image to the input improves accuracy. Explicitly taking differences is especially effective when the amount of data is limited.
- A single differential image gives better results than multiple RGB images as input.

|
1.0
|
Differential Angular Imaging for Material Recognition[Xue+(Rutgers Univ.), CVPR2017] - ## Overview
Material estimation from (local) RGB images captured from multiple viewpoints.
## Method
- Uses a Two-Stream model architecture, which has produced strong results in action recognition.
- Images are captured outdoors in 5° steps using a robot equipped with a robotic arm.
## Results
- Adding a differential image to the input improves accuracy. Explicitly taking differences is especially effective when the amount of data is limited.
- A single differential image gives better results than multiple RGB images as input.

|
process
|
differential angular imaging for material recognition overview material estimation from local rgb images captured from multiple viewpoints method uses a two stream architecture that performs well in action recognition images captured outdoors in degree steps with a robot arm results adding differential images to the input improves accuracy explicitly taking differences is effective when data is limited a single differential image gives better results than multiple rgb images
| 1
|
4,440
| 7,312,884,057
|
IssuesEvent
|
2018-02-28 22:32:01
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Edge support
|
active-directory cxp in-process triaged
|
Hello
When will you add support for Edge?
We have Edge as default browser in our Windows 10 setup.
We cannot deploy this needed feature before Edge is supported.
This has been a problem for almost 6 months now. It makes no sense to me; what is going on?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3e1db2d3-8e7d-0c60-f5c5-827b6a673505
* Version Independent ID: f28d55d1-f974-7767-6c69-56b74f14ff50
* [Content](https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-sso)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/connect/active-directory-aadconnect-sso.md)
* Service: active-directory
|
1.0
|
Edge support - Hello
When will you add support for Edge?
We have Edge as default browser in our Windows 10 setup.
We cannot deploy this needed feature before Edge is supported.
This has been a problem for almost 6 months now. It makes no sense to me; what is going on?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3e1db2d3-8e7d-0c60-f5c5-827b6a673505
* Version Independent ID: f28d55d1-f974-7767-6c69-56b74f14ff50
* [Content](https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-sso)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/connect/active-directory-aadconnect-sso.md)
* Service: active-directory
|
process
|
edge support hello when will you add support for edge we have edge as default browser in our windows setup we cannot deploy this needed feature before edge is supported this has been a problem for almost month now it makes no sense to me what is going on document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service active directory
| 1
|
42,868
| 23,020,866,788
|
IssuesEvent
|
2022-07-22 04:34:03
|
keras-team/keras
|
https://api.github.com/repos/keras-team/keras
|
closed
|
NASNet error using custom input shape without top layer
|
type:bug/performance
|
https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/applications/nasnet.py#L174-L180
Shouldn't the parameter `require_flatten` use the `include_top` value instead of the constant `True`?
Hard-coding `True` rejects custom-shaped input whenever the size isn't the `default_size`, even with `include_top = False`.
Several other models, such as efficientnet_v2 and xception, all pass `include_top` here instead of `True`.
|
True
|
NASNet error using custom input shape without top layer - https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/applications/nasnet.py#L174-L180
Shouldn't the parameter `require_flatten` use the `include_top` value instead of the constant `True`?
Hard-coding `True` rejects custom-shaped input whenever the size isn't the `default_size`, even with `include_top = False`.
Several other models, such as efficientnet_v2 and xception, all pass `include_top` here instead of `True`.
|
non_process
|
nasnet error using custom input shape without top layer shouldn t the parameter require flatten uses include top value instead of constant true using true makes custom shaped input invalid when using size that isn t the default size with include top false checked several other models such as efficientnet and xception they all use include top as value instead of true
| 0
|
176,135
| 14,564,456,177
|
IssuesEvent
|
2020-12-17 05:09:26
|
sanskrit-lexicon/COLOGNE
|
https://api.github.com/repos/sanskrit-lexicon/COLOGNE
|
closed
|
pywork - segregate same and separate files
|
Documentation
|
A cursory look with `diff` shows the following differences between the actually used files of `pywork` folder between VCP and MD dictionaries.
# Same files
```
pywork/hw.py
pywork/hw0.py
pywork/hw2.py
pywork/hwparse.py # Only `dictcode = 'vcp'` line is different. This can be handled easily.
pywork/parseheadline.py
pywork/updateByLine.py # Except the `try except` in md
```
# Different
```
pywork/headword.py
pywork/hw1.py
pywork/hw_page.py
pywork/make_xml.py
pywork/update.py # Not seen in vcp/pywork.
```
@funderburkjim may like to tell whether the classification is OK or not.
*Question 1* - If the classification is OK, we can keep the SAME files at one place only. So that any update is directly applied to all the dictionaries at one go.
*Question 2* - Are the files in the DIFFERENT list so different that we may not have their unified version? There would be really different items at some place. Can we put them in some form of config file and make the scripts generic?
If these two items are answered satisfactorily, we can move ahead to unified code base (as far as possible).
|
1.0
|
pywork - segregate same and separate files - A cursory look with `diff` shows the following differences between the actually used files of `pywork` folder between VCP and MD dictionaries.
# Same files
```
pywork/hw.py
pywork/hw0.py
pywork/hw2.py
pywork/hwparse.py # Only `dictcode = 'vcp'` line is different. This can be handled easily.
pywork/parseheadline.py
pywork/updateByLine.py # Except the `try except` in md
```
# Different
```
pywork/headword.py
pywork/hw1.py
pywork/hw_page.py
pywork/make_xml.py
pywork/update.py # Not seen in vcp/pywork.
```
@funderburkjim may like to tell whether the classification is OK or not.
*Question 1* - If the classification is OK, we can keep the SAME files at one place only. So that any update is directly applied to all the dictionaries at one go.
*Question 2* - Are the files in the DIFFERENT list so different that we may not have their unified version? There would be really different items at some place. Can we put them in some form of config file and make the scripts generic?
If these two items are answered satisfactorily, we can move ahead to unified code base (as far as possible).
|
non_process
|
pywork segregate same and separate files a cursory look with diff shows the following differences between the actually used files of pywork folder between vcp and md dictionaries same files pywork hw py pywork py pywork py pywork hwparse py only dictcode vcp line is different this can be handled easily pywork parseheadline py pywork updatebyline py except the try except in md different pywork headword py pywork py pywork hw page py pywork make xml py pywork update py not seen in vcp pywork funderburkjim may like to tell whether the classification is ok or not question if the classification is ok we can keep the same files at one place only so that any update is directly applied to all the dictionaries at one go question are the files in the different list so different that we may not have their unified version there would be really different items at some place can we put them in some form of config file and make the scripts generic if these two items are answered satisfactorily we can move ahead to unified code base as far as possible
| 0
|
344,866
| 30,768,181,697
|
IssuesEvent
|
2023-07-30 15:09:21
|
oh-dab/server-api
|
https://api.github.com/repos/oh-dab/server-api
|
opened
|
[Feat] Textbook detail view (mistake-note lookup for all students) controller tests & implementation
|
feat test MISTAKENOTE
|
### WHAT-TO-DO
<!-- List the tasks to be done to pin down exactly what needs doing. -->
- [ ] controller unit tests
- [ ] controller implementation
|
1.0
|
[Feat] Textbook detail view (mistake-note lookup for all students) controller tests & implementation - ### WHAT-TO-DO
<!-- List the tasks to be done to pin down exactly what needs doing. -->
- [ ] controller unit tests
- [ ] controller implementation
|
non_process
|
textbook detail view mistake note lookup for all students controller test implementation what to do controller unit test controller implementation
| 0
|
15,656
| 19,846,876,032
|
IssuesEvent
|
2022-01-21 07:45:56
|
ooi-data/RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam_5
|
https://api.github.com/repos/ooi-data/RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam_5
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:45:56.012083.
## Details
Flow name: `RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam_5`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:45:56.012083.
## Details
Flow name: `RS03AXPS-PC03A-06-VADCPA301-streamed-vadcp_velocity_beam_5`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed vadcp velocity beam task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py 
line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
1,877
| 4,704,239,962
|
IssuesEvent
|
2016-10-13 10:44:36
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
opened
|
Adjust table row span after filtering
|
feature P2 preprocess
|
DITA source has tables with row spans and profiling attributes on rows. When publishing, the filter strips out some rows and the original `@morerows` value is no longer valid. Right now this fails with an error, because XSLT cannot deal with invalid tables like these.
One solution to fix this would be to write the custom table coordinate attributes during initial parse *and* replace the `@morerows` with `@dita:rowspan-end`. This is similar to what CALS does with column span, specifying span end column name instead of span length. After filtering this could be converted back to `@morerows` by calculating the length with the current coordinates.
This would also work when conref push is used to push new rows into a rows span.
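A rough sketch of the proposed round trip, with plain dicts standing in for table rows (attribute names like `row_end` are illustrative, not the actual `@dita:rowspan-end` serialization):

```python
# Toy model of the proposed round trip; dict keys like "row_end" are
# illustrative, not the actual @dita:rowspan-end serialization.

def to_span_end(rows):
    """Replace each cell's @morerows count with the index of its last row."""
    return [
        [{"row_end": i + cell["morerows"]} for cell in row]
        for i, row in enumerate(rows)
    ]

def to_morerows(coord_rows, kept):
    """After filtering, recompute @morerows from the surviving coordinates."""
    pos = {orig: new for new, orig in enumerate(kept)}
    return [
        [
            {
                "morerows": max(
                    [pos[j] for j in kept if j <= cell["row_end"]] or [new_i]
                ) - new_i
            }
            for cell in coord_rows[orig_i]
        ]
        for new_i, orig_i in enumerate(kept)
    ]

# A cell spanning rows 0-2 keeps a consistent span after row 1 is filtered out:
coords = to_span_end([[{"morerows": 2}], [{"morerows": 0}], [{"morerows": 0}]])
filtered = to_morerows(coords, kept=[0, 2])  # row 1 removed by profiling
```

Because the span end is stored as a coordinate rather than a length, rows pushed into the span by conref would also be picked up when the length is recomputed.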
|
1.0
|
Adjust table row span after filtering - DITA source has tables with row spans and profiling attributes on rows. When publishing, the filter strips out some rows and the original `@morerows` value is no longer valid. Right now this fails with an error, because XSLT cannot deal with invalid tables like these.
One solution to fix this would be to write the custom table coordinate attributes during initial parse *and* replace the `@morerows` with `@dita:rowspan-end`. This is similar to what CALS does with column span, specifying span end column name instead of span length. After filtering this could be converted back to `@morerows` by calculating the length with the current coordinates.
This would also work when conref push is used to push new rows into a rows span.
|
process
|
adjust table row span after filtering dita source has tables with row spans and profiling attributes on rows when publishing the filter strips out some rows and the original morerows is no longer valid right now this will fail in error because xslt cannot deal with invalid tables like these one solution to fix this would be to write the custom table coordinate attributes during initial parse and replace the morerows with dita rowspan end this is similar to what cals does with column span specifying span end column name instead of span length after filtering this could be converted back to morerows by calculating the length with the current coordinates this would also work when conref push is used to push new rows into a rows span
| 1
|
15,457
| 19,669,288,801
|
IssuesEvent
|
2022-01-11 04:21:17
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
ffmpeg pushes RTMP to lal; HLS playback stalls
|
#Bug *In process
|
1. ffmpeg.exe -i http://devimages.apple.com.edgekey.net/streaming/examples/bipbop_4x3/gear1/prog_index.m3u8 -c:v copy -an -f flv rtmp://127.0.0.1/live/test110
2. ffplay.exe -i rtmp://127.0.0.1/live/test110 (plays normally)
3. ffplay.exe -i rtmp://127.0.0.1:8080/hls/test110.m3u8 (stalls after playing for a while)
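For context on the "invalid video message length" error in the log below: under the FLV video-tag layout, the 5-byte payload `17 02 00 00 00` decodes as an AVC end-of-sequence signal, which ffmpeg emits on teardown — a plausible (but unconfirmed) reason lal rejects it as an invalid video message. A minimal decoding sketch:

```python
# Decodes an FLV/RTMP video-tag payload header (per the FLV spec's VIDEODATA
# layout), applied to the 5-byte payload from the log: 17 02 00 00 00.

def parse_flv_video_header(payload: bytes):
    frame_type = payload[0] >> 4      # 1 = keyframe
    codec_id = payload[0] & 0x0F      # 7 = AVC (H.264)
    avc_packet_type = payload[1]      # 0 = seq header, 1 = NALU, 2 = end of sequence
    cts = int.from_bytes(payload[2:5], "big")  # composition time offset
    return frame_type, codec_id, avc_packet_type, cts

print(parse_flv_video_header(bytes.fromhex("1702000000")))  # (1, 7, 2, 0)
```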
Log output from the time HLS playback stalls:
```
2022/01/09 12:03:01.260854 DEBUG group size=1 - server_manager.go:242
2022/01/09 12:03:01.260854 DEBUG {"stream_name":"test110","audio_codec":"","video_codec":"H264","video_width":400,"video_height":300,"pub":{"protocol":"RTMP","session_id":"RTMPPUBSUB1","remote_addr":"127.0.0.1:52571","start_time":"2022-01-09 12:00:34.993","read_bytes_sum":44299697,"wrote_bytes_sum":3500,"bitrate":2822,"read_bitrate":2822,"write_bitrate":0},"subs":null,"pull":{"protocol":"","session_id":"","remote_addr":"","start_time":"","read_bytes_sum":0,"wrote_bytes_sum":0,"bitrate":0,"read_bitrate":0,"write_bitrate":0}} - server_manager.go:245
2022/01/09 12:03:18.917768 ERROR [STREAMER1] invalid video message length. header={Csid:6 MsgLen:5 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1800029}, payload=00000000 17 02 00 00 00 |.....|
- streamer.go:101
2022/01/09 12:03:18.919773 DEBUG [RTMPPUBSUB1] read command message, ignore it. cmd=FCUnpublish, header={Csid:3 MsgLen:34 MsgTypeId:20 MsgStreamId:0 TimestampAbs:0}, b=23, hex=00000000 02 00 0b 46 43 55 6e 70 75 62 6c 69 73 68 00 40 |...FCUnpublish.@|
00000010 18 00 00 00 00 00 00 05 02 00 07 74 65 73 74 31 |...........test1|
00000020 31 30 |10|
- server_session.go:323
2022/01/09 12:03:18.920767 DEBUG [RTMPPUBSUB1] read command message, ignore it. cmd=deleteStream, header={Csid:3 MsgLen:34 MsgTypeId:20 MsgStreamId:0 TimestampAbs:0}, b=24, hex=00000000 02 00 0c 64 65 6c 65 74 65 53 74 72 65 61 6d 00 |...deleteStream.|
00000010 40 1c 00 00 00 00 00 00 05 00 3f f0 00 00 00 00 |@.........?.....|
00000020 00 00 |..|
- server_session.go:323
2022/01/09 12:03:18.928763 DEBUG [NAZACONN1] close once. err=EOF - connection.go:504
2022/01/09 12:03:18.931767 INFO [RTMPPUBSUB1] rtmp loop done. err=EOF - server.go:69
2022/01/09 12:03:18.933762 DEBUG [GROUP1] [RTMPPUBSUB1] del rtmp PubSession from group. - group.go:652
2022/01/09 12:03:18.934761 INFO [HLSMUXER1] lifecycle dispose hls muxer. - muxer.go:129
2022/01/09 12:03:19.260662 INFO erase empty group. [GROUP1] - server_manager.go:230
2022/01/09 12:03:19.260662 INFO [GROUP1] lifecycle dispose group. - group.go:262
2022/01/09 12:03:31.260661 DEBUG group size=0 - server_manager.go:242
2022/01/09 12:04:01.261023 DEBUG group size=0 - server_manager.go:242
```
|
1.0
|
ffmpeg pushes rtmp to lal; hls playback stalls - 1. ffmpeg.exe -i http://devimages.apple.com.edgekey.net/streaming/examples/bipbop_4x3/gear1/prog_index.m3u8 -c:v copy -an -f flv rtmp://127.0.0.1/live/test110
2. ffplay.exe -i rtmp://127.0.0.1/live/test110 (plays normally)
3. ffplay.exe -i rtmp://127.0.0.1:8080/hls/test110.m3u8 (stalls after playing for a while)
Log output when the hls playback stalls:
```
2022/01/09 12:03:01.260854 DEBUG group size=1 - server_manager.go:242
2022/01/09 12:03:01.260854 DEBUG {"stream_name":"test110","audio_codec":"","video_codec":"H264","video_width":400,"video_height":300,"pub":{"protocol":"RTMP","session_id":"RTMPPUBSUB1","remote_addr":"127.0.0.1:52571","start_time":"2022-01-09 12:00:34.993","read_bytes_sum":44299697,"wrote_bytes_sum":3500,"bitrate":2822,"read_bitrate":2822,"write_bitrate":0},"subs":null,"pull":{"protocol":"","session_id":"","remote_addr":"","start_time":"","read_bytes_sum":0,"wrote_bytes_sum":0,"bitrate":0,"read_bitrate":0,"write_bitrate":0}} - server_manager.go:245
2022/01/09 12:03:18.917768 ERROR [STREAMER1] invalid video message length. header={Csid:6 MsgLen:5 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1800029}, payload=00000000 17 02 00 00 00 |.....|
- streamer.go:101
2022/01/09 12:03:18.919773 DEBUG [RTMPPUBSUB1] read command message, ignore it. cmd=FCUnpublish, header={Csid:3 MsgLen:34 MsgTypeId:20 MsgStreamId:0 TimestampAbs:0}, b=23, hex=00000000 02 00 0b 46 43 55 6e 70 75 62 6c 69 73 68 00 40 |...FCUnpublish.@|
00000010 18 00 00 00 00 00 00 05 02 00 07 74 65 73 74 31 |...........test1|
00000020 31 30 |10|
- server_session.go:323
2022/01/09 12:03:18.920767 DEBUG [RTMPPUBSUB1] read command message, ignore it. cmd=deleteStream, header={Csid:3 MsgLen:34 MsgTypeId:20 MsgStreamId:0 TimestampAbs:0}, b=24, hex=00000000 02 00 0c 64 65 6c 65 74 65 53 74 72 65 61 6d 00 |...deleteStream.|
00000010 40 1c 00 00 00 00 00 00 05 00 3f f0 00 00 00 00 |@.........?.....|
00000020 00 00 |..|
- server_session.go:323
2022/01/09 12:03:18.928763 DEBUG [NAZACONN1] close once. err=EOF - connection.go:504
2022/01/09 12:03:18.931767 INFO [RTMPPUBSUB1] rtmp loop done. err=EOF - server.go:69
2022/01/09 12:03:18.933762 DEBUG [GROUP1] [RTMPPUBSUB1] del rtmp PubSession from group. - group.go:652
2022/01/09 12:03:18.934761 INFO [HLSMUXER1] lifecycle dispose hls muxer. - muxer.go:129
2022/01/09 12:03:19.260662 INFO erase empty group. [GROUP1] - server_manager.go:230
2022/01/09 12:03:19.260662 INFO [GROUP1] lifecycle dispose group. - group.go:262
2022/01/09 12:03:31.260661 DEBUG group size=0 - server_manager.go:242
2022/01/09 12:04:01.261023 DEBUG group size=0 - server_manager.go:242
```
|
process
|
ffmpeg pushes rtmp to lal hls playback stalls ffmpeg exe i c v copy an f flv rtmp live ffplay exe i rtmp live (plays normally) ffplay exe i rtmp hls (stalls after playing for a while) log when hls playback stalls debug group size server manager go debug stream name audio codec video codec video width video height pub protocol rtmp session id remote addr start time read bytes sum wrote bytes sum bitrate read bitrate write bitrate subs null pull protocol session id remote addr start time read bytes sum wrote bytes sum bitrate read bitrate write bitrate server manager go error invalid video message length header csid msglen msgtypeid msgstreamid timestampabs payload streamer go debug read command message ignore it cmd fcunpublish header csid msglen msgtypeid msgstreamid timestampabs b hex fcunpublish server session go debug read command message ignore it cmd deletestream header csid msglen msgtypeid msgstreamid timestampabs b hex deletestream server session go debug close once err eof connection go info rtmp loop done err eof server go debug del rtmp pubsession from group group go info lifecycle dispose hls muxer muxer go info erase empty group server manager go info lifecycle dispose group group go debug group size server manager go debug group size server manager go
| 1
|
9,260
| 8,576,294,771
|
IssuesEvent
|
2018-11-12 19:55:42
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Not sure if you intended to give two links to the same article above?
|
cxp doc-provided in-progress service-fabric/svc triaged
|
Is listed twice above, it makes sense but wanted to make sure that was your intent
https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-parameters
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 273ff0d0-2b07-0157-f090-fe6a4c8da21b
* Version Independent ID: 0039fbc1-0774-8a6b-0222-07df80ce216a
* Content: [Configure the upgrade of a Service Fabric application](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-visualstudio-configure-upgrade#parameters-needed-to-upgrade)
* Content Source: [articles/service-fabric/service-fabric-visualstudio-configure-upgrade.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-visualstudio-configure-upgrade.md)
* Service: **service-fabric**
* GitHub Login: @MikkelHegn
* Microsoft Alias: **mikkelhegn**
|
1.0
|
Not sure if you intended to give two links to the same article above? - Is listed twice above, it makes sense but wanted to make sure that was your intent
https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-parameters
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 273ff0d0-2b07-0157-f090-fe6a4c8da21b
* Version Independent ID: 0039fbc1-0774-8a6b-0222-07df80ce216a
* Content: [Configure the upgrade of a Service Fabric application](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-visualstudio-configure-upgrade#parameters-needed-to-upgrade)
* Content Source: [articles/service-fabric/service-fabric-visualstudio-configure-upgrade.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-visualstudio-configure-upgrade.md)
* Service: **service-fabric**
* GitHub Login: @MikkelHegn
* Microsoft Alias: **mikkelhegn**
|
non_process
|
not sure if you intended to give two links to the same article above is listed twice above it makes sense but wanted to make sure that was your intent document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service fabric github login mikkelhegn microsoft alias mikkelhegn
| 0
|
8,516
| 11,698,860,793
|
IssuesEvent
|
2020-03-06 14:38:53
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
API Proposal: WaitForExitAsync for System.Diagnostics.Process
|
api-approved area-System.Diagnostics.Process
|
Per discussion with @krwq, I'm splitting off an API proposal for a Task-based `WaitForExitAsync` method on `System.Diagnostics.Process`, since it is a building block API that is simpler to understand and use correctly, while the original issue (#12039) can continue to incubate additional convenience APIs.
Here's the proposed API:
```csharp
public partial class Process
{
public Task WaitForExitAsync(CancellationToken cancellationToken = default) { throw null; }
}
```
The method takes a single, optional parameter, a `CancellationToken`, to support timeouts, which matches `WaitForExit(int timeout)` semantics, while following Task-based programming best practices of allowing cancellation.
### API Rationale / Design notes
API doesn't expose an `WaitForExitAsync(int timeout)` overload since `CancellationToken` has a constructor that takes a timeout, and there's `CancellationToken.TimeoutAfter()`.
---
While the synchronous `WaitForExit` returns a `bool` to determine if the process exited or not, the async version returns a plain `Task`. Callers can determine that the process exited in two ways:
1. By checking the task's `IsCompletedSuccessfully`
2. By checking the process' `HasExited`
If the wait is cancelled, the caller can determine that in two ways:
1. If using `await`, the task will throw a `TaskCanceledException`
2. By checking the task's `IsCanceled`
---
Callers that want to emulate the old API more closely can add an extension method as follows:
```csharp
public static class ProcessExtensions
{
public static Task<bool> WaitForExitWithTimeoutAsync(this Process process, int timeout)
{
using (var cts = new CancellationTokenSource(timeout))
{
try
{
await process.WaitForExitAsync(cts.Token).ConfigureAwait(false);
return process.HasExited;
}
catch (OperationCanceledException)
{
return false;
}
}
}
}
```
This API isn't provided by the framework because:
1. It's targeted towards users converting code, rather than writing new idiomatic code
2. It increased API surface area without clear value
3. It's easy for developers to add if needed
---
Because the method internally relies on the `Exited` event and there's a potential race between setting `EnableRaisingEvents` and the process exiting, I introduced a new instance of `InvalidOperationException` informing the caller to set `EnableRaisingEvents` to make this explicit. If this is deemed undesirable coupling I can move the set into this method, at the expense of possibly throwing and catching the `InvalidOperationException` from `GetProcessHandle()`, which I'd rather avoid if possible.
### Implementation
I have an implementation available here with tests to show sample usage: [feature/34689-process-waitforexitasync](https://github.com/dotnet/corefx/compare/master...MattKotsenas:feature/34689-process-waitforexitasync).
|
1.0
|
API Proposal: WaitForExitAsync for System.Diagnostics.Process - Per discussion with @krwq, I'm splitting off an API proposal for a Task-based `WaitForExitAsync` method on `System.Diagnostics.Process`, since it is a building block API that is simpler to understand and use correctly, while the original issue (#12039) can continue to incubate additional convenience APIs.
Here's the proposed API:
```csharp
public partial class Process
{
public Task WaitForExitAsync(CancellationToken cancellationToken = default) { throw null; }
}
```
The method takes a single, optional parameter, a `CancellationToken`, to support timeouts, which matches `WaitForExit(int timeout)` semantics, while following Task-based programming best practices of allowing cancellation.
### API Rationale / Design notes
API doesn't expose an `WaitForExitAsync(int timeout)` overload since `CancellationToken` has a constructor that takes a timeout, and there's `CancellationToken.TimeoutAfter()`.
---
While the synchronous `WaitForExit` returns a `bool` to determine if the process exited or not, the async version returns a plain `Task`. Callers can determine that the process exited in two ways:
1. By checking the task's `IsCompletedSuccessfully`
2. By checking the process' `HasExited`
If the wait is cancelled, the caller can determine that in two ways:
1. If using `await`, the task will throw a `TaskCanceledException`
2. By checking the task's `IsCanceled`
---
Callers that want to emulate the old API more closely can add an extension method as follows:
```csharp
public static class ProcessExtensions
{
public static Task<bool> WaitForExitWithTimeoutAsync(this Process process, int timeout)
{
using (var cts = new CancellationTokenSource(timeout))
{
try
{
await process.WaitForExitAsync(cts.Token).ConfigureAwait(false);
return process.HasExited;
}
catch (OperationCanceledException)
{
return false;
}
}
}
}
```
This API isn't provided by the framework because:
1. It's targeted towards users converting code, rather than writing new idiomatic code
2. It increased API surface area without clear value
3. It's easy for developers to add if needed
---
Because the method internally relies on the `Exited` event and there's a potential race between setting `EnableRaisingEvents` and the process exiting, I introduced a new instance of `InvalidOperationException` informing the caller to set `EnableRaisingEvents` to make this explicit. If this is deemed undesirable coupling I can move the set into this method, at the expense of possibly throwing and catching the `InvalidOperationException` from `GetProcessHandle()`, which I'd rather avoid if possible.
### Implementation
I have an implementation available here with tests to show sample usage: [feature/34689-process-waitforexitasync](https://github.com/dotnet/corefx/compare/master...MattKotsenas:feature/34689-process-waitforexitasync).
|
process
|
api proposal waitforexitasync for system diagnostics process per discussion with krwq i m splitting off an api proposal for a task based waitforexitasync method on system diagnostics process since it is a building block api that is simpler to understand and use correctly while the original issue can continue to incubate additional convenience apis here s the proposed api csharp public partial class process public task waitforexitasync cancellationtoken cancellationtoken default throw null the method takes a single optional parameter a cancellationtoken to support timeouts which matches waitforexit int timeout semantics while following task based programming best practices of allowing cancellation api rationale design notes api doesn t expose an waitforexitasync int timeout overload since cancellationtoken has a constructor that takes a timeout and there s cancellationtoken timeoutafter while the synchronous waitforexit returns a bool to determine if the process exited or not the async version returns a plain task callers can determine that the process exited in two ways by checking the task s iscompletedsuccessfully by checking the process hasexited if the wait is cancelled the caller can determine that in two ways if using await the task will throw a taskcanceledexception by checking the task s iscanceled callers that want to emulate the old api more closely can add an extension method as follows csharp public static class processextensions public static task waitforexitwithtimeoutasync this process process int timeout using var cts new cancellationtokensource timeout try await process waitforexitasync cts token configureawait false return process hasexited catch operationcanceledexception return false this api isn t provided by the framework because it s targeted towards users converting code rather than writing new idiomatic code it increased api surface area without clear value it s easy for developers to add if needed because the method internally relies on the exited event and there s a potential race between setting enableraisingevents and the process exiting i introduced a new instance of invalidoperationexception informing the caller to set enableraisingevents to make this explicit if this is deemed undesirable coupling i can move the set into this method at the expense of possibly throwing and catching the invalidoperationexception from getprocesshandle which i d rather avoid if possible implementation i have an implementation available here with tests to show sample usage
| 1
|
14,071
| 16,904,416,409
|
IssuesEvent
|
2021-06-24 04:44:31
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
.Net 6 DI Resolving multi-instance services bug
|
area-System.ServiceProcess untriaged
|
### Describe the bug
.NET 6 Preview has a bug when trying to register multiple services implementing the same interface. In ConfigureServices(IServiceCollection services), for example:
` services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService1), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService2), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService3), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService4), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService5), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService6), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService7), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService8), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService9), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService10), ServiceLifetime.Transient));`
Here TestService1=>TestService10 are services which implement ITestService.
When we try to resolve DI in controller or in Configure() of Startup, Ex:
Configure(IApplicationBuilder app, IWebHostEnvironment env, IEnumerable<ITestService> testServices)
What we receive in testServices is not correct:

What we expect: testServices contains all instance of each kind of services: TestService1 => TestService10.
But actually got: 2 instances of (TestService8, TestService9, TestService10), 1 instance of (TestService1, TestService5, TestService6, TestService7) & 0 instance of (TestService2, TestService3, TestService4).
This bug does not happen with .Net5 version.
### To Reproduce
Just follow the simple steps already described in "Describe the bug" above.
### Further technical details
- ASP.NET Core version .Net 6 Preview 5
(But the same problem happen with all .Net 6 version Preview maybe).
|
1.0
|
.Net 6 DI Resolving multi-instance services bug -
### Describe the bug
.NET 6 Preview has a bug when trying to register multiple services implementing the same interface. In ConfigureServices(IServiceCollection services), for example:
` services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService1), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService2), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService3), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService4), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService5), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService6), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService7), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService8), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService9), ServiceLifetime.Transient));
services.Add(new ServiceDescriptor(typeof(ITestService), typeof(TestService10), ServiceLifetime.Transient));`
Here TestService1=>TestService10 are services which implement ITestService.
When we try to resolve DI in controller or in Configure() of Startup, Ex:
Configure(IApplicationBuilder app, IWebHostEnvironment env, IEnumerable<ITestService> testServices)
What we receive in testServices is not correct:

What we expect: testServices contains all instance of each kind of services: TestService1 => TestService10.
But actually got: 2 instances of (TestService8, TestService9, TestService10), 1 instance of (TestService1, TestService5, TestService6, TestService7) & 0 instance of (TestService2, TestService3, TestService4).
This bug does not happen with .Net5 version.
### To Reproduce
Just follow the simple steps already described in "Describe the bug" above.
### Further technical details
- ASP.NET Core version .Net 6 Preview 5
(But the same problem happen with all .Net 6 version Preview maybe).
|
process
|
net di resolving multi instance services bug describe the bug net preview have bug when try to register multi service implement same interface in configureservices iservicecollection services for example services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient services add new servicedescriptor typeof itestservice typeof servicelifetime transient here are services which implement itestservice when we try to resolve di in controller or in configure of startup ex configure iapplicationbuilder app iwebhostenvironment env ienumerable testservices what we receive in testservices is not correct what we expect testservices contains all instance of each kind of services but actually got instances of instance of instance of this bug does not happen with version to reproduce just follow simple steps already describe in describe the bug above further technical details asp net core version net preview but the same problem happen with all net version preview maybe
| 1
|
157,774
| 12,390,838,491
|
IssuesEvent
|
2020-05-20 11:25:08
|
Students-of-the-city-of-Kostroma/trpo_automation
|
https://api.github.com/repos/Students-of-the-city-of-Kostroma/trpo_automation
|
closed
|
Testing the Sheet class
|
Google Sprint 10 Story Testing
|
Testing the class in accordance with release v.0.0.0.6
By writing unit tests
|
1.0
|
Testing the Sheet class - Testing the class in accordance with release v.0.0.0.6
By writing unit tests
|
non_process
|
testing the sheet class testing the class in accordance with release v by writing unit tests
| 0
|
250,660
| 18,901,819,974
|
IssuesEvent
|
2021-11-16 02:26:38
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
opened
|
Display whether fields are nullable in the OpenAPI documentation
|
bug area: documentation (api and integrations) priority: high
|
It appears in our `zulip.yaml` file that we have various fields, like the `stream_weekly_traffic` part of the [register response](https://zulip.com/api/register-queue), that are correctly marked in the OpenAPI field as `nullable: true`. However, we don't display that fact when displaying the type as e.g. "integer" for one of these fields. We should fix that.
See https://chat.zulip.org/#narrow/stream/378-api-design/topic/.22type.22.20annotations.20and.20.60null.60/near/1282157 for background and discussion on how to display the type.
|
1.0
|
Display whether fields are nullable in the OpenAPI documentation - It appears in our `zulip.yaml` file that we have various fields, like the `stream_weekly_traffic` part of the [register response](https://zulip.com/api/register-queue), that are correctly marked in the OpenAPI field as `nullable: true`. However, we don't display that fact when displaying the type as e.g. "integer" for one of these fields. We should fix that.
See https://chat.zulip.org/#narrow/stream/378-api-design/topic/.22type.22.20annotations.20and.20.60null.60/near/1282157 for background and discussion on how to display the type.
|
non_process
|
display whether fields are nullable in the openapi documentation it appears in our zulip yaml file that we have various fields like the stream weekly traffic part of the that are correctly marked in the openapi field as nullable true however we don t display that fact when displaying the type as e g integer for one of these fields we should fix that see for background and discussion on how to display the type
| 0
|
20,952
| 27,812,770,102
|
IssuesEvent
|
2023-03-18 10:21:51
|
nextflow-io/nextflow
|
https://api.github.com/repos/nextflow-io/nextflow
|
closed
|
`path from ...collect` on a channel length 1 leads to unexpected outcome
|
stale lang/processes
|
## Bug report
I am using the pattern shown in https://nextflow-io.github.io/patterns/index.html#_process_all_outputs_altogether to merge files generated by an upstream process. In some cases this process only generates one file.
The pattern suggests using
```
process bar {
echo true
input:
file '*.fq' from unzipped_ch.collect()
"""
cat *.fq
"""
}
```
For a single file, this file will then be named `.fq`, as laid out by the rules in "Multiple input files" https://www.nextflow.io/docs/latest/process.html#multiple-input-files. This works for `cat *.fq` but by default not for `ls *.fq` (or, in my case, Python's `glob.glob("*.fq")`) to collect all those files.
While everything is working exactly as specified (and so there is no true bug), it might be useful to specify `file_*.fq` in the example pattern and to highlight the issue.
### Steps to reproduce the problem
```
indata = Channel.from(1)
process make_config {
input:
val indata
output:
path "*.csv" mode flatten into csv_out
script:
"""
for N in `seq ${indata}`
do
echo \$N > \$N.csv
done
"""
}
process merge_config {
input:
path "*.csv" from csv_out.collect()
output:
stdout loglog
script:
'''
ls *.csv
'''
}
```
This lists nothing, because the file is named `.csv` and thus invisible by standard shell globbing rules. Everything is working correctly, but the outcome is surprising.
OS: Linux
|
1.0
|
`path from ...collect` on a channel length 1 leads to unexpected outcome - ## Bug report
I am using the pattern shown in https://nextflow-io.github.io/patterns/index.html#_process_all_outputs_altogether to merge files generated by an upstream process. In some cases this process only generates one file.
The pattern suggests using
```
process bar {
echo true
input:
file '*.fq' from unzipped_ch.collect()
"""
cat *.fq
"""
}
```
For a single file, this file will then be named `.fq`, as laid out by the rules in "Multiple input files" https://www.nextflow.io/docs/latest/process.html#multiple-input-files. This works for `cat *.fq` but by default not for `ls *.fq` (or, in my case, Python's `glob.glob("*.fq")`) to collect all those files.
While everything is working exactly as specified (and so there is no true bug), it might be useful to specify `file_*.fq` in the example pattern and to highlight the issue.
### Steps to reproduce the problem
```
indata = Channel.from(1)
process make_config {
input:
val indata
output:
path "*.csv" mode flatten into csv_out
script:
"""
for N in `seq ${indata}`
do
echo \$N > \$N.csv
done
"""
}
process merge_config {
input:
path "*.csv" from csv_out.collect()
output:
stdout loglog
script:
'''
ls *.csv
'''
}
```
This lists nothing, because the file is named `.csv` and thus invisible by standard shell globbing rules. Everything is working correctly, but the outcome is surprising.
OS: Linux
|
process
|
path from collect on a channel length leads to unexpected outcome bug report i am using the pattern shown in to merge files generated by an upstream process in some cases this process only generates one file the pattern suggests using process bar echo true input file fq from unzipped ch collect cat fq for a single file this file will then be named fq as lined out by the rules in the multiple input files this works for cat fq but by default not for ls fq or in my case python s glob glob fq to collect all those files while everything is working exactly as specified and so there is no true bug it might be useful to specify file fq in the example pattern and to highlight the issue steps to reproduce the problem indata channel from process make config input val indata output path csv mode flatten into csv out script for n in seq indata do echo n n csv done process merge config input path csv from csv out collect output stdout loglog script ls csv this lists nothing because the file is named csv and thus invisible by standard rules everything is working correctly but the outcome is surprising os linux
| 1
|
30,041
| 14,381,696,016
|
IssuesEvent
|
2020-12-02 06:00:42
|
PowerShell/PowerShell
|
https://api.github.com/repos/PowerShell/PowerShell
|
closed
|
Reduced Performance in PowerShell 7.1?
|
Issue-Question WG-Engine-Performance
|
<!--
For Windows PowerShell 5.1 issues, suggestions, or feature requests please use the following link instead:
Windows PowerShell [UserVoice](https://windowsserver.uservoice.com/forums/301869-powershell)
This repository is **ONLY** for PowerShell Core 6 and PowerShell 7+ issues.
- Make sure you are able to repro it on the [latest released version](https://github.com/PowerShell/PowerShell/releases)
- Search the existing issues.
- Refer to the [FAQ](https://github.com/PowerShell/PowerShell/blob/master/docs/FAQ.md).
- Refer to the [known issues](https://docs.microsoft.com/powershell/scripting/whats-new/known-issues-ps6).
-->
A script of mine that reliably took 26 or 27 minutes to run in 7.0 now does not complete within 2 hours in 7.1.
The script uses "ForEach-Object" and "Where-Object" on many lines of text to remove lines matching a criteria in a 85MB text file.
Maybe a performance bottleneck in 7.1?
-------------
**Repro script and profiling traces** (updated by @daxian-dbw)
- Repro script shared in https://github.com/PowerShell/PowerShell/issues/14087#issuecomment-728635880: [DEMO.zip](https://github.com/PowerShell/PowerShell/files/5550773/DEMO.zip)
- profiling traces shared in https://github.com/PowerShell/PowerShell/issues/14087#issuecomment-728834643: https://ru.files.fm/u/nnz6vztgj
|
True
|
Reduced Performance in PowerShell 7.1? - <!--
For Windows PowerShell 5.1 issues, suggestions, or feature requests please use the following link instead:
Windows PowerShell [UserVoice](https://windowsserver.uservoice.com/forums/301869-powershell)
This repository is **ONLY** for PowerShell Core 6 and PowerShell 7+ issues.
- Make sure you are able to repro it on the [latest released version](https://github.com/PowerShell/PowerShell/releases)
- Search the existing issues.
- Refer to the [FAQ](https://github.com/PowerShell/PowerShell/blob/master/docs/FAQ.md).
- Refer to the [known issues](https://docs.microsoft.com/powershell/scripting/whats-new/known-issues-ps6).
-->
A script of mine that reliably took 26 or 27 minutes to run in 7.0 now does not complete within 2 hours in 7.1.
The script uses "ForEach-Object" and "Where-Object" on many lines of text to remove lines matching a criterion in an 85MB text file.
Maybe a performance bottleneck in 7.1?
-------------
**Repro script and profiling traces** (updated by @daxian-dbw)
- Repro script shared in https://github.com/PowerShell/PowerShell/issues/14087#issuecomment-728635880: [DEMO.zip](https://github.com/PowerShell/PowerShell/files/5550773/DEMO.zip)
- profiling traces shared in https://github.com/PowerShell/PowerShell/issues/14087#issuecomment-728834643: https://ru.files.fm/u/nnz6vztgj
|
non_process
|
reduced performance in powershell for windows powershell issues suggestions or feature requests please use the following link instead windows powershell this repository is only for powershell core and powershell issues make sure you are able to repro it on the search the existing issues refer to the refer to the a script of mine that reliably took or minutes to run in now does not complete within hours in the script uses foreach object and where object on many lines of text to remove lines matching a criteria in a text file maybe a performance bottleneck in repro script and profiling traces updated by daxian dbw repro script shared in profiling traces shared in
| 0
|
152,235
| 23,935,830,656
|
IssuesEvent
|
2022-09-11 07:56:13
|
shiika-lang/shiika
|
https://api.github.com/repos/shiika-lang/shiika
|
closed
|
impl. class instance variable
|
design
|
Example:
```sk
class Id
def self.initialize
@last_id = 0
end
def self.generate -> String
@last_id += 1
"_#{@last_id}_"
end
end
p Id.generate #=> "_1_"
p Id.generate #=> "_2_"
p Id.generate #=> "_3_"
```
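Class-level state like `@last_id` above maps onto a class attribute with a class method in Python. A rough analogue, names illustrative (this is a sketch of the semantics, not Shiika's implementation):

```python
class Id:
    """Class-level counter, analogous to Shiika's class instance variable."""
    _last_id = 0  # shared across all calls to generate()

    @classmethod
    def generate(cls) -> str:
        cls._last_id += 1          # increment first...
        return f"_{cls._last_id}_"  # ...then format, so the first call yields "_1_"
```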
|
1.0
|
impl. class instance variable - Example:
```sk
class Id
def self.initialize
@last_id = 0
end
def self.generate -> String
@last_id += 1
"_#{@last_id}_"
end
end
p Id.generate #=> "_1_"
p Id.generate #=> "_2_"
p Id.generate #=> "_3_"
```
|
non_process
|
impl class instance variable example sk class id def self initialize last id end def self generate string last id last id end end p id generate p id generate p id generate
| 0
|
18,269
| 24,347,662,954
|
IssuesEvent
|
2022-10-02 14:36:39
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
opened
|
Add Orkney Islands Council as source
|
research data processing back end
|
**Is your feature request related to a problem? Please describe.**
Orkney Islands is now the only local authority without any data listed.
**Describe the solution you'd like**
I'm going to message them to ask if they have any as it's not very visible on their site.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
|
1.0
|
Add Orkney Islands Council as source - **Is your feature request related to a problem? Please describe.**
Orkney Islands is now the only local authority without any data listed.
**Describe the solution you'd like**
I'm going to message them to ask if they have any as it's not very visible on their site.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
|
process
|
add orkney islands council as source is your feature request related to a problem please describe orkney islands is now the only local authority without any data listed describe the solution you d like i m going to message them to ask if they have any as it s not very visible on their site describe alternatives you ve considered n a additional context n a
| 1
|
38,029
| 5,164,253,514
|
IssuesEvent
|
2017-01-17 09:59:10
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
ClientMapNearCacheTest.receives_one_clearEvent_after_mapEvictAll_call_from_member
|
Team: Client Type: Test-Failure
|
```
java.lang.AssertionError: Expecting only 1 clear event expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at com.hazelcast.client.map.impl.nearcache.ClientMapNearCacheTest$31.run(ClientMapNearCacheTest.java:1096)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:905)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:919)
at com.hazelcast.client.map.impl.nearcache.ClientMapNearCacheTest.receives_one_clearEvent_after_mapEvictAll_call_from_member(ClientMapNearCacheTest.java:1093)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast%20Maintenance/job/Hazelcast-3.maintenance/com.hazelcast$hazelcast-client/1469/testReport/junit/com.hazelcast.client.map.impl.nearcache/ClientMapNearCacheTest/receives_one_clearEvent_after_mapEvictAll_call_from_member_batchInvalidationEnabled_true_/
|
1.0
|
ClientMapNearCacheTest.receives_one_clearEvent_after_mapEvictAll_call_from_member - ```
java.lang.AssertionError: Expecting only 1 clear event expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at com.hazelcast.client.map.impl.nearcache.ClientMapNearCacheTest$31.run(ClientMapNearCacheTest.java:1096)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:905)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:919)
at com.hazelcast.client.map.impl.nearcache.ClientMapNearCacheTest.receives_one_clearEvent_after_mapEvictAll_call_from_member(ClientMapNearCacheTest.java:1093)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast%20Maintenance/job/Hazelcast-3.maintenance/com.hazelcast$hazelcast-client/1469/testReport/junit/com.hazelcast.client.map.impl.nearcache/ClientMapNearCacheTest/receives_one_clearEvent_after_mapEvictAll_call_from_member_batchInvalidationEnabled_true_/
|
non_process
|
clientmapnearcachetest receives one clearevent after mapevictall call from member java lang assertionerror expecting only clear event expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at com hazelcast client map impl nearcache clientmapnearcachetest run clientmapnearcachetest java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast client map impl nearcache clientmapnearcachetest receives one clearevent after mapevictall call from member clientmapnearcachetest java
| 0
|
2,761
| 5,695,992,440
|
IssuesEvent
|
2017-04-16 05:57:06
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Allow only a single row to expand at a time
|
help wanted inprocess
|
Hi Allen, thanks a lot for putting the time into building something really great and useful. I'm throwing a quick dashboard together where, on "onRowClick", it needs to expand the clicked row and hide all the other expanded rows. Matching the row key and looping through the table works somewhat, but it's not very stable; occasionally it takes two clicks to expand the row. Do you happen to have an example that might point me in the right direction? Thanks!
|
1.0
|
Allow only a single row to expand at a time - Hi Allen, thanks a lot for putting the time into building something really great and useful. I'm throwing a quick dashboard together where, on "onRowClick", it needs to expand the clicked row and hide all the other expanded rows. Matching the row key and looping through the table works somewhat, but it's not very stable; occasionally it takes two clicks to expand the row. Do you happen to have an example that might point me in the right direction? Thanks!
|
process
|
allow only a single row to expand at a time hi allen thanks a lot for putting the time into building something really great and useful i m throwing a quick dashboard together where onrowclick it needs to do a row expand and hide all the other expanded rows matching the row key and looping through the table works somewhat but its not very stable occasionally needs to click twice to expand the row do you happen to have an example that might point me in the right direction thanks
| 1
|
2,348
| 5,157,286,560
|
IssuesEvent
|
2017-01-16 05:47:00
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process IPC is very slow, about 100-10000x slower than I expected (scales with msg size)
|
child_process cluster performance
|
Hi,
I'm not sure if my expectations of using IPC are unreasonable, and if so please tell me / close this issue. I planned to use a forked child_process to do some background ops for a nwjs app, and intended to send roughly 40MB of JSON data to the forked ps, and I'd get back a pojo describing that data; I measured the timing at roughly 250-300 seconds on a maxed out 2015 macbook pro (sad face); a `Worker` in chromium is doing the same job in 1-2 milliseconds.
I then decided to measure the example in the documentation on my personal maxed macbook air (less ram, slower cpu, fast ssd):
``` javascript
// index.js
var cp = require('child_process'),
n = cp.fork(__dirname + '/sub.js'),
precise = require('precise'),
timer = precise();
n.on('message', function(m) {
timer.stop();
console.log('PARENT got message:', m);
console.log('Message received in', timer.diff() / 1000000, 'ms');
});
timer.start();
n.send({ hello: 'world' });
// sub.js
process.on('message', function(m) {
console.log('CHILD got message:', m);
});
process.send({ foo: 'bar' });
// Console output:
// PARENT got message: { foo: 'bar' }
// CHILD got message: { hello: 'world' }
// Message received in 94.963261 ms
```
In both hardware scenarios, a simple small text message takes 90-100ms. Not writing to console saves roughly 5-10ms in the provided example.
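IPC cost scales with message size largely because every message is serialized before crossing the process boundary. The report's suspected bottleneck (a 40MB JSON payload) can be probed by timing serialization alone; this Python sketch is illustrative, with made-up payload shapes, and says nothing about Node's actual channel implementation:

```python
import json
import time

def serialize_ms(payload):
    """Serialize a payload to JSON, returning (serialized length, elapsed ms)."""
    t0 = time.perf_counter()
    data = json.dumps(payload)
    return len(data), (time.perf_counter() - t0) * 1000.0

small = {"hello": "world"}
big = {"rows": [{"i": i, "text": "x" * 100} for i in range(10_000)]}

small_len, small_ms = serialize_ms(small)
big_len, big_ms = serialize_ms(big)
```

Comparing the two lengths and timings shows the per-byte cost component that a fixed-overhead explanation (like the ~90ms floor measured above) would not account for.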
|
1.0
|
child_process IPC is very slow, about 100-10000x slower than I expected (scales with msg size) - Hi,
I'm not sure if my expectations of using IPC are unreasonable, and if so please tell me / close this issue. I planned to use a forked child_process to do some background ops for a nwjs app, and intended to send roughly 40MB of JSON data to the forked ps, and I'd get back a pojo describing that data; I measured the timing at roughly 250-300 seconds on a maxed out 2015 macbook pro (sad face); a `Worker` in chromium is doing the same job in 1-2 milliseconds.
I then decided to measure the example in the documentation on my personal maxed macbook air (less ram, slower cpu, fast ssd):
``` javascript
// index.js
var cp = require('child_process'),
n = cp.fork(__dirname + '/sub.js'),
precise = require('precise'),
timer = precise();
n.on('message', function(m) {
timer.stop();
console.log('PARENT got message:', m);
console.log('Message received in', timer.diff() / 1000000, 'ms');
});
timer.start();
n.send({ hello: 'world' });
// sub.js
process.on('message', function(m) {
console.log('CHILD got message:', m);
});
process.send({ foo: 'bar' });
// Console output:
// PARENT got message: { foo: 'bar' }
// CHILD got message: { hello: 'world' }
// Message received in 94.963261 ms
```
In both hardware scenarios, a simple small text message takes 90-100ms. Not writing to console saves roughly 5-10ms in the provided example.
|
process
|
child process ipc is very slow about slower than i expected scales with msg size hi i m not sure if my expectations of using ipc are unreasonable and if so please tell me close this issue i planned to use a forked child process to do some background ops for a nwjs app and intended to send roughly of json data to the forked ps and i d get back a pojo describing that data i measured the timing at roughly seconds on a maxed out macbook pro sad face a worker in chromium is doing the same job in milliseconds i then decided to measure the example in the documentation on my personal maxed macbook air less ram slower cpu fast ssd javascript index js var cp require child process n cp fork dirname sub js precise require precise timer precise n on message function m timer stop console log parent got message m console log message received in timer diff ms timer start n send hello world sub js process on message function m console log child got message m process send foo bar console output parent got message foo bar child got message hello world message received in ms in both hardware scenarios a simple small text message takes not writing to console saves roughly in the provided example
| 1
|
18,861
| 24,783,388,513
|
IssuesEvent
|
2022-10-24 07:48:35
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
concurrent.futures.wait calls len() on a possible iterable
|
type-bug stdlib 3.10 expert-multiprocessing
|
BPO | [41938](https://bugs.python.org/issue41938)
--- | :---
Nosy | @rohitkg98
PRs | <li>python/cpython#22555</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2020-10-05.02:02:57.246>
labels = ['type-bug', 'library', '3.10']
title = 'concurrent.futures.wait calls len() on an possible iterable'
updated_at = <Date 2020-10-05.02:35:35.432>
user = 'https://github.com/rohitkg98'
```
bugs.python.org fields:
```python
activity = <Date 2020-10-05.02:35:35.432>
actor = 'rohitkg98'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2020-10-05.02:02:57.246>
creator = 'rohitkg98'
dependencies = []
files = []
hgrepos = []
issue_num = 41938
keywords = ['patch']
message_count = 1.0
messages = ['377991']
nosy_count = 1.0
nosy_names = ['rohitkg98']
pr_nums = ['22555']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue41938'
versions = ['Python 3.10']
```
</p></details>
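The defect named in the title is that `wait()` applied `len()` directly to its `fs` argument, which raises `TypeError` when `fs` is a plain iterator rather than a sized collection. A minimal illustration of the failure and the usual fix (materialize the iterable first); `count_futures` is a hypothetical helper, not the stdlib code:

```python
def count_futures(fs):
    """wait()-style helper: materialize first so len() works for any iterable."""
    fs = list(fs)  # a generator has no __len__; a list always does
    return len(fs)

gen = (x for x in range(3))
try:
    len(gen)  # raises TypeError: object of type 'generator' has no len()
    gen_has_len = True
except TypeError:
    gen_has_len = False

n = count_futures(x for x in range(3))
```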
|
1.0
|
concurrent.futures.wait calls len() on a possible iterable - BPO | [41938](https://bugs.python.org/issue41938)
--- | :---
Nosy | @rohitkg98
PRs | <li>python/cpython#22555</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2020-10-05.02:02:57.246>
labels = ['type-bug', 'library', '3.10']
title = 'concurrent.futures.wait calls len() on an possible iterable'
updated_at = <Date 2020-10-05.02:35:35.432>
user = 'https://github.com/rohitkg98'
```
bugs.python.org fields:
```python
activity = <Date 2020-10-05.02:35:35.432>
actor = 'rohitkg98'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2020-10-05.02:02:57.246>
creator = 'rohitkg98'
dependencies = []
files = []
hgrepos = []
issue_num = 41938
keywords = ['patch']
message_count = 1.0
messages = ['377991']
nosy_count = 1.0
nosy_names = ['rohitkg98']
pr_nums = ['22555']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue41938'
versions = ['Python 3.10']
```
</p></details>
|
process
|
concurrent futures wait calls len on an possible iterable bpo nosy prs python cpython note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title concurrent futures wait calls len on an possible iterable updated at user bugs python org fields python activity actor assignee none closed false closed date none closer none components creation creator dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type behavior url versions
| 1
|
9,738
| 12,732,941,298
|
IssuesEvent
|
2020-06-25 11:21:29
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Pagination: cursor is ignored when using negative take argument
|
bug/2-confirmed kind/bug process/candidate team/engines topic: pagination
|
## Bug description
It looks like the "cursor" is being ignored when using a negative take argument.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Here's a repo reproduction: https://github.com/Gomah/prisma-negative-cursor-repro , with some query examples: https://github.com/Gomah/prisma-negative-cursor-repro#6-queries-to-test
## Expected behaviour
In the repo reproduction, It should return 10 posts from `ID` 30 to 39.
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Mac OS 10.15.5
- Database: PostresSQL / SQLite
- Prisma version: 2.0.0-beta.9
- Node.js version: 12.18.0
```
@prisma/cli : 2.0.0-beta.9
Current platform : darwin
Query Engine : query-engine de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/introspection-engine-darwin)
Format Binary : prisma-fmt de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/prisma-fmt-darwin
```
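The semantics the report expects (cursor plus negative `take` meaning "the |take| rows ending at the cursor") can be modeled on a plain list. This is an illustrative sketch of the expected behavior only, not Prisma's engine logic, and the cursor value 39 is an assumption inferred from "10 posts from ID 30 to 39":

```python
def paginate(posts, cursor, take):
    """Cursor-based slice over an ordered list (cursor row included).

    Positive take reads forward from the cursor; negative take reads the
    |take| rows ending at the cursor, which is what the report expects.
    """
    idx = posts.index(cursor)
    if take >= 0:
        return posts[idx: idx + take]
    return posts[max(0, idx + take + 1): idx + 1]

posts = list(range(100))
```

Under this model, `paginate(posts, 39, -10)` yields IDs 30 through 39; the bug is that the real query behaved as if the cursor argument were absent.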
|
1.0
|
Pagination: cursor is ignored when using negative take argument - ## Bug description
It looks like the "cursor" is being ignored when using a negative take argument.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Here's a repo reproduction: https://github.com/Gomah/prisma-negative-cursor-repro , with some query examples: https://github.com/Gomah/prisma-negative-cursor-repro#6-queries-to-test
## Expected behaviour
In the repo reproduction, It should return 10 posts from `ID` 30 to 39.
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Mac OS 10.15.5
- Database: PostresSQL / SQLite
- Prisma version: 2.0.0-beta.9
- Node.js version: 12.18.0
```
@prisma/cli : 2.0.0-beta.9
Current platform : darwin
Query Engine : query-engine de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/introspection-engine-darwin)
Format Binary : prisma-fmt de2bc1cbdb5561ad73d2f08463fa2eec48993f56 (at /Users/gomah/Playground/pagination-prisma/node_modules/@prisma/cli/prisma-fmt-darwin
```
|
process
|
pagination cursor is ignored when using negative take argument bug description it looks like the cursor is being ignored when using a negative take argument how to reproduce here s a repo reproduction with some query examples expected behaviour in the repo reproduction it should return posts from id to environment setup os mac os database postressql sqlite prisma version beta node js version prisma cli beta current platform darwin query engine query engine at users gomah playground pagination prisma node modules prisma cli query engine darwin migration engine migration engine cli at users gomah playground pagination prisma node modules prisma cli migration engine darwin introspection engine introspection core at users gomah playground pagination prisma node modules prisma cli introspection engine darwin format binary prisma fmt at users gomah playground pagination prisma node modules prisma cli prisma fmt darwin
| 1
|
5,982
| 8,799,301,352
|
IssuesEvent
|
2018-12-24 13:20:00
|
syauqiahmd/project_e-commerce
|
https://api.github.com/repos/syauqiahmd/project_e-commerce
|
closed
|
Dashboard Statistics
|
on process
|
- [ ] number of users (public)
- [ ] Unfinished online sales
- [ ] Items sold
- [ ] Total sales
|
1.0
|
Dashboard Statistics - - [ ] number of users (public)
- [ ] Unfinished online sales
- [ ] Items sold
- [ ] Total sales
|
process
|
statistik di dashboard jumlah pengguna public penjualan online yang belum selesai barang terjuall total penjualan
| 1
|
6,764
| 9,887,982,656
|
IssuesEvent
|
2019-06-25 10:24:58
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
Location text box is disabled
|
2.0.7 Fixed Meetings Process bug
|
go to Meetings
create new item
enter at the Location text box
you can't write anything in it.

|
1.0
|
Location text box is disabled - go to Meetings
create new item
enter at the Location text box
you can't write anything in it.

|
process
|
location text box is disable go to meetings create new item enter at the location text box you can t write anything in it
| 1
|
10,233
| 13,096,022,407
|
IssuesEvent
|
2020-08-03 15:01:46
|
ZbayApp/zbay
|
https://api.github.com/repos/ZbayApp/zbay
|
closed
|
Is Windows installation with non-EV cert adequate UX?
|
dev process
|
Test (or google it) and post a screenshot here of the warnings. Then decide!
|
1.0
|
Is Windows installation with non-EV cert adequate UX? - Test (or google it) and post a screenshot here of the warnings. Then decide!
|
process
|
is windows installation with non ev cert adequate ux test or google it and post a screenshot here of the warnings then decide
| 1
|
89,833
| 15,855,941,662
|
IssuesEvent
|
2021-04-08 01:07:54
|
rsoreq/django-DefectDojo
|
https://api.github.com/repos/rsoreq/django-DefectDojo
|
reopened
|
CVE-2018-11694 (High) detected in node-sassv4.6.1
|
security vulnerability
|
## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.6.1</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/django-DefectDojo/commit/778bcf0b3400f30c71d722f50e221c2eec64ea95">778bcf0b3400f30c71d722f50e221c2eec64ea95</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>django-DefectDojo/components/node_modules/bootswatch/docs/3/node_modules/node-sass/src/libsass/src/parser.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2018-11694 (High) detected in node-sassv4.6.1 - ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.6.1</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/django-DefectDojo/commit/778bcf0b3400f30c71d722f50e221c2eec64ea95">778bcf0b3400f30c71d722f50e221c2eec64ea95</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>django-DefectDojo/components/node_modules/bootswatch/docs/3/node_modules/node-sass/src/libsass/src/parser.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in node cve high severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href found in head commit a href found in base branch master vulnerable source files django defectdojo components node modules bootswatch docs node modules node sass src libsass src parser cpp vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass
| 0
|
19,272
| 25,463,246,153
|
IssuesEvent
|
2022-11-24 23:01:27
|
processing/processing4
|
https://api.github.com/repos/processing/processing4
|
closed
|
function/variable "does not exist" errors reported if trying to define a class without setup/draw
|
preprocessor
|
<!--- ** For coding questions, ask the forum: https://discourse.processing.org ** -->
<!--- ** This page is only for bugs in the software & feature requests ** -->
<!--- ** If your code won't start, that's a better question for the forum. ** -->
<!--- ** If Processing won't start, post on the forum where you can get help. ** -->
<!--- ** Also be sure to read the troubleshooting page first: ** -->
<!--- ** https://github.com/processing/processing/wiki/Troubleshooting ** -->
<!--- ** Before posting, please search Issues for duplicates ** -->
## Description
<!--- Use a title that describes what is happening. -->
<!--- Give a description of the proposed change. -->
I was starting a new project and had a mostly empty file as the main PDE, with the exception of an import for the video library. I created a new file to write a class, and all references to Processing functions & variables were red-underlined as errors. Error messages were, for example, "mouseX does not exist", despite syntax highlighting picking them up correctly. I was able to make this go away by adding empty setup() and draw() definitions in the main PDE file.
## Expected Behavior
<!--- Bug? Tell us what you were expecting. -->
<!--- Improvement? Tell us how you’d like it to work. -->
A descriptive error suggesting to define setup() and draw() in the appropriate file before defining a class that depends on Processing variables and functions.
## Current Behavior
<!--- Explain the difference from current behavior. -->
App says library variables and function names like ellipse() and mouseX do not exist.
## Steps to Reproduce
<!--- Provide an unambiguous set of steps to reproduce. -->
<!--- Including code will make it more likely to be fixed. -->
1. Create and save a new sketch
2. Add a second tab with a name of your choice, in my case it was Button
3. Define a class in the new tab, referencing some of the Processing functions, but do not add code to the main file
Here's the code used in my second tab that reproduces the error:
```
class Button {
int x, y, radius;
public Button (int x, int y, int radius) {
this.x = x;
this.y = y;
this.radius = radius;
}
boolean over() {
return dist(mouseX, mouseY, this.x, this.y) < this.radius;
}
void draw() {
ellipse(this.x, this.y, this.radius * 2, this.radius * 2);
}
}
```
mouseX, mouseY, and ellipse() will show as errors, hovering/clicking for more detail says "does not exist"
## Your Environment
<!--- Include details about your environment. -->
<!--- Thousands of people use Processing every day and may not have this issue, -->
<!--- so this gives us clues about why you’re running into a problem. -->
* Processing version: 4.0.1
* Operating System and OS version: macOS 12.4 / Apple M1
* Other information:
## Possible Causes / Solutions
<!--- Optionally, if you have a diagnosis or fix in mind, please share. -->
Based on what I remember about how Processing parses the main sketch file depending on whether or not setup() or draw() are defined, I think it makes sense that it wouldn't be possible to define a (nested) class without formally defining setup()/draw(). But, initially I thought this was some kind of parsing error and restarted the app to see if it would go away. I'm *not* suggesting there's a mode of use that justifies defining a class without setup()/draw(), just that a more descriptive error message would be useful when this happens.
|
1.0
|
function/variable "does not exist" errors reported if trying to define a class without setup/draw - <!--- ** For coding questions, ask the forum: https://discourse.processing.org ** -->
<!--- ** This page is only for bugs in the software & feature requests ** -->
<!--- ** If your code won't start, that's a better question for the forum. ** -->
<!--- ** If Processing won't start, post on the forum where you can get help. ** -->
<!--- ** Also be sure to read the troubleshooting page first: ** -->
<!--- ** https://github.com/processing/processing/wiki/Troubleshooting ** -->
<!--- ** Before posting, please search Issues for duplicates ** -->
## Description
<!--- Use a title that describes what is happening. -->
<!--- Give a description of the proposed change. -->
I was starting a new project and had a mostly empty file as the main PDE, with the exception of an import for the video library. I created a new file to write a class, and all references to Processing functions & variables were red underlined as errors. Error messages were, for example, "mouseX does not exist", despite syntax highlighting picking them up correctly. I was able to make this go away by adding empty setup() and draw() definitions in the main PDE file.
## Expected Behavior
<!--- Bug? Tell us what you were expecting. -->
<!--- Improvement? Tell us how you’d like it to work. -->
A descriptive error suggesting to define setup() and draw() in the appropriate file before defining a class that depends on Processing variables and functions.
## Current Behavior
<!--- Explain the difference from current behavior. -->
App says library variables and function names like ellipse() and mouseX do not exist.
## Steps to Reproduce
<!--- Provide an unambiguous set of steps to reproduce. -->
<!--- Including code will make it more likely to be fixed. -->
1. Create and save a new sketch
2. Add a second tab with a name of your choice, in my case it was Button
3. Define a class in the new tab, referencing some of the Processing functions, but do not add code to the main file
Here's the code used in my second tab that reproduces the error:
```
class Button {
int x, y, radius;
public Button (int x, int y, int radius) {
this.x = x;
this.y = y;
this.radius = radius;
}
boolean over() {
return dist(mouseX, mouseY, this.x, this.y) < this.radius;
}
void draw() {
ellipse(this.x, this.y, this.radius * 2, this.radius * 2);
}
}
```
mouseX, mouseY, and ellipse() will show as errors, hovering/clicking for more detail says "does not exist"
## Your Environment
<!--- Include details about your environment. -->
<!--- Thousands of people use Processing every day and may not have this issue, -->
<!--- so this gives us clues about why you’re running into a problem. -->
* Processing version: 4.0.1
* Operating System and OS version: macOS 12.4 / Apple M1
* Other information:
## Possible Causes / Solutions
<!--- Optionally, if you have a diagnosis or fix in mind, please share. -->
Based on what I remember about how Processing parses the main sketch file depending on whether or not setup() or draw() are defined, I think it makes sense that it wouldn't be possible to define a (nested) class without formally defining setup()/draw(). But, initially I thought this was some kind of parsing error and restarted the app to see if it would go away. I'm *not* suggesting there's a mode of use that justifies defining a class without setup()/draw(), just that a more descriptive error message would be useful when this happens.
|
process
|
function variable does not exist errors reported if trying to define a class without setup draw description i was starting a new project and had a mostly empty file as the main pde with the exception of an import for the video library i created a new file to write a class and all references to processing functions variables were red underlined as errors error messages were for example mousex does not exist despite syntax highlighting picking them up correctly i was able to make this go away by adding empty setup and draw definitions in the main pde file expected behavior a descriptive error suggesting to define setup and draw in the appropriate file before defining a class that depends on processing variables and functions current behavior app says library variables and function names like ellipse and mousex do not exist steps to reproduce create and save a new sketch add a second tab with a name of your choice in my case it was button define a class in the new tab referencing some of the processing functions but do not add code to the main file here s the code used in my second tab that reproduces the error class button int x y radius public button int x int y int radius this x x this y y this radius radius boolean over return dist mousex mousey this x this y this radius void draw ellipse this x this y this radius this radius mousex mousey and ellipse will show as errors hovering clicking for more detail says does not exist your environment processing version operating system and os version macos apple other information possible causes solutions based on what i remember about how processing parses the main sketch file depending on whether or not setup or draw are defined i think it makes sense that it wouldn t be possible to define a nested class without formally defining setup draw but initially i thought this was some kind of parsing error and restarted the app to see if it would go away i m not suggesting there s a mode of use that justifies defining a class 
without setup draw just that a more descriptive error message would be useful when this happens
| 1
|
444,410
| 31,047,945,095
|
IssuesEvent
|
2023-08-11 02:50:26
|
vercel/next.js
|
https://api.github.com/repos/vercel/next.js
|
opened
|
Error page handling Information
|
template: documentation
|
### What is the improvement or update you wish to see?
I was trying to create an Error page for pages that are not found. I searched for that in the documentation, and the first thing I found was the Error.js documentation describing error handling with 404, which obviously didn't work for me.
Then the second documentation page I found was about global-error.js, which was also not the answer I was searching for.
After searching for some time I came up with the solution, and it was not-found.js.
There isn't any issue with the documentation, but it would be better to mention on the error page that not-found.js handles not-found pages; it took me some time to find.
### Is there any context that might help us understand?
Mention about not-found.js in error.js
### Does the docs page already exist? Please link to it.
https://nextjs.org/docs/app/api-reference/file-conventions/error
|
1.0
|
Error page handling Information - ### What is the improvement or update you wish to see?
I was trying to create an Error page for pages that are not found. I searched for that in the documentation, and the first thing I found was the Error.js documentation describing error handling with 404, which obviously didn't work for me.
Then the second documentation page I found was about global-error.js, which was also not the answer I was searching for.
After searching for some time I came up with the solution, and it was not-found.js.
There isn't any issue with the documentation, but it would be better to mention on the error page that not-found.js handles not-found pages; it took me some time to find.
### Is there any context that might help us understand?
Mention about not-found.js in error.js
### Does the docs page already exist? Please link to it.
https://nextjs.org/docs/app/api-reference/file-conventions/error
|
non_process
|
error page handling information what is the improvement or update you wish to see i was trying to create an error page for the pages that are not found i searched for that in the documentation and first thing i found was the error js documentation describing about the error handing with obviously which didn t work for me then second documentation i found was about the global error js and which was also not the answer that i was searching for after searching for some time i came up with the solution and it was not found js there isn t any issue with the documentation but it s better to mention in error page that to handle the not found pages use nof found js which took me sometime to find is there any context that might help us understand mention about not found js in error js does the docs page already exist please link to it
| 0
|
87,469
| 25,131,532,626
|
IssuesEvent
|
2022-11-09 15:28:10
|
woocommerce/woocommerce-blocks
|
https://api.github.com/repos/woocommerce/woocommerce-blocks
|
closed
|
Include PHP/WC versions in our testing pipeline ⚙️
|
type: enhancement type: build type: cooldown github_actions
|
## Is your feature request related to a problem? Please describe.
Currently, we run our automated testing jobs on PHP 7.4, which is not ideal since we support multiple PHP versions, and 7.4 will discontinue security updates in late November.
## Describe the solution you'd like
We already have Unit and E2E testing jobs in place on our pipeline. I consider this low-hanging fruit (not that low, but still): duplicating those jobs and having them run against our supported PHP (and possibly WC) versions. We might consider alternative solutions as long as they achieve similar results while involving smaller efforts.
This is supposed to be an amount of work that can be tackled during a cooldown.
The WooCommerce repo already does something similar:

**EDIT:** After gathering some thoughts at Rubik's team meeting, it was proposed that a good course of action would be:
- Running PHP Unit Tests against 7.4 & 8.0 (low minute cost)
- ~~Change E2E to run against PHP 8.0 due to 7.4 security support being dropped soon~~
- Outside the scope of this task, we might pursue additional testing on nightlies/scheduled runs against multiple combinations of PHP/WC/WP
## Additional context
This will not be a definitive solution, as upstream efforts are in place to [migrate E2E testing to Playwright](https://masamunep2.wordpress.com/2022/08/26/testing-infrastructure-proposal/).
The goal here is to deploy with more confidence and avoid any related tech debt to be dealt with in the future (eg: introducing incompatible code/bugs)
[Slack thread for reference.](p1665745482034199-slack-C02UBB1EPEF)
|
1.0
|
Include PHP/WC versions in our testing pipeline ⚙️ - ## Is your feature request related to a problem? Please describe.
Currently, we run our automated testing jobs on PHP 7.4, which is not ideal since we support multiple PHP versions, and 7.4 will discontinue security updates in late November.
## Describe the solution you'd like
We already have Unit and E2E testing jobs in place on our pipeline. I consider this low-hanging fruit (not that low, but still): duplicating those jobs and having them run against our supported PHP (and possibly WC) versions. We might consider alternative solutions as long as they achieve similar results while involving smaller efforts.
This is supposed to be an amount of work that can be tackled during a cooldown.
The WooCommerce repo already does something similar:

**EDIT:** After gathering some thoughts at Rubik's team meeting, it was proposed that a good course of action would be:
- Running PHP Unit Tests against 7.4 & 8.0 (low minute cost)
- ~~Change E2E to run against PHP 8.0 due to 7.4 security support being dropped soon~~
- Outside the scope of this task, we might pursue additional testing on nightlies/scheduled runs against multiple combinations of PHP/WC/WP
## Additional context
This will not be a definitive solution, as upstream efforts are in place to [migrate E2E testing to Playwright](https://masamunep2.wordpress.com/2022/08/26/testing-infrastructure-proposal/).
The goal here is to deploy with more confidence and avoid any related tech debt to be dealt with in the future (eg: introducing incompatible code/bugs)
[Slack thread for reference.](p1665745482034199-slack-C02UBB1EPEF)
|
non_process
|
include php wc versions in our testing pipeline ⚙️ is your feature request related to a problem please describe currently we run our automated testing jobs on php which is not ideal since we support multiple php versions and will discontinue security updates in late november describe the solution you d like we already have unit and testing jobs in place on our pipeline i consider low hanging fruit not much low but still duplicating those jobs and having them running against our supported php and possibly wc versions we might consider alternative solutions as long as they achieve similar results while evolving smaller efforts this is supposed to be an amount of work that can be tackled during a cooldown the woocommerce repo already does something similar edit after gathering some thoughts at rubik s team meeting it was proposed that a good course of action would be running php unit tests against low minute cost change to run against php due to security support being dropped soon outside the scope of this task we might pursue additional testing on nightlies scheduled runs against multiple combinations of php wc wp additional context this will not be a definitive solution as upstream efforts are in place to the goal here is to deploy with more confidence and avoid any related tech debt to be dealt with in the future eg introducing incompatible code bugs slack
| 0
|
15,382
| 19,565,783,108
|
IssuesEvent
|
2022-01-03 23:57:15
|
googleapis/java-spanner
|
https://api.github.com/repos/googleapis/java-spanner
|
closed
|
ITInstanceAdminTest.listInstances seems to fail if two instances of the test are running simultaneously
|
type: process api: spanner priority: p3
|
java.lang.IllegalArgumentException: expected one element but was: <Instance{name=projects/gcloud-devel/instances/spanner-testing, configName=projects/gcloud-devel/instanceConfigs/regional-us-central1, displayName=spanner-testing, nodeCount=2, state=READY, labels={}}, Instance{name=projects/gcloud-devel/instances/spanner-testing-east1, configName=projects/gcloud-devel/instanceConfigs/regional-us-east1, displayName=spanner-testing-east1, nodeCount=2, state=READY, labels={}}, Instance{name=projects/gcloud-devel/instances/spanner-testing-west1, configName=projects/gcloud-devel/instanceConfigs/regional-us-west1, displayName=spanner-testing-west1, nodeCount=2, state=READY, labels={}}>
at com.google.common.collect.Iterators.getOnlyElement(Iterators.java:315)
at com.google.cloud.spanner.it.ITInstanceAdminTest.listInstances(ITInstanceAdminTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
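The failure comes from `Iterators.getOnlyElement` assuming the project contains exactly one instance, which breaks as soon as a concurrent test run creates another. A hedged sketch (plain Java, not the actual Spanner client API; the instance names below are taken from the error message) of selecting the test's own instance by name instead, which stays correct when extra instances exist:

```java
import java.util.List;
import java.util.Optional;

public class ListInstancesSketch {
    // Stand-in for the instance names a listInstances() call might return.
    static final List<String> INSTANCES = List.of(
        "projects/gcloud-devel/instances/spanner-testing",
        "projects/gcloud-devel/instances/spanner-testing-east1",
        "projects/gcloud-devel/instances/spanner-testing-west1");

    // Instead of asserting there is exactly one instance, look up the one
    // this test actually uses by its trailing instance ID.
    static Optional<String> findInstance(List<String> instances, String id) {
        return instances.stream()
                        .filter(name -> name.endsWith("/" + id))
                        .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(findInstance(INSTANCES, "spanner-testing").isPresent());
        System.out.println(findInstance(INSTANCES, "no-such-instance").isPresent());
    }
}
```

The same idea applies inside the test: filter the paged listing by the known instance ID rather than requiring the listing to be a singleton.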
|
1.0
|
ITInstanceAdminTest.listInstances seems to fail if two instances of the test are running simultaneously - java.lang.IllegalArgumentException: expected one element but was: <Instance{name=projects/gcloud-devel/instances/spanner-testing, configName=projects/gcloud-devel/instanceConfigs/regional-us-central1, displayName=spanner-testing, nodeCount=2, state=READY, labels={}}, Instance{name=projects/gcloud-devel/instances/spanner-testing-east1, configName=projects/gcloud-devel/instanceConfigs/regional-us-east1, displayName=spanner-testing-east1, nodeCount=2, state=READY, labels={}}, Instance{name=projects/gcloud-devel/instances/spanner-testing-west1, configName=projects/gcloud-devel/instanceConfigs/regional-us-west1, displayName=spanner-testing-west1, nodeCount=2, state=READY, labels={}}>
at com.google.common.collect.Iterators.getOnlyElement(Iterators.java:315)
at com.google.cloud.spanner.it.ITInstanceAdminTest.listInstances(ITInstanceAdminTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
|
process
|
itinstanceadmintest listinstances seems to fail if two instances of the test are running simultaneously java lang illegalargumentexception expected one element but was at com google common collect iterators getonlyelement iterators java at com google cloud spanner it itinstanceadmintest listinstances itinstanceadmintest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit rules externalresource evaluate externalresource java at org junit rules runrules evaluate runrules java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners 
parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executeeager junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java
| 1
|
51,970
| 7,740,181,984
|
IssuesEvent
|
2018-05-28 20:00:49
|
matplotlib/matplotlib
|
https://api.github.com/repos/matplotlib/matplotlib
|
closed
|
ConnectionStyle Angle3 hangs with specific parameters
|
Documentation confirmed bug
|
### Bug report
**Bug summary**
I'm getting weird results from using ``ConnectionStyle.Angle3``. If I run
```python
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionStyle, FancyArrowPatch
conn_style_1 = ConnectionStyle.Angle3(angleA=0, angleB=180)
p1 = FancyArrowPatch((.2, .2), (.5, .5),
connectionstyle=conn_style_1)
plt.gca().add_patch(p1)
```
... the interpreter emits no sign of life except going to 100% of CPU usage until I kill it.
By the way, if I run
```python
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionStyle, FancyArrowPatch
conn_style_1 = ConnectionStyle.Angle3(angleA=0, angleB=179)
p1 = FancyArrowPatch((.2, .2), (.5, .5),
connectionstyle=conn_style_1)
plt.gca().add_patch(p1)
```
I do get a line, but quite different from what I expected.
( xref #9518 , but I don't get any error)
**Actual outcome**
The second snippet results in

**Expected outcome**
Both snippets should give a similar result, with the curve reaching the second point from left, not from right.
**Matplotlib version**
* Operating system: Linux
* Matplotlib version: both git master and 2.0.0 from Debian
* Matplotlib backend: both ``'module://ipykernel.pylab.backend_inline'`` and ``Qt5Agg``
* Python version: 3.5.3
* Jupyter version: 5.4.0
|
1.0
|
ConnectionStyle Angle3 hangs with specific parameters - ### Bug report
**Bug summary**
I'm getting weird results from using ``ConnectionStyle.Angle3``. If I run
```python
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionStyle, FancyArrowPatch
conn_style_1 = ConnectionStyle.Angle3(angleA=0, angleB=180)
p1 = FancyArrowPatch((.2, .2), (.5, .5),
connectionstyle=conn_style_1)
plt.gca().add_patch(p1)
```
... the interpreter emits no sign of life except going to 100% of CPU usage until I kill it.
By the way, if I run
```python
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionStyle, FancyArrowPatch
conn_style_1 = ConnectionStyle.Angle3(angleA=0, angleB=179)
p1 = FancyArrowPatch((.2, .2), (.5, .5),
connectionstyle=conn_style_1)
plt.gca().add_patch(p1)
```
I do get a line, but quite different from what I expected.
( xref #9518 , but I don't get any error)
**Actual outcome**
The second snippet results in

**Expected outcome**
Both snippets should give a similar result, with the curve reaching the second point from left, not from right.
**Matplotlib version**
* Operating system: Linux
* Matplotlib version: both git master and 2.0.0 from Debian
* Matplotlib backend: both ``'module://ipykernel.pylab.backend_inline'`` and ``Qt5Agg``
* Python version: 3.5.3
* Jupyter version: 5.4.0
|
non_process
|
connectionstyle hangs with specific parameters bug report bug summary i m getting weird results from using connectionstyle if i run python from matplotlib patches import connectionstyle fancyarrowpatch conn style connectionstyle anglea angleb fancyarrowpatch connectionstyle conn style plt gca add patch the interpreter emits no sign of life except going to of cpu usage until i kill it by the way if i run python from matplotlib patches import connectionstyle fancyarrowpatch conn style connectionstyle anglea angleb fancyarrowpatch connectionstyle conn style plt gca add patch i do get a line but quite different from what i expected xref but i don t get any error actual outcome the second snippet results in expected outcome both snippets should give a similar result with the curve reaching the second point from left not from right matplotlib version operating system linux matplotlib version both git master and from debian matplotlib backend both module ipykernel pylab backend inline and python version jupyter version
| 0
|
645
| 3,105,447,541
|
IssuesEvent
|
2015-08-31 20:58:42
|
K0zka/kerub
|
https://api.github.com/repos/K0zka/kerub
|
opened
|
add l1 options to infinispan configuration
|
component:data processing enhancement
|
Add options to use l1 cache separately for dynamic, static and history entries.
|
1.0
|
add l1 options to infinispan configuration - Add options to use l1 cache separately for dynamic, static and history entries.
|
process
|
add options to infinispan configuration add options to use cache separately for dynamic static and history entries
| 1
|
4,070
| 7,001,733,407
|
IssuesEvent
|
2017-12-18 11:21:37
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Windows, testing: update tests to not call Bazel in a subshell
|
category: multi-platform > windows P2 type: process windows-q1-2018-maybe
|
### Description of the problem / feature request / question:
In Bazel's own shell integration tests, when they run Bazel it always starts a new server.
Culprit summary: MinGW Bash waits for all child processes of a process running in a subshell to terminate before terminating the subshell. Linux Bash's subshell exits as soon as the direct child process exited.
Finding the culprit required a solid day of debugging.
### Bug journey
#### Does this always happen?
No, it only happens if Bazel runs in a subshell. If the test runs:
```
pid1=$(bazel info server_pid)
pid2=$(bazel info server_pid)
echo "pid1=$pid1, pid2=$pid2"
```
the two PIDs are different. It also runs very slowly (~half a minute), the reason I'll explain later.
Running them directly is fast and prints the same PID:
```
bazel info server_pid
bazel info server_pid
```
#### Why is a new server started every time?
Because the `server.pid.txt` file is missing. It took a long time to find out that it was the server itself deleting that file, as part of an orderly shutdown.
#### Why does the server shut down?
Not because anything asks it to... It does so because `--max_idle_secs` is 15 for tests; that timeout elapses, and the server neatly shuts down and cleans up after itself, WAI. This is why two Bazel invocations in a subshell take about half a minute.
#### Why does the subshell wait for the server to time out idling?
Because it turns out MSYS (and Cygwin) Bash waits for all children of a process running in a subshell to terminate, even if those processes are in a different process group (`CREATE_NEW_PROCESS_GROUP`).
The Bazel client starts the server process with `CREATE_NEW_PROCESS_GROUP`. The process tree (observed in Sysinternals' Process Explorer) shows that as long as the parent is running, the child is displayed as a child, but after the parent terminates the child becomes a top-level process. MSYS however doesn't know or doesn't care, and waits for the child process to exit.
#### Can we convince MSYS to not wait for the child process?
I don't know. I tried:
- double-`CreateProcess` (similar to double-`fork` idiom on Unixes), the middle process exiting thus orphaning the grandchild -- the process tree looks fine (orphaned process becomes a top-level one), yet Bash still waits for it
- creating the child process in a different job object and breaking away from the current one to avoid nesting job objects (see `CREATE_BREAKAWAY_FROM_JOB`)
- dynamically loading `msys-2.0.dll` and the `fork` and `setsid` methods, calling those to emulate a double-fork
None of the above works.
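One possibly related mechanism (an assumption on my part, not verified against MSYS internals) is that command substitution reads its pipe until EOF, so any background descendant that inherits the write end keeps the subshell alive. The Unix analog of that effect is easy to demonstrate:

```shell
#!/bin/bash
# A function that returns quickly but leaves a background child holding stdout.
slow_echo() { sleep 2 & echo fast; }

start=$(date +%s)
out=$(slow_echo)          # blocks ~2s: the backgrounded sleep inherits the pipe
blocked=$(( $(date +%s) - start ))

start=$(date +%s)
out=$( { sleep 2 >/dev/null & } ; echo fast )  # returns at once: pipe not inherited
quick=$(( $(date +%s) - start ))

echo "blocked=${blocked}s quick=${quick}s out=$out"
```

Whether this is exactly what MSYS Bash does is a guess, but it would fit the observed behavior.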
### If possible, provide a minimal example to reproduce the problem:
```
C:\work>type minibazel.cc
#include <windows.h>
#include <stdio.h>
void log(const char *format, ...) {
FILE* f = fopen("c:\\work\\minibazel.txt", "at");
va_list ap;
va_start(ap, format);
fprintf(f, "(pid=%d) ", GetCurrentProcessId());
vfprintf(f, format, ap);
va_end(ap);
fclose(f);
va_start(ap, format);
fprintf(stdout, "(pid=%d) ", GetCurrentProcessId());
vfprintf(stdout, format, ap);
va_end(ap);
}
int ExecuteDaemon(const char* argv0) {
SECURITY_ATTRIBUTES sa;
sa.nLength = sizeof(SECURITY_ATTRIBUTES);
sa.bInheritHandle = FALSE;
sa.lpSecurityDescriptor = NULL;
PROCESS_INFORMATION processInfo = {0};
STARTUPINFOA startupInfo = {0};
char cmdline[1000];
size_t len = strlen(argv0);
strncpy(cmdline, argv0, len);
cmdline[len] = ' ';
cmdline[len + 1] = 'x';
cmdline[len + 2] = 0;
BOOL ok = CreateProcessA(
/* lpApplicationName */ NULL,
/* lpCommandLine */ cmdline,
/* lpProcessAttributes */ NULL,
/* lpThreadAttributes */ NULL,
/* bInheritHandles */ TRUE,
/* dwCreationFlags */ DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
/* lpEnvironment */ NULL,
/* lpCurrentDirectory */ NULL,
/* lpStartupInfo */ &startupInfo,
/* lpProcessInformation */ &processInfo);
if (!ok) {
log("ERROR[child] CreateProcess, err: %d\n", GetLastError());
return 1;
}
CloseHandle(processInfo.hProcess);
CloseHandle(processInfo.hThread);
return 0;
}
int main(int argc, char** argv) {
if (argc > 1) {
log("INFO[child] Sleep 10 sec\n");
Sleep(10000);
log("INFO[child] Done\n");
return 0;
} else {
log("INFO[parent] start -------------------\n");
int x = ExecuteDaemon(argv[0]);
log("INFO[parent] Created process, sleep 5 sec\n");
Sleep(5000);
log("INFO[parent] Done\n");
return x;
}
return 0;
}
C:\work>cl minibazel.cc
Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24213.1 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
minibazel.cc
Microsoft (R) Incremental Linker Version 14.00.24213.1
Copyright (C) Microsoft Corporation. All rights reserved.
/out:minibazel.exe
minibazel.obj
```
```
$ cat ./subshell.sh
#!/bin/bash
echo "$(date +%H:%M:%S) start subshell"
out=$(c:/work/minibazel.exe)
echo "$(date +%H:%M:%S) done subshell"
$ ./subshell.sh
15:24:01 start subshell
15:24:11 done subshell
```
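Since direct (non-subshell) invocations return promptly, one workaround sketch for the tests — an assumption, not a verified fix for MSYS — is to capture output via a temporary file instead of command substitution. The `launcher` function below is a stand-in for `minibazel.exe`:

```shell
#!/bin/bash
# Workaround sketch: write output to a file in the foreground, then read it back.
# Stand-in for minibazel.exe: exits fast but leaves a background child behind.
launcher() { sleep 2 & echo fast; }

tmp=$(mktemp)
launcher > "$tmp"   # foreground write; the shell waits only for launcher itself
out=$(cat "$tmp")   # cat has no lingering children, so $() returns immediately
rm -f "$tmp"
echo "out=$out"
```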
### Environment info
* Operating System: Windows 10
* Bazel version (output of `bazel info release`): all, the problem is not in Bazel AFAICT
### Have you found anything relevant by searching the web?
Sadly, no.
|
1.0
|
Windows, testing: update tests to not call Bazel in a subshell - ### Description of the problem / feature request / question:
In Bazel's own shell integration tests, when they run Bazel it always starts a new server.
Culprit summary: MinGW Bash waits for all child processes of a process running in a subshell to terminate before terminating the subshell. Linux Bash's subshell exits as soon as the direct child process exited.
Finding the culprit required a solid day of debugging.
### Bug journey
#### Does this always happen?
No, it only happens if Bazel runs in a subshell. If the test runs:
```
pid1=$(bazel info server_pid)
pid2=$(bazel info server_pid)
echo "pid1=$pid1, pid2=$pid2"
```
the two PIDs are different. It also runs very slowly (~half a minute), for reasons I'll explain later.
Running them directly is fast and prints the same PID:
```
bazel info server_pid
bazel info server_pid
```
#### Why is a new server started every time?
Because the `server.pid.txt` file is missing. It took a long time to find out that it was the server itself deleting that file, as part of an orderly shutdown.
#### Why does the server shut down?
Not because anything asks it... It does so because `--max_idle_secs` is 15 for tests, which elapses, and the server neatly shuts down and cleans up after itself, WAI. This is why two Bazel invocations in a subshell take about half a minute.
#### Why does the subshell wait for the server to time out idling?
Because it turns out MSYS (and Cygwin) Bash waits for all children of a process running in a subshell to terminate, even if those processes are in a different process group (`CREATE_NEW_PROCESS_GROUP`).
The Bazel client starts the server process with `CREATE_NEW_PROCESS_GROUP`. The process tree (observed in Sysinternals' Process Explorer) shows that as long as the parent is running, the child is displayed as a child, but after the parent terminates the child becomes a top-level process. MSYS however doesn't know or doesn't care, and waits for the child process to exit.
#### Can we convince MSYS to not wait for the child process?
I don't know. I tried:
- double-`CreateProcess` (similar to double-`fork` idiom on Unixes), the middle process exiting thus orphaning the grandchild -- the process tree looks fine (orphaned process becomes a top-level one), yet Bash still waits for it
- creating the child process in a different job object and breaking away from the current one to avoid nesting job objects (see `CREATE_BREAKAWAY_FROM_JOB`)
- dynamically loading `msys-2.0.dll` and the `fork` and `setsid` methods, calling those to emulate a double-fork
None of the above works.
### If possible, provide a minimal example to reproduce the problem:
```
C:\work>type minibazel.cc
#include <windows.h>
#include <stdio.h>
void log(const char *format, ...) {
FILE* f = fopen("c:\\work\\minibazel.txt", "at");
va_list ap;
va_start(ap, format);
fprintf(f, "(pid=%d) ", GetCurrentProcessId());
vfprintf(f, format, ap);
va_end(ap);
fclose(f);
va_start(ap, format);
fprintf(stdout, "(pid=%d) ", GetCurrentProcessId());
vfprintf(stdout, format, ap);
va_end(ap);
}
int ExecuteDaemon(const char* argv0) {
SECURITY_ATTRIBUTES sa;
sa.nLength = sizeof(SECURITY_ATTRIBUTES);
sa.bInheritHandle = FALSE;
sa.lpSecurityDescriptor = NULL;
PROCESS_INFORMATION processInfo = {0};
STARTUPINFOA startupInfo = {0};
char cmdline[1000];
size_t len = strlen(argv0);
strncpy(cmdline, argv0, len);
cmdline[len] = ' ';
cmdline[len + 1] = 'x';
cmdline[len + 2] = 0;
BOOL ok = CreateProcessA(
/* lpApplicationName */ NULL,
/* lpCommandLine */ cmdline,
/* lpProcessAttributes */ NULL,
/* lpThreadAttributes */ NULL,
/* bInheritHandles */ TRUE,
/* dwCreationFlags */ DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
/* lpEnvironment */ NULL,
/* lpCurrentDirectory */ NULL,
/* lpStartupInfo */ &startupInfo,
/* lpProcessInformation */ &processInfo);
if (!ok) {
log("ERROR[child] CreateProcess, err: %d\n", GetLastError());
return 1;
}
CloseHandle(processInfo.hProcess);
CloseHandle(processInfo.hThread);
return 0;
}
int main(int argc, char** argv) {
if (argc > 1) {
log("INFO[child] Sleep 10 sec\n");
Sleep(10000);
log("INFO[child] Done\n");
return 0;
} else {
log("INFO[parent] start -------------------\n");
int x = ExecuteDaemon(argv[0]);
log("INFO[parent] Created process, sleep 5 sec\n");
Sleep(5000);
log("INFO[parent] Done\n");
return x;
}
return 0;
}
C:\work>cl minibazel.cc
Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24213.1 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
minibazel.cc
Microsoft (R) Incremental Linker Version 14.00.24213.1
Copyright (C) Microsoft Corporation. All rights reserved.
/out:minibazel.exe
minibazel.obj
```
```
$ cat ./subshell.sh
#!/bin/bash
echo "$(date +%H:%M:%S) start subshell"
out=$(c:/work/minibazel.exe)
echo "$(date +%H:%M:%S) done subshell"
$ ./subshell.sh
15:24:01 start subshell
15:24:11 done subshell
```
### Environment info
* Operating System: Windows 10
* Bazel version (output of `bazel info release`): all, the problem is not in Bazel AFAICT
### Have you found anything relevant by searching the web?
Sadly, no.
|
process
|
windows testing update tests to not call bazel in a subshell description of the problem feature request question in bazel s own shell integration tests when they run bazel it always starts a new server culprit summary mingw bash waits for all child processes of a process running in a subshell to terminate before terminating the subshell linux bash s subshell exits as soon as the direct child process exited finding the culprit required a solid day of debugging bug journey does this always happen no it only happens if bazel runs in a subshell if the test runs bazel info server pid bazel info server pid echo the two pids are different it also runs very slowly half a minute the reason i ll explain later running them directly is fast and prints the same pid bazel info server pid bazel info server pid why is a new server started every time because the server pid txt file is missing it took a long time to find out that it was the server itself deleting that file as part of an orderly shutdown why does the server shut down not because anything asks it it does so because max idle secs is for tests which elapses and the server neatly shuts down and cleans up after itself wai this is why two bazel invocations in a subshell takes about half a minute why does the subshell wait for the server to time out idling because it turns out msys and cygwin bash waits for all children of a process running in a subshell to terminate even if those processes are in a different process group create new process group the bazel client starts the server process with create new process group the process tree observed in sysinternals process explorer shows that as long as the parent is running the child is displayed as a child but after the parent terminates the child becomes a top level process msys however doesn t know or doesn t care and waits for the child process to exit can we convince msys to not wait for the child process i don t know i tried double createprocess similar to double fork 
idiom on unixes the middle process exiting thus orphaning the grandchild the process tree looks fine orphaned process becomes a top level one yet bash still waits for it creating the child process in a different job object and breaking away from the current one to avoid nesting job objects see create breakaway from job dynamically loading msys dll and the fork and setsid methods calling those to emulate a double fork none of the above works if possible provide a minimal example to reproduce the problem c work type minibazel cc include include void log const char format file f fopen c work minibazel txt at va list ap va start ap format fprintf f pid d getcurrentprocessid vfprintf f format ap va end ap fclose f va start ap format fprintf stdout pid d getcurrentprocessid vfprintf stdout format ap va end ap int executedaemon const char security attributes sa sa nlength sizeof security attributes sa binherithandle false sa lpsecuritydescriptor null process information processinfo startupinfoa startupinfo char cmdline size t len strlen strncpy cmdline len cmdline cmdline x cmdline bool ok createprocessa lpapplicationname null lpcommandline cmdline lpprocessattributes null lpthreadattributes null binherithandles true dwcreationflags detached process create new process group lpenvironment null lpcurrentdirectory null lpstartupinfo startupinfo lpprocessinformation processinfo if ok log error createprocess err d n getlasterror return closehandle processinfo hprocess closehandle processinfo hthread return int main int argc char argv if argc log info sleep sec n sleep log info done n return else log info start n int x executedaemon argv log info created process sleep sec n sleep log info done n return x return c work cl minibazel cc microsoft r c c optimizing compiler version for copyright c microsoft corporation all rights reserved minibazel cc microsoft r incremental linker version copyright c microsoft corporation all rights reserved out minibazel exe minibazel obj cat 
subshell sh bin bash echo date h m s start subshell out c work minibazel exe echo date h m s done subshell subshell sh start subshell done subshell environment info operating system windows bazel version output of bazel info release all the problem is not in bazel afaict have you found anything relevant by searching the web sadly no
| 1
|
646,889
| 21,081,675,455
|
IssuesEvent
|
2022-04-03 01:26:12
|
apcountryman/picolibrary-microchip-megaavr
|
https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr
|
closed
|
Fix Microchip megaAVR USART based variable configuration SPI basic controller configuration default construction
|
priority-normal status-complete type-bug
|
Fix Microchip megaAVR USART based variable configuration SPI basic controller configuration (`::picolibrary::Microchip::megaAVR::SPI::Variable_Configuration_Basic_Controller<Peripheral::USART>::Configuration`) default construction (default constructed register values are not consistent with non-default constructed register values).
|
1.0
|
Fix Microchip megaAVR USART based variable configuration SPI basic controller configuration default construction - Fix Microchip megaAVR USART based variable configuration SPI basic controller configuration (`::picolibrary::Microchip::megaAVR::SPI::Variable_Configuration_Basic_Controller<Peripheral::USART>::Configuration`) default construction (default constructed register values are not consistent with non-default constructed register values).
|
non_process
|
fix microchip megaavr usart based variable configuration spi basic controller configuration default construction fix microchip megaavr usart based variable configuration spi basic controller configuration picolibrary microchip megaavr spi variable configuration basic controller configuration default construction default constructed register values are not consistent with non default constructed register values
| 0
|
14,438
| 17,496,415,272
|
IssuesEvent
|
2021-08-10 01:19:36
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Document DwC-A relationship to TEXT Guide
|
Format - Text Docs - Text Guide Process - need evidence for demand
|
There is some confusion about DwC-A being an implementation of the TEXT Guidelines. It would be good to make this explicit - what are the requirements for a DwC-A and how does that differ from the TEXT Guide?
|
1.0
|
Document DwC-A relationship to TEXT Guide - There is some confusion about DwC-A being an implementation of the TEXT Guidelines. It would be good to make this explicit - what are the requirements for a DwC-A and how does that differ from the TEXT Guide?
|
process
|
document dwc a relationship to text guide there is some confusion about dwc a being an implementation of the text guidelines it would be good to make this explicit what are the requirements for a dwc a and how does that differ from the text guide
| 1
|
282,643
| 21,315,721,332
|
IssuesEvent
|
2022-04-16 08:30:32
|
froststein/pe
|
https://api.github.com/repos/froststein/pe
|
opened
|
Unclear indication of bracket usage in UG
|
severity.VeryLow type.DocumentationBug
|

I understand that the usage of `[ ]` indicates either/or, but perhaps the explanation of the `[ ]` brackets could be at the start of the guide, explaining what those brackets mean, as `[ ]` is used in multiple commands in your application, instead of having the explanation appear multiple times across multiple commands.
<!--session: 1650096022811-5bcea346-32bf-4983-9b03-b37b74ee715e-->
<!--Version: Web v3.4.2-->
|
1.0
|
Unclear indication of bracket usage in UG - 
I understand that the usage of `[ ]` indicates either/or, but perhaps the explanation of the `[ ]` brackets could be at the start of the guide, explaining what those brackets mean, as `[ ]` is used in multiple commands in your application, instead of having the explanation appear multiple times across multiple commands.
<!--session: 1650096022811-5bcea346-32bf-4983-9b03-b37b74ee715e-->
<!--Version: Web v3.4.2-->
|
non_process
|
unclear indication of bracket usage in ug i understand that the usage of indicates either or but perhaps the explanation of the usages of the brackets could be at the start of the guide explaining what those bracket means as the is used in multiple commands in your application instead of having it appear multiple times across multiple commands
| 0
|
2,593
| 5,353,014,140
|
IssuesEvent
|
2017-02-20 03:00:04
|
uccser/kordac
|
https://api.github.com/repos/uccser/kordac
|
closed
|
Implement {button-link} tag
|
processor implementation testing
|
Implement the button tag as used in existing CSFG
```
[link button]
regex: ^\{button ?(?P<args>[^\}]*)\}
function: create_link_button
```
|
1.0
|
Implement {button-link} tag - Implement the button tag as used in existing CSFG
```
[link button]
regex: ^\{button ?(?P<args>[^\}]*)\}
function: create_link_button
```
|
process
|
implement button link tag implement the button tag as used in existing csfg regex button p function create link button
| 1
|
176,933
| 13,671,004,497
|
IssuesEvent
|
2020-09-29 06:11:07
|
w3c/csswg-drafts
|
https://api.github.com/repos/w3c/csswg-drafts
|
closed
|
[css-text] Reconsider the resolution on #855
|
Closed Accepted by CSSWG Resolution Needs Edits Needs Testcase (WPT) css-text-3
|
In #855 it was resolved that CR gets treated as any other control char, but looking at the history, we had to make it render invisible in https://bugzilla.mozilla.org/show_bug.cgi?id=941940 for compat reasons.
It seems Gecko and WebKit render it invisible, and Blink just treats lone CRs as a space. Probably both behaviors are acceptable compat-wise, but that doesn't match the spec.
cc @jfkthame @litherum @kojiishi
|
1.0
|
[css-text] Reconsider the resolution on #855 - In #855 it was resolved that CR gets treated as any other control char, but looking at the history, we had to make it render invisible in https://bugzilla.mozilla.org/show_bug.cgi?id=941940 for compat reasons.
It seems Gecko and WebKit render it invisible, and Blink just treats lone CRs as a space. Probably both behaviors are acceptable compat-wise, but that doesn't match the spec.
cc @jfkthame @litherum @kojiishi
|
non_process
|
reconsider the resolution on in it was resolved that cr gets treated as any other control char but looking at the history we had to make it render invisible in for compat reasons it seems gecko and webkit render it invisible and blink just treats lone crs as an space probably both behaviors are acceptable compat wise but that doesn t match the spec cc jfkthame litherum kojiishi
| 0
|
213,239
| 23,969,135,016
|
IssuesEvent
|
2022-09-13 05:55:27
|
shaneclarke-whitesource/fancyBox
|
https://api.github.com/repos/shaneclarke-whitesource/fancyBox
|
closed
|
CVE-2020-11023 (Medium) detected in jquery-1.8.2.min.js - autoclosed
|
security vulnerability
|
## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.2/jquery.min.js</a></p>
<p>Path to dependency file: /demo/index.html</p>
<p>Path to vulnerable library: /demo/../lib/jquery-1.8.2.min.js,/lib/jquery-1.8.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-11023 (Medium) detected in jquery-1.8.2.min.js - autoclosed - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.2/jquery.min.js</a></p>
<p>Path to dependency file: /demo/index.html</p>
<p>Path to vulnerable library: /demo/../lib/jquery-1.8.2.min.js,/lib/jquery-1.8.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in jquery min js autoclosed cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file demo index html path to vulnerable library demo lib jquery min js lib jquery min js dependency hierarchy x jquery min js vulnerable library found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails
| 0
|
21,748
| 30,261,225,715
|
IssuesEvent
|
2023-07-07 08:22:36
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Mongo DB: Using Case statement with month/day breaks View Native Query option
|
Type:Bug Priority:P3 .Team/QueryProcessor :hammer_and_wrench:
|
### Describe the bug
There seem to be some odd cases when using Custom Columns in Mongo. For instance, using the case() function together with month() fails, and it also fails when you use day():
`case(month(now) = month(now), "a", "b")`
`case(day(now) = day(now), "a", "b")`
However it doesn't break if you use concat:
`case(concat("a", "b") = concat("a", "b"), "a", "b")`
### To Reproduce
1. Go to New Question -> Mongo DB -> Custom Column -> Type `case(month(now) = month(now), "a", "b")`
2. Click on View the Native Query
<img width="1509" alt="image" src="https://github.com/metabase/metabase/assets/110378427/b0393ab2-ce7a-45ae-a044-752deb587fea">
### Expected behavior
The Native Query is populated
### Logs
None
### Information about your Metabase installation
```JSON
1.46.5 and master
```
### Severity
The GUI query still runs so it's not a blocking issue
### Additional context
_No response_
|
1.0
|
Mongo DB: Using Case statement with month/day breaks View Native Query option - ### Describe the bug
There seem to be some odd cases when using Custom Columns in Mongo. For instance, using the case() function together with month() fails, and it also fails when you use day():
`case(month(now) = month(now), "a", "b")`
`case(day(now) = day(now), "a", "b")`
However it doesn't break if you use concat:
`case(concat("a", "b") = concat("a", "b"), "a", "b")`
### To Reproduce
1. Go to New Question -> Mongo DB -> Custom Column -> Type `case(month(now) = month(now), "a", "b")`
2. Click on View the Native Query
<img width="1509" alt="image" src="https://github.com/metabase/metabase/assets/110378427/b0393ab2-ce7a-45ae-a044-752deb587fea">
### Expected behavior
The Native Query is populated
### Logs
None
### Information about your Metabase installation
```JSON
1.46.5 and master
```
### Severity
The GUI query still runs so it's not a blocking issue
### Additional context
_No response_
|
process
|
mongo db using case statement with month day breaks view native query option describe the bug there seems to be some odd cases when using custom columns in mongo for instance using the case function together with month but it also fails when you use day case month now month now a b case day now day now a b however id doesn t break if you use concat case concat a b concat a b a b to reproduce go to new question mongo db custom column type case month now month now a b click on view the native query img width alt image src expected behavior the native query is populated logs none information about your metabase installation json and master severity the gui query still runs so it s not a blocking issue additional context no response
| 1
|
14,665
| 17,786,745,368
|
IssuesEvent
|
2021-08-31 12:02:07
|
googleapis/google-api-python-client
|
https://api.github.com/repos/googleapis/google-api-python-client
|
closed
|
Dependency Dashboard
|
type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Other Branches
These updates are pending. To force PRs open, click the checkbox below.
- [ ] <!-- other-branch=renovate/actions-github-script-4.x -->chore(deps): update actions/github-script action to v4.1.0
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Other Branches
These updates are pending. To force PRs open, click the checkbox below.
- [ ] <!-- other-branch=renovate/actions-github-script-4.x -->chore(deps): update actions/github-script action to v4.1.0
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses other branches these updates are pending to force prs open click the checkbox below chore deps update actions github script action to check this box to trigger a request for renovate to run again on this repository
| 1
|
229,967
| 7,602,566,811
|
IssuesEvent
|
2018-04-29 02:58:52
|
YannCaron/Game4Kids
|
https://api.github.com/repos/YannCaron/Game4Kids
|
closed
|
Extends the Phaser.Tween class
|
change medium priority
|
To Check the actor aliveness
Have a reflection about
??? Give ability to chain declaring
``` javascript
new Sequence()
    .addTween(function (tween) { tween.to({ x: '-50' }, 500); })
    .addTween(function (tween) { tween.to({ x: '+50' }, 500); })
    .addSequence(function (sequence) { /* sequence... */ })
    .repeat(1)
    .start();
```
|
1.0
|
Extends the Phaser.Tween class - To Check the actor aliveness
Have a reflection about
??? Give ability to chain declaring
``` javascript
new Sequence()
    .addTween(function (tween) { tween.to({ x: '-50' }, 500); })
    .addTween(function (tween) { tween.to({ x: '+50' }, 500); })
    .addSequence(function (sequence) { /* sequence... */ })
    .repeat(1)
    .start();
```
|
non_process
|
extends the phaser tween class to check the actor aliveness have a reflection about give ability to chain declaring javascript new sequence addtween function tween tween to x addtween function tween tween to x addsequence function sequence sequence repeat start
| 0
|
14,420
| 17,468,312,906
|
IssuesEvent
|
2021-08-06 20:34:43
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
Add yoshi-python group to CODEOWNERS
|
api: bigquery type: process
|
PR approvals by non-Google maintainers should also count towards the required reviews threshold, and most of them (all?) are members of the [yoshi-python]( https://github.com/orgs/googleapis/teams/yoshi-python) group. This group should thus be added to CODEOWNERS, similar to the agreement for the Python Pub/Sub repo.
|
1.0
|
Add yoshi-python group to CODEOWNERS - PR approvals by non-Google maintainers should also count towards the required reviews threshold, and most of them (all?) are members of the [yoshi-python]( https://github.com/orgs/googleapis/teams/yoshi-python) group. This group should thus be added to CODEOWNERS, similar to the agreement for the Python Pub/Sub repo.
|
process
|
add yoshi python group to codeowners pr approvals by non google maintainers should also count towards the required reviews threshold and most of them all are members of the group this group should thus be added to codeowners similar to the agreement for the python pub sub repo
| 1
|
3,726
| 6,732,939,112
|
IssuesEvent
|
2017-10-18 13:21:40
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Cvt pathway should be an autophagy
|
Autophagy Project cellular processes
|
After community feedback, we have decided to make the Cvt pathway a type of autophagy. Since currently autophagy is defined as being a catabolic pathway, we will change the name of the current term to catabolic autophagy and create a new autophagy grouping class. Catabolic autophagy and the Cvt pathway will then become children of the new class.
|
1.0
|
Cvt pathway should be an autophagy - After community feedback, we have decided to make the Cvt pathway a type of autophagy. Since currently autophagy is defined as being a catabolic pathway, we will change the name of the current term to catabolic autophagy and create a new autophagy grouping class. Catabolic autophagy and the Cvt pathway will then become children of the new class.
|
process
|
cvt pathway should be an autophagy after community feedback we have decided to make the cvt pathway a type of autophagy since currently autophagy is defined as being a catabolic pathway we will change the name of the current term to catabolic autophagy and create a new autophagy grouping class catabolic autophagy and the cvt pathway will then become children of the new class
| 1
|
10,348
| 13,174,385,519
|
IssuesEvent
|
2020-08-11 22:20:33
|
googleapis/github-repo-automation
|
https://api.github.com/repos/googleapis/github-repo-automation
|
closed
|
Support alternate master branch names
|
type: process
|
**Is your feature request related to a problem? Please describe.**
Hello! Recently there has been an increasing push to to [change away from master/slave language](https://www.bbc.com/news/technology-53050955#:~:text=The%20world's%20biggest%20site%20for,code%20%2D%20to%20a%20neutral%20term) in source control repositories. One notable impact of this shift is an increasing number of repositories without a `master` branch, instead shifting to names such as`main` or `trunk` instead. At the moment, this toolkit does not support these repositories due to the base branch being hardcoded as `master`. An example of this is below: https://github.com/googleapis/github-repo-automation/blob/3b5f955b24722fa883e6d5b423da3f11217be64a/src/lib/github.ts#L405-L411
**Describe the solution you'd like**
I've created a pull request which replaces this hardcoded value with the default branch, as indicated by GitHub. This pull request also adds an optional `baseBranchOverride` config value to the `config.yml` to allow changing of this behavior if necessary.
|
1.0
|
Support alternate master branch names - **Is your feature request related to a problem? Please describe.**
Hello! Recently there has been an increasing push to to [change away from master/slave language](https://www.bbc.com/news/technology-53050955#:~:text=The%20world's%20biggest%20site%20for,code%20%2D%20to%20a%20neutral%20term) in source control repositories. One notable impact of this shift is an increasing number of repositories without a `master` branch, instead shifting to names such as`main` or `trunk` instead. At the moment, this toolkit does not support these repositories due to the base branch being hardcoded as `master`. An example of this is below: https://github.com/googleapis/github-repo-automation/blob/3b5f955b24722fa883e6d5b423da3f11217be64a/src/lib/github.ts#L405-L411
**Describe the solution you'd like**
I've created a pull request which replaces this hardcoded value with the default branch, as indicated by GitHub. This pull request also adds an optional `baseBranchOverride` config value to the `config.yml` to allow changing of this behavior if necessary.
|
process
|
support alternate master branch names is your feature request related to a problem please describe hello recently there has been an increasing push to to in source control repositories one notable impact of this shift is an increasing number of repositories without a master branch instead shifting to names such as main or trunk instead at the moment this toolkit does not support these repositories due to the base branch being hardcoded as master an example of this is below describe the solution you d like i ve created a pull request which replaces this hardcoded value with the default branch as indicated by github this pull request also adds an optional basebranchoverride config value to the config yml to allow changing of this behavior if necessary
| 1
|
74,822
| 25,346,109,034
|
IssuesEvent
|
2022-11-19 07:54:11
|
naev/naev
|
https://api.github.com/repos/naev/naev
|
closed
|
Player escorts try to use player-defined weapon sets as auto weapon sets
|
Type-Defect Priority-High
|
bobbens
—
Today 22:00
oh my, I think I know why my escort uses the phermone emitter
so, the AI assumes that the outfits are being automatically handled and set
however, the player escorts use the weapon sets the player set, instead of the automagic ones
so the ai tries to trigger weapon set 5 as an instant set to launch fighters or whatever, but instead activates the phermone emitter set on that weapon set
I guess the best solution would be to move the weapon set setting stuff to lua-side in the AI, and have the player-side completely independent. Sort of like we do for the "special" outfits now
|
1.0
|
Player escorts try to use player-defined weapon sets as auto weapon sets - bobbens
—
Today 22:00
oh my, I think I know why my escort uses the phermone emitter
so, the AI assumes that the outfits are being automatically handled and set
however, the player escorts use the weapon sets the player set, instead of the automagic ones
so the ai tries to trigger weapon set 5 as an instant set to launch fighters or whatever, but instead activates the phermone emitter set on that weapon set
I guess the best solution would be to move the weapon set setting stuff to lua-side in the AI, and have the player-side completely independent. Sort of like we do for the "special" outfits now
|
non_process
|
player escorts try to use player defined weapon sets as auto weapon sets bobbens — today oh my i think i know why my escort uses the phermone emitter so the ai assumes that the outfits are being automatically handled and set however the player escorts use the weapon sets the player set instead of the automagic ones so the ai tries to trigger weapon set as an instant set to launch fighters or whatever but instead activates the phermone emitter set on that weapon set i guess the best solution would be to move the weapon set setting stuff to lua side in the ai and have the player side completely independent sort of like we do for the special outfits now
| 0
|
9,466
| 12,451,166,459
|
IssuesEvent
|
2020-05-27 10:00:06
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
Preprocessor generality
|
preprocessor
|
Some preprocessors like `zonal_mean` and `area_average` have recently (#825) been expanded to allow more generic operation beyond just a simple mean. This means that while they're called `mean` or `average`, they can also do medians, standard deviations, variance, etc.
It would be great to add this functionality to the other spatial and temporal preprocessors:
- `average_volume`
- `seasonal_mean`
- `time_average`
While we're there, we should also change `zonal_mean`'s argument from `mean_type` to `operation` to match `average_region`.
We wouldn't even need to change our recipes if we kept the current preprocessors names like `zonal_mean` or `average_volume` as special cases of the more generic functions.
|
1.0
|
Preprocessor generality - Some preprocessors like `zonal_mean` and `area_average` have recently (#825) been expanded to allow more generic operation beyond just a simple mean. This means that while they're called `mean` or `average`, they can also do medians, standard deviations, variance, etc.
It would be great to add this functionality to the other spatial and temporal preprocessors:
- `average_volume`
- `seasonal_mean`
- `time_average`
While we're there, we should also change `zonal_mean`'s argument from `mean_type` to `operation` to match `average_region`.
We wouldn't even need to change our recipes if we kept the current preprocessors names like `zonal_mean` or `average_volume` as special cases of the more generic functions.
|
process
|
preprocessor generality some preprocessors like zonal mean and area average have recently been expanded to allow more generic operation beyond just a simple mean this means that while they re called mean or average they can also do medians standard deviations variance etc it would be great to add this functionality to the other spatial and temporal preprocessors average volume seasonal mean time average while we re there we should also change zonal mean s argument from mean type to operation to match average region we wouldn t even need to change our recipes if we kept the current preprocessors names like zonal mean or average volume as special cases of the more generic functions
| 1
|
433,304
| 12,505,547,199
|
IssuesEvent
|
2020-06-02 10:56:11
|
input-output-hk/ouroboros-network
|
https://api.github.com/repos/input-output-hk/ouroboros-network
|
closed
|
Include genesis hash in hash chain
|
consensus priority high shelley mainnet transition
|
The prev-hash of the first block should be
```
hash( hash(final_byron_header) <> hash(shelley_genesis) )
```
|
1.0
|
Include genesis hash in hash chain - The prev-hash of the first block should be
```
hash( hash(final_byron_header) <> hash(shelley_genesis) )
```
|
non_process
|
include genesis hash in hash chain the prev hash of the first block should be hash hash final byron header hash shelley genesis
| 0
|
19,140
| 25,202,654,599
|
IssuesEvent
|
2022-11-13 09:48:07
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Some beatmaps are missing the "Spotlight" label
|
area:beatmap-processing
|
Here are the beatmaps that have no "Spotlight" tag:
https://osu.ppy.sh/beatmapsets/1570536
https://osu.ppy.sh/beatmapsets/1373644
https://osu.ppy.sh/beatmapsets/1714190
https://osu.ppy.sh/beatmapsets/1774999
https://osu.ppy.sh/beatmapsets/1742131
https://osu.ppy.sh/beatmapsets/543917
https://osu.ppy.sh/beatmapsets/1593278
https://osu.ppy.sh/beatmapsets/1610294
https://osu.ppy.sh/beatmapsets/1611987
https://osu.ppy.sh/beatmapsets/1762719
https://osu.ppy.sh/beatmapsets/1453784
https://osu.ppy.sh/beatmapsets/1328323
https://osu.ppy.sh/beatmapsets/1646183
https://osu.ppy.sh/beatmapsets/1769222
https://osu.ppy.sh/beatmapsets/1687320
https://osu.ppy.sh/beatmapsets/1507426
https://osu.ppy.sh/beatmapsets/1678112
https://osu.ppy.sh/beatmapsets/978534
https://osu.ppy.sh/beatmapsets/1734128
https://osu.ppy.sh/beatmapsets/1774562
https://osu.ppy.sh/beatmapsets/1031588
https://osu.ppy.sh/beatmapsets/933984
https://osu.ppy.sh/beatmapsets/1592786
https://osu.ppy.sh/beatmapsets/1525823
https://osu.ppy.sh/beatmapsets/74274
https://osu.ppy.sh/beatmapsets/1734804
https://osu.ppy.sh/beatmapsets/506110
https://osu.ppy.sh/beatmapsets/1683802
https://osu.ppy.sh/beatmapsets/1435937
https://osu.ppy.sh/beatmapsets/1741009
https://osu.ppy.sh/beatmapsets/365716
https://osu.ppy.sh/beatmapsets/1683972
https://osu.ppy.sh/beatmapsets/1378488
https://osu.ppy.sh/beatmapsets/1326970
https://osu.ppy.sh/beatmapsets/1645306
I'm sorry if this is so many. These beatmaps appeared in the Summer 2022 Playlist C, but probably peppy forgot to add these.
|
1.0
|
Some beatmaps are missing the "Spotlight" label - Here are the beatmaps that have no "Spotlight" tag:
https://osu.ppy.sh/beatmapsets/1570536
https://osu.ppy.sh/beatmapsets/1373644
https://osu.ppy.sh/beatmapsets/1714190
https://osu.ppy.sh/beatmapsets/1774999
https://osu.ppy.sh/beatmapsets/1742131
https://osu.ppy.sh/beatmapsets/543917
https://osu.ppy.sh/beatmapsets/1593278
https://osu.ppy.sh/beatmapsets/1610294
https://osu.ppy.sh/beatmapsets/1611987
https://osu.ppy.sh/beatmapsets/1762719
https://osu.ppy.sh/beatmapsets/1453784
https://osu.ppy.sh/beatmapsets/1328323
https://osu.ppy.sh/beatmapsets/1646183
https://osu.ppy.sh/beatmapsets/1769222
https://osu.ppy.sh/beatmapsets/1687320
https://osu.ppy.sh/beatmapsets/1507426
https://osu.ppy.sh/beatmapsets/1678112
https://osu.ppy.sh/beatmapsets/978534
https://osu.ppy.sh/beatmapsets/1734128
https://osu.ppy.sh/beatmapsets/1774562
https://osu.ppy.sh/beatmapsets/1031588
https://osu.ppy.sh/beatmapsets/933984
https://osu.ppy.sh/beatmapsets/1592786
https://osu.ppy.sh/beatmapsets/1525823
https://osu.ppy.sh/beatmapsets/74274
https://osu.ppy.sh/beatmapsets/1734804
https://osu.ppy.sh/beatmapsets/506110
https://osu.ppy.sh/beatmapsets/1683802
https://osu.ppy.sh/beatmapsets/1435937
https://osu.ppy.sh/beatmapsets/1741009
https://osu.ppy.sh/beatmapsets/365716
https://osu.ppy.sh/beatmapsets/1683972
https://osu.ppy.sh/beatmapsets/1378488
https://osu.ppy.sh/beatmapsets/1326970
https://osu.ppy.sh/beatmapsets/1645306
I'm sorry if this is so many. These beatmaps appeared in the Summer 2022 Playlist C, but probably peppy forgot to add these.
|
process
|
some beatmaps are missing the spotlight label here are the beatmaps that have no spotlight tag i m sorry if this is so many these beatmaps appeared in the summer playlist c but probably peppy forgot to add these
| 1
|
745,980
| 26,008,621,860
|
IssuesEvent
|
2022-12-20 22:10:12
|
verocloud/obsidian-mindmap-nextgen
|
https://api.github.com/repos/verocloud/obsidian-mindmap-nextgen
|
closed
|
Compile and release version 1.2
|
enhancement priority:high
|
Now that all Issues are closed, Release 1.2 can be tagged and compiled. If you believe some other items are still to be solved part of Release 1.2, please open corresponding Issues to track those.
|
1.0
|
Compile and release version 1.2 - Now that all Issues are closed, Release 1.2 can be tagged and compiled. If you believe some other items are still to be solved part of Release 1.2, please open corresponding Issues to track those.
|
non_process
|
compile and release version now that all issues are closed release can be tagged and compiled if you believe some other items are still to be solved part of release please open corresponding issues to track those
| 0
|
162,102
| 12,619,831,752
|
IssuesEvent
|
2020-06-13 02:52:41
|
Scholar-6/brillder
|
https://api.github.com/repos/Scholar-6/brillder
|
closed
|
Remove 'Review and Submit' button and extend synthesis right so that padding is the same as on left hand side
|
Betatester Request Input Brick Onboarding | UX
|
- [x] remove button
- [x] extend synthesis area
|
1.0
|
Remove 'Review and Submit' button and extend synthesis right so that padding is the same as on left hand side - - [x] remove button
- [x] extend synthesis area
|
non_process
|
remove review and submit button and extend synthesis right so that padding is the same as on left hand side remove button extend synthesis area
| 0
|
109,382
| 13,766,828,861
|
IssuesEvent
|
2020-10-07 15:00:48
|
unchartedelixir/uncharted
|
https://api.github.com/repos/unchartedelixir/uncharted
|
opened
|
Chart Browser Testing
|
Design
|
Find and fix bugs related to multi-browser use. Also document browser usage.
|
1.0
|
Chart Browser Testing - Find and fix bugs related to multi-browser use. Also document browser usage.
|
non_process
|
chart browser testing find and fix bugs related to multi browser use also document browser usage
| 0
|
672,876
| 22,843,158,869
|
IssuesEvent
|
2022-07-13 01:21:05
|
PollBuddy/PollBuddy
|
https://api.github.com/repos/PollBuddy/PollBuddy
|
closed
|
Results CSV export and table endpoints
|
backend high-priority
|
**Please describe what has to be done**
Create an endpoint in the backend (probably something like `/api/polls/:id/results/csv`) that gets the results of a poll and converts it into a CSV format. Generate a header top row followed by data rows. Headers could be something like `UserName, Email, FirstName, LastName`, then dynamic per question `Q1Answer, Correct`, `Q2Answer, Correct`, etc. Each row would then have those fields filled out per user and per answer they submit, so if Q1 allows multiple answers, that would create a new row per answer given. I'm fine with each row only having one question answer, or each row can have every question answer, although that feels a bit less readable/parsable.
Also, create an endpoint (probably something like `/api/polls/:id/results/table`) that gets the results of a poll and generates JSON that the frontend can use to build a results table. See #647.
**Additional context**
Ideally the CSV endpoint would be a clickable link that could be presented as a button in the frontend, not needing a fetch call, but this isn't a hard requirement, just would probably be easier.
Also, these functions can probably share an internal function to gather the results.
Also also, Prof. Turner asked if we could sort by RCS ID (aka UserName), so that would be nice to do too.
|
1.0
|
Results CSV export and table endpoints - **Please describe what has to be done**
Create an endpoint in the backend (probably something like `/api/polls/:id/results/csv`) that gets the results of a poll and converts it into a CSV format. Generate a header top row followed by data rows. Headers could be something like `UserName, Email, FirstName, LastName`, then dynamic per question `Q1Answer, Correct`, `Q2Answer, Correct`, etc. Each row would then have those fields filled out per user and per answer they submit, so if Q1 allows multiple answers, that would create a new row per answer given. I'm fine with each row only having one question answer, or each row can have every question answer, although that feels a bit less readable/parsable.
Also, create an endpoint (probably something like `/api/polls/:id/results/table`) that gets the results of a poll and generates JSON that the frontend can use to build a results table. See #647.
**Additional context**
Ideally the CSV endpoint would be a clickable link that could be presented as a button in the frontend, not needing a fetch call, but this isn't a hard requirement, just would probably be easier.
Also, these functions can probably share an internal function to gather the results.
Also also, Prof. Turner asked if we could sort by RCS ID (aka UserName), so that would be nice to do too.
|
non_process
|
results csv export and table endpoints please describe what has to be done create an endpoint in the backend probably something like api polls id results csv that gets the results of a poll and converts it into a csv format generate a header top row followed by data rows headers could be something like username email firstname lastname then dynamic per question correct correct etc each row would then have those fields filled out per user and per answer they submit so if allows multiple answers that would create a new row per answer given i m fine with each row only having one question answer or each row can have every question answer although that feels a bit less readable parsable also create an endpoint probably something like api polls id results table that gets the results of a poll and generates json that the frontend can use to build a results table see additional context ideally the csv endpoint would be a clickable link that could be presented as a button in the frontend not needing a fetch call but this isn t a hard requirement just would probably be easier also these functions can probably share an internal function to gather the results also also prof turner asked if we could sort by rcs id aka username so that would be nice to do too
| 0
|
1,708
| 4,350,468,274
|
IssuesEvent
|
2016-07-31 08:37:17
|
AkkadianGames/Nanoshooter
|
https://api.github.com/repos/AkkadianGames/Nanoshooter
|
closed
|
Issue migration — Nanoshooter framework issues to Susa
|
Process Ready
|
## Criteria
- [x] All framework-related issues in Nanoshooter are migrated to the Susa project.
- [x] All of the same labels are established for Susa.
|
1.0
|
Issue migration — Nanoshooter framework issues to Susa - ## Criteria
- [x] All framework-related issues in Nanoshooter are migrated to the Susa project.
- [x] All of the same labels are established for Susa.
|
process
|
issue migration — nanoshooter framework issues to susa criteria all framework related issues in nanoshooter are migrated to the susa project all of the same labels are established for susa
| 1
|
224,740
| 7,472,359,964
|
IssuesEvent
|
2018-04-03 12:25:50
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Function registry not available when node is switched during the conditional auth flow
|
Component/Auth Framework Priority/High Type/Bug
|
Following error is reproduced in a cluster setup when the initial request is handled by node 1 and returning request from an authenticator is handled by node 2. As a result authentication flow doesn't work as expected.
[2018-04-02 14:14:45,223] ERROR {org.wso2.carbon.identity.application.authentication.framework.config.model.graph.JsGraphBuilder} - Error in executing the javascript for service provider : travelocity, Javascript Fragment :
function (context) {
var isAdmin = hasRole(context, 'admin');
Log.info("--------------- Has Admin " + isAdmin);
if (isAdmin) {
executeStep({id: '2'});
executeStep({id: '3'});
}
}
<eval>:2 ReferenceError: "hasRole" is not defined
at jdk.nashorn.internal.runtime.ECMAErrors.error(ECMAErrors.java:57)
at jdk.nashorn.internal.runtime.ECMAErrors.referenceError(ECMAErrors.java:319)
at jdk.nashorn.internal.runtime.ECMAErrors.referenceError(ECMAErrors.java:291)
at jdk.nashorn.internal.objects.Global.__noSuchProperty__(Global.java:1441)
|
1.0
|
Function registry not available when node is switched during the conditional auth flow - Following error is reproduced in a cluster setup when the initial request is handled by node 1 and returning request from an authenticator is handled by node 2. As a result authentication flow doesn't work as expected.
[2018-04-02 14:14:45,223] ERROR {org.wso2.carbon.identity.application.authentication.framework.config.model.graph.JsGraphBuilder} - Error in executing the javascript for service provider : travelocity, Javascript Fragment :
function (context) {
var isAdmin = hasRole(context, 'admin');
Log.info("--------------- Has Admin " + isAdmin);
if (isAdmin) {
executeStep({id: '2'});
executeStep({id: '3'});
}
}
<eval>:2 ReferenceError: "hasRole" is not defined
at jdk.nashorn.internal.runtime.ECMAErrors.error(ECMAErrors.java:57)
at jdk.nashorn.internal.runtime.ECMAErrors.referenceError(ECMAErrors.java:319)
at jdk.nashorn.internal.runtime.ECMAErrors.referenceError(ECMAErrors.java:291)
at jdk.nashorn.internal.objects.Global.__noSuchProperty__(Global.java:1441)
|
non_process
|
function registry not available when node is switched during the conditional auth flow following error is reproduced in a cluster setup when the initial request is handled by node and returning request from an authenticator is handled by node as a result authentication flow doesn t work as expected error org carbon identity application authentication framework config model graph jsgraphbuilder error in executing the javascript for service provider travelocity javascript fragment function context var isadmin hasrole context admin log info has admin isadmin if isadmin executestep id executestep id referenceerror hasrole is not defined at jdk nashorn internal runtime ecmaerrors error ecmaerrors java at jdk nashorn internal runtime ecmaerrors referenceerror ecmaerrors java at jdk nashorn internal runtime ecmaerrors referenceerror ecmaerrors java at jdk nashorn internal objects global nosuchproperty global java
| 0
|
8,033
| 11,210,786,559
|
IssuesEvent
|
2020-01-06 14:03:43
|
kubeflow/kfctl
|
https://api.github.com/repos/kubeflow/kfctl
|
closed
|
Move kfctl py code into kubeflow/kfctl
|
area/kfctl kind/process priority/p0
|
Related to #7 - move code for kfctl to kubeflow/kfctl
* We need to move the python code related to testing kfctl into kubeflow/kfctl
* We should create the directory py/kubeflow/kfctl and all code should live there
* The kfctl related code in [py/kubeflow/kubeflow/ci](https://github.com/kubeflow/kubeflow/tree/master/py/kubeflow/kubeflow/ci) should move to [py/kubeflow/kfctl/ci]
* Code in kubeflow/kubeflow [testing/kfctl](https://github.com/kubeflow/kubeflow/tree/master/testing/kfctl) should move to [py/kubeflow/kfctl/ci] as well
* The E2E test workflow should be updated to use this code.
* If there is general code in [testing](https://github.com/kubeflow/kubeflow/tree/master/testing) that might be useful for other repos not just kubeflow/kfctl then we should consider moving it into the kubeflow/testing repo instead.
|
1.0
|
Move kfctl py code into kubeflow/kfctl - Related to #7 - move code for kfctl to kubeflow/kfctl
* We need to move the python code related to testing kfctl into kubeflow/kfctl
* We should create the directory py/kubeflow/kfctl and all code should live there
* The kfctl related code in [py/kubeflow/kubeflow/ci](https://github.com/kubeflow/kubeflow/tree/master/py/kubeflow/kubeflow/ci) should move to [py/kubeflow/kfctl/ci]
* Code in kubeflow/kubeflow [testing/kfctl](https://github.com/kubeflow/kubeflow/tree/master/testing/kfctl) should move to [py/kubeflow/kfctl/ci] as well
* The E2E test workflow should be updated to use this code.
* If there is general code in [testing](https://github.com/kubeflow/kubeflow/tree/master/testing) that might be useful for other repos not just kubeflow/kfctl then we should consider moving it into the kubeflow/testing repo instead.
|
process
|
move kfctl py code into kubeflow kfctl related to move code for kfctl to kubeflow kfctl we need to move the python code related to testing kfctl into kubeflow kfctl we should create the directory py kubeflow kfctl and all code should live there the kfctl related code in should move to code in kubeflow kubeflow should move to as well the test workflow should be updated to use this code if there is general code in that might be useful for other repos not just kubeflow kfctl then we should consider moving it into the kubeflow testing repo instead
| 1
|
7,138
| 10,280,661,413
|
IssuesEvent
|
2019-08-26 06:17:20
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Watcher Tasks PowerShell support
|
Pri2 automation/svc cxp process-automation/subsvc product-question triaged
|
Hi all, unfortunately I cannot find a powershell command / module to administer/control watcher tasks. Is there any powershell support in latest Powershell Az module?
Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ff71649c-9431-4a1b-22e7-eac4315f9c5b
* Version Independent ID: a38685e3-5dbf-8d25-1134-c56376a42017
* Content: [Create a watcher task in the Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-watchers-tutorial)
* Content Source: [articles/automation/automation-watchers-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-watchers-tutorial.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @eamonoreilly
* Microsoft Alias: **eamono**
|
1.0
|
Watcher Tasks PowerShell support - Hi all, unfortunately I cannot find a powershell command / module to administer/control watcher tasks. Is there any powershell support in latest Powershell Az module?
Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ff71649c-9431-4a1b-22e7-eac4315f9c5b
* Version Independent ID: a38685e3-5dbf-8d25-1134-c56376a42017
* Content: [Create a watcher task in the Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-watchers-tutorial)
* Content Source: [articles/automation/automation-watchers-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-watchers-tutorial.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @eamonoreilly
* Microsoft Alias: **eamono**
|
process
|
watcher tasks powershell support hi all unfortunately i cannot find a powershell command module to administer control watcher tasks is there any powershell support in latest powershell az module thanks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login eamonoreilly microsoft alias eamono
| 1
|
8,251
| 11,421,370,859
|
IssuesEvent
|
2020-02-03 12:02:40
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
Live Reload when changing locals for posthtml-expressions
|
:bug: Bug CSS Preprocessing Stale
|
# ❔ Question
Is there a way to tell Parcel to reload the page while developing, when one changes the locals that [`posthtml-expressions`](https://github.com/posthtml/posthtml-expressions) uses?
I tried putting parts of these locals into [a js file](https://github.com/optikfluffel/35c3.info/blob/master/src/shortcuts.js) and require them in my [`.posthtmlrc`](https://github.com/optikfluffel/35c3.info/blob/master/.posthtmlrc.js#L1-L4), but that doesn't do anything. Probably because these config files are only read once on startup.
## 🔦 Context
I'd like to be able to preview changes in the data `posthtml-expressions` uses, while running the dev server. Right now I have to restart it every time.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | v1.10.3
| Node | v11.2.0
| npm | 6.4.1
| macOS | 10.14.1
|
1.0
|
Live Reload when changing locals for posthtml-expressions - # ❔ Question
Is there a way to tell Parcel to reload the page while developing, when one changes the locals that [`posthtml-expressions`](https://github.com/posthtml/posthtml-expressions) uses?
I tried putting parts of these locals into [a js file](https://github.com/optikfluffel/35c3.info/blob/master/src/shortcuts.js) and require them in my [`.posthtmlrc`](https://github.com/optikfluffel/35c3.info/blob/master/.posthtmlrc.js#L1-L4), but that doesn't do anything. Probably because these config files are only read once on startup.
## 🔦 Context
I'd like to be able to preview changes in the data `posthtml-expressions` uses, while running the dev server. Right now I have to restart it every time.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | v1.10.3
| Node | v11.2.0
| npm | 6.4.1
| macOS | 10.14.1
|
process
|
live reload when changing locals for posthtml expressions ❔ question is there a way to tell parcel to reload the page while developing when one changes the locals that uses i tried putting parts of these locals into and require them in my but that doesn t do anything probably because these config files are only read once on startup 🔦 context i d like to be able to preview changes in the data posthtml expressions uses while running the dev server right now i have to restart it every time 🌍 your environment software version s parcel node npm macos
| 1
|
19,125
| 25,172,847,256
|
IssuesEvent
|
2022-11-11 06:03:12
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Web Report Improvement
|
issue-processing-state-06
|
There are two points in the Web Report that need to be improved.
1. Fix the typo in the description in the Radar Chart section
"radar chart" is misspelled as "radare chart".

2. Darken the grid lines of the Radar Chart
To look up the chart easier, we need darker grid lines.

|
1.0
|
Web Report Improvement - There are two points in the Web Report that need to be improved.
1. Fix the typo in the description in the Radar Chart section
"radar chart" is misspelled as "radare chart".

2. Darken the grid lines of the Radar Chart
To look up the chart easier, we need darker grid lines.

|
process
|
web report improvement there are two points in the web report that need to be improved fix the typo in the description in the radar chart section radar chart is misspelled as radare chart darken the grid lines of the radar chart to look up the chart easier we need darker grid lines
| 1
|
18,555
| 24,555,453,329
|
IssuesEvent
|
2022-10-12 15:31:52
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Formstep > text choice > Participant is not able to select multiple answer options even though 'Multiple select' option is selected in the SB
|
Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:
1. In SB, create a study with a 'Text choice' response type by selecting multiple select option
2. Launch the study
3. Sign up or sign in to the mobile app
4. Enroll to the study
5. Click on text choice activity
6. Try to select multiple answer options
AR: Participant is not able to select multiple answer options
ER: Participant should be able to select multiple answer options when the 'Multiple select' option is selected in the SB
https://user-images.githubusercontent.com/71445210/186621282-c5ac4ade-9aec-406d-9eaf-145e46c93716.MOV
|
3.0
|
[iOS] Formstep > text choice > Participant is not able to select multiple answer options even though 'Multiple select' option is selected in the SB - Steps:
1. In SB, create a study with a 'Text choice' response type by selecting multiple select option
2. Launch the study
3. Sign up or sign in to the mobile app
4. Enroll to the study
5. Click on text choice activity
6. Try to select multiple answer options
AR: Participant is not able to select multiple answer options
ER: Participant should be able to select multiple answer options when the 'Multiple select' option is selected in the SB
https://user-images.githubusercontent.com/71445210/186621282-c5ac4ade-9aec-406d-9eaf-145e46c93716.MOV
|
process
|
formstep text choice participant is not able to select multiple answer options even though multiple select option is selected in the sb steps in sb create a study with a text choice response type by selecting multiple select option launch the study sign up or sign in to the mobile app enroll to the study click on text choice activity try to select multiple answer options ar participant is not able to select multiple answer options er participant should be able to select multiple answer options when the multiple select option is selected in the sb
| 1
|
357,578
| 25,176,407,180
|
IssuesEvent
|
2022-11-11 09:39:10
|
nickeltea/pe
|
https://api.github.com/repos/nickeltea/pe
|
opened
|
Miscategorised user story
|
severity.VeryLow type.DocumentationBug
|


As the ability to delegate tasks is a part of the product's value proposition, it should be classified as essential (3 stars)
<!--session: 1668153092572-6a8c1793-80dc-4aae-86f0-710d52a5f041-->
<!--Version: Web v3.4.4-->
|
1.0
|
Miscategorised user story - 

As the ability to delegate tasks is a part of the product's value proposition, it should be classified as essential (3 stars)
<!--session: 1668153092572-6a8c1793-80dc-4aae-86f0-710d52a5f041-->
<!--Version: Web v3.4.4-->
|
non_process
|
miscategorised user story as the ability to delegate tasks is a part of the product s value proposition it should be classified as essential stars
| 0
|
2,092
| 3,276,054,009
|
IssuesEvent
|
2015-10-26 17:52:08
|
runspired/smoke-and-mirrors
|
https://api.github.com/repos/runspired/smoke-and-mirrors
|
closed
|
[FEAT] `scheduleIntoFrame`, an explicitly triggered requestAnimationFrame queue.
|
FEATURE PERFORMANCE
|
There is currently a flicker when prepending new items to a collection. This flicker occurs because wrapper components are inserted into the collection in a different frame from the one in which `scrollTop` is modified. To fix this, we need tighter control over frame scheduling.
`scheduleIntoFrame` gives you access to a `frameQueue`, similar in nature to `backburner.schedule` (`Ember.run.schedule`). This queue must be manually flushed, at which point it's work will all be performed in the `nextFrame`.
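The queue described above can be sketched as follows. This is a minimal illustration, not the actual smoke-and-mirrors implementation; the `FrameQueue` class name and the injectable `schedule` parameter are assumptions made so the sketch is self-contained and testable outside a browser.

```javascript
// Sketch of a manually flushed frame queue: work is collected with
// scheduleIntoFrame() and only executed when flush() is called, at which
// point every queued job runs inside the SAME animation frame, so DOM
// insertion and scrollTop adjustment land in one paint (no flicker).
class FrameQueue {
  constructor(schedule) {
    // In a browser this defaults to requestAnimationFrame; it is
    // injectable here purely so the sketch can run in other environments.
    this.schedule = schedule || ((cb) => requestAnimationFrame(cb));
    this.jobs = [];
  }

  scheduleIntoFrame(job) {
    // Queue the job; nothing is scheduled for execution yet.
    this.jobs.push(job);
  }

  flush() {
    // Drain the queue, then run every job in a single frame callback.
    const jobs = this.jobs;
    this.jobs = [];
    this.schedule(() => jobs.forEach((job) => job()));
  }
}
```

Usage would follow the backburner analogy: queue the wrapper-component insertion and the `scrollTop` correction, then call `flush()` once so both execute in the next frame together.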
|
True
|
[FEAT] `scheduleIntoFrame`, an explicitly triggered requestAnimationFrame queue. - There is currently a flicker when prepending new items to a collection. This flicker occurs because wrapper components are inserted into the collection in a different frame from the one in which `scrollTop` is modified. To fix this, we need tighter control over frame scheduling.
`scheduleIntoFrame` gives you access to a `frameQueue`, similar in nature to `backburner.schedule` (`Ember.run.schedule`). This queue must be manually flushed, at which point it's work will all be performed in the `nextFrame`.
|
non_process
|
scheduleintoframe an explicitly triggered requestanimationframe queue there is currently a flicker when prepending new items to a collection this flicker occurs because wrapper components are inserted into the collection in a different frame from the one in which scrolltop is modified to fix this we need tighter control over frame scheduling scheduleintoframe gives you access to a framequeue similar in nature to backburner schedule ember run schedule this queue must be manually flushed at which point it s work will all be performed in the nextframe
| 0
|
19,440
| 26,981,718,290
|
IssuesEvent
|
2023-02-09 13:34:36
|
sekiguchi-nagisa/ydsh
|
https://api.github.com/repos/sekiguchi-nagisa/ydsh
|
closed
|
change user-defined completer behavior (quote, default completion)
|
incompatible change Completor
|
change the following
* does not quote completion candidates
* does not perform file name completion even if no candidates
|
True
|
change user-defined completer behavior (quote, default completion) - change the following
* does not quote completion candidates
* does not perform file name completion even if no candidates
|
non_process
|
change user defined completer behavior quote default completion change the following does not quote completion candidates does not perform file name completion even if no candidates
| 0
|
21,693
| 30,190,760,449
|
IssuesEvent
|
2023-07-04 15:11:48
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
"Expected native source query to be a string, got: clojure.lang.PersistentArrayMap" on nested queries
|
Type:Bug Priority:P1 Querying/Processor .Backend .Regression .Blocker
|
### Describe the bug
When using "explore results", there's something we're sending wrong to the backend which causes the exception
### To Reproduce
1) new GUI query -> invoices -> count by account plan and source. See the SQL and convert it so SQL. Save it
2) the click explore results, see the BE exception
### Expected behavior
It should work
### Logs
```
2023-07-04 15:10:34,726 ERROR middleware.catch-exceptions :: Error processing query: Expected native source query to be a string, got: clojure.lang.PersistentArrayMap
{:database_id 2,
:started_at #t "2023-07-04T15:10:34.406006Z[GMT]",
:error_type :invalid-query,
:json_query
{:database 2,
:query {:source-table "card__9"},
:type "query",
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> driver.sql.query_processor$sql_source_query.invokeStatic(query_processor.clj:60)"
"driver.sql.query_processor$sql_source_query.invoke(query_processor.clj:56)"
"driver.sql.query_processor$apply_source_query.invokeStatic(query_processor.clj:1403)"
"driver.sql.query_processor$apply_source_query.invoke(query_processor.clj:1391)"
"driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1420)"
"driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1412)"
"driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1447)"
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1438)"
"driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1456)"
"driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1452)"
"driver.sql$fn__86643.invokeStatic(sql.clj:42)"
"driver.sql$fn__86643.invoke(sql.clj:40)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__75446.invoke(mbql_to_native.clj:21)"
"query_processor$fn__77554$combined_post_process__77559$combined_post_process_STAR___77560.invoke(query_processor.clj:260)"
"query_processor$fn__77554$combined_pre_process__77555$combined_pre_process_STAR___77556.invoke(query_processor.clj:257)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__76232$fn__76237.invoke(resolve_database_and_driver.clj:36)"
"driver$do_with_driver.invokeStatic(driver.clj:91)"
"driver$do_with_driver.invoke(driver.clj:86)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__76232.invoke(resolve_database_and_driver.clj:35)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__72216.invoke(fetch_source_query.clj:312)"
"query_processor.middleware.store$initialize_store$fn__72397$fn__72398.invoke(store.clj:12)"
"query_processor.store$do_with_store.invokeStatic(store.clj:56)"
"query_processor.store$do_with_store.invoke(store.clj:50)"
"query_processor.middleware.store$initialize_store$fn__72397.invoke(store.clj:11)"
"query_processor.middleware.normalize_query$normalize$fn__76528.invoke(normalize_query.clj:36)"
"metabase_enterprise.audit_app.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__82999.invoke(handle_audit_queries.clj:131)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__73922.invoke(constraints.clj:54)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__76457.invoke(process_userland_query.clj:151)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__76854.invoke(catch_exceptions.clj:171)"
"query_processor.reducible$async_qp$qp_STAR___66414$thunk__66416.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___66414.invoke(reducible.clj:109)"
"query_processor.reducible$async_qp$qp_STAR___66414.invoke(reducible.clj:94)"
"query_processor.reducible$sync_qp$qp_STAR___66426.doInvoke(reducible.clj:129)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:383)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:379)"
"query_processor$fn__77603$process_query_and_save_execution_BANG___77612$fn__77615.invoke(query_processor.clj:394)"
"query_processor$fn__77603$process_query_and_save_execution_BANG___77612.invoke(query_processor.clj:387)"
"query_processor$fn__77648$process_query_and_save_with_max_results_constraints_BANG___77657$fn__77660.invoke(query_processor.clj:406)"
"query_processor$fn__77648$process_query_and_save_with_max_results_constraints_BANG___77657.invoke(query_processor.clj:399)"
"api.dataset$run_query_async$fn__98388.invoke(dataset.clj:74)"
"query_processor.streaming$streaming_response_STAR_$fn__60960$fn__60961.invoke(streaming.clj:166)"
"query_processor.streaming$streaming_response_STAR_$fn__60960.invoke(streaming.clj:165)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:69)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:67)"
"async.streaming_response$do_f_async$task__39730.invoke(streaming_response.clj:88)"],
:card_id 9,
:context :ad-hoc,
:error "Expected native source query to be a string, got: clojure.lang.PersistentArrayMap",
:row_count 0,
:running_time 0,
:preprocessed
{:database 2,
:query
{:source-card-id 9,
:source-metadata
[{:display_name "accounts__via__account_id__plan",
:field_ref [:field "accounts__via__account_id__plan" {:base-type :type/Text}],
:name "accounts__via__account_id__plan",
:base_type :type/Text,
:effective_type :type/Text,
:semantic_type nil,
:fingerprint
{:global {:distinct-count 1, :nil% 0.0},
:type
{:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 5.0}}}}
{:display_name "accounts__via__account_id__source",
:field_ref [:field "accounts__via__account_id__source" {:base-type :type/Text}],
:name "accounts__via__account_id__source",
:base_type :type/Text,
:effective_type :type/Text,
:semantic_type :type/Source,
:fingerprint
{:global {:distinct-count 1, :nil% 0.0},
:type
{:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 8.0}}}}
{:display_name "count",
:field_ref [:field "count" {:base-type :type/BigInteger}],
:name "count",
:base_type :type/BigInteger,
:effective_type :type/BigInteger,
:semantic_type :type/Quantity,
:fingerprint
{:global {:distinct-count 1, :nil% 0.0},
:type {:type/Number {:min 1962.0, :q1 1962.0, :q3 1962.0, :max 1962.0, :sd nil, :avg 1962.0}}}}],
:fields
[[:field "accounts__via__account_id__plan" {:base-type :type/Text}]
[:field "accounts__via__account_id__source" {:base-type :type/Text}]
[:field "count" {:base-type :type/BigInteger}]],
:source-query
{:collection "invoices",
:native
{:collection "invoices",
:query
"SELECT\n \"accounts__via__account_id\".\"plan\" AS \"accounts__via__account_id__plan\",\n \"accounts__via__account_id\".\"source\" AS \"accounts__via__account_id__source\",\n COUNT(*) AS \"count\"\nFROM\n \"public\".\"invoices\"\n \nLEFT JOIN \"public\".\"accounts\" AS \"accounts__via__account_id\" ON \"public\".\"invoices\".\"account_id\" = \"accounts__via__account_id\".\"id\"\nGROUP BY\n \"accounts__via__account_id\".\"plan\",\n \"accounts__via__account_id\".\"source\"\nORDER BY\n \"accounts__via__account_id\".\"plan\" ASC,\n \"accounts__via__account_id\".\"source\" ASC"}},
:limit 1048575,
:metabase.query-processor.middleware.limit/original-limit nil},
:type :query,
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true},
:info {:executed-by 1, :context :ad-hoc, :card-id 9}},
:ex-data
{:type :invalid-query,
:query
{:collection "invoices",
:query
"SELECT\n \"accounts__via__account_id\".\"plan\" AS \"accounts__via__account_id__plan\",\n \"accounts__via__account_id\".\"source\" AS \"accounts__via__account_id__source\",\n COUNT(*) AS \"count\"\nFROM\n \"public\".\"invoices\"\n \nLEFT JOIN \"public\".\"accounts\" AS \"accounts__via__account_id\" ON \"public\".\"invoices\".\"account_id\" = \"accounts__via__account_id\".\"id\"\nGROUP BY\n \"accounts__via__account_id\".\"plan\",\n \"accounts__via__account_id\".\"source\"\nORDER BY\n \"accounts__via__account_id\".\"plan\" ASC,\n \"accounts__via__account_id\".\"source\" ASC"}},
:data {:rows [], :cols []}}
```
### Information about your Metabase installation
```JSON
v47.0-RC2
```
### Severity
P1
### Additional context
_No response_
|
1.0
|
"Expected native source query to be a string, got: clojure.lang.PersistentArrayMap" on nested queries - ### Describe the bug
When using "explore results", there's something we're sending wrong to the backend which causes the exception
### To Reproduce
1) new GUI query -> invoices -> count by account plan and source. See the SQL and convert it so SQL. Save it
2) the click explore results, see the BE exception
### Expected behavior
It should work
### Logs
```
2023-07-04 15:10:34,726 ERROR middleware.catch-exceptions :: Error processing query: Expected native source query to be a string, got: clojure.lang.PersistentArrayMap
{:database_id 2,
:started_at #t "2023-07-04T15:10:34.406006Z[GMT]",
:error_type :invalid-query,
:json_query
{:database 2,
:query {:source-table "card__9"},
:type "query",
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> driver.sql.query_processor$sql_source_query.invokeStatic(query_processor.clj:60)"
"driver.sql.query_processor$sql_source_query.invoke(query_processor.clj:56)"
"driver.sql.query_processor$apply_source_query.invokeStatic(query_processor.clj:1403)"
"driver.sql.query_processor$apply_source_query.invoke(query_processor.clj:1391)"
"driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1420)"
"driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1412)"
"driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1447)"
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1438)"
"driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1456)"
"driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1452)"
"driver.sql$fn__86643.invokeStatic(sql.clj:42)"
"driver.sql$fn__86643.invoke(sql.clj:40)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__75446.invoke(mbql_to_native.clj:21)"
"query_processor$fn__77554$combined_post_process__77559$combined_post_process_STAR___77560.invoke(query_processor.clj:260)"
"query_processor$fn__77554$combined_pre_process__77555$combined_pre_process_STAR___77556.invoke(query_processor.clj:257)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__76232$fn__76237.invoke(resolve_database_and_driver.clj:36)"
"driver$do_with_driver.invokeStatic(driver.clj:91)"
"driver$do_with_driver.invoke(driver.clj:86)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__76232.invoke(resolve_database_and_driver.clj:35)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__72216.invoke(fetch_source_query.clj:312)"
"query_processor.middleware.store$initialize_store$fn__72397$fn__72398.invoke(store.clj:12)"
"query_processor.store$do_with_store.invokeStatic(store.clj:56)"
"query_processor.store$do_with_store.invoke(store.clj:50)"
"query_processor.middleware.store$initialize_store$fn__72397.invoke(store.clj:11)"
"query_processor.middleware.normalize_query$normalize$fn__76528.invoke(normalize_query.clj:36)"
"metabase_enterprise.audit_app.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__82999.invoke(handle_audit_queries.clj:131)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__73922.invoke(constraints.clj:54)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__76457.invoke(process_userland_query.clj:151)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__76854.invoke(catch_exceptions.clj:171)"
"query_processor.reducible$async_qp$qp_STAR___66414$thunk__66416.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___66414.invoke(reducible.clj:109)"
"query_processor.reducible$async_qp$qp_STAR___66414.invoke(reducible.clj:94)"
"query_processor.reducible$sync_qp$qp_STAR___66426.doInvoke(reducible.clj:129)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:383)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:379)"
"query_processor$fn__77603$process_query_and_save_execution_BANG___77612$fn__77615.invoke(query_processor.clj:394)"
"query_processor$fn__77603$process_query_and_save_execution_BANG___77612.invoke(query_processor.clj:387)"
"query_processor$fn__77648$process_query_and_save_with_max_results_constraints_BANG___77657$fn__77660.invoke(query_processor.clj:406)"
"query_processor$fn__77648$process_query_and_save_with_max_results_constraints_BANG___77657.invoke(query_processor.clj:399)"
"api.dataset$run_query_async$fn__98388.invoke(dataset.clj:74)"
"query_processor.streaming$streaming_response_STAR_$fn__60960$fn__60961.invoke(streaming.clj:166)"
"query_processor.streaming$streaming_response_STAR_$fn__60960.invoke(streaming.clj:165)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:69)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:67)"
"async.streaming_response$do_f_async$task__39730.invoke(streaming_response.clj:88)"],
:card_id 9,
:context :ad-hoc,
:error "Expected native source query to be a string, got: clojure.lang.PersistentArrayMap",
:row_count 0,
:running_time 0,
:preprocessed
{:database 2,
:query
{:source-card-id 9,
:source-metadata
[{:display_name "accounts__via__account_id__plan",
:field_ref [:field "accounts__via__account_id__plan" {:base-type :type/Text}],
:name "accounts__via__account_id__plan",
:base_type :type/Text,
:effective_type :type/Text,
:semantic_type nil,
:fingerprint
{:global {:distinct-count 1, :nil% 0.0},
:type
{:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 5.0}}}}
{:display_name "accounts__via__account_id__source",
:field_ref [:field "accounts__via__account_id__source" {:base-type :type/Text}],
:name "accounts__via__account_id__source",
:base_type :type/Text,
:effective_type :type/Text,
:semantic_type :type/Source,
:fingerprint
{:global {:distinct-count 1, :nil% 0.0},
:type
{:type/Text {:percent-json 0.0, :percent-url 0.0, :percent-email 0.0, :percent-state 0.0, :average-length 8.0}}}}
{:display_name "count",
:field_ref [:field "count" {:base-type :type/BigInteger}],
:name "count",
:base_type :type/BigInteger,
:effective_type :type/BigInteger,
:semantic_type :type/Quantity,
:fingerprint
{:global {:distinct-count 1, :nil% 0.0},
:type {:type/Number {:min 1962.0, :q1 1962.0, :q3 1962.0, :max 1962.0, :sd nil, :avg 1962.0}}}}],
:fields
[[:field "accounts__via__account_id__plan" {:base-type :type/Text}]
[:field "accounts__via__account_id__source" {:base-type :type/Text}]
[:field "count" {:base-type :type/BigInteger}]],
:source-query
{:collection "invoices",
:native
{:collection "invoices",
:query
"SELECT\n \"accounts__via__account_id\".\"plan\" AS \"accounts__via__account_id__plan\",\n \"accounts__via__account_id\".\"source\" AS \"accounts__via__account_id__source\",\n COUNT(*) AS \"count\"\nFROM\n \"public\".\"invoices\"\n \nLEFT JOIN \"public\".\"accounts\" AS \"accounts__via__account_id\" ON \"public\".\"invoices\".\"account_id\" = \"accounts__via__account_id\".\"id\"\nGROUP BY\n \"accounts__via__account_id\".\"plan\",\n \"accounts__via__account_id\".\"source\"\nORDER BY\n \"accounts__via__account_id\".\"plan\" ASC,\n \"accounts__via__account_id\".\"source\" ASC"}},
:limit 1048575,
:metabase.query-processor.middleware.limit/original-limit nil},
:type :query,
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true},
:info {:executed-by 1, :context :ad-hoc, :card-id 9}},
:ex-data
{:type :invalid-query,
:query
{:collection "invoices",
:query
"SELECT\n \"accounts__via__account_id\".\"plan\" AS \"accounts__via__account_id__plan\",\n \"accounts__via__account_id\".\"source\" AS \"accounts__via__account_id__source\",\n COUNT(*) AS \"count\"\nFROM\n \"public\".\"invoices\"\n \nLEFT JOIN \"public\".\"accounts\" AS \"accounts__via__account_id\" ON \"public\".\"invoices\".\"account_id\" = \"accounts__via__account_id\".\"id\"\nGROUP BY\n \"accounts__via__account_id\".\"plan\",\n \"accounts__via__account_id\".\"source\"\nORDER BY\n \"accounts__via__account_id\".\"plan\" ASC,\n \"accounts__via__account_id\".\"source\" ASC"}},
:data {:rows [], :cols []}}
```
### Information about your Metabase installation
```JSON
v47.0-RC2
```
### Severity
P1
### Additional context
_No response_
|
process
|
expected native source query to be a string got clojure lang persistentarraymap on nested queries describe the bug when using explore results there s something we re sending wrong to the backend which causes the exception to reproduce new gui query invoices count by account plan and source see the sql and convert it so sql save it the click explore results see the be exception expected behavior it should work logs error middleware catch exceptions error processing query expected native source query to be a string got clojure lang persistentarraymap database id started at t error type invalid query json query database query source table card type query parameters middleware js int to string true add default userland constraints true native nil status failed class clojure lang exceptioninfo stacktrace driver sql query processor sql source query invokestatic query processor clj driver sql query processor sql source query invoke query processor clj driver sql query processor apply source query invokestatic query processor clj driver sql query processor apply source query invoke query processor clj driver sql query processor apply clauses invokestatic query processor clj driver sql query processor apply clauses invoke query processor clj driver sql query processor mbql gt honeysql invokestatic query processor clj driver sql query processor mbql gt honeysql invoke query processor clj driver sql query processor mbql gt native invokestatic query processor clj driver sql query processor mbql gt native invoke query processor clj driver sql fn invokestatic sql clj driver sql fn invoke sql clj query processor middleware mbql to native query gt native form invokestatic mbql to native clj query processor middleware mbql to native query gt native form invoke mbql to native clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor fn combined post process combined post process star invoke query processor clj query processor fn 
combined pre process combined pre process star invoke query processor clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware normalize query normalize fn invoke normalize query clj metabase enterprise audit app query processor middleware handle audit queries handle internal queries fn invoke handle audit queries clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star doinvoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn 
process query and save with max results constraints bang invoke query processor clj api dataset run query async fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async task invoke streaming response clj card id context ad hoc error expected native source query to be a string got clojure lang persistentarraymap row count running time preprocessed database query source card id source metadata display name accounts via account id plan field ref name accounts via account id plan base type type text effective type type text semantic type nil fingerprint global distinct count nil type type text percent json percent url percent email percent state average length display name accounts via account id source field ref name accounts via account id source base type type text effective type type text semantic type type source fingerprint global distinct count nil type type text percent json percent url percent email percent state average length display name count field ref name count base type type biginteger effective type type biginteger semantic type type quantity fingerprint global distinct count nil type type number min max sd nil avg fields source query collection invoices native collection invoices query select n accounts via account id plan as accounts via account id plan n accounts via account id source as accounts via account id source n count as count nfrom n public invoices n nleft join public accounts as accounts via account id on public invoices account id accounts via account id id ngroup by n accounts via account id plan n accounts via account id source norder by n accounts via account id plan asc n accounts via account id source asc limit metabase query processor middleware 
limit original limit nil type query middleware js int to string true add default userland constraints true info executed by context ad hoc card id ex data type invalid query query collection invoices query select n accounts via account id plan as accounts via account id plan n accounts via account id source as accounts via account id source n count as count nfrom n public invoices n nleft join public accounts as accounts via account id on public invoices account id accounts via account id id ngroup by n accounts via account id plan n accounts via account id source norder by n accounts via account id plan asc n accounts via account id source asc data rows cols information about your metabase installation json severity additional context no response
| 1
|
364,847
| 10,774,014,577
|
IssuesEvent
|
2019-11-03 01:08:06
|
official-antistasi-community/A3-Antistasi
|
https://api.github.com/repos/official-antistasi-community/A3-Antistasi
|
closed
|
DS CTD Error on Tanoa
|
Priority bug
|
```
18:17:24 ErrorMessage: File mpmissions\__cur_mp.Tanoa\mission.sqm, line 440: .ScenarioData: Member already defined.
18:17:24 Application terminated intentionally
ErrorMessage: File mpmissions\__cur_mp.Tanoa\mission.sqm, line 440: .ScenarioData: Member already defined.
```
https://github.com/official-antistasi-community/A3-Antistasi/blob/0ecbf4d8f06a499558b9c4d2ce86ad7a89ac3b67/Map-Templates/Antistasi-WotP.Tanoa/mission.sqm#L152-L164
https://github.com/official-antistasi-community/A3-Antistasi/blob/0ecbf4d8f06a499558b9c4d2ce86ad7a89ac3b67/Map-Templates/Antistasi-WotP.Tanoa/mission.sqm#L437-L440
The error is pretty clear, but I don't know why it's suddenly here now.
|
1.0
|
DS CTD Error on Tanoa - ```
18:17:24 ErrorMessage: File mpmissions\__cur_mp.Tanoa\mission.sqm, line 440: .ScenarioData: Member already defined.
18:17:24 Application terminated intentionally
ErrorMessage: File mpmissions\__cur_mp.Tanoa\mission.sqm, line 440: .ScenarioData: Member already defined.
```
https://github.com/official-antistasi-community/A3-Antistasi/blob/0ecbf4d8f06a499558b9c4d2ce86ad7a89ac3b67/Map-Templates/Antistasi-WotP.Tanoa/mission.sqm#L152-L164
https://github.com/official-antistasi-community/A3-Antistasi/blob/0ecbf4d8f06a499558b9c4d2ce86ad7a89ac3b67/Map-Templates/Antistasi-WotP.Tanoa/mission.sqm#L437-L440
The error is pretty clear, but I don't know why its suddenly here now.
|
non_process
|
ds ctd error on tanoa errormessage file mpmissions cur mp tanoa mission sqm line scenariodata member already defined application terminated intentionally errormessage file mpmissions cur mp tanoa mission sqm line scenariodata member already defined the error is pretty clear but i don t know why its suddenly here now
| 0
|
21,823
| 30,316,737,543
|
IssuesEvent
|
2023-07-10 16:04:15
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
New Term - superfamily
|
Term - add Class - Taxon normative Process - complete
|
## New Term
Submitter: Andréa Matsunaga
Justification: Among animals (e.g., mollusks and tardigrades), individuals are classified in a taxonomic category, subordinate to an order and superior to a family. In order to effectively capture this more refined classification information, I recommend the addition of superfamily. iDigBio has evidence that multiple data providers have this level of information and would like to share it with the community. Absence of this term in the DwC standard hinders sharing of this level of taxonomic information especially when using tools such as IPT.
Definition: a taxonomic category subordinate to an order and superior to a family. According to ICZN article 29.2, the suffix -OIDEA is used for a superfamily name.
Comment: Examples "Achatinoidea", "Cerithioidea", "Helicoidea", "Hypsibioidea", "Valvatoidea", "Zonitoidea"
Refines:
Has Domain: http://rs.tdwg.org/dwc/terms/Taxon
Has Range:
Replaces:
ABCD 2.06: ScientificNameIdentified/HigherTaxon/HigherTaxonRank (enumeration value: superfamilia)
|
1.0
|
New Term - superfamily - ## New Term
Submitter: Andréa Matsunaga
Justification: Among animals (e.g., mollusks and tardigrades), individuals are classified in a taxonomic category, subordinate to an order and superior to a family. In order to effectively capture this more refined classification information, I recommend the addition of superfamily. iDigBio has evidence that multiple data providers have this level of information and would like to share it with the community. Absence of this term in the DwC standard hinders sharing of this level of taxonomic information especially when using tools such as IPT.
Definition: a taxonomic category subordinate to an order and superior to a family. According to ICZN article 29.2, the suffix -OIDEA is used for a superfamily name.
Comment: Examples "Achatinoidea", "Cerithioidea", "Helicoidea", "Hypsibioidea", "Valvatoidea", "Zonitoidea"
Refines:
Has Domain: http://rs.tdwg.org/dwc/terms/Taxon
Has Range:
Replaces:
ABCD 2.06: ScientificNameIdentified/HigherTaxon/HigherTaxonRank (enumeration value: superfamilia)
|
process
|
new term superfamily new term submitter andréa matsunaga justification among animals e g mollusks and tardigrades individuals are classified in a taxonomic category subordinate to an order and superior to a family in order to effectively capture this more refined classification information i recommend the addition of superfamily idigbio has evidence that multiple data providers have this level of information and would like to share it with the community absence of this term in the dwc standard hinders sharing of this level of taxonomic information especially when using tools such as ipt definition a taxonomic category subordinate to an order and superior to a family according to iczn article the suffix oidea is used for a superfamily name comment examples achatinoidea cerithioidea helicoidea hypsibioidea valvatoidea zonitoidea refines has domain has range replaces abcd scientificnameidentified highertaxon highertaxonrank enumeration value superfamilia
| 1
|
12,586
| 14,991,507,685
|
IssuesEvent
|
2021-01-29 08:28:05
|
googleapis/python-spanner-django
|
https://api.github.com/repos/googleapis/python-spanner-django
|
closed
|
Fix parallel test failures
|
api: spanner priority: p2 type: process
|
Kokoro tests fail when run in parallel because of env and spanner resource limit issues. Fix system tests so that they can run on the emulator (in parallel?) and fix kokoro config so system tests can run against integ spanner without crashing.
|
1.0
|
Fix parallel test failures - Kokoro tests fail when run in parallel because of env and spanner resource limit issues. Fix system tests so that they can run on the emulator (in parallel?) and fix kokoro config so system tests can run against integ spanner without crashing.
|
process
|
fix parallel test failures kokoro tests fail when run in parallel because of env and spanner resource limit issues fix system tests so that they can run on the emulator in parallel and fix kokoro config so system tests can run against integ spanner without crashing
| 1
|
65,357
| 12,557,083,692
|
IssuesEvent
|
2020-06-07 11:52:22
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4.0b1] User Token shown on View profile page is different to the token shown on the edit profile page
|
J4 Issue No Code Attached Yet
|
### Steps to reproduce the issue
View a user profile on the frontend (https://example.com/index.php/author-login?view=profile)
Edit the same profile (yours) on the frontend (http://127.0.0.1:1025/index.php/change-password)
### Expected result
The token on both pages should surely be the same right?
### Actual result
Edit Profile:
<img width="890" alt="Screenshot 2020-05-31 at 17 25 58" src="https://user-images.githubusercontent.com/400092/83357368-d6db2f80-a363-11ea-9342-e6a23f7337ed.png">
View Profile:
<img width="861" alt="Screenshot 2020-05-31 at 17 26 22" src="https://user-images.githubusercontent.com/400092/83357376-dcd11080-a363-11ea-84a4-0856d588a0d9.png">
|
1.0
|
[4.0b1] User Token shown on View profile page is different to the token shown on the edit profile page - ### Steps to reproduce the issue
View a user profile on the frontend (https://example.com/index.php/author-login?view=profile)
Edit the same profile (yours) on the frontend (http://127.0.0.1:1025/index.php/change-password)
### Expected result
The token on both pages should surely be the same right?
### Actual result
Edit Profile:
<img width="890" alt="Screenshot 2020-05-31 at 17 25 58" src="https://user-images.githubusercontent.com/400092/83357368-d6db2f80-a363-11ea-9342-e6a23f7337ed.png">
View Profile:
<img width="861" alt="Screenshot 2020-05-31 at 17 26 22" src="https://user-images.githubusercontent.com/400092/83357376-dcd11080-a363-11ea-84a4-0856d588a0d9.png">
|
non_process
|
user token shown on view profile page is different to the token shown on the edit profile page steps to reproduce the issue view a user profile on the frontend edit the same profile yours on the frontend expected result the token on both pages should surely be the same right actual result edit profile img width alt screenshot at src view profile img width alt screenshot at src
| 0
|
8,990
| 12,101,143,567
|
IssuesEvent
|
2020-04-20 14:48:44
|
google/go-jsonnet
|
https://api.github.com/repos/google/go-jsonnet
|
closed
|
Bazel option causes errors in CI
|
process
|
See: https://travis-ci.org/github/google/go-jsonnet/jobs/674702615
```
ERROR: --local_resources is deprecated. Please use --local_ram_resources and/or --local_cpu_resources
```
|
1.0
|
Bazel option causes errors in CI - See: https://travis-ci.org/github/google/go-jsonnet/jobs/674702615
```
ERROR: --local_resources is deprecated. Please use --local_ram_resources and/or --local_cpu_resources
```
|
process
|
bazel option causes errors in ci see error local resources is deprecated please use local ram resources and or local cpu resources
| 1
|
69,853
| 17,872,116,403
|
IssuesEvent
|
2021-09-06 17:21:24
|
dotnet/efcore
|
https://api.github.com/repos/dotnet/efcore
|
opened
|
Avoid generating proxies when they're not needed
|
type-enhancement area-perf area-model-building
|
Proxy generation is a very heavy process perf-wise, and we may be doing it in cases where we don't have to (came out of investigating the model in #20135).
* For lazy loading proxies, a type without any virtual methods shouldn't require proxies
* For change tracking proxies, we currently set up all the entity types in the model (and throw if any type is sealed). We could have some sort of type-by-type opt-in/opt-out.
/cc @ajcvickers
|
1.0
|
Avoid generating proxies when they're not needed - Proxy generation is a very heavy process perf-wise, and we may be doing it in cases where we don't have to (came out of investigating the model in #20135).
* For lazy loading proxies, a type without any virtual methods shouldn't require proxies
* For change tracking proxies, we currently set up all the entity types in the model (and throw if any type is sealed). We could have some sort of type-by-type opt-in/opt-out.
/cc @ajcvickers
|
non_process
|
avoid generating proxies when they re not needed proxy generation is a very heavy process perf wise and we may be doing it in cases where we don t have to came out of investigating the model in for lazy loading proxies a type without any virtual methods shouldn t require proxies for change tracking proxies we currently set up all the entity types in the model and throw if any type is sealed we could have some sort of type by type opt in opt out cc ajcvickers
| 0
|
123,450
| 16,494,853,889
|
IssuesEvent
|
2021-05-25 09:13:10
|
nextcloud/server
|
https://api.github.com/repos/nextcloud/server
|
reopened
|
Progress bar for file copy/move [feature request]
|
1. to develop design enhancement feature: dav feature: files
|
When copying/moving large files, there is no progress bar shown other than a spinner (which goes away if you reload the page). When you have to move several folders sequentially, sometimes you forget and move the same folder (which results in an error).
It should at least display some kind of progress bar and a notification after the operation is complete. A drop down menu in the top navigation bar that displays an overlay on mouseover which lists each transfer with a progress bar would be nice.
|
1.0
|
Progress bar for file copy/move [feature request] - When copying/moving large files, there is no progress bar shown other than a spinner (which goes away if you reload the page). When you have to move several folders sequentially, sometimes you forget and move the same folder (which results in an error).
It should at least display some kind of progress bar and a notification after the operation is complete. A drop down menu in the top navigation bar that displays an overlay on mouseover which lists each transfer with a progress bar would be nice.
|
non_process
|
progress bar for file copy move when copying moving large files there is no progress bar shown other than a spinner which goes away if you reload the page when you have to move several folders sequentially sometimes you forget and move the same folder which results in an error it should at least display some kind of progress bar and a notification after the operation is complete a drop down menu in the top navigation bar that displays an overlay on mouseover which lists each transfer with a progress bar would be nice
| 0
|
408,015
| 11,940,727,913
|
IssuesEvent
|
2020-04-02 17:11:58
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
closed
|
Port the DoorHinge functionality from Anzu to Drake
|
priority: medium team: dynamics team: manipulation type: feature request
|
@ggould-tri has recently implemented a force element `dish::DoorHinge` in Anzu to simulate the hinged door of a dishwasher. This is a very useful feature and could be used for other assets with doors. It has already been used in a different simulation since then.
We would like to add this feature to Drake. Additionally, a stop joint could potentially be added to the DoorHinge by adding another curve to simulate the stopping force when the joint hits the stop limit. I just want to throw this idea out. The stop joint does not necessarily need to be implemented in the same PR.
|
1.0
|
Port the DoorHinge functionality from Anzu to Drake - @ggould-tri has recently implemented a force element `dish::DoorHinge` in Anzu to simulate the hinged door of a dishwasher. This is a very useful feature and could be used for other assets with doors. It has already been used in a different simulation since then.
We would like to add this feature to Drake. Additionally, a stop joint could potentially be added to the DoorHinge by adding another curve to simulate the stopping force when the joint hits the stop limit. I just want to throw this idea out. The stop joint does not necessarily need to be implemented in the same PR.
|
non_process
|
port the doorhinge functionality from anzu to drake ggould tri has recently implemented a force element dish doorhinge in anzu to simulate the hinged door of a dishwasher this is a very useful feature and could be used for other assets with doors it has already been used in a different simulation since then we would like to add this feature to drake additionally a stop joint could potentially be added to the doorhinge by adding another curve to simulate the stopping force when the joint hits the stop limit i just want to throw this idea the stop joint does not necessarily need to be implemented in the same pr
| 0
|
13,457
| 15,936,389,891
|
IssuesEvent
|
2021-04-14 11:04:57
|
paul-buerkner/brms
|
https://api.github.com/repos/paul-buerkner/brms
|
closed
|
Consider exposing `get_dpar`
|
post-processing
|
Maybe I missed something obvious, but if I understand it correctly, when implementing a custom distribution, one has to suppose that `dpars` are not themselves predicted or manually write code to detect and handle both the "one global value" and "predicted" cases in their `posterior_predict_xx`, `log_lik_xx` implementations.
The built-in distributions use `get_dpar` that seems to handle this distinction automatically. Maybe it could be exposed to users?
If so, then the custom families vignette should be updated to reflect this.
As usual, if you like then I will (at some point :-) be happy to file a pull request implementing this.
|
1.0
|
Consider exposing `get_dpar` - Maybe I missed something obvious, but if I understand it correctly, when implementing a custom distribution, one has to suppose that `dpars` are not themselves predicted or manually write code to detect and handle both the "one global value" and "predicted" cases in their `posterior_predict_xx`, `log_lik_xx` implementations.
The built-in distributions use `get_dpar` that seems to handle this distinction automatically. Maybe it could be exposed to users?
If so, then the custom families vignette should be updated to reflect this.
As usual, if you like then I will (at some point :-) be happy to file a pull request implementing this.
|
process
|
consider exposing get dpar maybe i missed something obvious but if i understand it correctly when implementing a custom distribution one has to suppose that dpars are not themselves predicted or manually write code to detect and handle both the one global value and predicted cases in their posterior predict xx log lik xx implementations the built in distributions use get dpar that seems to handle this distinction automatically maybe it could be exposed to users if so then the custom families vignette should be updated to reflect this as usual if you like then i will at some point be happy to file a pull request implementing this
| 1
|
3,213
| 2,663,849,468
|
IssuesEvent
|
2015-03-20 10:10:34
|
IDgis/CRS2
|
https://api.github.com/repos/IDgis/CRS2
|
closed
|
when adding a work plan rule, selecting a management type hangs the system
|
fout wacht op input tester Werkplan
|
Try to create a new work plan, add work plan rules, management type, select a management type: the system hangs without an error message. Windows can no longer be closed.
|
1.0
|
when adding a work plan rule, selecting a management type hangs the system - Try to create a new work plan, add work plan rules, management type, select a management type: the system hangs without an error message. Windows can no longer be closed.
|
non_process
|
when adding a work plan rule selecting a management type hangs the system try to create a new work plan add work plan rules management type select a management type the system hangs without an error message windows can no longer be closed
| 0
|
89,599
| 8,209,537,556
|
IssuesEvent
|
2018-09-04 07:56:31
|
kubeflow/tf-operator
|
https://api.github.com/repos/kubeflow/tf-operator
|
opened
|
[docs] Add instructions about how to contribute e2e test cases
|
area/docs help wanted testing
|
Ref https://github.com/kubeflow/mxnet-operator/issues/8
I think we need to have a doc about how to write e2e test cases for operators, which will lower the barriers of participation.
|
1.0
|
[docs] Add instructions about how to contribute e2e test cases - Ref https://github.com/kubeflow/mxnet-operator/issues/8
I think we need to have a doc about how to write e2e test cases for operators, which will lower the barriers of participation.
|
non_process
|
add instructions about how to contribute test cases ref i think we need to have a doc about how to write test cases for operators which will lower the barriers of participation
| 0
|
160,840
| 25,241,068,406
|
IssuesEvent
|
2022-11-15 07:28:55
|
npocccties/chiloportal
|
https://api.github.com/repos/npocccties/chiloportal
|
closed
|
Category order
|
backend design MUST
|
Figma https://www.figma.com/file/dCE06JShf29eqnvZ4vcE8U?node-id=737:6502#299020734

> Change the category order to the following.
> ・Lesson planning
> ・Subject teaching skills
> ・Understanding children and people
> ・Basic qualities as a teacher
> ・Collaboration
> ・Human resource development
> ・Contemporary issues Hideki Akiba
>
> ---
>
> The implementation presumably depends on the order in which they were registered in the DB.
>
> ---
>
> The order has been changed, but the image for "Subject teaching skills" still needs to be replaced, so that one has been left as-is.
|
1.0
|
Category order - Figma https://www.figma.com/file/dCE06JShf29eqnvZ4vcE8U?node-id=737:6502#299020734

> Change the category order to the following.
> ・Lesson planning
> ・Subject teaching skills
> ・Understanding children and people
> ・Basic qualities as a teacher
> ・Collaboration
> ・Human resource development
> ・Contemporary issues Hideki Akiba
>
> ---
>
> The implementation presumably depends on the order in which they were registered in the DB.
>
> ---
>
> The order has been changed, but the image for "Subject teaching skills" still needs to be replaced, so that one has been left as-is.
|
non_process
|
category order figma change the category order to the following lesson planning subject teaching skills understanding children and people basic qualities as a teacher collaboration human resource development contemporary issues hideki akiba the implementation presumably depends on the order in which they were registered in the db the order has been changed but the image for subject teaching skills still needs to be replaced so that one has been left as is
| 0
|
18,545
| 24,555,178,010
|
IssuesEvent
|
2022-10-12 15:20:37
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Study activities > 'X+1 done' is displaying for each X run when participant submits response in offline
|
Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Login and enroll into a study
2. Open any study activity
3. Switch off the internet connection
4. Submit the response
5. Observe the status showing '2 done'
6. Switch on the internet connection and observe still showing '2 done'
**Actual:** '2 done' is displaying for each run when participant submits response in offline
**Expected:** '1 done' should be displayed for each run
Refer attached video
https://user-images.githubusercontent.com/60386291/188067707-8dbd8b18-8b01-4920-bbe7-fd8450ef3df3.MOV
|
3.0
|
[iOS] Study activities > 'X+1 done' is displaying for each X run when participant submits response in offline - **Steps:**
1. Login and enroll into a study
2. Open any study activity
3. Switch off the internet connection
4. Submit the response
5. Observe the status showing '2 done'
6. Switch on the internet connection and observe still showing '2 done'
**Actual:** '2 done' is displaying for each run when participant submits response in offline
**Expected:** '1 done' should be displayed for each run
Refer attached video
https://user-images.githubusercontent.com/60386291/188067707-8dbd8b18-8b01-4920-bbe7-fd8450ef3df3.MOV
|
process
|
study activities x done is displaying for each x run when participant submits response in offline steps login and enroll into a study open any study activity switch off the internet connection submit the response observe the status showing done switch on the internet connection and observe still showing done actual done is displaying for each run when participant submits response in offline expected done should be displayed for each run refer attached video
| 1
|
198,660
| 6,975,387,555
|
IssuesEvent
|
2017-12-12 06:46:21
|
webpack/webpack-cli
|
https://api.github.com/repos/webpack/webpack-cli
|
closed
|
Use babel-preset-env instead of 2015
|
enhancement Feature Request Good First Contribution Priority: High v1
|
**Do you want to request a *feature* or report a *bug*?**
Feature.
**What is the current behavior?**
The CLI initializes a project with the 2015 preset.
**What is the expected behavior?**
The CLI should use the env preset instead.
**If this is a feature request, what is motivation or use case for changing the behavior?**
It does the same as 2015, with additional fine-grained control for environment targets if the user cares to configure it. Also gets rid of the warning, bothersome or even confusing to some users.
I'd be happy to create a pull request for this 🙂
|
1.0
|
Use babel-preset-env instead of 2015 - **Do you want to request a *feature* or report a *bug*?**
Feature.
**What is the current behavior?**
The CLI initializes a project with the 2015 preset.
**What is the expected behavior?**
The CLI should use the env preset instead.
**If this is a feature request, what is motivation or use case for changing the behavior?**
It does the same as 2015, with additional fine-grained control for environment targets if the user cares to configure it. Also gets rid of the warning, bothersome or even confusing to some users.
I'd be happy to create a pull request for this 🙂
|
non_process
|
use babel preset env instead of do you want to request a feature or report a bug feature what is the current behavior the cli initializes a project with the preset what is the expected behavior the cli should use the env preset instead if this is a feature request what is motivation or use case for changing the behavior it does the same as with additional fine grained control for environment targets if the user cares to configure it also gets rid of the warning bothersome or even confusing to some users i d be happy to create a pull request for this 🙂
| 0
|
10,902
| 13,676,946,709
|
IssuesEvent
|
2020-09-29 14:28:25
|
GSA/CIW
|
https://api.github.com/repos/GSA/CIW
|
opened
|
Do Not Update End Date of Existing Contracts
|
Topic: Upload/Processing Type: Requirement Change
|
Per client request, the process should no longer update an existing contract's end date. At present, when a contract listed on a CIW matches an existing contract in GCIMS, the DB is updated to match the later contract end date: the end date imported from the CIW updates the contract end date in the database if it is further in the future than the contract end date already in the DB.
|
1.0
|
Do Not Update End Date of Existing Contracts - Per client request, the process should no longer update an existing contract's end date. At present, when a contract listed on a CIW matches an existing contract in GCIMS, the DB is updated to match the later contract end date: the end date imported from the CIW updates the contract end date in the database if it is further in the future than the contract end date already in the DB.
|
process
|
do not update end date of existing contracts per client request the process should no longer update an existing contract s end date at present when a ciw s contract listed matches an existing ciw in gcims the db is updated to match the later contract end date the end date imported to gcims that match an existing contract in the database update the contract end date if the end date on the ciw is further in the future than the contract end date in the db
| 1
|
332,763
| 10,110,430,853
|
IssuesEvent
|
2019-07-30 10:14:18
|
fossasia/open-event-server
|
https://api.github.com/repos/fossasia/open-event-server
|
closed
|
App tries to use google or amazon storage buckets even though local storage is selected in Admin dashboard.
|
Priority: URGENT
|
Could be a docker issue if it works locally. The point is, irrespective of this error being thrown, the server still returns a garbage corrupt PDF, so it looks like everything is fine on the frontend, but the tickets don't actually open.
```
FileNotFoundError: [Errno 2] No such file or directory: '/data/app/static/generated/tickets/attendees/tickets/pdf/5c7e4019-b90a-4100-ab01-3746bc25fdcc/U3V2Wmh2bD/5c7e4019-b90a-4100-ab01-3746bc25fdcc.pdf'
```
|
1.0
|
App tries to use google or amazon storage buckets even though local storage is selected in Admin dashboard. - Could be a docker issue if it works locally. The point is, irrespective of this error being thrown, the server still returns a garbage corrupt PDF, so it looks like everything is fine on the frontend, but the tickets don't actually open.
```
FileNotFoundError: [Errno 2] No such file or directory: '/data/app/static/generated/tickets/attendees/tickets/pdf/5c7e4019-b90a-4100-ab01-3746bc25fdcc/U3V2Wmh2bD/5c7e4019-b90a-4100-ab01-3746bc25fdcc.pdf'
```
|
non_process
|
app tries to use google or amazon storage buckets even though local storage is selected in admin dashboard could be a docker issue if it works locally the point is irrespective of this error being thrown server still returns a garbage corrupt pdf so it looks like everything is fine on the frontend but the tickets don t actually open filenotfounderror no such file or directory data app static generated tickets attendees tickets pdf pdf
| 0
|
78,078
| 14,946,138,497
|
IssuesEvent
|
2021-01-26 06:05:04
|
UBC-Thunderbots/Software
|
https://api.github.com/repos/UBC-Thunderbots/Software
|
closed
|
Investigate Gamecontroller usage in Simulated Tests
|
Difficulty - 21 G2 - Simulation G3 - Code Quality G6 - Gameplay and Navigation T - Enhancement
|
### Description of the task
<!--
What does this work depend on?
What interface will this work use or create?
What are the main components of the task?
Where does this work fit in the larger project?
It is important to define this task sufficiently so that an untrained
team member can take it on and know where to start. Feel free to
link to resources or other team member which could guide the assignee to
complete the task
-->
SSL Gamecontroller has a CI mode that we could use to handle all the validation of the ssl rules as opposed to writing validation functions ourselves (this may also require running a refbox).
Investigate to see how easy it would be to use and if the complexity of integration is worth it.
Pros
* Use the most up-to-date rules as defined by the SSL. No bugs from us mis-writing rules
* Easy to update for rule changes by downloading a new binary
* We don't have to write any functions for SSL rules
Cons
* Possibly more complexity to integrate
* Possibly harder to use to debug if messages and failure modes aren't clear
For context, a theoretical test setup with the gamecontroller might look like
* Run the gamecontroller alongside the simulated test executable
* For every simulated "camera frame", send that information over the network to the gamecontroller
* Block until the gamecontroller sends an update back, and check for any faults or errors (eg. gamecontroller reports robot collision or a max speed violation)
* Continue until all validation functions pass or the test times out
### Acceptance criteria
<!--
Checkbox list that outlines what needs to be done in order for this task
to be considered "complete".
Specify any implementation requirements such as data structures,
functionalities, testing requirements, documentation, etc.
-->
- [ ] See if it's worth using the gamecontroller in CI mode
### Blocked By
<!--
List all other issues that need to be completed before this one, ex:
- #123
- #374
-->
|
1.0
|
Investigate Gamecontroller usage in Simulated Tests - ### Description of the task
<!--
What does this work depend on?
What interface will this work use or create?
What are the main components of the task?
Where does this work fit in the larger project?
It is important to define this task sufficiently so that an untrained
team member can take it on and know where to start. Feel free to
link to resources or other team member which could guide the assignee to
complete the task
-->
SSL Gamecontroller has a CI mode that we could use to handle all the validation of the ssl rules as opposed to writing validation functions ourselves (this may also require running a refbox).
Investigate to see how easy it would be to use and if the complexity of integration is worth it.
Pros
* Use the most up-to-date rules as defined by the SSL. No bugs from us mis-writing rules
* Easy to update for rule changes by downloading a new binary
* We don't have to write any functions for SSL rules
Cons
* Possibly more complexity to integrate
* Possibly harder to use to debug if messages and failure modes aren't clear
For context, a theoretical test setup with the gamecontroller might look like
* Run the gamecontroller alongside the simulated test executable
* For every simulated "camera frame", send that information over the network to the gamecontroller
* Block until the gamecontroller sends an update back, and check for any faults or errors (eg. gamecontroller reports robot collision or a max speed violation)
* Continue until all validation functions pass or the test times out
### Acceptance criteria
<!--
Checkbox list that outlines what needs to be done in order for this task
to be considered "complete".
Specify any implementation requirements such as data structures,
functionalities, testing requirements, documentation, etc.
-->
- [ ] See if it's worth using the gamecontroller in CI mode
### Blocked By
<!--
List all other issues that need to be completed before this one, ex:
- #123
- #374
-->
|
non_process
|
investigate gamecontroller usage in simulated tests description of the task what does this work depend on what interface will this work use or create what are the main components of the task where does this work fit in the larger project it is important to define this task sufficiently so that an untrained team member can take it on and know where to start feel free to link to resources or other team member which could guide the assignee to complete the task ssl gamecontroller has a ci mode that we could use to handle all the validation of the ssl rules as opposed to writing validation functions ourselves this may also require running a refbox investigate to see how easy it would be to use and if the complexity of integration is worth it pros use the most up to date rules as defined by the ssl no bugs from us mis writing rules easy to update for rule changes by downloading a new binary we don t have to write any functions for ssl rules cons possibly more complexity to integrate possibly harder to use to debug if messages and failure modes aren t clear for context a theoretical test setup with the gamecontroller might look like run the gamecontroller alongside the simulated test executable for every simulated camera frame send that information over the network to the gamecontroller block until the gamecontroller sends an update back and check for any faults or errors eg gamecontroller reports robot collision or a max speed violation continue until all validation functions pass or the test times out acceptance criteria checkbox list that outlines what needs to be done in order for this task to be considered complete specify any implementation requirements such as data structures functionalities testing requirements documentation etc see if it s worth using the gamecontroller in ci mode blocked by list all other issues that need to be completed before this one ex
| 0
|
7,351
| 10,483,167,781
|
IssuesEvent
|
2019-09-24 13:30:39
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
opened
|
Error while fetching email from inbox will block processing of other mails in inbox
|
bug mail processing prioritized by payment verified
|
* Used Zammad version: 3.1.x
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket #: 1053593
### Expected behavior:
* Fetch mails via IMAP/POP3 channel backend
* Error occurs while fetching email
* Log an expressive error to the log / channel / maintenance endpoint
* Process the other emails
### Actual behavior:
* Fetch mails via IMAP/POP3 channel backend
* Error occurs while fetching email
* Log exact error to log
* No further fetching/processing of other mails in inbox
### Steps to reproduce the behavior:
* Have an unfetchable mail in your inbox
* See a log message like:
```
I, [2019-09-16T19:07:44.077396 #22185-70321870965020]  INFO -- : fetching imap (mail.example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2019-09-16T19:07:44.351265 #22185-70321870965020]  INFO -- : - message 1/80
E, [2019-09-16T19:07:44.446517 #22185-70321870965020] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"Jetzt">
E, [2019-09-16T19:07:44.446559 #22185-70321870965020] ERROR -- : unknown token - "\"Jetzt" (Net::IMAP::ResponseParseError)
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize'
/usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
```
Or:
```
I, [2019-09-24T10:38:48.837126 #29702-19506700] INFO -- : - message 1/2
E, [2019-09-24T10:38:48.889973 #29702-19506700] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"RE:">
E, [2019-09-24T10:38:48.890016 #29702-19506700] ERROR -- : unknown token - "\"RE:" (Net::IMAP::ResponseParseError)
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize'
/usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
I, [2019-09-24T10:38:49.126744 #29702-19506700] INFO -- : fetching imap (example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2019-09-24T10:38:49.397862 #29702-19506700] INFO -- : - no message
```
* See mails piling up in your mailbox
There is an example mail in T#1053593.
Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
Error while fetching email from inbox will block processing of other mails in inbox - * Used Zammad version: 3.1.x
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
*Ticket #: 1053593
### Expected behavior:
* Fetch mails via IMAP/POP3 channel backend
* Error occurs while fetching email
* Log expressive error to log / channel / maintenance endpoit
* Process with other emails
### Actual behavior:
* Fetch mails via IMAP/POP3 channel backend
* Error occurs while fetching email
* Log exact error to log
* No further fetching/processing of other mails in inbox
### Steps to reproduce the behavior:
* Have an unfetchable mail in your inbox
* See a log message like:
```
I, [2019-09-16T19:07:44.077396 #22185-70321870965020] INFO -- : fetching imap (mail.example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true) I, [2019-09-16T19:07:44.351265 #22185-70321870965020] INFO -- : - message 1/80 E, [2019-09-16T19:07:44.446517 #22185-70321870965020] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"Jetzt"> E, [2019-09-16T19:07:44.446559 #22185-70321870965020] ERROR -- : unknown token - "\"Jetzt" (Net::IMAP::ResponseParseError)
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize'
/usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
```
Or:
```
I, [2019-09-24T10:38:48.837126 #29702-19506700] INFO -- : - message 1/2
E, [2019-09-24T10:38:48.889973 #29702-19506700] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"RE:">
E, [2019-09-24T10:38:48.890016 #29702-19506700] ERROR -- : unknown token - "\"RE:" (Net::IMAP::ResponseParseError)
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize'
/usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
I, [2019-09-24T10:38:49.126744 #29702-19506700] INFO -- : fetching imap (example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2019-09-24T10:38:49.397862 #29702-19506700] INFO -- : - no message
```
* See mails piling up in your mailbox
There is an example mail in T#1053593.
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
error while fetching email from inbox will block processing of other mails in inbox used zammad version x installation method source package any operating system any database version any elasticsearch version any browser version any ticket expected behavior fetch mails via imap channel backend error occurs while fetching email log expressive error to log channel maintenance endpoit process with other emails actual behavior fetch mails via imap channel backend error occurs while fetching email log exact error to log no further fetching processing of other mails in inbox steps to reproduce the behavior have an unfetchable mail in your inbox see a log message like i info fetching imap mail example com info example com port ssl true starttls false folder inbox keep on server true i info message e error can t use channel driver imap e error unknown token jetzt net imap responseparseerror usr local rvm rubies ruby lib ruby net imap rb in parse error usr local rvm rubies ruby lib ruby net imap rb in next token usr local rvm rubies ruby lib ruby net imap rb in lookahead usr local rvm rubies ruby lib ruby net imap rb in nstring usr local rvm rubies ruby lib ruby net imap rb in envelope usr local rvm rubies ruby lib ruby net imap rb in envelope data usr local rvm rubies ruby lib ruby net imap rb in msg att usr local rvm rubies ruby lib ruby net imap rb in numeric response usr local rvm rubies ruby lib ruby net imap rb in response untagged usr local rvm rubies ruby lib ruby net imap rb in response usr local rvm rubies ruby lib ruby net imap rb in parse usr local rvm rubies ruby lib ruby net imap rb in get response usr local rvm rubies ruby lib ruby net imap rb in receive responses usr local rvm rubies ruby lib ruby net imap rb in block in initialize usr local rvm gems ruby gems logging lib logging diagnostic context rb in block in create with logging context or i info message e error can t use channel driver imap e error unknown token re net imap responseparseerror usr local rvm rubies ruby lib ruby net imap rb in parse error usr local rvm rubies ruby lib ruby net imap rb in next token usr local rvm rubies ruby lib ruby net imap rb in lookahead usr local rvm rubies ruby lib ruby net imap rb in nstring usr local rvm rubies ruby lib ruby net imap rb in envelope usr local rvm rubies ruby lib ruby net imap rb in envelope data usr local rvm rubies ruby lib ruby net imap rb in msg att usr local rvm rubies ruby lib ruby net imap rb in numeric response usr local rvm rubies ruby lib ruby net imap rb in response untagged usr local rvm rubies ruby lib ruby net imap rb in response usr local rvm rubies ruby lib ruby net imap rb in parse usr local rvm rubies ruby lib ruby net imap rb in get response usr local rvm rubies ruby lib ruby net imap rb in receive responses usr local rvm rubies ruby lib ruby net imap rb in block in initialize usr local rvm gems ruby gems logging lib logging diagnostic context rb in block in create with logging context i info fetching imap example com info example com port ssl true starttls false folder inbox keep on server true i info no message see mails piling up in your mailbox there is an example mail in t yes i m sure this is a bug and no feature request or a general question
| 1
|
14,362
| 17,382,012,971
|
IssuesEvent
|
2021-07-31 22:58:30
|
AcademySoftwareFoundation/OpenCue
|
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
|
closed
|
Upgrade gRPC
|
process triaged
|
**Describe the process**
gRPC doesn't publish wheels for older versions of gRPC on newer versions of Python. For example, I just ran into this trying to set up a Windows dev environment using Python 3.8.
This makes the install process extremely complex, at least on Windows where it's very difficult to get the build environment set up correctly.
This will require a minor version bump -- it's likely newer compiled protos will not be compatible with older versions, or vice versa.
|
1.0
|
Upgrade gRPC - **Describe the process**
gRPC doesn't publish wheels for older versions of gRPC on newer versions of Python. For example, I just ran into this trying to set up a Windows dev environment using Python 3.8.
This makes the install process extremely complex, at least on Windows where it's very difficult to get the build environment set up correctly.
This will require a minor version bump -- it's likely newer compiled protos will not be compatible with older versions, or vice versa.
|
process
|
upgrade grpc describe the process grpc doesn t publish wheels for older versions of grpc on newer versions of python for example i just ran into this trying to set up a windows dev environment using python this makes the install process extremely complex at least on windows where it s very difficult to get the build environment set up correctly this will require a minor version bump it s likely newer compiled protos will not be compatible with older versions or vice versa
| 1
|
227,895
| 17,402,217,289
|
IssuesEvent
|
2021-08-02 21:30:56
|
unitaryfund/mitiq
|
https://api.github.com/repos/unitaryfund/mitiq
|
closed
|
Add code from the mitiq paper
|
documentation feature-request
|
The co-authors have agreed on making public the code relevant to the code snippets and plots appearing figures of the Mitiq paper (https://arxiv.org/abs/2009.04417).
We discussed different possible routes. One was to have a separate repository, `mitiq-paper`. The one we are implementing, to ensure that the code keeps working and to reduce overhead, is to have the code snippets and plots included in the documentation of the library (run and tested with continuous integration).
We proposed to have the jupyter notebooks along with other examples in the **Mitiq Examples** section of the Users Guide in the documentation.
We will also have accompanying data, that I propose we include simply in a `data/` subfolder there, under `docs/source/examples`. It is not heavy so it shouldn't be worth to selectively exclude it when cloning the repository.
We also mentioned the fact that tags for the arXiv versions could help us pinpoint eventual modifications in the repository.
I propose to also add some information about it under the **Research** section of the documentation.
We did not create a Zenodo repository to avoid duplications of bibliographic records, with respect to the paper (once published) as this generates a DOI.
|
1.0
|
Add code from the mitiq paper - The co-authors have agreed on making public the code relevant to the code snippets and plots appearing figures of the Mitiq paper (https://arxiv.org/abs/2009.04417).
We discussed different possible routes. One was to have a separate repository, `mitiq-paper`. The one we are implementing, to ensure that the code keeps working and to reduce overhead, is to have the code snippets and plots included in the documentation of the library (run and tested with continuous integration).
We proposed to have the jupyter notebooks along with other examples in the **Mitiq Examples** section of the Users Guide in the documentation.
We will also have accompanying data, that I propose we include simply in a `data/` subfolder there, under `docs/source/examples`. It is not heavy so it shouldn't be worth to selectively exclude it when cloning the repository.
We also mentioned the fact that tags for the arXiv versions could help us pinpoint eventual modifications in the repository.
I propose to also add some information about it under the **Research** section of the documentation.
We did not create a Zenodo repository to avoid duplications of bibliographic records, with respect to the paper (once published) as this generates a DOI.
|
non_process
|
add code from the mitiq paper the co authors have agreed on making public the code relevant to the code snippets and plots appearing figures of the mitiq paper we discussed different possible routes one was to have a separate repository mitiq paper the one we are implementing to ensure that the code keeps working and to reduce overhead is to have the code snippets and plots included in the documentation of the library run and tested with continuous integration we proposed to have the jupyter notebooks along with other examples in the mitiq examples section of the users guide in the documentation we will also have accompanying data that i propose we include simply in a data subfolder there under docs source examples it is not heavy so it shouldn t be worth to selectively exclude it when cloning the repository we also mentioned the fact that tags for the arxiv versions could help us pinpoint eventual modifications in the repository i propose to also add some information about it under the research section of the documentation we did not create a zenodo repository to avoid duplications of bibliographic records with respect to the paper once published as this generates a doi
| 0
|
20,489
| 27,146,569,594
|
IssuesEvent
|
2023-02-16 20:27:19
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Honey SQL 2 `InlineValue` behavior for `clojure.lang.Ratio` is busted
|
Type:Bug Priority:P2 Querying/Processor .Backend
|
We're relying on this for a few things. We need to add a mapping so it doesn't do something dumb.
```clj
;; current behavior
(sql/format {:select [[[:/
[:inline 4]
[:inline (/ 1 3)]]
:x]]})
["SELECT 4 / 1/3 AS x"]
```
```
4 / 1 / 3
=> (4 / 1) / 3
=> 4 / 3
=> 1.3333...
```
which is not what we wanted.
This should actually be
```clj
["SELECT 4 / (1 / 3) AS x"]
```
```
4 / (1 / 3)
=> 4 / 0.3333...
=> 12
```
|
1.0
|
Honey SQL 2 `InlineValue` behavior for `clojure.lang.Ratio` is busted - We're relying on this for a few things. We need to add a mapping so it doesn't do something dumb.
```clj
;; current behavior
(sql/format {:select [[[:/
[:inline 4]
[:inline (/ 1 3)]]
:x]]})
["SELECT 4 / 1/3 AS x"]
```
```
4 / 1 / 3
=> (4 / 1) / 3
=> 4 / 3
=> 1.3333...
```
which is not what we wanted.
This should actually be
```clj
["SELECT 4 / (1 / 3) AS x"]
```
```
4 / (1 / 3)
=> 4 / 0.3333...
=> 12
```
|
process
|
honey sql inlinevalue behavior for clojure lang ratio is busted we re relying on this for a few things we need to add a mapping so it doesn t do something dumb clj current behavior sql format select x which is not what we wanted this should actually be clj
| 1
|
2,042
| 4,848,587,119
|
IssuesEvent
|
2016-11-10 17:56:07
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
Cannot go to task from process
|
browser: firefox browser: safari bug comp: activiti-processList
|
1. Start a process
2. Click on task
3. Notice no actions occur

Works fine in Chrome but not Firefox or Safari
|
1.0
|
Cannot go to task from process - 1. Start a process
2. Click on task
3. Notice no actions occur

Works fine in Chrome but not Firefox or Safari
|
process
|
cannot go to task from process start a process click on task notice no actions occur works fine in chrome but not firefox or safari
| 1
|