Dataset schema (column name, dtype, observed stats):

| column | dtype | observed stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
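The records below suggest that `binary_label` is a 0/1 re-encoding of `label` (`process` → 1, `non_process` → 0), and that `text_combine` concatenates `title` and `body` while `text` is a lowercased, punctuation-stripped copy. A minimal sketch of the inferred label encoding, using two records hand-copied (and truncated) from the dump — the underlying file is not named here, so nothing is read from disk and the mapping is an assumption drawn from the rows themselves:

```python
# Two records copied (truncated) from the dump below; plain dicts so the
# sketch needs no third-party libraries.
records = [
    {"type": "IssuesEvent", "repo": "googleapis/java-shared-dependencies",
     "label": "process", "binary_label": 1},
    {"type": "IssuesEvent", "repo": "se701-group6/A2",
     "label": "non_process", "binary_label": 0},
]

def derive_binary_label(label: str) -> int:
    # The encoding the data implies: "process" -> 1, anything else -> 0.
    return 1 if label == "process" else 0

# Every record shown in the dump is consistent with this mapping.
for r in records:
    assert derive_binary_label(r["label"]) == r["binary_label"]
```

If the mapping holds for the full dataset, `binary_label` is redundant with `label` and either column can serve as the classification target.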
17,438
| 23,263,025,328
|
IssuesEvent
|
2022-08-04 14:55:57
|
googleapis/java-shared-dependencies
|
https://api.github.com/repos/googleapis/java-shared-dependencies
|
closed
|
GitHub Check to find linkage errors
|
type: process
|
The current setup of Linkage Monitor does not detect linkage errors between the member of the shared dependencies BOM. This is because Linkage Monitor detects conflicts in the libraries BOM and the Libraries BOM does not import the shared dependencies BOM.
Jeff came up with an idea in running Linkage Monitor in the similar way as the existing "downstream" checks. I'll explore that option.
|
1.0
|
GitHub Check to find linkage errors - The current setup of Linkage Monitor does not detect linkage errors between the member of the shared dependencies BOM. This is because Linkage Monitor detects conflicts in the libraries BOM and the Libraries BOM does not import the shared dependencies BOM.
Jeff came up with an idea in running Linkage Monitor in the similar way as the existing "downstream" checks. I'll explore that option.
|
process
|
github check to find linkage errors the current setup of linkage monitor does not detect linkage errors between the member of the shared dependencies bom this is because linkage monitor detects conflicts in the libraries bom and the libraries bom does not import the shared dependencies bom jeff came up with an idea in running linkage monitor in the similar way as the existing downstream checks i ll explore that option
| 1
|
96,605
| 12,144,906,570
|
IssuesEvent
|
2020-04-24 08:24:34
|
se701-group6/A2
|
https://api.github.com/repos/se701-group6/A2
|
closed
|
Webapp can be used without logging in.
|
approved bug design front-end
|
**Describe the bug**
I can access any page without having to login first.
**To Reproduce**
Steps to reproduce the behavior:
1. Start the webapp
2. Go to login page and login
3. Copy the URL
4. Go to a different browser or private browsing (of the same browser)
5. Paste the URL
6. Go to the URL
OR
1. Start the webapp
2. Go to localhost:3000/#/home/split
3. Create a bill
4. Press "Split Bill"
**Severity**
Mark with an x.
[x]: Minor effect eg. graphical
[]: Functional error eg. App does not function correctly
[]: Severe eg. Crashing
**Reproducibility**
Mark with an x.
[x]: Consistent
[]: Occasional
[]: Cannot reproduce at the moment eg. unsure
**Expected behavior**
I should be redirected to the login screen if I haven't already logged in on the browser.
**Screenshots, Images and Traces**


**Conditions**
I used Ubuntu OS with FireFox, FireFox Private Browsing and Chromium. I have not tested this on Windows OS.
**Additional context**
* The backend will return a message saying "User not logged in", meaning it is responding correctly to not logging in.
* Using the second set of instructions the new bill isn't saved due to the above stated error message.
|
1.0
|
Webapp can be used without logging in. - **Describe the bug**
I can access any page without having to login first.
**To Reproduce**
Steps to reproduce the behavior:
1. Start the webapp
2. Go to login page and login
3. Copy the URL
4. Go to a different browser or private browsing (of the same browser)
5. Paste the URL
6. Go to the URL
OR
1. Start the webapp
2. Go to localhost:3000/#/home/split
3. Create a bill
4. Press "Split Bill"
**Severity**
Mark with an x.
[x]: Minor effect eg. graphical
[]: Functional error eg. App does not function correctly
[]: Severe eg. Crashing
**Reproducibility**
Mark with an x.
[x]: Consistent
[]: Occasional
[]: Cannot reproduce at the moment eg. unsure
**Expected behavior**
I should be redirected to the login screen if I haven't already logged in on the browser.
**Screenshots, Images and Traces**


**Conditions**
I used Ubuntu OS with FireFox, FireFox Private Browsing and Chromium. I have not tested this on Windows OS.
**Additional context**
* The backend will return a message saying "User not logged in", meaning it is responding correctly to not logging in.
* Using the second set of instructions the new bill isn't saved due to the above stated error message.
|
non_process
|
webapp can be used without logging in describe the bug i can access any page without having to login first to reproduce steps to reproduce the behavior start the webapp go to login page and login copy the url go to a different browser or private browsing of the same browser paste the url go to the url or start the webapp go to localhost home split create a bill press split bill severity mark with an x minor effect eg graphical functional error eg app does not function correctly severe eg crashing reproducibility mark with an x consistent occasional cannot reproduce at the moment eg unsure expected behavior i should be redirected to the login screen if i haven t already logged in on the browser screenshots images and traces conditions i used ubuntu os with firefox firefox private browsing and chromium i have not tested this on windows os additional context the backend will return a message saying user not logged in meaning it is responding correctly to not logging in using the second set of instructions the new bill isn t saved due to the above stated error message
| 0
|
98,158
| 29,497,233,802
|
IssuesEvent
|
2023-06-02 18:06:00
|
openvinotoolkit/openvino
|
https://api.github.com/repos/openvinotoolkit/openvino
|
closed
|
ie_precision.hpp defines == operator but not !=
|
category: inference category: build platform: win32
|
See https://github.com/openvinotoolkit/openvino/blob/031f2cc7d1a9aa10fb8a242057735d7ef1fd7f71/src/inference/include/ie/ie_precision.hpp#L162
I just upgraded to Visual Studio 17.6 and in our codebase this header file generates error [C2666](https://learn.microsoft.com/en-us/cpp/error-messages/compiler-errors-2/compiler-error-c2666?view=msvc-170) . We are using `/std:c++latest`. I believe it disregards the == operator because of a missing != operator. Indeed adding
```c++
bool operator!=(const Precision& p) const noexcept { return !(*this == p); }
```
fixes our code.
I'm not in a position to compile OpenVino itself at the moment, otherwise I'd have done a PR with this suggestion, or is there a reason not to add the != operator?
|
1.0
|
ie_precision.hpp defines == operator but not != - See https://github.com/openvinotoolkit/openvino/blob/031f2cc7d1a9aa10fb8a242057735d7ef1fd7f71/src/inference/include/ie/ie_precision.hpp#L162
I just upgraded to Visual Studio 17.6 and in our codebase this header file generates error [C2666](https://learn.microsoft.com/en-us/cpp/error-messages/compiler-errors-2/compiler-error-c2666?view=msvc-170) . We are using `/std:c++latest`. I believe it disregards the == operator because of a missing != operator. Indeed adding
```c++
bool operator!=(const Precision& p) const noexcept { return !(*this == p); }
```
fixes our code.
I'm not in a position to compile OpenVino itself at the moment, otherwise I'd have done a PR with this suggestion, or is there a reason not to add the != operator?
|
non_process
|
ie precision hpp defines operator but not see i just upgraded to visual studio and in our codebase this header file generates error we are using std c latest i believe it disregards the operator because of a missing operator indeed adding c bool operator const precision p const noexcept return this p fixes our code i m not in a position to compile openvino itself at the moment otherwise i d have done a pr with this suggestion or is there a reason not to add the operator
| 0
|
12,606
| 15,008,151,099
|
IssuesEvent
|
2021-01-31 08:44:21
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
Expiration dates do not match payment plans.
|
process_duplicate type_bug
|
1) I deployed Two VDCs today, 28/01/2021
2) One with 15 TFT on testnet ( silver ) one with 30 TFT on testnet ( gold )
I would expect both these to be valid until 28/02/2021 ( or at least 30 days ) based on what was said when I picked the specific plans.
Reality is =>

|
1.0
|
Expiration dates do not match payment plans. - 1) I deployed Two VDCs today, 28/01/2021
2) One with 15 TFT on testnet ( silver ) one with 30 TFT on testnet ( gold )
I would expect both these to be valid until 28/02/2021 ( or at least 30 days ) based on what was said when I picked the specific plans.
Reality is =>

|
process
|
expiration dates do not match payment plans i deployed two vdcs today one with tft on testnet silver one with tft on testnet gold i would expect both these to be valid until or at least days based on what was said when i picked the specific plans reality is
| 1
|
3,077
| 3,321,485,916
|
IssuesEvent
|
2015-11-09 09:15:30
|
gama-platform/gama
|
https://api.github.com/repos/gama-platform/gama
|
opened
|
Remember the gama workspace after a crash of the application
|
> Question Affects Usability
|
It should be nice to remember the workspace after a crash of the application. Maybe to save the workspace after each file closing/opening should be better than to save the workspace after a clean application closing...
|
True
|
Remember the gama workspace after a crash of the application - It should be nice to remember the workspace after a crash of the application. Maybe to save the workspace after each file closing/opening should be better than to save the workspace after a clean application closing...
|
non_process
|
remember the gama workspace after a crash of the application it should be nice to remember the workspace after a crash of the application maybe to save the workspace after each file closing opening should be better than to save the workspace after a clean application closing
| 0
|
19,289
| 25,466,313,287
|
IssuesEvent
|
2022-11-25 05:00:31
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[GCI] [PM] Getting 'An internal error has occurred' message in participant manager when tried to sign in
|
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Add organizational user in the participant manager
2. Complete set up your account process
3. Sign in with registered credentials and Verify
**AR:** Getting an internal error has occurred message when tried to sign in
**ER:** Admin's should be able to sign without any error's
**Participant manager**

|
3.0
|
[GCI] [PM] Getting 'An internal error has occurred' message in participant manager when tried to sign in - **Steps:**
1. Add organizational user in the participant manager
2. Complete set up your account process
3. Sign in with registered credentials and Verify
**AR:** Getting an internal error has occurred message when tried to sign in
**ER:** Admin's should be able to sign without any error's
**Participant manager**

|
process
|
getting an internal error has occurred message in participant manager when tried to sign in steps add organizational user in the participant manager complete set up your account process sign in with registered credentials and verify ar getting an internal error has occurred message when tried to sign in er admin s should be able to sign without any error s participant manager
| 1
|
641,880
| 20,842,760,139
|
IssuesEvent
|
2022-03-21 03:44:52
|
patternfly/patternfly-elements
|
https://api.github.com/repos/patternfly/patternfly-elements
|
closed
|
Add UTC example to pfe-datetime
|
priority: low update component request
|
# pfe-datetime
Add an example of how to specify the time-zone at UTC for `pfe-datetime`.
|
1.0
|
Add UTC example to pfe-datetime - # pfe-datetime
Add an example of how to specify the time-zone at UTC for `pfe-datetime`.
|
non_process
|
add utc example to pfe datetime pfe datetime add an example of how to specify the time zone at utc for pfe datetime
| 0
|
117,117
| 15,055,665,158
|
IssuesEvent
|
2021-02-03 19:06:25
|
ctoec/data-collection
|
https://api.github.com/repos/ctoec/data-collection
|
opened
|
Redesign Reporting Period select
|
design-team
|
As an ECE Reporter user, when I choose reporting periods:
- I can more easily find and select the reporting period that accurately represents the child's enrollment
- I am not confused by a long drop-down with many different options
|
1.0
|
Redesign Reporting Period select - As an ECE Reporter user, when I choose reporting periods:
- I can more easily find and select the reporting period that accurately represents the child's enrollment
- I am not confused by a long drop-down with many different options
|
non_process
|
redesign reporting period select as an ece reporter user when i choose reporting periods i can more easily find and select the reporting period that accurately represents the child s enrollment i am not confused by a long drop down with many different options
| 0
|
10,610
| 13,435,983,847
|
IssuesEvent
|
2020-09-07 13:43:53
|
googleapis/python-pubsub
|
https://api.github.com/repos/googleapis/python-pubsub
|
closed
|
Transition the library to the new microgenerator
|
api: pubsub type: process
|
With the new code generator ready to be rolled out, we can make the transition here in PubSub. **This implies dropping support for Pythom 2.7 and 3.5!**
|
1.0
|
Transition the library to the new microgenerator - With the new code generator ready to be rolled out, we can make the transition here in PubSub. **This implies dropping support for Pythom 2.7 and 3.5!**
|
process
|
transition the library to the new microgenerator with the new code generator ready to be rolled out we can make the transition here in pubsub this implies dropping support for pythom and
| 1
|
6,581
| 9,661,648,962
|
IssuesEvent
|
2019-05-20 18:36:42
|
AmpersandTarski/Ampersand
|
https://api.github.com/repos/AmpersandTarski/Ampersand
|
closed
|
On Travis CI: Temp database creation failed!
|
priority:high software process
|
## What happens?
Lately we get a failure when testing on Travis CI, with this error: [Here is an example of such a build](https://travis-ci.org/AmpersandTarski/Ampersand/jobs/511197166).
Because of this, we get false negatives as a build result.
## Diagnosis
I have seen several locations where different tests go wrong. It seems to occur with random test cases.
|
1.0
|
On Travis CI: Temp database creation failed! - ## What happens?
Lately we get a failure when testing on Travis CI, with this error: [Here is an example of such a build](https://travis-ci.org/AmpersandTarski/Ampersand/jobs/511197166).
Because of this, we get false negatives as a build result.
## Diagnosis
I have seen several locations where different tests go wrong. It seems to occur with random test cases.
|
process
|
on travis ci temp database creation failed what happens lately we get a failure when testing on travis ci with this error because of this we get false negatives as a build result diagnosis i have seen several locations where different tests go wrong it seems to occur with random test cases
| 1
|
27,411
| 5,346,105,901
|
IssuesEvent
|
2017-02-17 18:46:31
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
opened
|
Update documentation, Global Configuration
|
documentation-required
|
Add new global option to https://github.com/bridgedotnet/Bridge/wiki/global-configuration
## disabledAnnotatedFunctionNames
**Type:** `boolean`
True to disable function name adding if Name attribute is applied to a C# method. False is default value.
#### Example
C# method example
```cs
[Name("sm")]
public void SomeMethod();
```
Generated code js code for `false`
```js
sm: function sm () ...
```
Generated code js code for `true`
```js
sm: function () ...
```
#### Example of `bridge.json`
```js
"disabledAnnotatedFunctionNames": true
```
|
1.0
|
Update documentation, Global Configuration - Add new global option to https://github.com/bridgedotnet/Bridge/wiki/global-configuration
## disabledAnnotatedFunctionNames
**Type:** `boolean`
True to disable function name adding if Name attribute is applied to a C# method. False is default value.
#### Example
C# method example
```cs
[Name("sm")]
public void SomeMethod();
```
Generated code js code for `false`
```js
sm: function sm () ...
```
Generated code js code for `true`
```js
sm: function () ...
```
#### Example of `bridge.json`
```js
"disabledAnnotatedFunctionNames": true
```
|
non_process
|
update documentation global configuration add new global option to disabledannotatedfunctionnames type boolean true to disable function name adding if name attribute is applied to a c method false is default value example c method example cs public void somemethod generated code js code for false js sm function sm generated code js code for true js sm function example of bridge json js disabledannotatedfunctionnames true
| 0
|
8,821
| 11,937,527,816
|
IssuesEvent
|
2020-04-02 12:20:24
|
ComposableWeb/poolbase
|
https://api.github.com/repos/ComposableWeb/poolbase
|
opened
|
feat(plugin): process known formats with a plugin
|
epic: processing
|
Example: process cooking recipe from known sites or from microformat
|
1.0
|
feat(plugin): process known formats with a plugin - Example: process cooking recipe from known sites or from microformat
|
process
|
feat plugin process known formats with a plugin example process cooking recipe from known sites or from microformat
| 1
|
10,870
| 13,640,424,138
|
IssuesEvent
|
2020-09-25 12:43:59
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
New `truncate` remap function
|
domain: mapping domain: processing type: feature
|
The `truncate` function truncates a string to the provided length.
## Example
Given the following event:
```js
{
"message": "Supercalifragilisticexpialidocious"
}
```
### Default
And the following remap instruction set:
```
.message = truncate(.message, 5)
```
Would result in:
```js
{
"message": "Super"
}
```
### Ellipsis
And the following remap instruction set:
```
.message = truncate(.message, 5, true)
```
Would result in:
```js
{
"message": "Super..."
}
```
Or, if the limit exceeds the length of the string, then no `...` is added:
```
.message = truncate(.message, 9999999, true)
```
Would result in:
```js
{
"message": "Supercalifragilisticexpialidocious"
}
```
|
1.0
|
New `truncate` remap function - The `truncate` function truncates a string to the provided length.
## Example
Given the following event:
```js
{
"message": "Supercalifragilisticexpialidocious"
}
```
### Default
And the following remap instruction set:
```
.message = truncate(.message, 5)
```
Would result in:
```js
{
"message": "Super"
}
```
### Ellipsis
And the following remap instruction set:
```
.message = truncate(.message, 5, true)
```
Would result in:
```js
{
"message": "Super..."
}
```
Or, if the limit exceeds the length of the string, then no `...` is added:
```
.message = truncate(.message, 9999999, true)
```
Would result in:
```js
{
"message": "Supercalifragilisticexpialidocious"
}
```
|
process
|
new truncate remap function the truncate function truncates a string to the provided length example given the following event js message supercalifragilisticexpialidocious default and the following remap instruction set message truncate message would result in js message super ellipsis and the following remap instruction set message truncate message true would result in js message super or if the limit exceeds the length of the string then no is added message truncate message true would result in js message supercalifragilisticexpialidocious
| 1
|
20,571
| 27,231,130,187
|
IssuesEvent
|
2023-02-21 13:19:45
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
PS7.3 (using .NET 7) locks up when used with screen on Linux and macOS
|
area-System.Diagnostics.Process untriaged in-pr
|
### Description
When using PS7.2 (built on .NET 6) with screen, it works fine on Linux (still seems to lock up on macOS).
PS7.3 (built on .NET 7) with screen will lock up (seems to be waiting on stdin) after running a native command:
```bash
screen ./pwsh
ls
```
Running cmdlets (like `get-childitem`) works fine. Only when executing a native command does it seem to be blocking and because the process doesn't finish, the PS prompt never comes.
I did rebuild 7.2 (which works with screen on Linux) against .NET 7 and that also exhibits the same locking behavior, so it seems that it's due to a change in .NET 7.
Using `set DOTNET_SYSTEM_CONSOLE_USENET6COMPATREADKEY=1` does not resolve the issue, so it seems unrelated to those changes.
### Reproduction Steps
I tried creating a simple repl that starts exes, but it doesn't repro the issue. PS7 does more complex work reading keyboard and handling starting exes.
```bash
screen ./pwsh
ls
```
### Expected behavior
`ls` to output and PS prompt to come back
### Actual behavior
`ls` shows output, and process is blocked
### Regression?
From .NET 6
### Known Workarounds
_No response_
### Configuration
Specifically tested under Ubuntu 22.04 using WSL
### Other information
_No response_
|
1.0
|
PS7.3 (using .NET 7) locks up when used with screen on Linux and macOS - ### Description
When using PS7.2 (built on .NET 6) with screen, it works fine on Linux (still seems to lock up on macOS).
PS7.3 (built on .NET 7) with screen will lock up (seems to be waiting on stdin) after running a native command:
```bash
screen ./pwsh
ls
```
Running cmdlets (like `get-childitem`) works fine. Only when executing a native command does it seem to be blocking and because the process doesn't finish, the PS prompt never comes.
I did rebuild 7.2 (which works with screen on Linux) against .NET 7 and that also exhibits the same locking behavior, so it seems that it's due to a change in .NET 7.
Using `set DOTNET_SYSTEM_CONSOLE_USENET6COMPATREADKEY=1` does not resolve the issue, so it seems unrelated to those changes.
### Reproduction Steps
I tried creating a simple repl that starts exes, but it doesn't repro the issue. PS7 does more complex work reading keyboard and handling starting exes.
```bash
screen ./pwsh
ls
```
### Expected behavior
`ls` to output and PS prompt to come back
### Actual behavior
`ls` shows output, and process is blocked
### Regression?
From .NET 6
### Known Workarounds
_No response_
### Configuration
Specifically tested under Ubuntu 22.04 using WSL
### Other information
_No response_
|
process
|
using net locks up when used with screen on linux and macos description when using built on net with screen it works fine on linux still seems to lock up on macos built on net with screen will lock up seems to be waiting on stdin after running a native command bash screen pwsh ls running cmdlets like get childitem works fine only when executing a native command does it seem to be blocking and because the process doesn t finish the ps prompt never comes i did rebuild which works with screen on linux against net and that also exhibits the same locking behavior so it seems that it s due to a change in net using set dotnet system console does not resolve the issue so it seems unrelated to those changes reproduction steps i tried creating a simple repl that starts exes but it doesn t repro the issue does more complex work reading keyboard and handling starting exes bash screen pwsh ls expected behavior ls to output and ps prompt to come back actual behavior ls shows output and process is blocked regression from net known workarounds no response configuration specifically tested under ubuntu using wsl other information no response
| 1
|
5,264
| 12,288,694,738
|
IssuesEvent
|
2020-05-09 17:54:41
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
avoid major-version package names
|
area/code-organization kind/feature lifecycle/rotten priority/important-longterm sig/api-machinery sig/architecture
|
Packages in this repository with paths like `k8s.io/api/core/v1` are named according to their major version (`package v1`) instead of a more semantically meaningful part of the path (`package core`).
That forces users of these packages to pretty much always rename them upon import.
Unfortunately, fixing the package names for existing packages would be a breaking change.
However, for future packages, it might be a good idea to use more descriptive package names in the [package clause](https://golang.org/ref/spec#Package_clause).
See also https://blog.golang.org/package-names.
|
1.0
|
avoid major-version package names - Packages in this repository with paths like `k8s.io/api/core/v1` are named according to their major version (`package v1`) instead of a more semantically meaningful part of the path (`package core`).
That forces users of these packages to pretty much always rename them upon import.
Unfortunately, fixing the package names for existing packages would be a breaking change.
However, for future packages, it might be a good idea to use more descriptive package names in the [package clause](https://golang.org/ref/spec#Package_clause).
See also https://blog.golang.org/package-names.
|
non_process
|
avoid major version package names packages in this repository with paths like io api core are named according to their major version package instead of a more semantically meaningful part of the path package core that forces users of these packages to pretty much always rename them upon import unfortunately fixing the package names for existing packages would be a breaking change however for future packages it might be a good idea to use more descriptive package names in the see also
| 0
|
351,240
| 31,990,316,766
|
IssuesEvent
|
2023-09-21 05:05:13
|
redpanda-data/redpanda
|
https://api.github.com/repos/redpanda-data/redpanda
|
closed
|
CI Failure (TimeoutError) in `UpgradeFromPriorFeatureVersionCloudStorageTest.test_rolling_upgrade`
|
kind/bug ci-failure area/cloud-storage ci-disabled-test sev/high
|
<!-- Before creating an issue look through existing ci-issues to avoid duplicates
https://github.com/redpanda-data/redpanda/issues?q=is%3Aissue+is%3Aopen+label%3Aci-failure -->
https://buildkite.com/redpanda/redpanda/builds/36165#018a4cc9-65ad-4b5a-a56e-08ff1a45006a
<!-- Copy the summary from the "Failed Tests" section of report.html -->
```
Module: rptest.tests.upgrade_test
Class: UpgradeFromPriorFeatureVersionCloudStorageTest
Method: test_rolling_upgrade
Arguments:
{
"cloud_storage_type": 1
}
```
<!-- Copy the summary from report.txt if it's too vague to differentiate key symptoms (e.g. TimeoutError is thrown for multiple reason so it makes to dig deeper) - fetch the tarball, check debug and redpanda logs -->
```
test_id: rptest.tests.upgrade_test.UpgradeFromPriorFeatureVersionCloudStorageTest.test_rolling_upgrade.cloud_storage_type=CloudStorageType.S3
status: FAIL
run time: 1 minute 43.579 seconds
TimeoutError('New manifest was not uploaded post upgrade')
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/ducktape/tests/runner_client.py", line 135, in run
data = self.run_test()
File "/usr/local/lib/python3.10/dist-packages/ducktape/tests/runner_client.py", line 227, in run_test
return self.test_context.function(self.test)
File "/usr/local/lib/python3.10/dist-packages/ducktape/mark/_mark.py", line 481, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
File "/root/tests/rptest/services/cluster.py", line 82, in wrapped
r = f(self, *args, **kwargs)
File "/root/tests/rptest/tests/upgrade_test.py", line 582, in test_rolling_upgrade
wait_until(insync_offset_advanced,
File "/usr/local/lib/python3.10/dist-packages/ducktape/utils/util.py", line 57, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from last_exception
ducktape.errors.TimeoutError: New manifest was not uploaded post upgrade
```
|
1.0
|
CI Failure (TimeoutError) in `UpgradeFromPriorFeatureVersionCloudStorageTest.test_rolling_upgrade` - <!-- Before creating an issue look through existing ci-issues to avoid duplicates
https://github.com/redpanda-data/redpanda/issues?q=is%3Aissue+is%3Aopen+label%3Aci-failure -->
https://buildkite.com/redpanda/redpanda/builds/36165#018a4cc9-65ad-4b5a-a56e-08ff1a45006a
<!-- Copy the summary from the "Failed Tests" section of report.html -->
```
Module: rptest.tests.upgrade_test
Class: UpgradeFromPriorFeatureVersionCloudStorageTest
Method: test_rolling_upgrade
Arguments:
{
"cloud_storage_type": 1
}
```
<!-- Copy the summary from report.txt if it's too vague to differentiate key symptoms (e.g. TimeoutError is thrown for multiple reason so it makes to dig deeper) - fetch the tarball, check debug and redpanda logs -->
```
test_id: rptest.tests.upgrade_test.UpgradeFromPriorFeatureVersionCloudStorageTest.test_rolling_upgrade.cloud_storage_type=CloudStorageType.S3
status: FAIL
run time: 1 minute 43.579 seconds
TimeoutError('New manifest was not uploaded post upgrade')
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/ducktape/tests/runner_client.py", line 135, in run
data = self.run_test()
File "/usr/local/lib/python3.10/dist-packages/ducktape/tests/runner_client.py", line 227, in run_test
return self.test_context.function(self.test)
File "/usr/local/lib/python3.10/dist-packages/ducktape/mark/_mark.py", line 481, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
File "/root/tests/rptest/services/cluster.py", line 82, in wrapped
r = f(self, *args, **kwargs)
File "/root/tests/rptest/tests/upgrade_test.py", line 582, in test_rolling_upgrade
wait_until(insync_offset_advanced,
File "/usr/local/lib/python3.10/dist-packages/ducktape/utils/util.py", line 57, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg) from last_exception
ducktape.errors.TimeoutError: New manifest was not uploaded post upgrade
```
|
non_process
|
ci failure timeouterror in upgradefrompriorfeatureversioncloudstoragetest test rolling upgrade before creating an issue look through existing ci issues to avoid duplicates module rptest tests upgrade test class upgradefrompriorfeatureversioncloudstoragetest method test rolling upgrade arguments cloud storage type test id rptest tests upgrade test upgradefrompriorfeatureversioncloudstoragetest test rolling upgrade cloud storage type cloudstoragetype status fail run time minute seconds timeouterror new manifest was not uploaded post upgrade traceback most recent call last file usr local lib dist packages ducktape tests runner client py line in run data self run test file usr local lib dist packages ducktape tests runner client py line in run test return self test context function self test file usr local lib dist packages ducktape mark mark py line in wrapper return functools partial f args kwargs w args w kwargs file root tests rptest services cluster py line in wrapped r f self args kwargs file root tests rptest tests upgrade test py line in test rolling upgrade wait until insync offset advanced file usr local lib dist packages ducktape utils util py line in wait until raise timeouterror err msg if callable err msg else err msg from last exception ducktape errors timeouterror new manifest was not uploaded post upgrade
| 0
|
176,373
| 13,638,673,846
|
IssuesEvent
|
2020-09-25 09:44:10
|
Scholar-6/brillder
|
https://api.github.com/repos/Scholar-6/brillder
|
closed
|
Only show 'Choose more than one option' text during Review (not Investigation)
|
Betatester Request Input Brick
|
<img width="475" alt="Screenshot 2020-09-24 at 15 31 14" src="https://user-images.githubusercontent.com/59654112/94151805-20074000-fe7b-11ea-91bc-17b8e22df85b.png">
|
1.0
|
Only show 'Choose more than one option' text during Review (not Investigation) - <img width="475" alt="Screenshot 2020-09-24 at 15 31 14" src="https://user-images.githubusercontent.com/59654112/94151805-20074000-fe7b-11ea-91bc-17b8e22df85b.png">
|
non_process
|
only show choose more than one option text during review not investigation img width alt screenshot at src
| 0
|
2,324
| 7,655,810,795
|
IssuesEvent
|
2018-05-10 14:25:20
|
City-Bureau/city-scrapers
|
https://api.github.com/repos/City-Bureau/city-scrapers
|
closed
|
New Spider Request: CPS Community Action Council
|
architecture: spiders good first issue new spider needed priority: normal (important to have)
|
Link to the site is here: http://cps.edu/FACE/Pages/CAC.aspx
Looks like a pretty straightforward spider with one main exception—meetings aren't listed individually, they're listed on a recurring schedule:

|
1.0
|
New Spider Request: CPS Community Action Council - Link to the site is here: http://cps.edu/FACE/Pages/CAC.aspx
Looks like a pretty straightforward spider with one main exception—meetings aren't listed individually, they're listed on a recurring schedule:

|
non_process
|
new spider request cps community action council link to the site is here looks like a pretty straightforward spider with one main exception—meetings aren t listed individually they re listed on a recurring schedule
| 0
|
81,870
| 10,263,320,759
|
IssuesEvent
|
2019-08-22 14:09:00
|
angristan/openvpn-install
|
https://api.github.com/repos/angristan/openvpn-install
|
closed
|
Need "#security-and-encryption" section in README.md
|
documentation
|
Hello,
```
Do you want to customize encryption settings?
Unless you know what you're doing, you should stick with the default parameters provided by the script.
Note that whatever you choose, all the choices presented in the script are safe. (Unlike OpenVPN's defaults)
See https://github.com/angristan/openvpn-install#security-and-encryption to learn more.
```
Link: https://github.com/angristan/openvpn-install#security-and-encryption
At: https://github.com/angristan/openvpn-install/blob/master/README.md
Best,
Max Base
|
1.0
|
Need "#security-and-encryption" section in README.md - Hello,
```
Do you want to customize encryption settings?
Unless you know what you're doing, you should stick with the default parameters provided by the script.
Note that whatever you choose, all the choices presented in the script are safe. (Unlike OpenVPN's defaults)
See https://github.com/angristan/openvpn-install#security-and-encryption to learn more.
```
Link: https://github.com/angristan/openvpn-install#security-and-encryption
At: https://github.com/angristan/openvpn-install/blob/master/README.md
Best,
Max Base
|
non_process
|
need security and encryption section in readme md hello do you want to customize encryption settings unless you know what you re doing you should stick with the default parameters provided by the script note that whatever you choose all the choices presented in the script are safe unlike openvpn s defaults see to learn more link at best max base
| 0
|
22,803
| 4,839,084,763
|
IssuesEvent
|
2016-11-09 07:55:30
|
baidu/Paddle
|
https://api.github.com/repos/baidu/Paddle
|
closed
|
Need to rewrite English document on Distributed Training
|
documentation help welcome
|
New documents created in https://github.com/baidu/Paddle/pull/293 are extensions to the [SSH-base d distributed training document](https://github.com/baidu/Paddle/blob/develop/doc/cluster/opensource/cluster_train.md) . To make the new document preferable, we need to polish the latter.
|
1.0
|
Need to rewrite English document on Distributed Training - New documents created in https://github.com/baidu/Paddle/pull/293 are extensions to the [SSH-base d distributed training document](https://github.com/baidu/Paddle/blob/develop/doc/cluster/opensource/cluster_train.md) . To make the new document preferable, we need to polish the latter.
|
non_process
|
need to rewrite english document on distributed training new documents created in are extensions to the to make the new document preferable we need to polish the latter
| 0
|
482,175
| 13,902,101,689
|
IssuesEvent
|
2020-10-20 04:33:44
|
canonical-web-and-design/maas-ui
|
https://api.github.com/repos/canonical-web-and-design/maas-ui
|
closed
|
Settings table headers are squashed
|
Bug 🐛 Priority: High
|
The table headers for all the tables in settings are squashed:
<img width="958" alt="Screen Shot 2020-10-14 at 6 07 29 pm" src="https://user-images.githubusercontent.com/361637/95955327-64697880-0e48-11eb-96dc-289f1442a5ad.png">
|
1.0
|
Settings table headers are squashed - The table headers for all the tables in settings are squashed:
<img width="958" alt="Screen Shot 2020-10-14 at 6 07 29 pm" src="https://user-images.githubusercontent.com/361637/95955327-64697880-0e48-11eb-96dc-289f1442a5ad.png">
|
non_process
|
settings table headers are squashed the table headers for all the tables in settings are squashed img width alt screen shot at pm src
| 0
|
10,375
| 13,192,163,463
|
IssuesEvent
|
2020-08-13 13:22:11
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
Update release guide for website docs
|
area/docs kind/feature kind/process priority/p2
|
/kind process
Filing this issue to ensure we update the release guide for the website to include improvements to the release process from 1.1.
I think we should making running @RFMVasconcelos's script to add outdated banners to all pages a part of the process (see kubeflow/website#2066).
I think this is a good way to ensure that every page gets at least a once over every 60-90 days.
Per kubeflow/website#2067 I think we want to make it platform owners responsibility to update the shortcodes.
/assign @RFMVasconcelos @8bitmp3
|
1.0
|
Update release guide for website docs - /kind process
Filing this issue to ensure we update the release guide for the website to include improvements to the release process from 1.1.
I think we should making running @RFMVasconcelos's script to add outdated banners to all pages a part of the process (see kubeflow/website#2066).
I think this is a good way to ensure that every page gets at least a once over every 60-90 days.
Per kubeflow/website#2067 I think we want to make it platform owners responsibility to update the shortcodes.
/assign @RFMVasconcelos @8bitmp3
|
process
|
update release guide for website docs kind process filing this issue to ensure we update the release guide for the website to include improvements to the release process from i think we should making running rfmvasconcelos s script to add outdated banners to all pages a part of the process see kubeflow website i think this is a good way to ensure that every page gets at least a once over every days per kubeflow website i think we want to make it platform owners responsibility to update the shortcodes assign rfmvasconcelos
| 1
|
5,319
| 4,892,351,902
|
IssuesEvent
|
2016-11-18 19:27:31
|
statsmodels/statsmodels
|
https://api.github.com/repos/statsmodels/statsmodels
|
opened
|
SUMM/ENH: RLM/GLM nonlinear function optimization
|
comp-genmod comp-robust Performance type-enh
|
We would like to extend RLM #3261 and GLM #2179 to nonlinear mean functions.
several other related issues that I didn't look for
We can use the default scipy optimizers. But it might be more performant to have something more specific, e.g. based on the new scipy leastsq functions. This would be something like a nonlinear equivalent of IRLS.
(parking this reference)
Gay, David M., and Roy E. Welsch. 1988. “Maximum Likelihood and Quasi-Likelihood for Nonlinear Exponential Family Regression Models.” Journal of the American Statistical Association 83 (404): 990–98. doi:10.2307/2290125.
They discuss approximating the "messy" part of the Hessian. This sounds a bit similar to OIM versus EIM (observed versus expected information matrix) and the expensive part of the current GLM hessian.
I'm a bit surprised in how few iterations their estimator converges, but they use mostly very small sample sizes.
Also, for RLM we still need to get score_obs and hessian or the `..._factor` versions if applicable.
|
True
|
SUMM/ENH: RLM/GLM nonlinear function optimization - We would like to extend RLM #3261 and GLM #2179 to nonlinear mean functions.
several other related issues that I didn't look for
We can use the default scipy optimizers. But it might be more performant to have something more specific, e.g. based on the new scipy leastsq functions. This would be something like a nonlinear equivalent of IRLS.
(parking this reference)
Gay, David M., and Roy E. Welsch. 1988. “Maximum Likelihood and Quasi-Likelihood for Nonlinear Exponential Family Regression Models.” Journal of the American Statistical Association 83 (404): 990–98. doi:10.2307/2290125.
They discuss approximating the "messy" part of the Hessian. This sounds a bit similar to OIM versus EIM (observed versus expected information matrix) and the expensive part of the current GLM hessian.
I'm a bit surprised in how few iterations their estimator converges, but they use mostly very small sample sizes.
Also, for RLM we still need to get score_obs and hessian or the `..._factor` versions if applicable.
|
non_process
|
summ enh rlm glm nonlinear function optimization we would like to extend rlm and glm to nonlinear mean functions several other related issues that i didn t look for we can use the default scipy optimizers but it might be more performant to have something more specific e g based on the new scipy leastsq functions this would be something like a nonlinear equivalent of irls parking this reference gay david m and roy e welsch “maximum likelihood and quasi likelihood for nonlinear exponential family regression models ” journal of the american statistical association – doi they discuss approximating the messy part of the hessian this sounds a bit similar to oim versus eim observed versus expected information matrix and the expensive part of the current glm hessian i m a bit surprised in how few iterations their estimator converges but they use mostly very small sample sizes also for rlm we still need to get score obs and hessian or the factor versions if applicable
| 0
|
164,154
| 13,936,338,063
|
IssuesEvent
|
2020-10-22 12:48:59
|
DeepRegNet/DeepReg
|
https://api.github.com/repos/DeepRegNet/DeepReg
|
opened
|
Integration with 3DSlicer
|
documentation enhancement
|
## Subject of the feature
It would be nice to have integration (either through a scripted module or otherwise) with 3D Slicer to enable the underlying DeepReg models to be used for inference within a more user-friendly/less-technical avenue.
|
1.0
|
Integration with 3DSlicer - ## Subject of the feature
It would be nice to have integration (either through a scripted module or otherwise) with 3D Slicer to enable the underlying DeepReg models to be used for inference within a more user-friendly/less-technical avenue.
|
non_process
|
integration with subject of the feature it would be nice to have integration either through a scripted module or otherwise with slicer to enable the underlying deepreg models to be used for inference within a more user friendly less technical avenue
| 0
|
18,903
| 24,842,910,945
|
IssuesEvent
|
2022-10-26 13:57:31
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Possible mistake in this page
|
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2
|
I think there is a mistake in this document. In this section https://learn.microsoft.com/en-us/azure/automation/automation-secure-asset-encryption#use-of-customer-managed-keys-for-an-automation-account, it states "The managed identity is available only after the storage account is created." I think this should read "The managed identity is available only after the automation account is created."
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: b96959e5-3ca5-7725-4f06-4d2abbf4a48e
* Version Independent ID: 377c1f35-a67a-5a79-5dc2-d58c3fbba7ef
* Content: [Encryption of secure assets in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/automation-secure-asset-encryption)
* Content Source: [articles/automation/automation-secure-asset-encryption.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-secure-asset-encryption.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @snehithm
* Microsoft Alias: **snmuvva**
|
1.0
|
Possible mistake in this page -
I think there is a mistake in this document. In this section https://learn.microsoft.com/en-us/azure/automation/automation-secure-asset-encryption#use-of-customer-managed-keys-for-an-automation-account, it states "The managed identity is available only after the storage account is created." I think this should read "The managed identity is available only after the automation account is created."
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: b96959e5-3ca5-7725-4f06-4d2abbf4a48e
* Version Independent ID: 377c1f35-a67a-5a79-5dc2-d58c3fbba7ef
* Content: [Encryption of secure assets in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/automation-secure-asset-encryption)
* Content Source: [articles/automation/automation-secure-asset-encryption.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-secure-asset-encryption.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @snehithm
* Microsoft Alias: **snmuvva**
|
process
|
possible mistake in this page i think there is a mistake in this document in this section it states the managed identity is available only after the storage account is created i think this should read the managed identity is available only after the automation account is created document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehithm microsoft alias snmuvva
| 1
|
5,521
| 8,381,046,629
|
IssuesEvent
|
2018-10-07 20:46:27
|
MichiganDataScienceTeam/googleanalytics
|
https://api.github.com/repos/MichiganDataScienceTeam/googleanalytics
|
opened
|
Preprocess: u'geoNetwork.country', u'geoNetwork.latitude', u'geoNetwork.longitude', u'geoNetwork.metro',
|
easy preprocessing
|
Preprocess the following features:
u'geoNetwork.country',
u'geoNetwork.latitude',
u'geoNetwork.longitude',
u'geoNetwork.metro',
1. Standardization: [http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling](http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling)
2. Impute missing values: [http://scikit-learn.org/stable/modules/impute.html](http://scikit-learn.org/stable/modules/impute.html)
3. Normalization: [http://scikit-learn.org/stable/modules/preprocessing.html#normalization](http://scikit-learn.org/stable/modules/preprocessing.html#normalization)
4. Encode categorical features (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
5. Discretization (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#discretization](http://scikit-learn.org/stable/modules/preprocessing.html#discretization)
[http://scikit-learn.org/stable/modules/preprocessing.html](http://scikit-learn.org/stable/modules/preprocessing.html)
|
1.0
|
Preprocess: u'geoNetwork.country', u'geoNetwork.latitude', u'geoNetwork.longitude', u'geoNetwork.metro', - Preprocess the following features:
u'geoNetwork.country',
u'geoNetwork.latitude',
u'geoNetwork.longitude',
u'geoNetwork.metro',
1. Standardization: [http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling](http://scikit-learn.org/stable/modules/preprocessing.html#standardization-or-mean-removal-and-variance-scaling)
2. Impute missing values: [http://scikit-learn.org/stable/modules/impute.html](http://scikit-learn.org/stable/modules/impute.html)
3. Normalization: [http://scikit-learn.org/stable/modules/preprocessing.html#normalization](http://scikit-learn.org/stable/modules/preprocessing.html#normalization)
4. Encode categorical features (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
5. Discretization (optional): [http://scikit-learn.org/stable/modules/preprocessing.html#discretization](http://scikit-learn.org/stable/modules/preprocessing.html#discretization)
[http://scikit-learn.org/stable/modules/preprocessing.html](http://scikit-learn.org/stable/modules/preprocessing.html)
|
process
|
preprocess u geonetwork country u geonetwork latitude u geonetwork longitude u geonetwork metro preprocess the following features u geonetwork country u geonetwork latitude u geonetwork longitude u geonetwork metro standardization impute missing values normalization encode categorical features optional discretization optional
| 1
|
100,278
| 11,184,839,610
|
IssuesEvent
|
2019-12-31 20:32:38
|
garbagemule/MobArena
|
https://api.github.com/repos/garbagemule/MobArena
|
closed
|
Modernize all existing documentation
|
documentation enhancement high priority
|
# Summary
* This issue is a…
* [ ] Bug report
* [ ] Feature request
* [X] Documentation request
* [ ] Other issue
* [ ] Question
* **Describe the issue / feature in 1-2 sentences**: Update all documentation for the new [ReadTheDocs site](https://mobarena.readthedocs.io/en/latest/), modernize documentation for current codebase where needed
# Background
**What does it do?**: Improves usability of plugin and makes it more user-friendly
This ticket is more of a tracking ticket for the existing effort of modernizing documentation. I'll try to update this ticket as I progress through the documentation. Modernized documentation is the first critical part of my [community revitalization proposal](https://docs.google.com/document/d/1kEhNl9XedF8fIIpDzCG7816VLrPFTi0tYAZjq78Mens/edit?usp=sharing).
# Details
* [X] Announcements (#400)
* [X] Arena setup (#401)
* [ ] Class chests (#402)
* [ ] Commands (#413)
* [ ] Getting started
* [x] Item and reward syntax (#416)
* [ ] Leaderboards
* [ ] Monster types
* [ ] Permissions
* [ ] Setting up config file
* [ ] Setting up waves
* [ ] Using MobArena
* [ ] Wave formulas
|
1.0
|
Modernize all existing documentation - # Summary
* This issue is a…
* [ ] Bug report
* [ ] Feature request
* [X] Documentation request
* [ ] Other issue
* [ ] Question
* **Describe the issue / feature in 1-2 sentences**: Update all documentation for the new [ReadTheDocs site](https://mobarena.readthedocs.io/en/latest/), modernize documentation for current codebase where needed
# Background
**What does it do?**: Improves usability of plugin and makes it more user-friendly
This ticket is more of a tracking ticket for the existing effort of modernizing documentation. I'll try to update this ticket as I progress through the documentation. Modernized documentation is the first critical part of my [community revitalization proposal](https://docs.google.com/document/d/1kEhNl9XedF8fIIpDzCG7816VLrPFTi0tYAZjq78Mens/edit?usp=sharing).
# Details
* [X] Announcements (#400)
* [X] Arena setup (#401)
* [ ] Class chests (#402)
* [ ] Commands (#413)
* [ ] Getting started
* [x] Item and reward syntax (#416)
* [ ] Leaderboards
* [ ] Monster types
* [ ] Permissions
* [ ] Setting up config file
* [ ] Setting up waves
* [ ] Using MobArena
* [ ] Wave formulas
|
non_process
|
modernize all existing documentation summary this issue is a… bug report feature request documentation request other issue question describe the issue feature in sentences update all documentation for the new modernize documentation for current codebase where needed background what does it do improves usability of plugin and makes it more user friendly this ticket is more of a tracking ticket for the existing effort of modernizing documentation i ll try to update this ticket as i progress through the documentation modernized documentation is the first critical part of my details announcements arena setup class chests commands getting started item and reward syntax leaderboards monster types permissions setting up config file setting up waves using mobarena wave formulas
| 0
|
22,620
| 31,844,667,016
|
IssuesEvent
|
2023-09-14 18:56:32
|
sqlc-dev/sqlc
|
https://api.github.com/repos/sqlc-dev/sqlc
|
closed
|
Support YAML anchors in sqlc.yaml
|
bug panic :gear: process
|
### What do you want to change?
Currently I am working on a microservices project using SQLC. All is fine, but I find myself repeating the same configuration options for each codegen section. Using [YAML anchors](https://yaml.org/spec/1.2.2/#3222-anchors-and-aliases) this would be trivial and very nice, but this is currently not supported.
Then I could do something like the following.
```yaml
sql:
- schema: database
queries: services/a
engine: postgresql
codegen:
- out: gateway/src/gateway/services/organization
plugin: py
options: &base-options
query_parameter_limit: 1
package: gateway
output_models_file_name: null
emit_module: true
emit_generators: false
emit_async: true
- schema: database
queries: services/b
engine: postgresql
codegen:
- out: gateway/src/gateway/services/project
plugin: py
options: *base-options
```
https://play.sqlc.dev/p/32c6b52785ce13ddfeb2311a2a835800d2578b56bf1addd95887e3fe45d2ace0
### What database engines need to be changed?
_No response_
### What programming language backends need to be changed?
_No response_
|
1.0
|
Support YAML anchors in sqlc.yaml - ### What do you want to change?
Currently I am working on a microservices project using SQLC. All is fine, but I find myself repeating the same configuration options for each codegen section. Using [YAML anchors](https://yaml.org/spec/1.2.2/#3222-anchors-and-aliases) this would be trivial and very nice, but this is currently not supported.
Then I could do something like the following.
```yaml
sql:
- schema: database
queries: services/a
engine: postgresql
codegen:
- out: gateway/src/gateway/services/organization
plugin: py
options: &base-options
query_parameter_limit: 1
package: gateway
output_models_file_name: null
emit_module: true
emit_generators: false
emit_async: true
- schema: database
queries: services/b
engine: postgresql
codegen:
- out: gateway/src/gateway/services/project
plugin: py
options: *base-options
```
https://play.sqlc.dev/p/32c6b52785ce13ddfeb2311a2a835800d2578b56bf1addd95887e3fe45d2ace0
### What database engines need to be changed?
_No response_
### What programming language backends need to be changed?
_No response_
|
process
|
support yaml anchors in sqlc yaml what do you want to change currently i am working on a microservices project using sqlc all is fine but i find myself repeating the same configuration options for each codegen section using this would be trivial and very nice but this is currently not supported then i could do something like the following yaml sql schema database queries services a engine postgresql codegen out gateway src gateway services organization plugin py options base options query parameter limit package gateway output models file name null emit module true emit generators false emit async true schema database queries services b engine postgresql codegen out gateway src gateway services project plugin py options base options what database engines need to be changed no response what programming language backends need to be changed no response
| 1
|
9,656
| 7,781,071,770
|
IssuesEvent
|
2018-06-05 22:17:48
|
paypal/NNAnalytics
|
https://api.github.com/repos/paypal/NNAnalytics
|
opened
|
Add Auth0 support for authentication
|
enhancement security
|
Today NNA only supports authentication support for LDAP.
It would be nice to add Auth0 protocol support for those running SSO (Single-Sign-On).
pac4j should support it: http://www.pac4j.org/docs/clients/oauth.html
|
True
|
Add Auth0 support for authentication - Today NNA only supports authentication support for LDAP.
It would be nice to add Auth0 protocol support for those running SSO (Single-Sign-On).
pac4j should support it: http://www.pac4j.org/docs/clients/oauth.html
|
non_process
|
add support for authentication today nna only supports authentication support for ldap it would be nice to add protocol support for those running sso single sign on should support it
| 0
|
159,764
| 20,085,909,328
|
IssuesEvent
|
2022-02-05 01:10:50
|
DavidSpek/pipelines
|
https://api.github.com/repos/DavidSpek/pipelines
|
opened
|
CVE-2022-21733 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2022-21733 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/pipelines/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Tensorflow is an Open Source Machine Learning Framework. The implementation of `StringNGrams` can be used to trigger a denial of service attack by causing an out of memory condition after an integer overflow. We are missing a validation on `pad_witdh` and that result in computing a negative value for `ngram_width` which is later used to allocate parts of the output. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
<p>Publish Date: 2022-02-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21733>CVE-2022-21733</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98j8-c9q4-r38g">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98j8-c9q4-r38g</a></p>
<p>Release Date: 2022-02-03</p>
<p>Fix Resolution: tensorflow - 2.5.3,2.6.3,2.7.1;tensorflow-cpu - 2.5.3,2.6.3,2.7.1;tensorflow-gpu - 2.5.3,2.6.3,2.7.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-21733 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2022-21733 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/pipelines/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Tensorflow is an Open Source Machine Learning Framework. The implementation of `StringNGrams` can be used to trigger a denial of service attack by causing an out of memory condition after an integer overflow. We are missing a validation on `pad_witdh` and that result in computing a negative value for `ngram_width` which is later used to allocate parts of the output. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
<p>Publish Date: 2022-02-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21733>CVE-2022-21733</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98j8-c9q4-r38g">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-98j8-c9q4-r38g</a></p>
<p>Release Date: 2022-02-03</p>
<p>Fix Resolution: tensorflow - 2.5.3,2.6.3,2.7.1;tensorflow-cpu - 2.5.3,2.6.3,2.7.1;tensorflow-gpu - 2.5.3,2.6.3,2.7.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file contrib components openvino ovms deployer containers requirements txt path to vulnerable library contrib components openvino ovms deployer containers requirements txt pipelines samples core ai platform training dependency hierarchy x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an open source machine learning framework the implementation of stringngrams can be used to trigger a denial of service attack by causing an out of memory condition after an integer overflow we are missing a validation on pad witdh and that result in computing a negative value for ngram width which is later used to allocate parts of the output the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
| 0
|
334,329
| 29,831,092,803
|
IssuesEvent
|
2023-06-18 09:30:43
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix raw_ops.test_tensorflow_Shape
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix raw_ops.test_tensorflow_Shape - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5302683745/jobs/9597784377"><img src=https://img.shields.io/badge/-success-success></a>
|
non_process
|
fix raw ops test tensorflow shape numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
| 0
|
1,065
| 3,536,070,913
|
IssuesEvent
|
2016-01-17 00:23:29
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
closed
|
add compositing modes to the linked transform module
|
enhancement video processing
|
an extension of #260.
it should support compositing with the master module's dry input as well as over the linked module's input
|
1.0
|
add compositing modes to the linked transform module - an extension of #260.
it should support compositing with the master module's dry input as well as over the linked module's input
|
process
|
add compositing modes to the linked transform module an extension of it should support compositing with the master module s dry input as well as over the linked module s input
| 1
|
147,420
| 19,522,792,508
|
IssuesEvent
|
2021-12-29 22:21:58
|
swagger-api/swagger-codegen
|
https://api.github.com/repos/swagger-api/swagger-codegen
|
opened
|
WS-2018-0124 (Medium) detected in multiple libraries
|
security vulnerability
|
## WS-2018-0124 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-core-2.7.8.jar</b>, <b>jackson-core-2.4.5.jar</b>, <b>jackson-core-2.6.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-core-2.7.8.jar</b></p></summary>
<p>Core Jackson abstractions, basic JSON streaming API implementation</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson-core">https://github.com/FasterXML/jackson-core</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-core/bundles/jackson-core-2.7.8.jar</p>
<p>
Dependency Hierarchy:
- lagom-scaladsl-api_2.11-1.3.8.jar (Root Library)
- lagom-api_2.11-1.3.8.jar
- play_2.11-2.5.13.jar
- jackson-databind-2.7.8.jar
- :x: **jackson-core-2.7.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-core-2.4.5.jar</b></p></summary>
<p>Core Jackson abstractions, basic JSON streaming API implementation</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson">https://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /samples/client/petstore/scala/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.4.5/6fb96728ee26edb19fe329d94f3bd4df1a97652a/jackson-core-2.4.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.4.5/6fb96728ee26edb19fe329d94f3bd4df1a97652a/jackson-core-2.4.5.jar</p>
<p>
Dependency Hierarchy:
- swagger-core-1.5.8.jar (Root Library)
- jackson-dataformat-yaml-2.4.5.jar
- :x: **jackson-core-2.4.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-core-2.6.4.jar</b></p></summary>
<p>Core Jackson abstractions, basic JSON streaming API implementation</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson-core">https://github.com/FasterXML/jackson-core</a></p>
<p>Path to dependency file: /samples/client/petstore/java/jersey1/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.6.4/27d3a9f7bbdcf72d93c9b2da7017e39551bfa9fb/jackson-core-2.6.4.jar,/aches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.6.4/27d3a9f7bbdcf72d93c9b2da7017e39551bfa9fb/jackson-core-2.6.4.jar</p>
<p>
Dependency Hierarchy:
- jackson-databind-2.6.4.jar (Root Library)
- :x: **jackson-core-2.6.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Jackson Core before version 2.8.6 if the REST endpoint consumes POST requests with JSON or XML data and data are invalid, the first unrecognized token is printed to server.log. If the first token is word of length 10MB, the whole word is printed. This is potentially dangerous and can be used to attack the server by filling the disk with logs.
<p>Publish Date: 2018-06-24
<p>URL: <a href=https://issues.jboss.org/browse/JBEAP-6316>WS-2018-0124</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=WS-2018-0124">https://cve.mitre.org/cgi-bin/cvename.cgi?name=WS-2018-0124</a></p>
<p>Release Date: 2018-01-24</p>
<p>Fix Resolution: 2.8.6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-core","packageVersion":"2.7.8","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"com.lightbend.lagom:lagom-scaladsl-api_2.11:1.3.8;com.lightbend.lagom:lagom-api_2.11:1.3.8;com.typesafe.play:play_2.11:2.5.13;com.fasterxml.jackson.core:jackson-databind:2.7.8;com.fasterxml.jackson.core:jackson-core:2.7.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.6","isBinary":false},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-core","packageVersion":"2.4.5","packageFilePaths":["/samples/client/petstore/scala/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.swagger:swagger-core:1.5.8;com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.4.5;com.fasterxml.jackson.core:jackson-core:2.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.6","isBinary":false},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-core","packageVersion":"2.6.4","packageFilePaths":["/samples/client/petstore/java/jersey1/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.4;com.fasterxml.jackson.core:jackson-core:2.6.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2018-0124","vulnerabilityDetails":"In Jackson Core before version 2.8.6 if the REST endpoint consumes POST requests with JSON or XML data and data are invalid, the first unrecognized token is printed to server.log. If the first token is word of length 10MB, the whole word is printed. 
This is potentially dangerous and can be used to attack the server by filling the disk with logs.","vulnerabilityUrl":"https://issues.jboss.org/browse/JBEAP-6316","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2018-0124 (Medium) detected in multiple libraries - ## WS-2018-0124 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-core-2.7.8.jar</b>, <b>jackson-core-2.4.5.jar</b>, <b>jackson-core-2.6.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-core-2.7.8.jar</b></p></summary>
<p>Core Jackson abstractions, basic JSON streaming API implementation</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson-core">https://github.com/FasterXML/jackson-core</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-core/bundles/jackson-core-2.7.8.jar</p>
<p>
Dependency Hierarchy:
- lagom-scaladsl-api_2.11-1.3.8.jar (Root Library)
- lagom-api_2.11-1.3.8.jar
- play_2.11-2.5.13.jar
- jackson-databind-2.7.8.jar
- :x: **jackson-core-2.7.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-core-2.4.5.jar</b></p></summary>
<p>Core Jackson abstractions, basic JSON streaming API implementation</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson">https://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /samples/client/petstore/scala/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.4.5/6fb96728ee26edb19fe329d94f3bd4df1a97652a/jackson-core-2.4.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.4.5/6fb96728ee26edb19fe329d94f3bd4df1a97652a/jackson-core-2.4.5.jar</p>
<p>
Dependency Hierarchy:
- swagger-core-1.5.8.jar (Root Library)
- jackson-dataformat-yaml-2.4.5.jar
- :x: **jackson-core-2.4.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-core-2.6.4.jar</b></p></summary>
<p>Core Jackson abstractions, basic JSON streaming API implementation</p>
<p>Library home page: <a href="https://github.com/FasterXML/jackson-core">https://github.com/FasterXML/jackson-core</a></p>
<p>Path to dependency file: /samples/client/petstore/java/jersey1/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.6.4/27d3a9f7bbdcf72d93c9b2da7017e39551bfa9fb/jackson-core-2.6.4.jar,/aches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.6.4/27d3a9f7bbdcf72d93c9b2da7017e39551bfa9fb/jackson-core-2.6.4.jar</p>
<p>
Dependency Hierarchy:
- jackson-databind-2.6.4.jar (Root Library)
- :x: **jackson-core-2.6.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Jackson Core before version 2.8.6 if the REST endpoint consumes POST requests with JSON or XML data and data are invalid, the first unrecognized token is printed to server.log. If the first token is word of length 10MB, the whole word is printed. This is potentially dangerous and can be used to attack the server by filling the disk with logs.
<p>Publish Date: 2018-06-24
<p>URL: <a href=https://issues.jboss.org/browse/JBEAP-6316>WS-2018-0124</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=WS-2018-0124">https://cve.mitre.org/cgi-bin/cvename.cgi?name=WS-2018-0124</a></p>
<p>Release Date: 2018-01-24</p>
<p>Fix Resolution: 2.8.6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-core","packageVersion":"2.7.8","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"com.lightbend.lagom:lagom-scaladsl-api_2.11:1.3.8;com.lightbend.lagom:lagom-api_2.11:1.3.8;com.typesafe.play:play_2.11:2.5.13;com.fasterxml.jackson.core:jackson-databind:2.7.8;com.fasterxml.jackson.core:jackson-core:2.7.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.6","isBinary":false},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-core","packageVersion":"2.4.5","packageFilePaths":["/samples/client/petstore/scala/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.swagger:swagger-core:1.5.8;com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.4.5;com.fasterxml.jackson.core:jackson-core:2.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.6","isBinary":false},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-core","packageVersion":"2.6.4","packageFilePaths":["/samples/client/petstore/java/jersey1/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.4;com.fasterxml.jackson.core:jackson-core:2.6.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.6","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2018-0124","vulnerabilityDetails":"In Jackson Core before version 2.8.6 if the REST endpoint consumes POST requests with JSON or XML data and data are invalid, the first unrecognized token is printed to server.log. If the first token is word of length 10MB, the whole word is printed. 
This is potentially dangerous and can be used to attack the server by filling the disk with logs.","vulnerabilityUrl":"https://issues.jboss.org/browse/JBEAP-6316","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws medium detected in multiple libraries ws medium severity vulnerability vulnerable libraries jackson core jar jackson core jar jackson core jar jackson core jar core jackson abstractions basic json streaming api implementation library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson core bundles jackson core jar dependency hierarchy lagom scaladsl api jar root library lagom api jar play jar jackson databind jar x jackson core jar vulnerable library jackson core jar core jackson abstractions basic json streaming api implementation library home page a href path to dependency file samples client petstore scala build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson core jackson core jar home wss scanner gradle caches modules files com fasterxml jackson core jackson core jackson core jar dependency hierarchy swagger core jar root library jackson dataformat yaml jar x jackson core jar vulnerable library jackson core jar core jackson abstractions basic json streaming api implementation library home page a href path to dependency file samples client petstore java build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson core jackson core jar aches modules files com fasterxml jackson core jackson core jackson core jar dependency hierarchy jackson databind jar root library x jackson core jar vulnerable library found in head commit a href found in base branch master vulnerability details in jackson core before version if the rest endpoint consumes post requests with json or xml data and data are invalid the first unrecognized token is printed to server log if the first token is word of length the whole word is printed this is potentially dangerous and can be used to attack the server by filling the disk with logs publish date url a href cvss score details base score metrics exploitability metrics 
attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com lightbend lagom lagom scaladsl api com lightbend lagom lagom api com typesafe play play com fasterxml jackson core jackson databind com fasterxml jackson core jackson core isminimumfixversionavailable true minimumfixversion isbinary false packagetype java groupid com fasterxml jackson core packagename jackson core packageversion packagefilepaths istransitivedependency true dependencytree io swagger swagger core com fasterxml jackson dataformat jackson dataformat yaml com fasterxml jackson core jackson core isminimumfixversionavailable true minimumfixversion isbinary false packagetype java groupid com fasterxml jackson core packagename jackson core packageversion packagefilepaths istransitivedependency true dependencytree com fasterxml jackson core jackson databind com fasterxml jackson core jackson core isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails in jackson core before version if the rest endpoint consumes post requests with json or xml data and data are invalid the first unrecognized token is printed to server log if the first token is word of length the whole word is printed this is potentially dangerous and can be used to attack the server by filling the disk with logs vulnerabilityurl
| 0
|
160,514
| 13,791,553,169
|
IssuesEvent
|
2020-10-09 12:20:16
|
godaddy/tartufo
|
https://api.github.com/repos/godaddy/tartufo
|
closed
|
Consider using sphinx-click
|
documentation help wanted
|
## 📃 Summary
Right now, the usage shown both in the `README.md` and the `docs/configuration.rst` is copy-pasted by hand. This is error prone and can easily become outdated. Especially as #39 gets implemented, this situation will only become worse. It'd be great if we could automate this somehow.
[sphinx-click](https://pypi.org/project/sphinx-click/) claims to provide that. So, we should try it out and see if we can make it work for our needs. That would be ideal!
## Expected documentation
We will want automatically updated documentation of the usage of the primary command, as well as all sub-commands.
|
1.0
|
Consider using sphinx-click - ## 📃 Summary
Right now, the usage shown both in the `README.md` and the `docs/configuration.rst` is copy-pasted by hand. This is error prone and can easily become outdated. Especially as #39 gets implemented, this situation will only become worse. It'd be great if we could automate this somehow.
[sphinx-click](https://pypi.org/project/sphinx-click/) claims to provide that. So, we should try it out and see if we can make it work for our needs. That would be ideal!
## Expected documentation
We will want automatically updated documentation of the usage of the primary command, as well as all sub-commands.
|
non_process
|
consider using sphinx click 📃 summary right now the usage shown both in the readme md and the docs configuration rst is copy pasted by hand this is error prone and can easily become outdated especially as gets implemented this situation will only become worse it d be great if we could automate this somehow claims to provide that so we should try it out and see if we can make it work for our needs that would be ideal expected documentation we will want automatically updated documentation of the usage of the primary command as well as all sub commands
| 0
|
10,068
| 13,044,161,814
|
IssuesEvent
|
2020-07-29 03:47:26
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `TimestampLiteral` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `TimestampLiteral` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `TimestampLiteral` from TiDB -
## Description
Port the scalar function `TimestampLiteral` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function timestampliteral from tidb description port the scalar function timestampliteral from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
22,363
| 31,076,124,669
|
IssuesEvent
|
2023-08-12 14:00:51
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
ugly lists with SIMPLIFY_HTML
|
enhancement postprocessing
|
Lists are now ugly when passing `--xsltparam=SIMPLIFY_HTML:true`:

because the `display: inline` rule is activated by the selector `.ltx_item .ltx_tag + .ltx_para`, and there is no `.ltx_tag` when the output is simplified.
|
1.0
|
ugly lists with SIMPLIFY_HTML - Lists are now ugly when passing `--xsltparam=SIMPLIFY_HTML:true`:

because the `display: inline` rule is activated by the selector `.ltx_item .ltx_tag + .ltx_para`, and there is no `.ltx_tag` when the output is simplified.
|
process
|
ugly lists with simplify html lists are now ugly when passing xsltparam simplify html true because the display inline rule is activated by the selector ltx item ltx tag ltx para and there is no ltx tag when the output is simplified
| 1
|
571,697
| 17,023,348,470
|
IssuesEvent
|
2021-07-03 01:33:33
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
update potlatch presets and autocomplete
|
Component: potlatch (flash editor) Priority: major Resolution: fixed Type: enhancement
|
**[Submitted to the original trac issue database at 11.07pm, Monday, 12th January 2009]**
Please update preset and autocomplete lists
Add to presets
highway=track tracktype=(grade1/grade2/grade3/grade4/grade5)
highway=path foot=designated bicycle=designated segregated=(yes/no) surface=(paved/unpaved)
(combined footway/cycletrack)
Add to autocomplete
tracktype=(grade1/grade2/grade3/grade4/grade5)
access/bicycle/foot/horse/agricultural=designated
building=yes
barrier=(bollard/hedge/fence/wall) as way
barrier=(block/bollard/gate/kissing_gate/entrance) as point
Change in autocomplete
"bus_stop" from "highway/point" to POI (recommented in wiki)
power/tower form POI to part of power/line
Thanks in advance
|
1.0
|
update potlatch presets and autocomplete - **[Submitted to the original trac issue database at 11.07pm, Monday, 12th January 2009]**
Please update preset and autocomplete lists
Add to presets
highway=track tracktype=(grade1/grade2/grade3/grade4/grade5)
highway=path foot=designated bicycle=designated segregated=(yes/no) surface=(paved/unpaved)
(combined footway/cycletrack)
Add to autocomplete
tracktype=(grade1/grade2/grade3/grade4/grade5)
access/bicycle/foot/horse/agricultural=designated
building=yes
barrier=(bollard/hedge/fence/wall) as way
barrier=(block/bollard/gate/kissing_gate/entrance) as point
Change in autocomplete
"bus_stop" from "highway/point" to POI (recommented in wiki)
power/tower form POI to part of power/line
Thanks in advance
|
non_process
|
update potlatch presets and autocomplete please update preset and autocomplete lists add to presets highway track tracktype highway path foot designated bicycle designated segregated yes no surface paved unpaved combined footway cycletrack add to autocomplete tracktype access bicycle foot horse agricultural designated building yes barrier bollard hedge fence wall as way barrier block bollard gate kissing gate entrance as point change in autocomplete bus stop from highway point to poi recommented in wiki power tower form poi to part of power line thanks in advance
| 0
|
14,070
| 16,903,815,454
|
IssuesEvent
|
2021-06-24 03:16:30
|
theislab/scanpy
|
https://api.github.com/repos/theislab/scanpy
|
closed
|
Drop python 3.6 for 1.8
|
Development Process 🚀
|
Numpy has dropped support for python 3.6 in 1.20.0, and have recommended that packages in the pydata ecosystem adopt a similar schedule for python versions ([NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html)). Should we follow this?
Some cool stuff we get by dropping 3.6:
* [Postponed evaluation of type annotations](https://docs.python.org/3/whatsnew/3.7.html#pep-563-postponed-evaluation-of-annotations)
* [`singledispatch` can register from type annotations](https://docs.python.org/3/whatsnew/3.7.html#functools)
* [Customized access to module attributes](https://docs.python.org/3/whatsnew/3.7.html#pep-562-customization-of-access-to-module-attributes)
Also, users will be pinned to old versions of numpy if they're using 3.6, and maybe we don't want to deal with that.
|
1.0
|
Drop python 3.6 for 1.8 - Numpy has dropped support for python 3.6 in 1.20.0, and have recommended that packages in the pydata ecosystem adopt a similar schedule for python versions ([NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html)). Should we follow this?
Some cool stuff we get by dropping 3.6:
* [Postponed evaluation of type annotations](https://docs.python.org/3/whatsnew/3.7.html#pep-563-postponed-evaluation-of-annotations)
* [`singledispatch` can register from type annotations](https://docs.python.org/3/whatsnew/3.7.html#functools)
* [Customized access to module attributes](https://docs.python.org/3/whatsnew/3.7.html#pep-562-customization-of-access-to-module-attributes)
Also, users will be pinned to old versions of numpy if they're using 3.6, and maybe we don't want to deal with that.
|
process
|
drop python for numpy has dropped support for python in and have recommended that packages in the pydata ecosystem adopt a similar schedule for python versions should we follow this some cool stuff we get by dropping also users will be pinned to old versions of numpy if they re using and maybe we don t want to deal with that
| 1
|
15,023
| 18,739,050,671
|
IssuesEvent
|
2021-11-04 11:22:51
|
ethereum/EIPs
|
https://api.github.com/repos/ethereum/EIPs
|
closed
|
EIP-1: Stronger rules for discussion-url
|
type: EIP1 (Process)
|
This is sparked by [EIP-1046](https://eips.ethereum.org/EIPS/eip-1046) and [EIP-1047](https://eips.ethereum.org/EIPS/eip-1047) linking to reddit threads:
- https://www.reddit.com/r/raredigitalart/comments/8hfh1g/erc20_metadata_extension_eip_1046/
- https://www.reddit.com/r/raredigitalart/comments/8hfk2a/token_metadata_json_schema_eip_1047/
Both of those now display this:

That means their discussion urls are read-only, which defeats the purpose.
---
I do not have a clear proposal here, but only a conversation starter. Likely allowing the following makes the most sense based on past experience:
- `https://github.com/ethereum/EIPs/issues/*`
- `https://ethereum-magicians.org/*`
- `https://ethresear.ch/*`
The other currently used URLs are:
- reddit (as above)
- `t.me` (telegram)
- `https://discuss.ens.domains/t/new-standard-proposal-ens-multicoin-support/1148`
- `https://gitter.im/ethereum/topics/topic/5ac4d974109bb043328911ce/eip-969-discussion`
- `https://gitter.im/ethereum/EIPs`
- `https://gitter.im/ethereum/AllCoreDevs`
- and countless cases of `github.com/<username>/issues/<n>`
We already disallow `https://github.com/ethereum/EIPs/pull/*` (as an archaic decision), we could perhaps just disallow reddit as well?
Probably we need to ask the question here: what is the goal of the discussion url? Is it for ephemeral discussions only? Is it something which is important after something becomes Final? If so, should it be removed after it became Final?
|
1.0
|
EIP-1: Stronger rules for discussion-url - This is sparked by [EIP-1046](https://eips.ethereum.org/EIPS/eip-1046) and [EIP-1047](https://eips.ethereum.org/EIPS/eip-1047) linking to reddit threads:
- https://www.reddit.com/r/raredigitalart/comments/8hfh1g/erc20_metadata_extension_eip_1046/
- https://www.reddit.com/r/raredigitalart/comments/8hfk2a/token_metadata_json_schema_eip_1047/
Both of those now display this:

That means their discussion urls are read-only, which defeats the purpose.
---
I do not have a clear proposal here, but only a conversation starter. Likely allowing the following makes the most sense based on past experience:
- `https://github.com/ethereum/EIPs/issues/*`
- `https://ethereum-magicians.org/*`
- `https://ethresear.ch/*`
The other currently used URLs are:
- reddit (as above)
- `t.me` (telegram)
- `https://discuss.ens.domains/t/new-standard-proposal-ens-multicoin-support/1148`
- `https://gitter.im/ethereum/topics/topic/5ac4d974109bb043328911ce/eip-969-discussion`
- `https://gitter.im/ethereum/EIPs`
- `https://gitter.im/ethereum/AllCoreDevs`
- and countless cases of `github.com/<username>/issues/<n>`
We already disallow `https://github.com/ethereum/EIPs/pull/*` (as an archaic decision), we could perhaps just disallow reddit as well?
Probably we need to ask the question here: what is the goal of the discussion url? Is it for ephemeral discussions only? Is it something which is important after something becomes Final? If so, should it be removed after it became Final?
|
process
|
eip stronger rules for discussion url this is sparked by and linking to reddit threads both of those now display this that means their discussion urls are read only which defeats the purpose i do not have a clear proposal here but only a conversation starter likely allowing the following makes the most sense based on past experience the other currently used urls are reddit as above t me telegram and countless cases of github com issues we already disallow as an archaic decision we could perhaps just disallow reddit as well probably we need to ask the question here what is the goal of the discussion url is it for ephemeral discussions only is it something which is important after something becomes final if so should it be removed after it became final
| 1
|
259,056
| 8,182,956,487
|
IssuesEvent
|
2018-08-29 07:30:29
|
telstra/open-kilda
|
https://api.github.com/repos/telstra/open-kilda
|
closed
|
Wait X seconds after ISL goes UP or DOWN before we start issue reroutes
|
area/storm/event enhancement priority/1-highest
|
### Problem:
From time to time we observe some bouncing ISLs and for every event we perform attempts to evacuate active flows from these ISLs or to reroute DOWN flows to build them through newly discovered ISLs. Because of that we have a bunch of reroutes which redundantly load entire system.
### How it can be solved
We can implement a "sliding window" for all events and we will be able to process all events together after we get a "stable" network. In other words we should wait X seconds before passing such events to TE. It can be implemented as new bolt in event (wfm) topology that will collect all events and wait some time before sending it for further processing.
**Important**
We need to provide limit of window size, so window won't be too large. In other words "max_duration" is needed to prevent not sending messages further to KafkaBolt in case when we are receiving message infinitely and every time we reset time counter, so event topology might not send events at all.
### Initial diagram:

|
1.0
|
Wait X seconds after ISL goes UP or DOWN before we start issue reroutes - ### Problem:
From time to time we observe some bouncing ISLs and for every event we perform attempts to evacuate active flows from these ISLs or to reroute DOWN flows to build them through newly discovered ISLs. Because of that we have a bunch of reroutes which redundantly load entire system.
### How it can be solved
We can implement a "sliding window" for all events and we will be able to process all events together after we get a "stable" network. In other words we should wait X seconds before passing such events to TE. It can be implemented as new bolt in event (wfm) topology that will collect all events and wait some time before sending it for further processing.
**Important**
We need to provide limit of window size, so window won't be too large. In other words "max_duration" is needed to prevent not sending messages further to KafkaBolt in case when we are receiving message infinitely and every time we reset time counter, so event topology might not send events at all.
### Initial diagram:

|
non_process
|
wait x seconds after isl goes up or down before we start issue reroutes problem from time to time we observe some bouncing isls and for every event we perform attempts to evacuate active flows from these isls or to reroute down flows to build them through newly discovered isls because of that we have a bunch of reroutes which redundantly load entire system how it can be solved we can implement a sliding window for all events and we will be able to process all events together after we get a stable network in other words we should wait x seconds before passing such events to te it can be implemented as new bolt in event wfm topology that will collect all events and wait some time before sending it for further processing important we need to provide limit of window size so window won t be too large in other words max duration is needed to prevent not sending messages further to kafkabolt in case when we are receiving message infinitely and every time we reset time counter so event topology might not send events at all initial diagram
| 0
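The windowing scheme described in the record above (hold bouncing ISL events until the network is quiet for X seconds, but never longer than a `max_duration` cap) can be sketched in plain Python. Class and parameter names here are illustrative, not the actual Storm bolt API:

```python
import time


class SlidingWindowCollector:
    """Buffers bursty events and releases them only after the stream has
    been quiet for `quiet_period` seconds, or once `max_duration` seconds
    have passed since the first buffered event -- so a never-ending burst
    cannot postpone emission forever (the "max_duration" concern above).
    """

    def __init__(self, quiet_period, max_duration, clock=time.monotonic):
        self.quiet_period = quiet_period
        self.max_duration = max_duration
        self.clock = clock          # injectable for testing
        self.buffer = []
        self.first_ts = None        # time of first event in current window
        self.last_ts = None         # time of most recent event

    def add(self, event):
        now = self.clock()
        if not self.buffer:
            self.first_ts = now
        self.last_ts = now
        self.buffer.append(event)

    def flush_if_ready(self):
        """Return the buffered events when the window closes, else None."""
        if not self.buffer:
            return None
        now = self.clock()
        quiet = now - self.last_ts >= self.quiet_period
        too_long = now - self.first_ts >= self.max_duration
        if quiet or too_long:
            events, self.buffer = self.buffer, []
            return events
        return None
```

A periodic tick (e.g. a Storm tick tuple) would call `flush_if_ready()` and forward any returned batch downstream for rerouting.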
|
9,090
| 12,156,872,929
|
IssuesEvent
|
2020-04-25 19:14:37
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
opened
|
[FALSE-POSITIVE?] fullsizechevy.com
|
upstream issue whitelisting process
|
**Domains or links**
Please list any domains and links listed here which you believe are a false positive.
`fullsizechevy.com`
`www.fullsizechevy.com`
**Have you requested removal from other sources?**
Please include all relevant links to your existing removals / whitelistings.
Sent e-mail to `malwaredomains@riskanalytics.com` with request to validate and removal if possible.
**Additional context**
Add any other context about the problem here.
This is just a reminder for myself if for some reason upstream decline the removal or no answer to whitelist it here.
This site is a forum for Chevy Truck(s) enthusiasts and related repairs.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
1.0
|
[FALSE-POSITIVE?] fullsizechevy.com - **Domains or links**
Please list any domains and links listed here which you believe are a false positive.
`fullsizechevy.com`
`www.fullsizechevy.com`
**Have you requested removal from other sources?**
Please include all relevant links to your existing removals / whitelistings.
Sent e-mail to `malwaredomains@riskanalytics.com` with request to validate and removal if possible.
**Additional context**
Add any other context about the problem here.
This is just a reminder for myself if for some reason upstream decline the removal or no answer to whitelist it here.
This site is a forum for Chevy Truck(s) enthusiasts and related repairs.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
process
|
fullsizechevy com domains or links please list any domains and links listed here which you believe are a false positive fullsizechevy com have you requested removal from other sources please include all relevant links to your existing removals whitelistings sent e mail to malwaredomains riskanalytics com with request to validate and removal if possible additional context add any other context about the problem here this is just a reminder for myself if for some reason upstream decline the removal or no answer to whitelist it here this site is a forum for chevy truck s enthusiasts and related repairs exclamation we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process
| 1
|
246
| 2,667,493,305
|
IssuesEvent
|
2015-03-22 17:23:16
|
ignatov/intellij-erlang
|
https://api.github.com/repos/ignatov/intellij-erlang
|
closed
|
Inject references into spawn call parameters
|
enhancement preprocessor-support
|
When using a macro as either the Module or Function parameter of the spawn function, the plugin says that the function is unused. See the picture for an example.
Edit: issue also occurs when using apply
Edit 2: Warning occurs in Erlang compiler as well. Please close this issue.

|
1.0
|
Inject references into spawn call parameters - When using a macro as either the Module or Function parameter of the spawn function, the plugin says that the function is unused. See the picture for an example.
Edit: issue also occurs when using apply
Edit 2: Warning occurs in Erlang compiler as well. Please close this issue.

|
process
|
inject references into spawn call parameters when using a macro as either the module or function parameter of the spawn function the plugin says that the function is unused see the picture for an example edit issue also occurs when using apply edit warning occurs in erlang compiler as well please close this issue
| 1
|
19,294
| 25,466,383,010
|
IssuesEvent
|
2022-11-25 05:06:54
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[IDP] [PM] UI issue in 'Phone number' format
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
UI of Phone number format should be similar in both SB and PM
**PM:**

|
3.0
|
[IDP] [PM] UI issue in 'Phone number' format - UI of Phone number format should be similar in both SB and PM
**PM:**

|
process
|
ui issue in phone number format ui of phone number format should be similar in both sb and pm pm
| 1
|
7,250
| 10,418,413,441
|
IssuesEvent
|
2019-09-15 08:23:31
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Insecure pyyaml call to load()
|
Bug Processing
|
PyYAML calls to load() by Processing allow to execute arbitrary function calls.
Please see https://github.com/yaml/pyyaml/issues/265
**Describe the bug**
It is reported that in PyYAML before 4.1, usage of yaml.load() function on untrusted input could lead to arbitrary code execution. It is therefore recommended to use yaml.safe_load() instead. With 4.1, yaml.load() has been changed to call safe_load().
* Report: http://seclists.org/oss-sec/2018/q2/240
* Upstream change: https://github.com/yaml/pyyaml/pull/74
**QGIS and OS versions**
QGIS 3.4.7-Madeira
**Patch**
Please consider changing /usr/share/qgis/python/plugins/processing/algs/help/__init__.py , inside loadShortHelp, line 46
from:
`for k, v in yaml.load(stream).items():`
to:
`for k, v in yaml.safe_load(stream).items():`
|
1.0
|
Insecure pyyaml call to load() - PyYAML calls to load() by Processing allow to execute arbitrary function calls.
Please see https://github.com/yaml/pyyaml/issues/265
**Describe the bug**
It is reported that in PyYAML before 4.1, usage of yaml.load() function on untrusted input could lead to arbitrary code execution. It is therefore recommended to use yaml.safe_load() instead. With 4.1, yaml.load() has been changed to call safe_load().
* Report: http://seclists.org/oss-sec/2018/q2/240
* Upstream change: https://github.com/yaml/pyyaml/pull/74
**QGIS and OS versions**
QGIS 3.4.7-Madeira
**Patch**
Please consider changing /usr/share/qgis/python/plugins/processing/algs/help/__init__.py , inside loadShortHelp, line 46
from:
`for k, v in yaml.load(stream).items():`
to:
`for k, v in yaml.safe_load(stream).items():`
|
process
|
insecure pyyaml call to load pyyaml calls to load by processing allow to execute arbitrary function calls please see describe the bug it is reported that in pyyaml before usage of yaml load function on untrusted input could lead to arbitrary code execution it is therefore recommended to use yaml safe load instead with yaml load has been changed to call safe load report upstream change qgis and os versions qgis madeira patch please consider changeing usr share qgis python plugins processing algs help init py inside loadshorthelp line from for k v in yaml load stream items to for k v in yaml safe load stream items
| 1
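The patch suggested in the record above can be sketched as follows. It requires PyYAML (third-party); `load_short_help` is a hypothetical wrapper mirroring the `loadShortHelp` call site, not the actual QGIS function:

```python
# yaml.safe_load() restricts the parser to plain YAML types (dicts, lists,
# strings, numbers), whereas yaml.load() with the default Loader in
# PyYAML < 4.1 could instantiate arbitrary Python objects via
# !!python/object tags -- the code-execution risk described above.
import yaml


def load_short_help(stream):
    """Parse key/value help text from an untrusted YAML stream safely."""
    return {k: v for k, v in yaml.safe_load(stream).items()}
```

With `safe_load`, a malicious document carrying `!!python/object` tags is rejected with a YAML error instead of being executed.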
|
8,822
| 11,937,619,477
|
IssuesEvent
|
2020-04-02 12:29:40
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Schemas with incomplete @relation annotations should be rejected during Client generation
|
process/candidate
|
## Bug description
The following schema is considered valid, when in fact it isn't.
## How to reproduce
Try this schema:
```
model User {
id Int @id
posts Post[]
}
model Post {
id Int @id
author User @relation(references: [id])
}
```
This leads to issues when you run a query against this schema. According to people on Slack, this is an invalid schema since the `@relation` annotation does not have a `field` attribute, and there is no scalar field in `Post` to define the `author` relation on.
## Expected behavior
`prisma generate` (& the VSCode plugin) should reject this schema, saying that the `@relation` annotation is incorrectly defined: it needs a `field` attribute.
## Environment & setup
- OS: macOS
- Database: PostgreSQL
- Prisma version:
```
@prisma/cli : 2.0.0-alpha.1024
Current platform : darwin
Query Engine : prisma 15b42f2c499c65576f47fa77108e39d86645242b
Migration Engine : migration-engine-cli 15b42f2c499c65576f47fa77108e39d86645242b
Introspection Engine : introspection-core 15b42f2c499c65576f47fa77108e39d86645242b
```
- Node.js version: 13.10.1
|
1.0
|
Schemas with incomplete @relation annotations should be rejected during Client generation - ## Bug description
The following schema is considered valid, when in fact it isn't.
## How to reproduce
Try this schema:
```
model User {
id Int @id
posts Post[]
}
model Post {
id Int @id
author User @relation(references: [id])
}
```
This leads to issues when you run a query against this schema. According to people on Slack, this is an invalid schema since the `@relation` annotation does not have a `field` attribute, and there is no scalar field in `Post` to define the `author` relation on.
## Expected behavior
`prisma generate` (& the VSCode plugin) should reject this schema, saying that the `@relation` annotation is incorrectly defined: it needs a `field` attribute.
## Environment & setup
- OS: macOS
- Database: PostgreSQL
- Prisma version:
```
@prisma/cli : 2.0.0-alpha.1024
Current platform : darwin
Query Engine : prisma 15b42f2c499c65576f47fa77108e39d86645242b
Migration Engine : migration-engine-cli 15b42f2c499c65576f47fa77108e39d86645242b
Introspection Engine : introspection-core 15b42f2c499c65576f47fa77108e39d86645242b
```
- Node.js version: 13.10.1
|
process
|
schemas with incomplete relation annotations should be rejected during client generation bug description the following schema is considered valid when in fact it isn t how to reproduce try this schema model user id int id posts post model post id int id author user relation references this leads to issues when you run a query against this schema according to people on slack this is an invalid schema since the relation annotation does not have a field attribute and there is no scalar field in post to define the author relation on expected behavior prisma generate the vscode plugin should reject this schema saying that the relation annotation is incorrectly defined it needs a field attribute environment setup os macos database postgresql prisma version prisma cli alpha current platform darwin query engine prisma migration engine migration engine cli introspection engine introspection core node js version
| 1
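For reference, a version of the schema from the record above that supplies the missing scalar field might look like this (the `authorId` field name is illustrative, and this assumes the `fields`/`references` arguments of `@relation` in Prisma 2):

```prisma
model User {
  id    Int    @id
  posts Post[]
}

model Post {
  id       Int  @id
  authorId Int
  author   User @relation(fields: [authorId], references: [id])
}
```

Here `authorId` is the scalar column backing the `author` relation, which is what the incomplete `@relation(references: [id])` annotation leaves undefined.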
|
49,510
| 13,187,223,463
|
IssuesEvent
|
2020-08-13 02:44:24
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
[DOMLauncher] tests gone wild! (Trac #1563)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1563">https://code.icecube.wisc.edu/ticket/1563</a>, reported by nega and owned by cweaver</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-28T16:27:59",
"description": "see #1561 and #1562\n\n{{{\n21246 ? Rl 26420:41 python /home/nega/i3/combo/src/DOMLauncher/resources/test/LC-logicTest.py\n}}}\n\n{{{\n(gdb) bt\n#0 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x3f57140, n=55) at fileops.c:1251\n#2 0x00007f1f4853d39f in new_do_write (to_do=55, data=0x3f57140 \"\\n *** Break *** write on a pipe with no one to read it\\n\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=<optimized out>, n=55) at fileops.c:1330\n#4 0x00007f1f48532488 in __GI__IO_fputs (str=0x3f57140 \"\\n *** Break *** write on a pipe with no one to read it\\n\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofputs.c:40\n#5 0x00007f1f43c3a436 in DebugPrint(char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#6 0x00007f1f43c3ad04 in DefaultErrorHandler(int, bool, char const*, char const*) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#7 0x00007f1f43c3a66a in ErrorHandler () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#8 0x00007f1f43c3a97f in Break(char const*, char const*, ...) 
() from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#9 0x00007f1f43cc9e2f in TUnixSystem::DispatchSignals(ESignals) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#10 <signal handler called>\n#11 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#12 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x7f1f48c694dc, n=1) at fileops.c:1251\n#13 0x00007f1f4853d39f in new_do_write (to_do=1, data=0x7f1f48c694dc \".\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#14 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=<optimized out>, n=1) at fileops.c:1330\n#15 0x00007f1f48532b69 in __GI__IO_fwrite (buf=0x7f1f48c694dc, size=size@entry=1, count=1, fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofwrite.c:43\n#16 0x0000000000551c02 in file_write.lto_priv () at ../Objects/fileobject.c:1852\n#17 0x00000000004ccd05 in call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4035\n#18 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#19 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#20 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#21 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#22 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#23 function_call.lto_priv () at ../Objects/funcobject.c:526\n#24 0x00000000004cf239 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#25 ext_do_call (nk=<optimized out>, na=<optimized out>, flags=<optimized out>, pp_stack=<optimized out>, 
func=<optimized out>) at ../Python/ceval.c:4348\n#26 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#27 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#28 function_call.lto_priv () at ../Objects/funcobject.c:526\n#29 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#30 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#31 0x0000000000573bfd in PyObject_Call (kw=0x0, arg=\n (<TextTestResult(_original_stdout=<file at remote 0x7f1f48c9c150>, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=<file at remote 0x7f1f48c9c1e0>) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=<file at remote 0x7f1f48c9c1e0>, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=<type at remote 0x10a0dc0>, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=<instancemethod at remote 0x7f1f3bba9190>) at ../Objects/abstract.c:2529\n#32 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#33 0x00000000004cd9ab in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#34 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#35 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#36 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#37 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, 
kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#38 function_call.lto_priv () at ../Objects/funcobject.c:526\n#39 0x00000000004cf239 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#40 ext_do_call (nk=<optimized out>, na=<optimized out>, flags=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4348\n#41 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#42 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#43 function_call.lto_priv () at ../Objects/funcobject.c:526\n#44 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#45 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#46 0x0000000000573bfd in PyObject_Call (kw=0x0, \n arg=(<TextTestResult(_original_stdout=<file at remote 0x7f1f48c9c150>, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=<file at remote 0x7f1f48c9c1e0>) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=<file at remote 0x7f1f48c9c1e0>, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=<type at remote 0x10a0dc0>, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=<instancemethod at remote 0x7f1f3bba9230>) at ../Objects/abstract.c:2529\n#47 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#48 0x00000000004cd9ab in PyObject_Call 
(kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#49 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#50 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#51 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#52 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#53 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#54 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#55 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#56 function_call.lto_priv () at ../Objects/funcobject.c:526\n#57 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#58 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#59 0x00000000004d437b in PyObject_Call (kw=<optimized out>, arg=(<I3Frame at remote 0x7f1f3954c398>,), func=<instancemethod at remote 0x7f1f3bba9140>) at ../Objects/abstract.c:2529\n#60 PyEval_CallObjectWithKeywords () at ../Python/ceval.c:3904\n#61 0x0000000000495b80 in PyEval_CallFunction (obj=<instancemethod at remote 0x7f1f3bba9140>, format=<optimized out>) at ../Python/modsupport.c:557\n#62 0x00007f1f46b9bcd0 in boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_any_cast> >::clone_impl (this=0x6a8711e3b3cd6900, x=..., __in_chrg=<optimized out>, __vtt_parm=<optimized out>)\n at /usr/include/boost/exception/exception.hpp:446\n#63 0x00007f1f46b99877 in 
std::_Deque_base<boost::shared_ptr<I3Frame>, std::allocator<boost::shared_ptr<I3Frame> > >::_M_destroy_nodes (this=0x7ffd571daa60, __nstart=0x1730df0, __nfinish=0x7ffd571daaa0)\n at /usr/include/c++/4.9/bits/stl_deque.h:647\n#64 0x00007f1f46c0187b in PythonModule<I3Module>::Physics (this=0x6a8711e3b3cd6900, frame=...) at ../../src/icetray/private/icetray/PythonModule.cxx:249\n#65 0x00007f1f46b8bcdf in boost::python::objects::make_ptr_instance<I3Context, boost::python::objects::pointer_holder<I3Context*, I3Context> >::get_class_object_impl<I3Context> (p=0x7ffd571daa80)\n at /usr/include/boost/python/object/make_ptr_instance.hpp:51\n#66 0x00007ffd571dab80 in ?? ()\n#67 0x0000000001730de8 in ?? ()\n#68 0x0000000001730f80 in ?? ()\n#69 0x0000000001730da0 in ?? ()\n#70 0x00007ffd571daad0 in ?? ()\n#71 0x6a8711e3b3cd6900 in ?? ()\n#72 0x00007ffd571dab10 in ?? ()\n#73 0x0000000001401000 in ?? ()\n#74 0x00007ffd571dace0 in ?? ()\n#75 0x00007f1f46b8537b in boost::function1<boost::shared_ptr<I3ServiceFactory>, I3Context const&>::function1 (this=0xd3ffd78948c68948, f=...) at /usr/include/boost/function/function_template.hpp:749\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n}}}",
"reporter": "nega",
"cc": "sflis",
"resolution": "fixed",
"_ts": "1461860879759677",
"component": "combo simulation",
"summary": "[DOMLauncher] tests gone wild!",
"priority": "normal",
"keywords": "domlauncher, tests, SIGPIPE, signal-handler, root",
"time": "2016-02-23T05:00:46",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[DOMLauncher] tests gone wild! (Trac #1563) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1563">https://code.icecube.wisc.edu/ticket/1563</a>, reported by nega and owned by cweaver</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-28T16:27:59",
"description": "see #1561 and #1562\n\n{{{\n21246 ? Rl 26420:41 python /home/nega/i3/combo/src/DOMLauncher/resources/test/LC-logicTest.py\n}}}\n\n{{{\n(gdb) bt\n#0 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x3f57140, n=55) at fileops.c:1251\n#2 0x00007f1f4853d39f in new_do_write (to_do=55, data=0x3f57140 \"\\n *** Break *** write on a pipe with no one to read it\\n\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=<optimized out>, n=55) at fileops.c:1330\n#4 0x00007f1f48532488 in __GI__IO_fputs (str=0x3f57140 \"\\n *** Break *** write on a pipe with no one to read it\\n\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofputs.c:40\n#5 0x00007f1f43c3a436 in DebugPrint(char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#6 0x00007f1f43c3ad04 in DefaultErrorHandler(int, bool, char const*, char const*) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#7 0x00007f1f43c3a66a in ErrorHandler () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#8 0x00007f1f43c3a97f in Break(char const*, char const*, ...) 
() from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#9 0x00007f1f43cc9e2f in TUnixSystem::DispatchSignals(ESignals) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#10 <signal handler called>\n#11 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#12 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x7f1f48c694dc, n=1) at fileops.c:1251\n#13 0x00007f1f4853d39f in new_do_write (to_do=1, data=0x7f1f48c694dc \".\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#14 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=<optimized out>, n=1) at fileops.c:1330\n#15 0x00007f1f48532b69 in __GI__IO_fwrite (buf=0x7f1f48c694dc, size=size@entry=1, count=1, fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofwrite.c:43\n#16 0x0000000000551c02 in file_write.lto_priv () at ../Objects/fileobject.c:1852\n#17 0x00000000004ccd05 in call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4035\n#18 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#19 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#20 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#21 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#22 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#23 function_call.lto_priv () at ../Objects/funcobject.c:526\n#24 0x00000000004cf239 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#25 ext_do_call (nk=<optimized out>, na=<optimized out>, flags=<optimized out>, pp_stack=<optimized out>, 
func=<optimized out>) at ../Python/ceval.c:4348\n#26 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#27 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#28 function_call.lto_priv () at ../Objects/funcobject.c:526\n#29 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#30 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#31 0x0000000000573bfd in PyObject_Call (kw=0x0, arg=\n (<TextTestResult(_original_stdout=<file at remote 0x7f1f48c9c150>, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=<file at remote 0x7f1f48c9c1e0>) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=<file at remote 0x7f1f48c9c1e0>, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=<type at remote 0x10a0dc0>, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=<instancemethod at remote 0x7f1f3bba9190>) at ../Objects/abstract.c:2529\n#32 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#33 0x00000000004cd9ab in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#34 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#35 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#36 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#37 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, 
kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#38 function_call.lto_priv () at ../Objects/funcobject.c:526\n#39 0x00000000004cf239 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#40 ext_do_call (nk=<optimized out>, na=<optimized out>, flags=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4348\n#41 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#42 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#43 function_call.lto_priv () at ../Objects/funcobject.c:526\n#44 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#45 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#46 0x0000000000573bfd in PyObject_Call (kw=0x0, \n arg=(<TextTestResult(_original_stdout=<file at remote 0x7f1f48c9c150>, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=<file at remote 0x7f1f48c9c1e0>) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=<file at remote 0x7f1f48c9c1e0>, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=<type at remote 0x10a0dc0>, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=<instancemethod at remote 0x7f1f3bba9230>) at ../Objects/abstract.c:2529\n#47 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#48 0x00000000004cd9ab in PyObject_Call 
(kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#49 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#50 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#51 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#52 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#53 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#54 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#55 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#56 function_call.lto_priv () at ../Objects/funcobject.c:526\n#57 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#58 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#59 0x00000000004d437b in PyObject_Call (kw=<optimized out>, arg=(<I3Frame at remote 0x7f1f3954c398>,), func=<instancemethod at remote 0x7f1f3bba9140>) at ../Objects/abstract.c:2529\n#60 PyEval_CallObjectWithKeywords () at ../Python/ceval.c:3904\n#61 0x0000000000495b80 in PyEval_CallFunction (obj=<instancemethod at remote 0x7f1f3bba9140>, format=<optimized out>) at ../Python/modsupport.c:557\n#62 0x00007f1f46b9bcd0 in boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_any_cast> >::clone_impl (this=0x6a8711e3b3cd6900, x=..., __in_chrg=<optimized out>, __vtt_parm=<optimized out>)\n at /usr/include/boost/exception/exception.hpp:446\n#63 0x00007f1f46b99877 in 
std::_Deque_base<boost::shared_ptr<I3Frame>, std::allocator<boost::shared_ptr<I3Frame> > >::_M_destroy_nodes (this=0x7ffd571daa60, __nstart=0x1730df0, __nfinish=0x7ffd571daaa0)\n at /usr/include/c++/4.9/bits/stl_deque.h:647\n#64 0x00007f1f46c0187b in PythonModule<I3Module>::Physics (this=0x6a8711e3b3cd6900, frame=...) at ../../src/icetray/private/icetray/PythonModule.cxx:249\n#65 0x00007f1f46b8bcdf in boost::python::objects::make_ptr_instance<I3Context, boost::python::objects::pointer_holder<I3Context*, I3Context> >::get_class_object_impl<I3Context> (p=0x7ffd571daa80)\n at /usr/include/boost/python/object/make_ptr_instance.hpp:51\n#66 0x00007ffd571dab80 in ?? ()\n#67 0x0000000001730de8 in ?? ()\n#68 0x0000000001730f80 in ?? ()\n#69 0x0000000001730da0 in ?? ()\n#70 0x00007ffd571daad0 in ?? ()\n#71 0x6a8711e3b3cd6900 in ?? ()\n#72 0x00007ffd571dab10 in ?? ()\n#73 0x0000000001401000 in ?? ()\n#74 0x00007ffd571dace0 in ?? ()\n#75 0x00007f1f46b8537b in boost::function1<boost::shared_ptr<I3ServiceFactory>, I3Context const&>::function1 (this=0xd3ffd78948c68948, f=...) at /usr/include/boost/function/function_template.hpp:749\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n}}}",
"reporter": "nega",
"cc": "sflis",
"resolution": "fixed",
"_ts": "1461860879759677",
"component": "combo simulation",
"summary": "[DOMLauncher] tests gone wild!",
"priority": "normal",
"keywords": "domlauncher, tests, SIGPIPE, signal-handler, root",
"time": "2016-02-23T05:00:46",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
tests gone wild trac migrated from json status closed changetime description see and n n rl python home nega combo src domlauncher resources test lc logictest py n n n n gdb bt n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data n break write on a pipe with no one to read it n fp at fileops c n io new file xsputn f data n at fileops c n in gi io fputs str n break write on a pipe with no one to read it n fp at iofputs c n in debugprint char const from home nega ports root lib libcore so n in defaulterrorhandler int bool char const char const from home nega ports root lib libcore so n in errorhandler from home nega ports root lib libcore so n in break char const char const from home nega ports root lib libcore so n in tunixsystem dispatchsignals esignals from home nega ports root lib libcore so n n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data fp at fileops c n io new file xsputn f data n at fileops c n in gi io fwrite buf size size entry count fp at iofwrite c n in file write lto priv at objects fileobject c n in call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in fast function nk na n pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n ext do call nk na flags pp stack func at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw 
arg n dots true skipped mirroroutput false stream at remote testsrun buffer false original stderr showall false stdout buffer none stderr buffer none modulesetupfailed false expectedfailures errors descriptions true previoustestclass unexpectedsuccesses failures testrunentered true shouldstop false failfast false at remote func at objects abstract c n slot tp call lto priv at objects typeobject c n in pyobject call kw arg func at objects abstract c n do call nk na pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n ext do call nk na flags pp stack func at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw n arg dots true skipped mirroroutput false stream at remote testsrun buffer false original stderr showall false stdout buffer none stderr buffer none modulesetupfailed false expectedfailures errors descriptions true previoustestclass unexpectedsuccesses failures testrunentered true shouldstop false failfast false at remote func at objects abstract c n slot tp call lto priv at objects typeobject c n in pyobject call kw arg func at objects abstract c n do call nk na pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in fast function nk na n pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals 
n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw arg func at objects abstract c n pyeval callobjectwithkeywords at python ceval c n in pyeval callfunction obj format at python modsupport c n in boost exception detail clone impl clone impl this x in chrg vtt parm n at usr include boost exception exception hpp n in std deque base std allocator m destroy nodes this nstart nfinish n at usr include c bits stl deque h n in pythonmodule physics this frame at src icetray private icetray pythonmodule cxx n in boost python objects make ptr instance get class object impl p n at usr include boost python object make ptr instance hpp n in n in n in n in n in n in n in n in n in n in boost const this f at usr include boost function function template hpp nbacktrace stopped previous frame inner to this frame corrupt stack n reporter nega cc sflis resolution fixed ts component combo simulation summary tests gone wild priority normal keywords domlauncher tests sigpipe signal handler root time milestone owner cweaver type defect
| 0
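The truncated backtrace in the record above stems from a write on a pipe whose reader has gone away ("Broken pipe: write on a pipe with no one to read it"), which ROOT's signal handler then escalates into a crash dump. As a hedged illustration only — this is not the DOMLauncher code itself — plain CPython ignores SIGPIPE at interpreter startup, so the same underlying condition surfaces as a catchable `BrokenPipeError` instead of a fatal signal:

```python
import errno
import os

def write_to_closed_pipe() -> int:
    """Write to a pipe after closing its read end and report the errno.

    CPython installs SIG_IGN for SIGPIPE at startup, so the failed write
    returns EPIPE instead of killing the process with a signal.
    """
    read_fd, write_fd = os.pipe()
    os.close(read_fd)  # nobody left to read: the pipe is now broken
    try:
        os.write(write_fd, b"x")
    except BrokenPipeError as exc:
        return exc.errno
    finally:
        os.close(write_fd)
    return 0

print(write_to_closed_pipe())  # errno.EPIPE (32 on Linux)
```

A framework that re-installs its own SIGPIPE handler (as ROOT does) changes this behavior, which is what turns a routine broken-pipe write into the wall of stack frames captured in the record.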
|
3,164
| 6,221,455,219
|
IssuesEvent
|
2017-07-10 05:48:46
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
opened
|
Closing a host document causes Parse Error
|
bug critical parse-tree-processing vbe-events
|
There would seem to be a caching issue with host documents that have been closed.
Steps to reproduce:
1. Start Excel
2. Ensure a default workbook (like "Book1") is open
3. Open VBE and allow RD to load and parse
4. Close the workbook
5. Refresh RD
6. Experience Parse Error
```text
2017-07-10 15:11:31.1559;ERROR-2.0.13.32288;Rubberduck.Parsing.VBA.ParseCoordinator;Unexpected exception thrown in parsing run. (thread 28).;System.Runtime.InteropServices.COMException (0x80010105): The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT))
at Microsoft.Vbe.Interop._VBProject.get_Protection()
at Rubberduck.VBEditor.SafeComWrappers.VBA.VBProject.get_Protection() in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.VBEEditor\SafeComWrappers\VBA\VBProject.cs:line 68
at Rubberduck.Navigation.CodeExplorer.CodeExplorerProjectViewModel..ctor(FolderHelper folderHelper, Declaration declaration, IEnumerable`1 declarations) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\RetailCoder.VBE\Navigation\CodeExplorer\CodeExplorerProjectViewModel.cs:line 39
at Rubberduck.Navigation.CodeExplorer.CodeExplorerViewModel.<ParserState_StateChanged>b__46_3(IGrouping`2 grouping) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\RetailCoder.VBE\Navigation\CodeExplorer\CodeExplorerViewModel.cs:line 272
at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at Rubberduck.Navigation.CodeExplorer.CodeExplorerViewModel.ParserState_StateChanged(Object sender, ParserStateEventArgs e) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\RetailCoder.VBE\Navigation\CodeExplorer\CodeExplorerViewModel.cs:line 271
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at Rubberduck.Parsing.VBA.RubberduckParserState.OnStateChanged(Object requestor, ParserState state) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\RubberduckParserState.cs:line 310
at Rubberduck.Parsing.VBA.ParseCoordinator.ExecuteCommonParseActivities(ICollection`1 toParse, CancellationToken token) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\ParseCoordinator.cs:line 201
at Rubberduck.Parsing.VBA.ParseCoordinator.ParseAllInternal(Object requestor, CancellationToken token) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\ParseCoordinator.cs:line 749
at Rubberduck.Parsing.VBA.ParseCoordinator.ParseAll(Object requestor, CancellationToken token) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\ParseCoordinator.cs:line 682
2017-07-10 15:11:31.1559;DEBUG-2.0.13.32288;Rubberduck.Parsing.VBA.RubberduckParserState;RubberduckParserState raised StateChanged (Error);
2017-07-10 15:11:31.3130;DEBUG-2.0.13.32288;Rubberduck.Parsing.VBA.ParseCoordinator;Parsing run finished after 1s. (thread 28).;
```
The `get_Protection` error is merely a symptom of the caching, but not the actual cause of the error. Accessing a property of a non-existent project will tend to throw.
|
1.0
|
Closing a host document causes Parse Error - There would seem to be a caching issue with host documents that have been closed.
Steps to reproduce:
1. Start Excel
2. Ensure a default workbook (like "Book1") is open
3. Open VBE and allow RD to load and parse
4. Close the workbook
5. Refresh RD
6. Experience Parse Error
```text
2017-07-10 15:11:31.1559;ERROR-2.0.13.32288;Rubberduck.Parsing.VBA.ParseCoordinator;Unexpected exception thrown in parsing run. (thread 28).;System.Runtime.InteropServices.COMException (0x80010105): The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT))
at Microsoft.Vbe.Interop._VBProject.get_Protection()
at Rubberduck.VBEditor.SafeComWrappers.VBA.VBProject.get_Protection() in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.VBEEditor\SafeComWrappers\VBA\VBProject.cs:line 68
at Rubberduck.Navigation.CodeExplorer.CodeExplorerProjectViewModel..ctor(FolderHelper folderHelper, Declaration declaration, IEnumerable`1 declarations) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\RetailCoder.VBE\Navigation\CodeExplorer\CodeExplorerProjectViewModel.cs:line 39
at Rubberduck.Navigation.CodeExplorer.CodeExplorerViewModel.<ParserState_StateChanged>b__46_3(IGrouping`2 grouping) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\RetailCoder.VBE\Navigation\CodeExplorer\CodeExplorerViewModel.cs:line 272
at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at Rubberduck.Navigation.CodeExplorer.CodeExplorerViewModel.ParserState_StateChanged(Object sender, ParserStateEventArgs e) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\RetailCoder.VBE\Navigation\CodeExplorer\CodeExplorerViewModel.cs:line 271
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at Rubberduck.Parsing.VBA.RubberduckParserState.OnStateChanged(Object requestor, ParserState state) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\RubberduckParserState.cs:line 310
at Rubberduck.Parsing.VBA.ParseCoordinator.ExecuteCommonParseActivities(ICollection`1 toParse, CancellationToken token) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\ParseCoordinator.cs:line 201
at Rubberduck.Parsing.VBA.ParseCoordinator.ParseAllInternal(Object requestor, CancellationToken token) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\ParseCoordinator.cs:line 749
at Rubberduck.Parsing.VBA.ParseCoordinator.ParseAll(Object requestor, CancellationToken token) in C:\Users\Mathieu\Documents\GitHub\Rubberduck (main)\Rubberduck\Rubberduck.Parsing\VBA\ParseCoordinator.cs:line 682
2017-07-10 15:11:31.1559;DEBUG-2.0.13.32288;Rubberduck.Parsing.VBA.RubberduckParserState;RubberduckParserState raised StateChanged (Error);
2017-07-10 15:11:31.3130;DEBUG-2.0.13.32288;Rubberduck.Parsing.VBA.ParseCoordinator;Parsing run finished after 1s. (thread 28).;
```
The `get_Protection` error is merely a symptom of the caching, but not the actual cause of the error. Accessing a property of a non-existent project will tend to throw.
|
process
|
closing a host document causes parse error there would seem to be a caching issue with host documents that have been closed steps to reproduce start excel ensure a default workbook like is open open vbe and allow rd to load and parse close the workbook refresh rd experience parse error text error rubberduck parsing vba parsecoordinator unexpected exception thrown in parsing run thread system runtime interopservices comexception the server threw an exception exception from hresult rpc e serverfault at microsoft vbe interop vbproject get protection at rubberduck vbeditor safecomwrappers vba vbproject get protection in c users mathieu documents github rubberduck main rubberduck rubberduck vbeeditor safecomwrappers vba vbproject cs line at rubberduck navigation codeexplorer codeexplorerprojectviewmodel ctor folderhelper folderhelper declaration declaration ienumerable declarations in c users mathieu documents github rubberduck main rubberduck retailcoder vbe navigation codeexplorer codeexplorerprojectviewmodel cs line at rubberduck navigation codeexplorer codeexplorerviewmodel b igrouping grouping in c users mathieu documents github rubberduck main rubberduck retailcoder vbe navigation codeexplorer codeexplorerviewmodel cs line at system linq enumerable whereselectlistiterator movenext at system collections generic list ctor ienumerable collection at system linq enumerable tolist ienumerable source at rubberduck navigation codeexplorer codeexplorerviewmodel parserstate statechanged object sender parserstateeventargs e in c users mathieu documents github rubberduck main rubberduck retailcoder vbe navigation codeexplorer codeexplorerviewmodel cs line at system eventhandler invoke object sender teventargs e at rubberduck parsing vba rubberduckparserstate onstatechanged object requestor parserstate state in c users mathieu documents github rubberduck main rubberduck rubberduck parsing vba rubberduckparserstate cs line at rubberduck parsing vba parsecoordinator 
executecommonparseactivities icollection toparse cancellationtoken token in c users mathieu documents github rubberduck main rubberduck rubberduck parsing vba parsecoordinator cs line at rubberduck parsing vba parsecoordinator parseallinternal object requestor cancellationtoken token in c users mathieu documents github rubberduck main rubberduck rubberduck parsing vba parsecoordinator cs line at rubberduck parsing vba parsecoordinator parseall object requestor cancellationtoken token in c users mathieu documents github rubberduck main rubberduck rubberduck parsing vba parsecoordinator cs line debug rubberduck parsing vba rubberduckparserstate rubberduckparserstate raised statechanged error debug rubberduck parsing vba parsecoordinator parsing run finished after thread the get protection error is merely a symptom of the caching but not the actual cause of the error accessing a property of a non existent project will tend to throw
| 1
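The Rubberduck report above boils down to a stale cache: a wrapper for a project whose host document was closed is kept around, and touching any of its properties (here `get_Protection`) throws `RPC_E_SERVERFAULT`. As a language-neutral sketch — Python here purely for illustration; the real fix lives in Rubberduck's C# parser state, and all names below are hypothetical — one way to avoid the symptom is to evict dead handles before anyone dereferences them:

```python
class Project:
    """Stand-in for a COM project wrapper; `alive` flips when the host closes it."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.alive = True

    @property
    def protection(self) -> str:
        if not self.alive:
            # Mirrors the RPC_E_SERVERFAULT seen when the real host document is gone.
            raise RuntimeError("project no longer exists")
        return "unprotected"


class ProjectCache:
    def __init__(self) -> None:
        self._projects: dict[str, Project] = {}

    def add(self, project: Project) -> None:
        self._projects[project.name] = project

    def live_projects(self) -> list[Project]:
        # Drop entries whose underlying document has been closed *before*
        # handing them out, instead of letting a later property access throw.
        self._projects = {n: p for n, p in self._projects.items() if p.alive}
        return list(self._projects.values())


cache = ProjectCache()
book1, book2 = Project("Book1"), Project("Book2")
cache.add(book1)
cache.add(book2)
book1.alive = False  # user closes the workbook
assert [p.name for p in cache.live_projects()] == ["Book2"]
```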
|
14,827
| 18,167,860,112
|
IssuesEvent
|
2021-09-27 16:23:29
|
googleapis/python-storage
|
https://api.github.com/repos/googleapis/python-storage
|
opened
|
'test_new_bucket_created_w_unspecified_pap' systest flakes
|
type: process flaky
|
From [this Kokoro failure](https://source.cloud.google.com/results/invocations/7ace8da7-f27c-4364-b826-21df53a9c3d3/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fsystem-3.8/log):
```python
__________________ test_new_bucket_created_w_unspecified_pap ___________________
storage_client = <google.cloud.storage.client.Client object at 0x7fca9111bc10>
buckets_to_delete = [<Bucket: new-w-pap-unspecified-1632759084261>]
blobs_to_delete = []
def test_new_bucket_created_w_unspecified_pap(
storage_client, buckets_to_delete, blobs_to_delete,
):
from google.cloud.storage import constants
bucket_name = _helpers.unique_name("new-w-pap-unspecified")
bucket = storage_client.bucket(bucket_name)
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.create()
buckets_to_delete.append(bucket)
> assert (
bucket.iam_configuration.public_access_prevention
== constants.PUBLIC_ACCESS_PREVENTION_UNSPECIFIED
)
E AssertionError: assert 'inherited' == 'unspecified'
E - unspecified
E + inherited
tests/system/test_bucket.py:820: AssertionError
```
|
1.0
|
'test_new_bucket_created_w_unspecified_pap' systest flakes - From [this Kokoro failure](https://source.cloud.google.com/results/invocations/7ace8da7-f27c-4364-b826-21df53a9c3d3/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fsystem-3.8/log):
```python
__________________ test_new_bucket_created_w_unspecified_pap ___________________
storage_client = <google.cloud.storage.client.Client object at 0x7fca9111bc10>
buckets_to_delete = [<Bucket: new-w-pap-unspecified-1632759084261>]
blobs_to_delete = []
def test_new_bucket_created_w_unspecified_pap(
storage_client, buckets_to_delete, blobs_to_delete,
):
from google.cloud.storage import constants
bucket_name = _helpers.unique_name("new-w-pap-unspecified")
bucket = storage_client.bucket(bucket_name)
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.create()
buckets_to_delete.append(bucket)
> assert (
bucket.iam_configuration.public_access_prevention
== constants.PUBLIC_ACCESS_PREVENTION_UNSPECIFIED
)
E AssertionError: assert 'inherited' == 'unspecified'
E - unspecified
E + inherited
tests/system/test_bucket.py:820: AssertionError
```
|
process
|
test new bucket created w unspecified pap systest flakes from python test new bucket created w unspecified pap storage client buckets to delete blobs to delete def test new bucket created w unspecified pap storage client buckets to delete blobs to delete from google cloud storage import constants bucket name helpers unique name new w pap unspecified bucket storage client bucket bucket name bucket iam configuration uniform bucket level access enabled true bucket create buckets to delete append bucket assert bucket iam configuration public access prevention constants public access prevention unspecified e assertionerror assert inherited unspecified e unspecified e inherited tests system test bucket py assertionerror
| 1
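The flake above is a server-side rename: newer buckets report the public access prevention setting as `inherited` where the test still expects `unspecified`, and the GCS API treats the two spellings as equivalent. A hedged sketch of the usual test-side fix — the names here are illustrative, not the client library's — is to assert against the set of equivalent values rather than one literal:

```python
# The API treats these two spellings of the default as equivalent.
PAP_DEFAULT_ALIASES = frozenset({"unspecified", "inherited"})

def is_default_pap(value: str) -> bool:
    """True if `value` is one of the equivalent defaults for public access prevention."""
    return value in PAP_DEFAULT_ALIASES

assert is_default_pap("inherited")
assert is_default_pap("unspecified")
assert not is_default_pap("enforced")
```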
|
18,657
| 24,581,407,738
|
IssuesEvent
|
2022-10-13 15:53:09
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[FHIR] Questionnaire resources > Start date and end date are not getting displayed
|
Bug P1 Response datastore Process: Fixed Process: Tested dev
|
Steps:
1. Log in to SB
2. Create a study
3. Add a questionnaire with Regular > Onetime > Custom start date and end date
4. Launch the study
5. Go to the google cloud console
6. Go to Questionnaire resource of FHIR datastore and observe
AR: Questionnaire resources > Start date and end date are not getting displayed
ER: Start date and end date should get displayed

|
2.0
|
[FHIR] Questionnaire resources > Start date and end date are not getting displayed - Steps:
1. Log in to SB
2. Create a study
3. Add a questionnaire with Regular > Onetime > Custom start date and end date
4. Launch the study
5. Go to the google cloud console
6. Go to Questionnaire resource of FHIR datastore and observe
AR: Questionnaire resources > Start date and end date are not getting displayed
ER: Start date and end date should get displayed

|
process
|
questionnaire resources start date and end date are not getting displayed steps log in to sb create a study add a questionnaire with regular onetime custom start date and end date launch the study go to the google cloud console go to questionnaire resource of fhir datastore and observe ar questionnaire resources start date and end date are not getting displayed er start date and end date should get displayed
| 1
|
168,867
| 26,708,354,694
|
IssuesEvent
|
2023-01-27 20:29:02
|
learningequality/studio
|
https://api.github.com/repos/learningequality/studio
|
closed
|
Consolidate opt-ins in Studio sign-up
|
P2 - normal TODO: needs decisions TAG: ux update design: complete
|
<!--
Note that anything written between these symbols will not appear in the actual, published issue. They serve as instructions for filling out this template. Please use the 'preview' tab above this textbox to verify formatting before submitting.
Instructions:
- Start by replacing the content in "[Title]" and give a "[Brief description]" of the issue above
- Please remove any unused, optional sections below.
-->
This feedback was given by @tomiwaoLE
## Desired behavior
<!-- Briefly describe the behavior you would like to see -->
The opt-in agreement of the privacy policy and terms of service can be combined into one checkbox.

Figma design file: https://www.figma.com/file/1tQQ0Sw1yDjzO66hyjLVPM?node-id=556:4666#323944643
## Current behavior
<!-- Briefly describe the current behavior; you may include screenshots, code, and notes -->
Currently there's a checkbox for each document

## Value add
<!-- (Optional) Explain why this should be added or changed in KDS and where it could be used -->
A first time user has to agree to two mandatory opt-ins at sign up - Terms of Service and privacy policy. Since we can’t skip either one as both are required, it would make sense to combine them into one opt-in sentence and link both pages either inline or underneath to reduce signup friction.
## References
https://www.notion.so/learningequality/Suggestion-on-opt-ins-at-sign-up-eb425ac5843b4d90a94f484943d47645
|
1.0
|
Consolidate opt-ins in Studio sign-up - <!--
Note that anything written between these symbols will not appear in the actual, published issue. They serve as instructions for filling out this template. Please use the 'preview' tab above this textbox to verify formatting before submitting.
Instructions:
- Start by replacing the content in "[Title]" and give a "[Brief description]" of the issue above
- Please remove any unused, optional sections below.
-->
This feedback was given by @tomiwaoLE
## Desired behavior
<!-- Briefly describe the behavior you would like to see -->
The opt-in agreement of the privacy policy and terms of service can be combined into one checkbox.

Figma design file: https://www.figma.com/file/1tQQ0Sw1yDjzO66hyjLVPM?node-id=556:4666#323944643
## Current behavior
<!-- Briefly describe the current behavior; you may include screenshots, code, and notes -->
Currently there's a checkbox for each document

## Value add
<!-- (Optional) Explain why this should be added or changed in KDS and where it could be used -->
A first time user has to agree to two mandatory opt-ins at sign up - Terms of Service and privacy policy. Since we can’t skip either one as both are required, it would make sense to combine them into one opt-in sentence and link both pages either inline or underneath to reduce signup friction.
## References
https://www.notion.so/learningequality/Suggestion-on-opt-ins-at-sign-up-eb425ac5843b4d90a94f484943d47645
|
non_process
|
consolidate opt ins in studio sign up note that anything written between these symbols will not appear in the actual published issue they serve as instructions for filling out this template please use the preview tab above this textbox to verify formatting before submitting instructions start by replacing the content in and give a of the issue above please remove any unused optional sections below this feedback was given by tomiwaole desired behavior the opt in agreement of the privacy policy and terms of service can be combined into one checkbox figma design file current behavior currently there s a checkbox for each document value add a first time user has to agree to two mandatory opt ins at sign up terms of service and privacy policy since we can’t skip either one as both are required it would make sense to combine them into one opt in sentence and link both pages either inline or underneath to reduce signup friction references
| 0
|
8,662
| 11,798,046,986
|
IssuesEvent
|
2020-03-18 13:47:22
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
opened
|
Observability | Prometheus config
|
EPIC - Auto Batch Process :oncoming_automobile:
|
## User want
As a technical user
I want to set Prometheus to collect and digest doc index updater logs
So that I can enable monitoring and alerting tools
## Technical acceptance criteria
Prometheus should collect and digest logs from doc index updater
## Data - Potential impact
**Size**
**Value**
**Effort**
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
1.0
|
Observability | Prometheus config - ## User want
As a technical user
I want to set Prometheus to collect and digest doc index updater logs
So that I can enable monitoring and alerting tools
## Technical acceptance criteria
Prometheus should collect and digest logs from doc index updater
## Data - Potential impact
**Size**
**Value**
**Effort**
### Exit Criteria met
- [ ] Backlog
- [ ] Discovery
- [ ] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
process
|
observability prometheus config user want as a technical user i want to set prometheus to collect and digest doc index updater logs so that i can enable monitoring and alerting tools technical acceptance criteria prometheus should collect and digest logs from doc index updater data potential impact size value effort exit criteria met backlog discovery duxd development quality assurance release and validate
| 1
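The user story above asks for Prometheus to scrape the doc index updater. As an illustrative sketch only — the job name, target host, and port below are assumptions, not taken from the MHRA deployment — the shape of the required `scrape_configs` entry can be rendered like this:

```python
def scrape_config(job_name: str, target: str, interval: str = "15s") -> str:
    """Render a minimal Prometheus scrape_configs entry as YAML text."""
    return (
        "scrape_configs:\n"
        f"  - job_name: {job_name}\n"
        f"    scrape_interval: {interval}\n"
        "    static_configs:\n"
        f"      - targets: ['{target}']\n"
    )

# Hypothetical service name and metrics port for illustration only.
print(scrape_config("doc-index-updater", "doc-index-updater:8000"))
```

The rendered block would go in `prometheus.yml`; in a real Kubernetes setup the static target would more likely be replaced by service discovery.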
|
13,563
| 16,104,493,830
|
IssuesEvent
|
2021-04-27 13:29:22
|
edwardsmarc/CASFRI
|
https://api.github.com/repos/edwardsmarc/CASFRI
|
opened
|
TT_ProduceInvGeoHistory() fails on some inventories
|
blocker bug post-translation process
|
Topology errors on ON01-xxxxxxxxxxxx565-xxxxxxx565-x565048172-2713932, QC01-xxxxxxxx32B04NO-7548550850-xxxxxx1319-xxxxxxx and SK01-xxxxxxxxxxxxUTM-xxxxxxxxxx-1269593033-1035665
|
1.0
|
TT_ProduceInvGeoHistory() fails on some inventories - Topology errors on ON01-xxxxxxxxxxxx565-xxxxxxx565-x565048172-2713932, QC01-xxxxxxxx32B04NO-7548550850-xxxxxx1319-xxxxxxx and SK01-xxxxxxxxxxxxUTM-xxxxxxxxxx-1269593033-1035665
|
process
|
tt produceinvgeohistory fails on some inventories topology errors on xxxxxxx and xxxxxxxxxxxxutm xxxxxxxxxx
| 1
|
2,391
| 5,187,643,726
|
IssuesEvent
|
2017-01-20 17:25:08
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Navigation incorrect for completed task within process
|
browser: all bug comp: activiti-processList
|
1. Click on completed or active user task within process details
**Expected results**
Navigated to completed/active task within tasklist (behaviour in Activiti)
**Actual results**
Navigated to completed/active task within task list (correct)
Task list does not show any item (incorrect)
No filter is selected or completed filter is selected (incorrect)
**Component**

**Activiti**

|
1.0
|
Navigation incorrect for completed task within process - 1. Click on completed or active user task within process details
**Expected results**
Navigated to completed/active task within tasklist (behaviour in Activiti)
**Actual results**
Navigated to completed/active task within task list (correct)
Task list does not show any item (incorrect)
No filter is selected or completed filter is selected (incorrect)
**Component**

**Activiti**

|
process
|
navigation incorrect for completed task within process click on completed or active user task within process details expected results navigated to completed active task within tasklist behaviour in activiti actual results navigated to completed active task within task list correct task list does not show any item incorrect no filter is selected or completed filter is selected incorrect component activiti
| 1
|
248,248
| 7,928,554,276
|
IssuesEvent
|
2018-07-06 12:10:23
|
config-i1/smooth
|
https://api.github.com/repos/config-i1/smooth
|
closed
|
Correct multistep cost functions in Rcpp
|
fix highest priority
|
So that they are divided by (obs - hor) instead of obs.
|
1.0
|
Correct multistep cost functions in Rcpp - So that they are divided by (obs - hor) instead of obs.
|
non_process
|
correct multistep cost functions in rcpp so that they are divided by obs hor instead of obs
| 0
|
251,899
| 27,218,182,344
|
IssuesEvent
|
2023-02-21 01:12:58
|
liorzilberg/struts
|
https://api.github.com/repos/liorzilberg/struts
|
opened
|
CVE-2022-41966 (High) detected in xstream-1.4.10.jar
|
security vulnerability
|
## CVE-2022-41966 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.10.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: /plugins/rest/pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/com/thoughtworks/xstream/xstream/1.4.10/xstream-1.4.10.jar,/.m2/repository/com/thoughtworks/xstream/xstream/1.4.10/xstream-1.4.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/liorzilberg/struts/commit/6950763af860884188f4080d19a18c5ede16cd74">6950763af860884188f4080d19a18c5ede16cd74</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream serializes Java objects to XML and back again. Versions prior to 1.4.20 may allow a remote attacker to terminate the application with a stack overflow error, resulting in a denial of service only via manipulation the processed input stream. The attack uses the hash code implementation for collections and maps to force recursive hash calculation causing a stack overflow. This issue is patched in version 1.4.20 which handles the stack overflow and raises an InputManipulationException instead. A potential workaround for users who only use HashMap or HashSet and whose XML refers these only as default map or set, is to change the default implementation of java.util.Map and java.util per the code example in the referenced advisory. However, this implies that your application does not care about the implementation of the map and all elements are comparable.
<p>Publish Date: 2022-12-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41966>CVE-2022-41966</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-j563-grx4-pjpv">https://github.com/advisories/GHSA-j563-grx4-pjpv</a></p>
<p>Release Date: 2022-12-28</p>
<p>Fix Resolution: 1.4.12-java7</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2022-41966 (High) detected in xstream-1.4.10.jar - ## CVE-2022-41966 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.10.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: /plugins/rest/pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/com/thoughtworks/xstream/xstream/1.4.10/xstream-1.4.10.jar,/.m2/repository/com/thoughtworks/xstream/xstream/1.4.10/xstream-1.4.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/liorzilberg/struts/commit/6950763af860884188f4080d19a18c5ede16cd74">6950763af860884188f4080d19a18c5ede16cd74</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream serializes Java objects to XML and back again. Versions prior to 1.4.20 may allow a remote attacker to terminate the application with a stack overflow error, resulting in a denial of service only via manipulation the processed input stream. The attack uses the hash code implementation for collections and maps to force recursive hash calculation causing a stack overflow. This issue is patched in version 1.4.20 which handles the stack overflow and raises an InputManipulationException instead. A potential workaround for users who only use HashMap or HashSet and whose XML refers these only as default map or set, is to change the default implementation of java.util.Map and java.util per the code example in the referenced advisory. However, this implies that your application does not care about the implementation of the map and all elements are comparable.
<p>Publish Date: 2022-12-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41966>CVE-2022-41966</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-j563-grx4-pjpv">https://github.com/advisories/GHSA-j563-grx4-pjpv</a></p>
<p>Release Date: 2022-12-28</p>
<p>Fix Resolution: 1.4.12-java7</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_process
|
cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back library home page a href path to dependency file plugins rest pom xml path to vulnerable library repository com thoughtworks xstream xstream xstream jar repository com thoughtworks xstream xstream xstream jar dependency hierarchy x xstream jar vulnerable library found in head commit a href found in base branch master vulnerability details xstream serializes java objects to xml and back again versions prior to may allow a remote attacker to terminate the application with a stack overflow error resulting in a denial of service only via manipulation the processed input stream the attack uses the hash code implementation for collections and maps to force recursive hash calculation causing a stack overflow this issue is patched in version which handles the stack overflow and raises an inputmanipulationexception instead a potential workaround for users who only use hashmap or hashset and whose xml refers these only as default map or set is to change the default implementation of java util map and java util per the code example in the referenced advisory however this implies that your application does not care about the implementation of the map and all elements are comparable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
| 0
|
167,145
| 6,333,081,887
|
IssuesEvent
|
2017-07-26 14:04:36
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
reopened
|
[studio-ui] Cursor is not in the first field when creating a new article page in editorial bp
|
bug Priority: Low
|
Using the website_editorial bp create a new site, then create a new article page, notice that the cursor is not in the first field (**Page URL**) all the time when creating a new page

|
1.0
|
[studio-ui] Cursor is not in the first field when creating a new article page in editorial bp - Using the website_editorial bp create a new site, then create a new article page, notice that the cursor is not in the first field (**Page URL**) all the time when creating a new page

|
non_process
|
cursor is not in the first field when creating a new article page in editorial bp using the website editorial bp create a new site then create a new article page notice that the cursor is not in the first field page url all the time when creating a new page
| 0
|
8,135
| 11,339,423,735
|
IssuesEvent
|
2020-01-23 01:56:04
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
opened
|
Add applicant status pills to non student landing page
|
Apply Process Landing page
|
Who: Non student applicants
What: Add application status pillls
Why: to provide visual and make the page consistent
Acceptance Criteria:
- Add applicant status pills to the non student landing page

|
1.0
|
Add applicant status pills to non student landing page - Who: Non student applicants
What: Add application status pillls
Why: to provide visual and make the page consistent
Acceptance Criteria:
- Add applicant status pills to the non student landing page

|
process
|
add applicant status pills to non student landing page who non student applicants what add application status pillls why to provide visual and make the page consistent acceptance criteria add applicant status pills to the non student landing page
| 1
|
7,879
| 11,047,024,765
|
IssuesEvent
|
2019-12-09 18:04:08
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
2: effector-dependent suppression by symbiont of host immune response
|
multi-species process
|
After
https://github.com/geneontology/go-ontology/issues/18324
GO:NEW effector-dependent suppression by symbiont of host immune innate response
~exact synonym: effector-triggered susceptibility~
def
propose
Any process that involves the suppression of a host immune response by a secreted symbiont molecule; The host is defined as the larger of the organisms involved in a symbiotic interaction.
Effector-triggered susceptibility is achieved through the interaction between a pathogen effector and its host target that eventually impinges on immune signaling. Mechanistically, this interaction results in a modification of the target and its cellular fate (Kamoun, 2007)
|
1.0
|
2: effector-dependent suppression by symbiont of host immune response - After
https://github.com/geneontology/go-ontology/issues/18324
GO:NEW effector-dependent suppression by symbiont of host immune innate response
~exact synonym: effector-triggered susceptibility~
def
propose
Any process that involves the suppression of a host immune response by a secreted symbiont molecule; The host is defined as the larger of the organisms involved in a symbiotic interaction.
Effector-triggered susceptibility is achieved through the interaction between a pathogen effector and its host target that eventually impinges on immune signaling. Mechanistically, this interaction results in a modification of the target and its cellular fate (Kamoun, 2007)
|
process
|
effector dependent suppression by symbiont of host immune response after go new effector dependent suppression by symbiont of host immune innate response exact synonym effector triggered susceptibility def propose any process that involves the suppression of a host immune response by a secreted symbiont molecule the host is defined as the larger of the organisms involved in a symbiotic interaction effector triggered susceptibility is achieved through the interaction between a pathogen effector and its host target that eventually impinges on immune signaling mechanistically this interaction results in a modification of the target and its cellular fate kamoun
| 1
|
105,333
| 4,233,964,357
|
IssuesEvent
|
2016-07-05 09:57:30
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
BeforeSuite {Kubernetes e2e suite}
|
kind/flake priority/P2
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13272/
Failed: BeforeSuite {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:159
Jul 5 02:20:03.844: Error waiting for all pods to be running and ready: Not all pods in namespace 'kube-system' running and ready within 10m0s
```
Previous issues for this test: #26135 #26236
|
1.0
|
BeforeSuite {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-test/13272/
Failed: BeforeSuite {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:159
Jul 5 02:20:03.844: Error waiting for all pods to be running and ready: Not all pods in namespace 'kube-system' running and ready within 10m0s
```
Previous issues for this test: #26135 #26236
|
non_process
|
beforesuite kubernetes suite failed beforesuite kubernetes suite go src io kubernetes output dockerized go src io kubernetes test go jul error waiting for all pods to be running and ready not all pods in namespace kube system running and ready within previous issues for this test
| 0
|
48,743
| 7,452,785,925
|
IssuesEvent
|
2018-03-29 09:34:16
|
cofoundry-cms/cofoundry
|
https://api.github.com/repos/cofoundry-cms/cofoundry
|
opened
|
User Areas: Better documentation of creating user management screens
|
Documentation
|
We don't have any guidance on creating user area management screens yet. We should create a sample site and add more to the [User Area documentation](https://github.com/cofoundry-cms/cofoundry/wiki/User-Areas), including how to make use of `AccountManagementControllerHelper`, `AuthenticationControllerHelper` and `UserManagementControllerHelper` to create user account management screens.
|
1.0
|
User Areas: Better documentation of creating user management screens - We don't have any guidance on creating user area management screens yet. We should create a sample site and add more to the [User Area documentation](https://github.com/cofoundry-cms/cofoundry/wiki/User-Areas), including how to make use of `AccountManagementControllerHelper`, `AuthenticationControllerHelper` and `UserManagementControllerHelper` to create user account management screens.
|
non_process
|
user areas better documentation of creating user management screens we don t have any guidance on creating user area management screens yet we should create a sample site and add more to the including how to make use of accountmanagementcontrollerhelper authenticationcontrollerhelper and usermanagementcontrollerhelper to create user account management screens
| 0
|
787,647
| 27,725,763,297
|
IssuesEvent
|
2023-03-15 01:59:59
|
AY2223S2-CS2103T-W14-2/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-W14-2/tp
|
closed
|
As a user, I can get recommendations for where to meet my friends
|
priority.High new type.Story
|
ToDo
- [X] Implement Location to replace Address
- [X] Implement Meet Command for Singapore scope
- [ ] Implement Meet Command for NUS scope
|
1.0
|
As a user, I can get recommendations for where to meet my friends - ToDo
- [X] Implement Location to replace Address
- [X] Implement Meet Command for Singapore scope
- [ ] Implement Meet Command for NUS scope
|
non_process
|
as a user i can get recommendations for where to meet my friends todo implement location to replace address implement meet command for singapore scope implement meet command for nus scope
| 0
|
3,876
| 6,812,389,664
|
IssuesEvent
|
2017-11-06 02:45:45
|
swig/swig
|
https://api.github.com/repos/swig/swig
|
closed
|
Macro processing with empty parameter
|
preprocessor
|
Following case fails with swig:
#define MACROTEST(a, b, c) int a b(int c)
MACROTEST(, b, c);
with error "Macro 'MACROTEST' expects 3 arguments
This macro expansion works fine with gcc and VC++
|
1.0
|
Macro processing with empty parameter - Following case fails with swig:
#define MACROTEST(a, b, c) int a b(int c)
MACROTEST(, b, c);
with error "Macro 'MACROTEST' expects 3 arguments
This macro expansion works fine with gcc and VC++
|
process
|
macro processing with empty parameter following case fails with swig define macrotest a b c int a b int c macrotest b c with error macro macrotest expects arguments this macro expansion works fine with gcc and vc
| 1
|
28,265
| 8,129,441,717
|
IssuesEvent
|
2018-08-17 15:04:37
|
jewish-calendar/calendar
|
https://api.github.com/repos/jewish-calendar/calendar
|
closed
|
Merge subprojects
|
build
|
Usually, I am "one package per sub-project" guy, but in this case:
- packages are small;
- parasitic cyclical dependencies are not a real danger,
since I am the only author;
- there are no users demanding lightweight packaging for
when they only need dates and not astronomy,
and even when there will be users, they'll need everything
to generate calendars with halachic times;
- reusing tests from one sub-project in another is bothersome - and
indicates that they are tightly coupled;
- I failed to persuade JCenter to deploy artifacts from my monorepo
side by side, so I'll go around it by deploying just one artifact.
Hence:
- [x] move "generate" stuff into the "dates" subproject
- [x] remove "generate" subproject
- [x] move "atronomy" stuff into the "dates" subproject
- [x] remove "astronomy" subproject
|
1.0
|
Merge subprojects - Usually, I am "one package per sub-project" guy, but in this case:
- packages are small;
- parasitic cyclical dependencies are not a real danger,
since I am the only author;
- there are no users demanding lightweight packaging for
when they only need dates and not astronomy,
and even when there will be users, they'll need everything
to generate calendars with halachic times;
- reusing tests from one sub-project in another is bothersome - and
indicates that they are tightly coupled;
- I failed to persuade JCenter to deploy artifacts from my monorepo
side by side, so I'll go around it by deploying just one artifact.
Hence:
- [x] move "generate" stuff into the "dates" subproject
- [x] remove "generate" subproject
- [x] move "atronomy" stuff into the "dates" subproject
- [x] remove "astronomy" subproject
|
non_process
|
merge subprojects usually i am one package per sub project guy but in this case packages are small parasitic cyclical dependencies are not a real danger since i am the only author there are no users demanding lightweight packaging for when they only need dates and not astronomy and even when there will be users they ll need everything to generate calendars with halachic times reusing tests from one sub project in another is bothersome and indicates that they are tightly coupled i failed to persuade jcenter to deploy artifacts from my monorepo side by side so i ll go around it by deploying just one artifact hence move generate stuff into the dates subproject remove generate subproject move atronomy stuff into the dates subproject remove astronomy subproject
| 0
|
4,442
| 7,313,452,730
|
IssuesEvent
|
2018-03-01 01:12:22
|
P2Poker/RandomCat
|
https://api.github.com/repos/P2Poker/RandomCat
|
opened
|
As a developer, I need a clear and concise directory structure for source code
|
c) dev origin d) release 0.1 e) dev tools f) priority 2 g) change request h) in process j) difficult workaround l) minor completion cost l) no risk l) no ux impact n) no impact n) no users affected o) as a dev p) triage completed
|
## Story **(REQUIRED)**
As a developer, I need a clear and concise directory structure for source code.
## Explanation **(REQUIRED)**
The project needs a directory structure for source code, which will make it easier to manage future source code files.
Source code should be split into as few directories as possible, while separating source code for different binaries.
|
1.0
|
As a developer, I need a clear and concise directory structure for source code - ## Story **(REQUIRED)**
As a developer, I need a clear and concise directory structure for source code.
## Explanation **(REQUIRED)**
The project needs a directory structure for source code, which will make it easier to manage future source code files.
Source code should be split into as few directories as possible, while separating source code for different binaries.
|
process
|
as a developer i need a clear and concise directory structure for source code story required as a developer i need a clear and concise directory structure for source code explanation required the project needs a directory structure for source code which will make it easier to manage future source code files source code should be split into as few directories as possible while separating source code for different binaries
| 1
|
25,604
| 12,262,820,310
|
IssuesEvent
|
2020-05-06 23:06:41
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Create new connector from alert flyout form throw an error messages in external plugins
|
Feature:Alerting Team:Alerting Services bug
|
**Steps to reproduce:**
1. Open Uptime plugin
2. Click Alert dropdown -> Create alert -> TLS alert
3. Click on Slack action (or any other action type where is no connectors yet)
4. Click Create a connector button and observe the errors in the browser console and dialog was not appeared.
|
1.0
|
Create new connector from alert flyout form throw an error messages in external plugins - **Steps to reproduce:**
1. Open Uptime plugin
2. Click Alert dropdown -> Create alert -> TLS alert
3. Click on Slack action (or any other action type where is no connectors yet)
4. Click Create a connector button and observe the errors in the browser console and dialog was not appeared.
|
non_process
|
create new connector from alert flyout form throw an error messages in external plugins steps to reproduce open uptime plugin click alert dropdown create alert tls alert click on slack action or any other action type where is no connectors yet click create a connector button and observe the errors in the browser console and dialog was not appeared
| 0
|
35,567
| 4,996,288,492
|
IssuesEvent
|
2016-12-09 13:21:47
|
MohammadYounes/AlertifyJS
|
https://api.github.com/repos/MohammadYounes/AlertifyJS
|
closed
|
I have problems with notification in a modal
|
needs test case troubleshooting
|
Well my problem is that in my html I have a modal where there is a form and I have put a send button and there I am calling the alert
_<button type = "submit" class = "btn btn-success" onclick = "alertify.success ('Added');"> <i class = "fa fa-floppy-o" aria-hidden = "true" / I> Save Changes </ #@button>_
But it turns out that the alert is not executed. Sometimes it looks but a little to the right and barely and it is visualized #
|
1.0
|
I have problems with notification in a modal - Well my problem is that in my html I have a modal where there is a form and I have put a send button and there I am calling the alert
_<button type = "submit" class = "btn btn-success" onclick = "alertify.success ('Added');"> <i class = "fa fa-floppy-o" aria-hidden = "true" / I> Save Changes </ #@button>_
But it turns out that the alert is not executed. Sometimes it looks but a little to the right and barely and it is visualized #
|
non_process
|
i have problems with notification in a modal well my problem is that in my html i have a modal where there is a form and i have put a send button and there i am calling the alert save changes but it turns out that the alert is not executed sometimes it looks but a little to the right and barely and it is visualized
| 0
|
66,122
| 8,881,384,053
|
IssuesEvent
|
2019-01-14 09:58:55
|
commercetools/ui-kit
|
https://api.github.com/repos/commercetools/ui-kit
|
closed
|
README storybook integration broken
|
✍️ Type: Documentation 🐛 Type: Bug 👶 good first contribution
|

When viewing the deployed UI-Kit the README is only visible after a page refresh.
Reproduction:
1. open https://uikit.commercetools.com
2. switch to README tab
Expectation:
Readme is visible
Observed Behaviour:
"README.md was not added" is shown.
A page refresh at that point solves the problem.
This is likely to be a bug in Storybook itself.
|
1.0
|
README storybook integration broken -

When viewing the deployed UI-Kit the README is only visible after a page refresh.
Reproduction:
1. open https://uikit.commercetools.com
2. switch to README tab
Expectation:
Readme is visible
Observed Behaviour:
"README.md was not added" is shown.
A page refresh at that point solves the problem.
This is likely to be a bug in Storybook itself.
|
non_process
|
readme storybook integration broken when viewing the deployed ui kit the readme is only visible after a page refresh reproduction open switch to readme tab expectation readme is visible observed behaviour readme md was not added is shown a page refresh at that point solves the problem this is likely to be a bug in storybook itself
| 0
|
326,197
| 9,948,619,530
|
IssuesEvent
|
2019-07-04 09:20:43
|
netdata/netdata
|
https://api.github.com/repos/netdata/netdata
|
opened
|
netdata daemon crash during shutdown
|
area/daemon area/database bug priority/medium size:3
|
<!---
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub
- Test if the latest release and master branch are affected too.
- Provide a clear and concise description of what the bug is in "Bug report
summary" section.
- Try to provide as much information about your environment (OS distribution,
running in container, etc.) as possible to allow us reproduce this bug faster.
- Write which component is affected. We group our components the same way our
code is structured so basically:
component name = dir in top level directory of repository
- Describe how you found this bug and how we can reproduce it. Preferable with
a minimal test-case scenario. You can paste gist.github.com links for larger
files
- Provide a clear and concise description of what you expected to happen.
-->
##### Bug report summary
netdata service has been obvserved to crash during shutdown. This is a rare occurrence.
##### OS / Environment
Linux
##### Netdata version (ouput of `netdata -V`)
v1.15.0-129-gc862a026
##### Component Name
daemon
##### Steps To Reproduce
Shut down netdata service.
[valgrind1.xml.txt](https://github.com/netdata/netdata/files/3358604/valgrind1.xml.txt)
##### Expected behavior
Not crash.
|
1.0
|
netdata daemon crash during shutdown - <!---
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub
- Test if the latest release and master branch are affected too.
- Provide a clear and concise description of what the bug is in "Bug report
summary" section.
- Try to provide as much information about your environment (OS distribution,
running in container, etc.) as possible to allow us reproduce this bug faster.
- Write which component is affected. We group our components the same way our
code is structured so basically:
component name = dir in top level directory of repository
- Describe how you found this bug and how we can reproduce it. Preferable with
a minimal test-case scenario. You can paste gist.github.com links for larger
files
- Provide a clear and concise description of what you expected to happen.
-->
##### Bug report summary
netdata service has been obvserved to crash during shutdown. This is a rare occurrence.
##### OS / Environment
Linux
##### Netdata version (ouput of `netdata -V`)
v1.15.0-129-gc862a026
##### Component Name
daemon
##### Steps To Reproduce
Shut down netdata service.
[valgrind1.xml.txt](https://github.com/netdata/netdata/files/3358604/valgrind1.xml.txt)
##### Expected behavior
Not crash.
|
non_process
|
netdata daemon crash during shutdown when creating a bug report please verify first that your issue is not already reported on github test if the latest release and master branch are affected too provide a clear and concise description of what the bug is in bug report summary section try to provide as much information about your environment os distribution running in container etc as possible to allow us reproduce this bug faster write which component is affected we group our components the same way our code is structured so basically component name dir in top level directory of repository describe how you found this bug and how we can reproduce it preferable with a minimal test case scenario you can paste gist github com links for larger files provide a clear and concise description of what you expected to happen bug report summary netdata service has been obvserved to crash during shutdown this is a rare occurrence os environment linux netdata version ouput of netdata v component name daemon steps to reproduce shut down netdata service expected behavior not crash
| 0
|
255,538
| 19,306,702,214
|
IssuesEvent
|
2021-12-13 12:19:26
|
Yann-Claudon/Histoire-de-Nos-Villes
|
https://api.github.com/repos/Yann-Claudon/Histoire-de-Nos-Villes
|
closed
|
Digramme d'activité de l'application
|
documentation
|
Faire le diagramme d'activité de l'application avec comme acteur l'utilisateur, avec la géolocalisation , la visualisation des marqueurs, les notifs
|
1.0
|
Digramme d'activité de l'application - Faire le diagramme d'activité de l'application avec comme acteur l'utilisateur, avec la géolocalisation , la visualisation des marqueurs, les notifs
|
non_process
|
digramme d activité de l application faire le diagramme d activité de l application avec comme acteur l utilisateur avec la géolocalisation la visualisation des marqueurs les notifs
| 0
|
446,460
| 31,477,886,617
|
IssuesEvent
|
2023-08-30 12:03:47
|
SovereignCloudStack/k8s-cluster-api-provider
|
https://api.github.com/repos/SovereignCloudStack/k8s-cluster-api-provider
|
closed
|
Jq not found after update
|
bug documentation Container Sprint Jena
|
In the PR #424 a new package is installed `jq` (https://github.com/SovereignCloudStack/k8s-cluster-api-provider/blob/main/terraform/files/bin/bootstrap.sh#L24C29-L24C29). When having a machine created before this patch and then upgrading to the latest version this error arises when creating a cluster.
This, as part of #466 got merged into the `maintained/v5.x`. I just want to discuss whether we take the need for new software as a breaking change or not.
However, a few things should be addressed.
- [ ] Adding the installation of `jq` into the upgrade script.
- [ ] Documenting the need to install `jq` in docs or release notes for `maintained/v5.x` or consider removing the feature from the branch.
```
ubuntu@fd-mgmtcluster:~ [0]$ bin/create_cluster.sh testfd2
Switched to context "kind-kind".
> Cluster testfd2 does not exist. Creating a new cluster namespace...
namespace/testfd2 created
Context "kind-kind" modified.
> Namespace changed to testfd2
Adding docker.io/hosts.toml to the KubeadmControlPlane files
Adding docker.io/hosts.toml to the KubeadmConfigTemplate files
Switched to context "kind-kind".
> Cluster testfd2 does not exist, and new namespace creation is disabled.
#Created AppCred 11e495527840490da933ac524048b441
#Info: Changing OPENSTACK_CLOUD from testfd1 to fd-testfd2
Waiting for image ubuntu-capi-image-v1.25.11 to become active: d2619787-4d17-46d8-aa62-4f7c2969e47d active
Switched to context "kind-kind".
> Cluster testfd2 does not exist, and new namespace creation is disabled.
# show used variables for clustertemplate /home/ubuntu/testfd2/cluster-template.yaml
Adding server groups e246ed70-0fef-40f3-899d-8f1b00f2c339 and a785f99a-889a-4194-9b57-128bb82a76e6 to /home/ubuntu/testfd2/clusterctl.yaml
/home/ubuntu/bin/flavor_disk.sh: line 40: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 44: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 45: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 47: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 49: test: !=: unary operator expected
/home/ubuntu/bin/flavor_disk.sh: line 54: test: 0: unary operator expected
/home/ubuntu/bin/flavor_disk.sh: line 40: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 44: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 45: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 47: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 49: test: !=: unary operator expected
/home/ubuntu/bin/flavor_disk.sh: line 54: test: 0: unary operator expected
```
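The first checklist item above amounts to a preflight check for required CLI tools before `create_cluster.sh` and its helpers run. A minimal sketch of such a check (a hypothetical helper, not part of the repo — the real fix belongs in the shell upgrade script, presumably via `apt-get install -y jq` on the Ubuntu images):

```python
import shutil

def missing_tools(required=("jq",)):
    """Return the names of required CLI tools that are absent from PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

if __name__ == "__main__":
    # Anything reported here must be installed by the upgrade script before
    # create_cluster.sh (and helpers such as flavor_disk.sh) can run.
    print(missing_tools())
```

An empty list means the node is ready; a non-empty list reproduces exactly the situation in the log above, where `flavor_disk.sh` fails line by line because `jq` is missing.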
|
1.0
|
Jq not found after update - In PR #424 a new package, `jq`, is installed (https://github.com/SovereignCloudStack/k8s-cluster-api-provider/blob/main/terraform/files/bin/bootstrap.sh#L24C29-L24C29). On a machine created before this patch, upgrading to the latest version raises the error below when creating a cluster.
This, as part of #466, got merged into `maintained/v5.x`. I just want to discuss whether we treat the need for new software as a breaking change or not.
However, a few things should be addressed.
- [ ] Adding the installation of `jq` into the upgrade script.
- [ ] Documenting the need to install `jq` in docs or release notes for `maintained/v5.x` or consider removing the feature from the branch.
```
ubuntu@fd-mgmtcluster:~ [0]$ bin/create_cluster.sh testfd2
Switched to context "kind-kind".
> Cluster testfd2 does not exist. Creating a new cluster namespace...
namespace/testfd2 created
Context "kind-kind" modified.
> Namespace changed to testfd2
Adding docker.io/hosts.toml to the KubeadmControlPlane files
Adding docker.io/hosts.toml to the KubeadmConfigTemplate files
Switched to context "kind-kind".
> Cluster testfd2 does not exist, and new namespace creation is disabled.
#Created AppCred 11e495527840490da933ac524048b441
#Info: Changing OPENSTACK_CLOUD from testfd1 to fd-testfd2
Waiting for image ubuntu-capi-image-v1.25.11 to become active: d2619787-4d17-46d8-aa62-4f7c2969e47d active
Switched to context "kind-kind".
> Cluster testfd2 does not exist, and new namespace creation is disabled.
# show used variables for clustertemplate /home/ubuntu/testfd2/cluster-template.yaml
Adding server groups e246ed70-0fef-40f3-899d-8f1b00f2c339 and a785f99a-889a-4194-9b57-128bb82a76e6 to /home/ubuntu/testfd2/clusterctl.yaml
/home/ubuntu/bin/flavor_disk.sh: line 40: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 44: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 45: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 47: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 49: test: !=: unary operator expected
/home/ubuntu/bin/flavor_disk.sh: line 54: test: 0: unary operator expected
/home/ubuntu/bin/flavor_disk.sh: line 40: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 44: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 45: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 47: jq: command not found
/home/ubuntu/bin/flavor_disk.sh: line 49: test: !=: unary operator expected
/home/ubuntu/bin/flavor_disk.sh: line 54: test: 0: unary operator expected
```
|
non_process
|
jq not found after update in the pr a new package is installed jq when having a machine created before this patch and then upgrading to the latest version this error arises when creating a cluster this as part of got merged into the maintained x i just want to discuss whether we take the need for new software as a breaking change or not however a few things should be addressed adding the installation of jq into the upgrade script documenting the need to install jq in docs or release notes for maintained x or consider removing the feature from the branch ubuntu fd mgmtcluster bin create cluster sh switched to context kind kind cluster does not exist creating a new cluster namespace namespace created context kind kind modified namespace changed to adding docker io hosts toml to the kubeadmcontrolplane files adding docker io hosts toml to the kubeadmconfigtemplate files switched to context kind kind cluster does not exist and new namespace creation is disabled created appcred info changing openstack cloud from to fd waiting for image ubuntu capi image to become active active switched to context kind kind cluster does not exist and new namespace creation is disabled show used variables for clustertemplate home ubuntu cluster template yaml adding server groups and to home ubuntu clusterctl yaml home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line test unary operator expected home ubuntu bin flavor disk sh line test unary operator expected home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line jq command not found home ubuntu bin flavor disk sh line test unary operator expected home ubuntu bin flavor disk sh line 
test unary operator expected
| 0
|
13,348
| 15,807,162,246
|
IssuesEvent
|
2021-04-04 09:06:30
|
klarEDA/klar-EDA
|
https://api.github.com/repos/klarEDA/klar-EDA
|
opened
|
Implement a method for date feature extraction in csv data preprocessor
|
Level3 data-preprocessing gssoc21
|
**Description**
> a. Write a method to identify the columns of type `date` (this may include iterating over the list of columns and using an appropriate strategy to identify if a column has values of type `date`)
> b. Implement another method that should be able to convert the date column into a specific static format (for example - YYYY-MM-DD) and split the date column into separate columns with the following attribute values:
> 1. Date of the month (for example - 28 for '2021-12-28')
> 2. Month (Numerical)
> 3. Year
> 4. Day of the week
> c. Appropriate test methods should be implemented in the `date_format_tests` file
**Assumptions**
> The following assumptions can be made during the implementation
> 1. No time is present in the given input date.
> 2. The data frame must contain column names
> 3. A list of input patterns can be assumed. (For example - you can assume the input will be in either of any known formats mentioned).
`input_date_format = [ 'DD/MM/YYYY', 'YYYY/DD/MM', 'MM/DD/YYYY', 'YYYY/MM/DD', 'DD-MM-YYYY', 'YYYY-DD-MM', 'MM-DD-YYYY', 'YYYY-MM-DD' ]`
**Input** (Method -1)
> None
**Output** (Method-1)
> list of column names with values of type date
**Method details**
> Use the data frame from the `self.df` variable.
**Input** (Method -2)
> An expected format the input date should be converted to
**Output** (Method-2)
> None
**Method details**
> Use the data frame from the `self.df` variable.
> Implement a method for the same with appropriate name and parameters in the `csv_preprocess.py` file.
> In the implementation use the method `convert_date_format` for converting the date into a specific format & the method-1 mentioned above to get a list of columns with date type.
**Note**
The use of standard python libraries is highly recommended.
JOIN THE SLACK CHANNEL [`HERE`](https://join.slack.com/t/klareda/shared_invite/zt-ndytbmnq-ARgpyJLPMbeFyYo87jrTjA) if you wish to contribute to this issue.
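The two methods can be sketched with the standard library alone, as the note recommends (function names and the returned dict shape here are illustrative, not the project's API; Method-1's iteration over `self.df` columns reduces to trying each assumed pattern per value):

```python
from datetime import datetime

# strptime patterns corresponding to the issue's assumed input_date_format list
INPUT_FORMATS = [
    "%d/%m/%Y", "%Y/%d/%m", "%m/%d/%Y", "%Y/%m/%d",
    "%d-%m-%Y", "%Y-%d-%m", "%m-%d-%Y", "%Y-%m-%d",
]

def parse_known_date(value):
    """Return a datetime if `value` matches one assumed format, else None."""
    for fmt in INPUT_FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None

def split_date(value):
    """Method-2 sketch: normalize a date string and split it into the
    four attributes requested in the issue."""
    parsed = parse_known_date(value)
    if parsed is None:
        return None
    return {
        "date": parsed.strftime("%Y-%m-%d"),  # static target format
        "date_of_month": parsed.day,
        "month": parsed.month,
        "year": parsed.year,
        "day_of_week": parsed.strftime("%A"),
    }
```

For Method-1, a column can be treated as type `date` when all (or most) of its values parse with `parse_known_date`. Note that ambiguous strings such as "05/04/2021" resolve to the first matching pattern, so the order of the format list matters.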
|
1.0
|
Implement a method for date feature extraction in csv data preprocessor - **Description**
> a. Write a method to identify the columns of type `date` (this may include iterating over the list of columns and using an appropriate strategy to identify if a column has values of type `date`)
> b. Implement another method that should be able to convert the date column into a specific static format (for example - YYYY-MM-DD) and split the date column into separate columns with the following attribute values:
> 1. Date of the month (for example - 28 for '2021-12-28')
> 2. Month (Numerical)
> 3. Year
> 4. Day of the week
> c. Appropriate test methods should be implemented in the `date_format_tests` file
**Assumptions**
> The following assumptions can be made during the implementation
> 1. No time is present in the given input date.
> 2. The data frame must contain column names
> 3. A list of input patterns can be assumed. (For example - you can assume the input will be in either of any known formats mentioned).
`input_date_format = [ 'DD/MM/YYYY', 'YYYY/DD/MM', 'MM/DD/YYYY', 'YYYY/MM/DD', 'DD-MM-YYYY', 'YYYY-DD-MM', 'MM-DD-YYYY', 'YYYY-MM-DD' ]`
**Input** (Method -1)
> None
**Output** (Method-1)
> list of column names with values of type date
**Method details**
> Use the data frame from the `self.df` variable.
**Input** (Method -2)
> An expected format the input date should be converted to
**Output** (Method-2)
> None
**Method details**
> Use the data frame from the `self.df` variable.
> Implement a method for the same with appropriate name and parameters in the `csv_preprocess.py` file.
> In the implementation use the method `convert_date_format` for converting the date into a specific format & the method-1 mentioned above to get a list of columns with date type.
**Note**
The use of standard python libraries is highly recommended.
JOIN THE SLACK CHANNEL [`HERE`](https://join.slack.com/t/klareda/shared_invite/zt-ndytbmnq-ARgpyJLPMbeFyYo87jrTjA) if you wish to contribute to this issue.
|
process
|
implement a method for date feature extraction in csv data preprocessor description a write a method to identify the columns of type date this may include iterating over the list of columns and using an appropriate strategy to identify if a column has values of type date b implement another method that should be able to convert the date column into a specific static format for example yyyy mm dd and split the date column into separate columns with the following attribute values date of the month for example for month numerical year day of the week c appropriate test methods should be implemented in the date format tests file assumptions the following assumptions can be made during the implementation no time is present in the given input date the data frame must contain column names a list of input patterns can be assumed for example you can assume the input will be in either of any known formats mentioned input date format input method none output method list of column names with values of type date method details use the data frame from the self df variable input method an expected format the input date should be converted to output method none method details use the data frame from the self df variable implement a method for the same with appropriate name and parameters in the csv preprocess py file in the implementation use the method convert date format for converting the date into a specific format the method mentioned above to get a list of columns with date type note the use of standard python libraries is highly recommended join the slack channel if you wish to contribute to this issue
| 1
|
421,121
| 12,248,827,857
|
IssuesEvent
|
2020-05-05 18:10:35
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
[python3] Get an error when I use Python's BaseManager to share process memory
|
disposition/requires reporter action kind/bug lang/Python priority/P2
|
<!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc 1.24.3
python 3.7.0
### What operating system (Linux, Windows,...) and version?
CentOS Linux release 7.7.1908 (Core) - Linux version 3.10.0-693.el7.x86_64
### What runtime / compiler are you using (e.g. python version or version of gcc)
python 3.7.0
### What did you do?
I want to use Python's BaseManager to share process memory to reduce memory usage. The server runs successfully, but the client raises an error. My code is as follows:
**server code :**
```Python
import time
import multiprocessing
from concurrent import futures
from multiprocessing.managers import BaseManager
import grpc
import test_pb2
import test_pb2_grpc
class TestClass(object):
def __init__(self, test_string = "init"):
self.test_string = test_string
def add(self):
self.test_string += "a"
def get_num(self, sentence):
return sentence + " result_string = " + str(self.test_string)
class Grpc_Mq_Test(test_pb2_grpc.Grpc_Mq_TestServicer):
def __init__(self):
self.test_class = TestClass()
def TextPrediction(self, Request, context):
string = Request.string
return test_pb2.RequestReply(result = self.test_class.get_num(string))
def _run_server(grpc_bind_address, ALL_PROCESS_COUNT, Grpc_Mq_Test):
options = (('grpc.so_reuseport', 1),)
grpc_server = grpc.server(futures.ThreadPoolExecutor(max_workers = ALL_PROCESS_COUNT,),options = options)
test_pb2_grpc.add_Grpc_Mq_TestServicer_to_server(Grpc_Mq_Test, grpc_server)
grpc_server.add_insecure_port(grpc_bind_address)
grpc_server.start()
while True:
time.sleep(3600)
class MyManager(BaseManager):
pass
def sever_launcher(grpc_bind_address):
manager = MyManager()
manager.register('Grpc_Mq_Test', Grpc_Mq_Test)
manager.start()
textprediction = manager.Grpc_Mq_Test()
workers = []
ALL_PROCESS_COUNT = 4
for _ in range(ALL_PROCESS_COUNT):
worker = multiprocessing.Process(target=_run_server, args=(grpc_bind_address, ALL_PROCESS_COUNT, textprediction,))
worker.start()
workers.append(worker)
for worker in workers:
worker.join()
if __name__ == '__main__':
grpc_bind_address = "127.0.0.1:50051"
sever_launcher(grpc_bind_address)
```
**client code**
```Python
import grpc
import test_pb2
import test_pb2_grpc
with grpc.insecure_channel('127.0.0.1:50051') as channel:
stub = test_pb2_grpc.Grpc_Mq_TestStub(channel)
response = stub.TextPrediction(test_pb2.Request(string = 'test'))
print("Greeter client received: " + response.result)
```
test_pb2.py and test_pb2_grpc.py come from this **.proto**:
```
syntax = "proto3";
package test;
message Request {
string string = 1;;
}
message RequestReply {
string result = 1;
}
service Grpc_Mq_Test {
rpc TextPrediction (Request) returns (RequestReply) {}
}
```
The server runs successfully, but the client raises an error. This is the **error**:
```
Traceback (most recent call last):
File "rpc_client.py", line 15, in <module>
run()
File "rpc_client.py", line 11, in run
response = stub.TextPrediction(test_pb2.Request(string = 'haa'))
File "/usr/local/python3/lib/python3.7/site-packages/grpc/_channel.py", line 604, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/python3/lib/python3.7/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Exception calling application: no default __reduce__ due to non-trivial __cinit__"
debug_error_string = "{"created":"********","description":"Error received from peer ipv4:127.0.0.1:50051","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Exception calling application: no default __reduce__ due to non-trivial __cinit__","grpc_status":2}"
>
```
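The `no default __reduce__ due to non-trivial __cinit__` message is Cython's complaint that an extension object cannot be pickled. A `BaseManager` proxy pickles every method argument and return value to ship it to the manager process, and the `context` object gRPC passes into `TextPrediction` wraps Cython internals, so the call most likely fails at that pickling step. A pattern that avoids this (a sketch with hypothetical names, not the library's prescribed API) is to proxy only plain-Python state and keep the servicer itself local to each worker:

```python
from multiprocessing.managers import BaseManager

class SharedCounter:
    """Plain-Python state; its method arguments and results are all picklable,
    so it is safe to expose through a manager proxy."""
    def __init__(self):
        self.count = 0

    def bump(self):
        self.count += 1
        return self.count

class StateManager(BaseManager):
    pass

StateManager.register("SharedCounter", SharedCounter)

if __name__ == "__main__":
    manager = StateManager()
    manager.start()
    shared = manager.SharedCounter()  # proxy: calls execute in the manager process
    # Each gRPC worker would hold `shared` and construct its own local
    # servicer around it, so unpicklable gRPC objects never cross the proxy.
    print(shared.bump(), shared.bump())
    manager.shutdown()
```

Applied to the code above: pass the proxy of a `TestClass`-like state object into `_run_server`, build `Grpc_Mq_Test` locally inside each worker around that proxy, and only picklable values (strings, numbers) cross the process boundary.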
When I **give up the BaseManager** and change the **sever_launcher** function of the server code like this:
```Python
def sever_launcher(grpc_bind_address):
textprediction = Grpc_Mq_Test()
workers = []
ALL_PROCESS_COUNT = 4
for _ in range(ALL_PROCESS_COUNT):
worker = multiprocessing.Process(target=_run_server, args=(grpc_bind_address, ALL_PROCESS_COUNT, textprediction,))
worker.start()
workers.append(worker)
for worker in workers:
worker.join()
```
The error disappears, and the client prints this string:
`Greeter client received: haa class_string = None`
### What did you expect to see?
Process memory can be shared.
I was troubled by this problem for several days. Thanks for any help!!!!
|
1.0
|
[python3] Get an error when I use Python's BaseManager to share process memory - <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc 1.24.3
python 3.7.0
### What operating system (Linux, Windows,...) and version?
CentOS Linux release 7.7.1908 (Core) - Linux version 3.10.0-693.el7.x86_64
### What runtime / compiler are you using (e.g. python version or version of gcc)
python 3.7.0
### What did you do?
I want to use Python's BaseManager to share process memory to reduce memory usage. The server runs successfully, but the client raises an error. My code is as follows:
**server code :**
```Python
import time
import multiprocessing
from concurrent import futures
from multiprocessing.managers import BaseManager
import grpc
import test_pb2
import test_pb2_grpc
class TestClass(object):
def __init__(self, test_string = "init"):
self.test_string = test_string
def add(self):
self.test_string += "a"
def get_num(self, sentence):
return sentence + " result_string = " + str(self.test_string)
class Grpc_Mq_Test(test_pb2_grpc.Grpc_Mq_TestServicer):
def __init__(self):
self.test_class = TestClass()
def TextPrediction(self, Request, context):
string = Request.string
return test_pb2.RequestReply(result = self.test_class.get_num(string))
def _run_server(grpc_bind_address, ALL_PROCESS_COUNT, Grpc_Mq_Test):
options = (('grpc.so_reuseport', 1),)
grpc_server = grpc.server(futures.ThreadPoolExecutor(max_workers = ALL_PROCESS_COUNT,),options = options)
test_pb2_grpc.add_Grpc_Mq_TestServicer_to_server(Grpc_Mq_Test, grpc_server)
grpc_server.add_insecure_port(grpc_bind_address)
grpc_server.start()
while True:
time.sleep(3600)
class MyManager(BaseManager):
pass
def sever_launcher(grpc_bind_address):
manager = MyManager()
manager.register('Grpc_Mq_Test', Grpc_Mq_Test)
manager.start()
textprediction = manager.Grpc_Mq_Test()
workers = []
ALL_PROCESS_COUNT = 4
for _ in range(ALL_PROCESS_COUNT):
worker = multiprocessing.Process(target=_run_server, args=(grpc_bind_address, ALL_PROCESS_COUNT, textprediction,))
worker.start()
workers.append(worker)
for worker in workers:
worker.join()
if __name__ == '__main__':
grpc_bind_address = "127.0.0.1:50051"
sever_launcher(grpc_bind_address)
```
**client code**
```Python
import grpc
import test_pb2
import test_pb2_grpc
with grpc.insecure_channel('127.0.0.1:50051') as channel:
stub = test_pb2_grpc.Grpc_Mq_TestStub(channel)
response = stub.TextPrediction(test_pb2.Request(string = 'test'))
print("Greeter client received: " + response.result)
```
test_pb2.py and test_pb2_grpc.py come from this **.proto**:
```
syntax = "proto3";
package test;
message Request {
string string = 1;;
}
message RequestReply {
string result = 1;
}
service Grpc_Mq_Test {
rpc TextPrediction (Request) returns (RequestReply) {}
}
```
The server runs successfully, but the client raises an error. This is the **error**:
```
Traceback (most recent call last):
File "rpc_client.py", line 15, in <module>
run()
File "rpc_client.py", line 11, in run
response = stub.TextPrediction(test_pb2.Request(string = 'haa'))
File "/usr/local/python3/lib/python3.7/site-packages/grpc/_channel.py", line 604, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/python3/lib/python3.7/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Exception calling application: no default __reduce__ due to non-trivial __cinit__"
debug_error_string = "{"created":"********","description":"Error received from peer ipv4:127.0.0.1:50051","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Exception calling application: no default __reduce__ due to non-trivial __cinit__","grpc_status":2}"
>
```
When I **give up the BaseManager** and change the **sever_launcher** function of the server code like this:
```Python
def sever_launcher(grpc_bind_address):
textprediction = Grpc_Mq_Test()
workers = []
ALL_PROCESS_COUNT = 4
for _ in range(ALL_PROCESS_COUNT):
worker = multiprocessing.Process(target=_run_server, args=(grpc_bind_address, ALL_PROCESS_COUNT, textprediction,))
worker.start()
workers.append(worker)
for worker in workers:
worker.join()
```
The error disappears, and the client prints this string:
`Greeter client received: haa class_string = None`
### What did you expect to see?
Process memory can be shared.
I was troubled by this problem for several days. Thanks for any help!!!!
|
non_process
|
get a error when i use python s basemanager to share process memory this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers here grpc io mailing list stackoverflow with grpc tag issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using grpc python what operating system linux windows and version centos linux release core linux version what runtime compiler are you using e g python version or version of gcc python what did you do i want to use python s basemanager to share process memory to reduce memory usage the server run successfully but the client raise a error my code is as follows: server code python import time import multiprocessing from concurrent import futures from multiprocessing managers import basemanager import grpc import test import test grpc class testclass object def init self test string init self test string test string def add self self test string a def get num self sentence return sentence result string str self test string class grpc mq test test grpc grpc mq testservicer def init self self test class testclass def textprediction self request context string request string return test requestreply result self test class get num string def run server grpc bind address all process count grpc mq test options grpc so reuseport grpc server grpc server futures threadpoolexecutor max workers all process count options options test grpc add grpc mq testservicer to server grpc mq test grpc server grpc server add insecure port grpc bind address grpc server start while true time sleep class mymanager basemanager pass def sever launcher grpc bind address manager mymanager manager register grpc mq test grpc mq test manager start textprediction manager grpc mq test workers all process count for in range all process count worker multiprocessing process target run server args 
grpc bind address all process count textprediction worker start workers append worker for worker in workers worker join if name main grpc bind address sever launcher grpc bind address client code python import grpc import test import test grpc with grpc insecure channel as channel stub test grpc grpc mq teststub channel response stub textprediction test request string test print greeter client received response result test py and test grpc py come from this proto syntax package test message request string string message requestreply string result service grpc mq test rpc textprediction request returns requestreply the server run successfully but the client raise a error this is error traceback most recent call last file rpc client py line in run file rpc client py line in run response stub textprediction test request string haa file usr local lib site packages grpc channel py line in call return end unary response blocking state call false none file usr local lib site packages grpc channel py line in end unary response blocking raise rendezvous state none none deadline grpc channel rendezvous rendezvous of rpc that terminated with status statuscode unknown details exception calling application no default reduce due to non trivial cinit debug error string created description error received from peer file src core lib surface call cc file line grpc message exception calling application no default reduce due to non trivial cinit grpc status when i give up the basemanager change the sever launcher function of server code like this python def sever launcher grpc bind address textprediction grpc mq test workers all process count for in range all process count worker multiprocessing process target run server args grpc bind address all process count textprediction worker start workers append worker for worker in workers worker join the error disappear client print this string greeter client received haa class string none what did you expect to see process memory can be 
shared i was troubled by this problem for several days thanks for your any help
| 0
|
22,260
| 30,810,969,862
|
IssuesEvent
|
2023-08-01 10:21:03
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Row chart not working on dashboard but it does when you click on it
|
Type:Bug Querying/Parameters & Variables .Backend .Team/QueryProcessor :hammer_and_wrench:
|
### Describe the bug
On dashboard 494 of our stats instance, there's a row chart that doesn't render, but it does when you click on it: Breakdown of ticket solve times by business day
### To Reproduce
Unfortunately I can't reproduce
### Expected behavior
_No response_
### Logs
All BE
```
[2210b034-c7bf-4952-83e1-133a4dd7ef88] 2023-07-21T21:02:38-03:00 ERROR metabase.query-processor.middleware.catch-exceptions Error processing query: Cannot determine the source table or query for Field clause [:field 277973 {:temporal-unit :default}]
{:database_id 26,
:started_at #t "2023-07-22T00:02:37.020899Z[GMT]",
:action_id nil,
:error_type :invalid-query,
:json_query
{:constraints {:max-results 10000, :max-results-bare-rows 2000},
:type :query,
:middleware {:js-int-to-string? true, :ignore-cached-results? false},
:database 26,
:query
{:source-table 37922,
:expressions
{"Business day solved time"
[:case
[[[:< [:field 559042 nil] 540] "1: Less than one business day"]
[[:between [:field 559042 nil] 540 1620] "2: 1 - 3 business days"]
[[:between [:field 559042 nil] 1621 2700] "3: 3 - 5 business days"]
[[:> [:field 559042 nil] 2700] "4: Greater than a week"]]
{:default "Empty"}]},
:aggregation [[:distinct [:field 559034 {:base-type :type/BigInteger}]]],
:breakout [[:expression "Business day solved time"]],
:filter
[:and
[:= [:field 278001 {:source-field 559034}] 360003320414]
[:= [:field 278018 {:source-field 559034}] "closed" "solved"]]},
:parameters
[{:type :date/all-options, :value "past7days", :id "ce19ef12", :target [:dimension [:field 277973 nil]]}],
:async? true,
:cache-ttl 3600},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> query_processor.util.add_alias_info$field_source_table_alias.invokeStatic(add_alias_info.clj:184)"
"query_processor.util.add_alias_info$field_source_table_alias.invoke(add_alias_info.clj:173)"
"query_processor.util.add_alias_info$fn__69447.invokeStatic(add_alias_info.clj:330)"
"query_processor.util.add_alias_info$fn__69447.invoke(add_alias_info.clj:327)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491$fn__69492.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"mbql.util.match.impl$replace_in_collection.invokeStatic(impl.cljc:47)"
"mbql.util.match.impl$replace_in_collection.invoke(impl.cljc:38)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"mbql.util.match.impl$replace_in_collection.invokeStatic(impl.cljc:47)"
"mbql.util.match.impl$replace_in_collection.invoke(impl.cljc:38)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"mbql.util.match.impl$replace_in_collection$iter__30881__30885$fn__30886.invoke(impl.cljc:44)"
"mbql.util.match.impl$replace_in_collection.invokeStatic(impl.cljc:43)"
"mbql.util.match.impl$replace_in_collection.invoke(impl.cljc:38)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_.invokeStatic(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_.invoke(add_alias_info.clj:382)"
"query_processor.util.add_alias_info$add_alias_info$fn__69501.invoke(add_alias_info.clj:429)"
"query_processor.util.add_alias_info$add_alias_info.invokeStatic(add_alias_info.clj:424)"
"query_processor.util.add_alias_info$add_alias_info.invoke(add_alias_info.clj:395)"
"driver.sql.query_processor$fn__71536.invokeStatic(query_processor.clj:1436)"
"driver.sql.query_processor$fn__71536.invoke(query_processor.clj:1434)"
"driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1445)"
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1438)"
"driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1456)"
"driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1452)"
"driver.sql$fn__88895.invokeStatic(sql.clj:42)"
"driver.sql$fn__88895.invoke(sql.clj:40)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__76461.invoke(mbql_to_native.clj:21)"
"query_processor$fn__78569$combined_post_process__78574$combined_post_process_STAR___78575.invoke(query_processor.clj:260)"
"query_processor$fn__78569$combined_pre_process__78570$combined_pre_process_STAR___78571.invoke(query_processor.clj:257)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__77247$fn__77252.invoke(resolve_database_and_driver.clj:36)"
"driver$do_with_driver.invokeStatic(driver.clj:91)"
"driver$do_with_driver.invoke(driver.clj:86)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__77247.invoke(resolve_database_and_driver.clj:35)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__72962.invoke(fetch_source_query.clj:316)"
"query_processor.middleware.store$initialize_store$fn__73143$fn__73144.invoke(store.clj:12)"
"query_processor.store$do_with_store.invokeStatic(store.clj:56)"
"query_processor.store$do_with_store.invoke(store.clj:50)"
"query_processor.middleware.store$initialize_store$fn__73143.invoke(store.clj:11)"
"query_processor.middleware.normalize_query$normalize$fn__77543.invoke(normalize_query.clj:36)"
"metabase_enterprise.audit_app.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__85519.invoke(handle_audit_queries.clj:131)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__74937.invoke(constraints.clj:54)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__77472.invoke(process_userland_query.clj:152)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__77869.invoke(catch_exceptions.clj:171)"
"query_processor.reducible$async_qp$qp_STAR___67158$thunk__67160.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___67158$fn__67162.invoke(reducible.clj:108)"],
:card_id 4628,
:context :dashboard,
:error "Cannot determine the source table or query for Field clause [:field 277973 {:temporal-unit :default}]",
:row_count 0,
:running_time 0,
:preprocessed
{:constraints {:max-results 10000, :max-results-bare-rows 2000},
:type :query,
:middleware {:js-int-to-string? true, :ignore-cached-results? false},
:user-parameters
[{:type :date/all-options, :value "past7days", :id "ce19ef12", :target [:dimension [:field 277973 nil]]}],
:info
{:executed-by 59,
:context :dashboard,
:card-id 4628,
:card-name "Breakdown of ticket solve times by business day",
:dashboard-id 494},
:database 26,
:query
{:source-table 37922,
:expressions
{"Business day solved time"
[:case
[[[:<
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"1: Less than one business day"]
[[:between
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
1620
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"2: 1 - 3 business days"]
[[:between
[:field 559042 nil]
[:value
1621
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"3: 3 - 5 business days"]
[[:>
[:field 559042 nil]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"4: Greater than a week"]]
{:default "Empty"}]},
:aggregation [[:aggregation-options [:distinct [:field 559034 {:base-type :type/BigInteger}]] {:name "count"}]],
:breakout [[:expression "Business day solved time"]],
:filter
[:and
[:=
[:field 278001 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
360003320414
{:base_type :type/BigInteger,
:effective_type :type/BigInteger,
:coercion_strategy nil,
:semantic_type :type/FK,
:database_type "int8",
:name "group_id"}]]
[:or
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"closed"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"solved"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]]
[:>= [:field 277973 {:temporal-unit :default}] [:relative-datetime -7 :day]]
[:< [:field 277973 {:temporal-unit :default}] [:relative-datetime 0 :day]]],
:order-by [[:asc [:expression "Business day solved time"]]],
:joins
[{:alias "zendesk_ticket__via__ticket_id",
:strategy :left-join,
:condition [:= [:field 559034 nil] [:field 547803 {:join-alias "zendesk_ticket__via__ticket_id"}]],
:source-table 37240,
:fk-field-id 559034}]},
:async? true,
:cache-ttl 3600},
:ex-data
{:type :invalid-query,
:clause [:field 277973 {:temporal-unit :default}],
:query
{:source-table 37922,
:expressions
{"Business day solved time"
[:case
[[[:<
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"1: Less than one business day"]
[[:between
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
1620
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"2: 1 - 3 business days"]
[[:between
[:field 559042 nil]
[:value
1621
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"3: 3 - 5 business days"]
[[:>
[:field 559042 nil]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"4: Greater than a week"]]
{:default "Empty"}]},
:aggregation [[:aggregation-options [:distinct [:field 559034 {:base-type :type/BigInteger}]] {:name "count"}]],
:breakout [[:expression "Business day solved time"]],
:filter
[:and
[:=
[:field 278001 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
360003320414
{:base_type :type/BigInteger,
:effective_type :type/BigInteger,
:coercion_strategy nil,
:semantic_type :type/FK,
:database_type "int8",
:name "group_id"}]]
[:or
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"closed"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"solved"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]]
[:>= [:field 277973 {:temporal-unit :default}] [:relative-datetime -7 :day]]
[:< [:field 277973 {:temporal-unit :default}] [:relative-datetime 0 :day]]],
:order-by [[:asc [:expression "Business day solved time"]]],
:joins
[{:alias "zendesk_ticket__via__ticket_id",
:strategy :left-join,
:condition [:= [:field 559034 nil] [:field 547803 {:join-alias "zendesk_ticket__via__ticket_id"}]],
:source-table 37240,
:fk-field-id 559034}]}},
:data {:rows [], :cols []}}
```
### Information about your Metabase installation
```JSON
master
```
### Severity
P2
### Additional context
_No response_
|
1.0
|
Row chart not working on dashboard but it does when you click on it - ### Describe the bug
On dashboard 494 of our stats instance, there's a row chart that doesn't render, but it does when you click on it: Breakdown of ticket solve times by business day
### To Reproduce
Unfortunately I can't reproduce
### Expected behavior
_No response_
### Logs
All BE
```
[2210b034-c7bf-4952-83e1-133a4dd7ef88] 2023-07-21T21:02:38-03:00 ERROR metabase.query-processor.middleware.catch-exceptions Error processing query: Cannot determine the source table or query for Field clause [:field 277973 {:temporal-unit :default}]
{:database_id 26,
:started_at #t "2023-07-22T00:02:37.020899Z[GMT]",
:action_id nil,
:error_type :invalid-query,
:json_query
{:constraints {:max-results 10000, :max-results-bare-rows 2000},
:type :query,
:middleware {:js-int-to-string? true, :ignore-cached-results? false},
:database 26,
:query
{:source-table 37922,
:expressions
{"Business day solved time"
[:case
[[[:< [:field 559042 nil] 540] "1: Less than one business day"]
[[:between [:field 559042 nil] 540 1620] "2: 1 - 3 business days"]
[[:between [:field 559042 nil] 1621 2700] "3: 3 - 5 business days"]
[[:> [:field 559042 nil] 2700] "4: Greater than a week"]]
{:default "Empty"}]},
:aggregation [[:distinct [:field 559034 {:base-type :type/BigInteger}]]],
:breakout [[:expression "Business day solved time"]],
:filter
[:and
[:= [:field 278001 {:source-field 559034}] 360003320414]
[:= [:field 278018 {:source-field 559034}] "closed" "solved"]]},
:parameters
[{:type :date/all-options, :value "past7days", :id "ce19ef12", :target [:dimension [:field 277973 nil]]}],
:async? true,
:cache-ttl 3600},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> query_processor.util.add_alias_info$field_source_table_alias.invokeStatic(add_alias_info.clj:184)"
"query_processor.util.add_alias_info$field_source_table_alias.invoke(add_alias_info.clj:173)"
"query_processor.util.add_alias_info$fn__69447.invokeStatic(add_alias_info.clj:330)"
"query_processor.util.add_alias_info$fn__69447.invoke(add_alias_info.clj:327)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491$fn__69492.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"mbql.util.match.impl$replace_in_collection.invokeStatic(impl.cljc:47)"
"mbql.util.match.impl$replace_in_collection.invoke(impl.cljc:38)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"mbql.util.match.impl$replace_in_collection.invokeStatic(impl.cljc:47)"
"mbql.util.match.impl$replace_in_collection.invoke(impl.cljc:38)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"mbql.util.match.impl$replace_in_collection$iter__30881__30885$fn__30886.invoke(impl.cljc:44)"
"mbql.util.match.impl$replace_in_collection.invokeStatic(impl.cljc:43)"
"mbql.util.match.impl$replace_in_collection.invoke(impl.cljc:38)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484$fn__69491.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_$replace_69483__69484.invoke(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_.invokeStatic(add_alias_info.clj:385)"
"query_processor.util.add_alias_info$add_alias_info_STAR_.invoke(add_alias_info.clj:382)"
"query_processor.util.add_alias_info$add_alias_info$fn__69501.invoke(add_alias_info.clj:429)"
"query_processor.util.add_alias_info$add_alias_info.invokeStatic(add_alias_info.clj:424)"
"query_processor.util.add_alias_info$add_alias_info.invoke(add_alias_info.clj:395)"
"driver.sql.query_processor$fn__71536.invokeStatic(query_processor.clj:1436)"
"driver.sql.query_processor$fn__71536.invoke(query_processor.clj:1434)"
"driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1445)"
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1438)"
"driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1456)"
"driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1452)"
"driver.sql$fn__88895.invokeStatic(sql.clj:42)"
"driver.sql$fn__88895.invoke(sql.clj:40)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__76461.invoke(mbql_to_native.clj:21)"
"query_processor$fn__78569$combined_post_process__78574$combined_post_process_STAR___78575.invoke(query_processor.clj:260)"
"query_processor$fn__78569$combined_pre_process__78570$combined_pre_process_STAR___78571.invoke(query_processor.clj:257)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__77247$fn__77252.invoke(resolve_database_and_driver.clj:36)"
"driver$do_with_driver.invokeStatic(driver.clj:91)"
"driver$do_with_driver.invoke(driver.clj:86)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__77247.invoke(resolve_database_and_driver.clj:35)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__72962.invoke(fetch_source_query.clj:316)"
"query_processor.middleware.store$initialize_store$fn__73143$fn__73144.invoke(store.clj:12)"
"query_processor.store$do_with_store.invokeStatic(store.clj:56)"
"query_processor.store$do_with_store.invoke(store.clj:50)"
"query_processor.middleware.store$initialize_store$fn__73143.invoke(store.clj:11)"
"query_processor.middleware.normalize_query$normalize$fn__77543.invoke(normalize_query.clj:36)"
"metabase_enterprise.audit_app.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__85519.invoke(handle_audit_queries.clj:131)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__74937.invoke(constraints.clj:54)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__77472.invoke(process_userland_query.clj:152)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__77869.invoke(catch_exceptions.clj:171)"
"query_processor.reducible$async_qp$qp_STAR___67158$thunk__67160.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___67158$fn__67162.invoke(reducible.clj:108)"],
:card_id 4628,
:context :dashboard,
:error "Cannot determine the source table or query for Field clause [:field 277973 {:temporal-unit :default}]",
:row_count 0,
:running_time 0,
:preprocessed
{:constraints {:max-results 10000, :max-results-bare-rows 2000},
:type :query,
:middleware {:js-int-to-string? true, :ignore-cached-results? false},
:user-parameters
[{:type :date/all-options, :value "past7days", :id "ce19ef12", :target [:dimension [:field 277973 nil]]}],
:info
{:executed-by 59,
:context :dashboard,
:card-id 4628,
:card-name "Breakdown of ticket solve times by business day",
:dashboard-id 494},
:database 26,
:query
{:source-table 37922,
:expressions
{"Business day solved time"
[:case
[[[:<
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"1: Less than one business day"]
[[:between
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
1620
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"2: 1 - 3 business days"]
[[:between
[:field 559042 nil]
[:value
1621
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"3: 3 - 5 business days"]
[[:>
[:field 559042 nil]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"4: Greater than a week"]]
{:default "Empty"}]},
:aggregation [[:aggregation-options [:distinct [:field 559034 {:base-type :type/BigInteger}]] {:name "count"}]],
:breakout [[:expression "Business day solved time"]],
:filter
[:and
[:=
[:field 278001 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
360003320414
{:base_type :type/BigInteger,
:effective_type :type/BigInteger,
:coercion_strategy nil,
:semantic_type :type/FK,
:database_type "int8",
:name "group_id"}]]
[:or
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"closed"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"solved"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]]
[:>= [:field 277973 {:temporal-unit :default}] [:relative-datetime -7 :day]]
[:< [:field 277973 {:temporal-unit :default}] [:relative-datetime 0 :day]]],
:order-by [[:asc [:expression "Business day solved time"]]],
:joins
[{:alias "zendesk_ticket__via__ticket_id",
:strategy :left-join,
:condition [:= [:field 559034 nil] [:field 547803 {:join-alias "zendesk_ticket__via__ticket_id"}]],
:source-table 37240,
:fk-field-id 559034}]},
:async? true,
:cache-ttl 3600},
:ex-data
{:type :invalid-query,
:clause [:field 277973 {:temporal-unit :default}],
:query
{:source-table 37922,
:expressions
{"Business day solved time"
[:case
[[[:<
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"1: Less than one business day"]
[[:between
[:field 559042 nil]
[:value
540
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
1620
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"2: 1 - 3 business days"]
[[:between
[:field 559042 nil]
[:value
1621
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"3: 3 - 5 business days"]
[[:>
[:field 559042 nil]
[:value
2700
{:base_type :type/BigInteger,
:effective_type :type/Decimal,
:coercion_strategy nil,
:semantic_type nil,
:database_type "int8",
:name "full_resolution_time_in_minutes_business"}]]
"4: Greater than a week"]]
{:default "Empty"}]},
:aggregation [[:aggregation-options [:distinct [:field 559034 {:base-type :type/BigInteger}]] {:name "count"}]],
:breakout [[:expression "Business day solved time"]],
:filter
[:and
[:=
[:field 278001 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
360003320414
{:base_type :type/BigInteger,
:effective_type :type/BigInteger,
:coercion_strategy nil,
:semantic_type :type/FK,
:database_type "int8",
:name "group_id"}]]
[:or
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"closed"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]
[:=
[:field 278018 {:source-field 559034, :join-alias "zendesk_ticket__via__ticket_id"}]
[:value
"solved"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Category,
:database_type "text",
:name "status"}]]]
[:>= [:field 277973 {:temporal-unit :default}] [:relative-datetime -7 :day]]
[:< [:field 277973 {:temporal-unit :default}] [:relative-datetime 0 :day]]],
:order-by [[:asc [:expression "Business day solved time"]]],
:joins
[{:alias "zendesk_ticket__via__ticket_id",
:strategy :left-join,
:condition [:= [:field 559034 nil] [:field 547803 {:join-alias "zendesk_ticket__via__ticket_id"}]],
:source-table 37240,
:fk-field-id 559034}]}},
:data {:rows [], :cols []}}
```
### Information about your Metabase installation
```JSON
master
```
### Severity
P2
### Additional context
_No response_
|
process
|
row chart not working on dashboard but it does when you click on it describe the bug on dashboard of our stats instance there s a row chart that doesn t render but it does when you click on it breakdown of ticket solve times by business day to reproduce unfortunately i can t reproduce expected behavior no response logs all be error metabase query processor middleware catch exceptions error processing query cannot determine the source table or query for field clause database id started at t action id nil error type invalid query json query constraints max results max results bare rows type query middleware js int to string true ignore cached results false database query source table expressions business day solved time case less than one business day business days business days greater than a week default empty aggregation breakout filter and closed solved parameters async true cache ttl native nil status failed class clojure lang exceptioninfo stacktrace query processor util add alias info field source table alias invokestatic add alias info clj query processor util add alias info field source table alias invoke add alias info clj query processor util add alias info fn invokestatic add alias info clj query processor util add alias info fn invoke add alias info clj query processor util add alias info add alias info star replace fn fn invoke add alias info clj query processor util add alias info add alias info star replace fn invoke add alias info clj query processor util add alias info add alias info star replace invoke add alias info clj mbql util match impl replace in collection invokestatic impl cljc mbql util match impl replace in collection invoke impl cljc query processor util add alias info add alias info star replace fn invoke add alias info clj query processor util add alias info add alias info star replace invoke add alias info clj mbql util match impl replace in collection invokestatic impl cljc mbql util match impl replace in collection invoke impl cljc 
query processor util add alias info add alias info star replace fn invoke add alias info clj query processor util add alias info add alias info star replace invoke add alias info clj mbql util match impl replace in collection iter fn invoke impl cljc mbql util match impl replace in collection invokestatic impl cljc mbql util match impl replace in collection invoke impl cljc query processor util add alias info add alias info star replace fn invoke add alias info clj query processor util add alias info add alias info star replace invoke add alias info clj query processor util add alias info add alias info star invokestatic add alias info clj query processor util add alias info add alias info star invoke add alias info clj query processor util add alias info add alias info fn invoke add alias info clj query processor util add alias info add alias info invokestatic add alias info clj query processor util add alias info add alias info invoke add alias info clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor mbql gt honeysql invokestatic query processor clj driver sql query processor mbql gt honeysql invoke query processor clj driver sql query processor mbql gt native invokestatic query processor clj driver sql query processor mbql gt native invoke query processor clj driver sql fn invokestatic sql clj driver sql fn invoke sql clj query processor middleware mbql to native query gt native form invokestatic mbql to native clj query processor middleware mbql to native query gt native form invoke mbql to native clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor fn combined post process combined post process star invoke query processor clj query processor fn combined pre process combined pre process star invoke query processor clj query processor middleware resolve database and driver resolve database and driver fn fn invoke 
resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware normalize query normalize fn invoke normalize query clj metabase enterprise audit app query processor middleware handle audit queries handle internal queries fn invoke handle audit queries clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star fn invoke reducible clj card id context dashboard error cannot determine the source table or query for field clause row count running time preprocessed constraints max results max results bare rows type query middleware js int to string true ignore cached results false user parameters info executed by context dashboard card id card name breakdown of ticket solve times by business day dashboard id database query source table expressions business day solved time case value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business less than one business day between value base type type biginteger effective type type decimal coercion strategy nil semantic 
type nil database type name full resolution time in minutes business value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business business days between value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business business days value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business greater than a week default empty aggregation name count breakout filter and value base type type biginteger effective type type biginteger coercion strategy nil semantic type type fk database type name group id or value closed base type type text effective type type text coercion strategy nil semantic type type category database type text name status value solved base type type text effective type type text coercion strategy nil semantic type type category database type text name status order by joins alias zendesk ticket via ticket id strategy left join condition source table fk field id async true cache ttl ex data type invalid query clause query source table expressions business day solved time case value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business less than one business day between value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business business days between 
value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business business days value base type type biginteger effective type type decimal coercion strategy nil semantic type nil database type name full resolution time in minutes business greater than a week default empty aggregation name count breakout filter and value base type type biginteger effective type type biginteger coercion strategy nil semantic type type fk database type name group id or value closed base type type text effective type type text coercion strategy nil semantic type type category database type text name status value solved base type type text effective type type text coercion strategy nil semantic type type category database type text name status order by joins alias zendesk ticket via ticket id strategy left join condition source table fk field id data rows cols information about your metabase installation json master severity additional context no response
| 1
|
29,165
| 13,057,511,736
|
IssuesEvent
|
2020-07-30 07:28:28
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
Changes related to dedicated host group automatic placement
|
Compute Feature Request Service Team Support Request
|
**Resource Provider**
<!--- What is the Azure resource provider your feature is part of? --->
Microsoft.Compute
**Description of Feature or Work Requested**
<!--- Provide a brief description of the feature or work requested. A link to conceptual documentation may be helpful too. --->
Today customers can place a VM on a dedicated host, by providing the ARM resource id of the dedicated host in the VM input.
The new feature is to support automatic placement of VMs and VMScaleSets on a dedicated host group. This means in the VM/VMScaleSet input, customers can specify the ARM resource id of the dedicated host group, on which they want their VMs/VMScaleSets to be placed, and Azure will select an appropriate dedicated host under the dedicated host group for each VM and allocate the VM.
There are four major changes related to this feature:
1. "supportAutomaticPlacement" is a new property introduced under DedicatedHostGroup.properties. Customers can specify either true or false. It defaults to true, if no input is provided. If a dedicated host group has automatic placement enabled, VMs or VMScaleSets can be placed on the dedicated host group using automatic placement.
2. GET dedicated host group now supports instance view query parameter. Dedicated host group instance view will return instance view of the dedicated hosts under the dedicated host group.
3. "hostGroup" is a new property introduced under VM.properties and VMScaleSet.properties. The input should be the resource id of the dedicated host group, on which the customer wants his VM/VMScaleSet placed using automatic placement.
4. "assignedHost" is a new property introduced under VMInstanceView and VMScaleSetVMInstanceView. It is the resource id of the dedicated host, on which the queried VM/VMScaleSetVM is placed using atuomatic placement.
Design document: https://microsoft.sharepoint.com/:w:/t/ComputeVM/EV1bTk0XSvNLtvDnLn9PSpoBH2NvlfAFQhduoCSJvatreg?e=6AiqWp
**Minimum API Version Required**
<!--- What is the minimum API version of your service required to implement your feature? --->
2020-06-01
**Swagger Link**
<!--- Provide a link to the location of your feature(s) in the REST API specs repo. If your feature(s) has corresponding commit or pull request in the REST API specs repo, provide them. This should be on the master branch of the REST API specs repo. --->
https://github.com/Azure/azure-rest-api-specs/pull/9684
**Target Date**
<!--- If you have a target date for release of this feature/work, please provide it. While we can't guarantee these dates,
it will help us prioritize your request against other requests. --->
August 2020
|
1.0
|
Changes related to dedicated host group automatic placement - **Resource Provider**
<!--- What is the Azure resource provider your feature is part of? --->
Microsoft.Compute
**Description of Feature or Work Requested**
<!--- Provide a brief description of the feature or work requested. A link to conceptual documentation may be helpful too. --->
Today customers can place a VM on a dedicated host, by providing the ARM resource id of the dedicated host in the VM input.
The new feature is to support automatic placement of VMs and VMScaleSets on a dedicated host group. This means in the VM/VMScaleSet input, customers can specify the ARM resource id of the dedicated host group, on which they want their VMs/VMScaleSets to be placed, and Azure will select an appropriate dedicated host under the dedicated host group for each VM and allocate the VM.
There are four major changes related to this feature:
1. "supportAutomaticPlacement" is a new property introduced under DedicatedHostGroup.properties. Customers can specify either true or false. It defaults to true, if no input is provided. If a dedicated host group has automatic placement enabled, VMs or VMScaleSets can be placed on the dedicated host group using automatic placement.
2. GET dedicated host group now supports instance view query parameter. Dedicated host group instance view will return instance view of the dedicated hosts under the dedicated host group.
3. "hostGroup" is a new property introduced under VM.properties and VMScaleSet.properties. The input should be the resource id of the dedicated host group, on which the customer wants his VM/VMScaleSet placed using automatic placement.
4. "assignedHost" is a new property introduced under VMInstanceView and VMScaleSetVMInstanceView. It is the resource id of the dedicated host, on which the queried VM/VMScaleSetVM is placed using atuomatic placement.
Design document: https://microsoft.sharepoint.com/:w:/t/ComputeVM/EV1bTk0XSvNLtvDnLn9PSpoBH2NvlfAFQhduoCSJvatreg?e=6AiqWp
**Minimum API Version Required**
<!--- What is the minimum API version of your service required to implement your feature? --->
2020-06-01
**Swagger Link**
<!--- Provide a link to the location of your feature(s) in the REST API specs repo. If your feature(s) has corresponding commit or pull request in the REST API specs repo, provide them. This should be on the master branch of the REST API specs repo. --->
https://github.com/Azure/azure-rest-api-specs/pull/9684
**Target Date**
<!--- If you have a target date for release of this feature/work, please provide it. While we can't guarantee these dates,
it will help us prioritize your request against other requests. --->
August 2020
|
non_process
|
changes related to dedicated host group automatic placement resource provider microsoft compute description of feature or work requested today customers can place a vm on a dedicated host by providing the arm resource id of the dedicated host in the vm input the new feature is to support automatic placement of vms and vmscalesets on a dedicated host group this means in the vm vmscaleset input customers can specify the arm resource id of the dedicated host group on which they want their vms vmscalesets to be placed and azure will select an appropriate dedicated host under the dedicated host group for each vm and allocate the vm there are four major changes related to this feature supportautomaticplacement is a new property introduced under dedicatedhostgroup properties customers can specify either true or false it defaults to true if no input is provided if a dedicated host group has automatic placement enabled vms or vmscalesets can be placed on the dedicated host group using automatic placement get dedicated host group now supports instance view query parameter dedicated host group instance view will return instance view of the dedicated hosts under the dedicated host group hostgroup is a new property introduced under vm properties and vmscaleset properties the input should be the resource id of the dedicated host group on which the customer wants his vm vmscaleset placed using automatic placement assignedhost is a new property introduced under vminstanceview and vmscalesetvminstanceview it is the resource id of the dedicated host on which the queried vm vmscalesetvm is placed using automatic placement design document minimum api version required swagger link target date if you have a target date for release of this feature work please provide it while we can t guarantee these dates it will help us prioritize your request against other requests august
| 0
|
529,510
| 15,390,218,454
|
IssuesEvent
|
2021-03-03 13:08:58
|
mantidproject/mantid
|
https://api.github.com/repos/mantidproject/mantid
|
closed
|
ConvertToMD other dimensions enhancement
|
Framework Low Priority Stale
|
This issue was originally [TRAC 8027](http://trac.mantidproject.org/mantid/ticket/8027)
Load HYS_13656_event, rebin with a step of 10 preserving events, then ConvertToMD, using CopyToMD, and OtherDimensions='s1'. It would be nice if MD events were associated with the correct log value, instead of the first log value.
|
1.0
|
ConvertToMD other dimensions enhancement - This issue was originally [TRAC 8027](http://trac.mantidproject.org/mantid/ticket/8027)
Load HYS_13656_event, rebin with a step of 10 preserving events, then ConvertToMD, using CopyToMD, and OtherDimensions='s1'. It would be nice if MD events were associated with the correct log value, instead of the first log value.
|
non_process
|
converttomd other dimensions enhancement this issue was originally load hys event rebin with a step of preserving events then converttomd using copytomd and otherdimensions it would be nice if md events were associated with the correct log value instead of the first log value
| 0
|
9,775
| 12,794,019,145
|
IssuesEvent
|
2020-07-02 05:52:06
|
jupyter/nbconvert
|
https://api.github.com/repos/jupyter/nbconvert
|
closed
|
Add support for new nbformat execution timing metadata
|
Preprocessor:Execute help wanted
|
<!--- Is your feature request related to a problem? What are you trying to achieve? --->
https://github.com/jupyter/nbformat/pull/144 was recently merged in nbformat, we should support this improvement in the execute_preprocessor (or in the project it lands in if it's being lifted out).
<!--- Describe the solution you'd like. --->
Record the kernel message timestamps in the notebook metadata.
<!--- If possible, describe alternatives you've considered. Why are they insufficient? --->
|
1.0
|
Add support for new nbformat execution timing metadata - <!--- Is your feature request related to a problem? What are you trying to achieve? --->
https://github.com/jupyter/nbformat/pull/144 was recently merged in nbformat, we should support this improvement in the execute_preprocessor (or in the project it lands in if it's being lifted out).
<!--- Describe the solution you'd like. --->
Record the kernel message timestamps in the notebook metadata.
<!--- If possible, describe alternatives you've considered. Why are they insufficient? --->
|
process
|
add support for new nbformat execution timing metadata was recently merged in nbformat we should support this improvement in the execute preprocessor or in the project it lands in if it s being lifted out record the kernel message timestamps in the notebook metadata
| 1
|
24,213
| 3,924,621,042
|
IssuesEvent
|
2016-04-22 15:48:28
|
googlei18n/libphonenumber
|
https://api.github.com/repos/googlei18n/libphonenumber
|
reopened
|
Not validating service numbers of India
|
priority-medium type-defect
|
Imported from [Google Code issue #334](https://code.google.com/p/libphonenumber/issues/detail?id=334) created by [nayak.chandra1](https://code.google.com/u/103360693919767078519/) on 2013-08-06T11:15:54.000Z:
----
<b>What steps will reproduce the problem?</b>
1. Go to "Phone Number Parser Demo" page, under "Demo" section.
2. Give any emergency numbers like 100(local police), 101(fire service) etc
3. Result shows Invalid Number.
<b>What is the expected output? What do you see instead?</b>
1. It should show the given number as a valid one.
2. The getNumberType() should be EMERGENCY.
<b>What version of the product are you using? On what operating system?</b>
<b>Please provide any additional information below.</b>
|
1.0
|
Not validating service numbers of India - Imported from [Google Code issue #334](https://code.google.com/p/libphonenumber/issues/detail?id=334) created by [nayak.chandra1](https://code.google.com/u/103360693919767078519/) on 2013-08-06T11:15:54.000Z:
----
<b>What steps will reproduce the problem?</b>
1. Go to "Phone Number Parser Demo" page, under "Demo" section.
2. Give any emergency numbers like 100(local police), 101(fire service) etc
3. Result shows Invalid Number.
<b>What is the expected output? What do you see instead?</b>
1. It should show the given number as a valid one.
2. The getNumberType() should be EMERGENCY.
<b>What version of the product are you using? On what operating system?</b>
<b>Please provide any additional information below.</b>
|
non_process
|
not validating service numbers of india imported from created by on what steps will reproduce the problem go to quot phone number parser demo quot page under quot demo quot section give any emergency numbers like local police fire service etc result shows invalid number what is the expected output what do you see instead it should show the given number as a valid one the getnumbertype should be emergency what version of the product are you using on what operating system please provide any additional information below
| 0
|
582,248
| 17,356,974,780
|
IssuesEvent
|
2021-07-29 15:25:21
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Make "Refresh" button in brave://rewards-internals > Promotions page actually check for new promotions
|
OS/Android OS/Desktop QA/Yes feature/rewards priority/P3
|
## Description
Currently, there is no way to force the browser to check/try and fetch any available promotions (claim buttons). In brave://rewards-internals > Promotions, there is a "Refresh" button, but it doesn't seem to actually check the promotions endpoint.
This would be very helpful during and for QA.
**Edit:** This should be restricted to working only against _staging_.
## Solution
Make the "Refresh" button actually force a check to the promotions endpoint.
cc: @LaurenWags
|
1.0
|
Make "Refresh" button in brave://rewards-internals > Promotions page actually check for new promotions - ## Description
Currently, there is no way to force the browser to check/try and fetch any available promotions (claim buttons). In brave://rewards-internals > Promotions, there is a "Refresh" button, but it doesn't seem to actually check the promotions endpoint.
This would be very helpful during and for QA.
**Edit:** This should be restricted to working only against _staging_.
## Solution
Make the "Refresh" button actually force a check to the promotions endpoint.
cc: @LaurenWags
|
non_process
|
make refresh button in brave rewards internals promotions page actually check for new promotions description currently there is no way to force the browser to check try and fetch any available promotions claim buttons in brave rewards internals promotions there is a refresh button but it doesn t seem to actually check the promotions endpoint this would be very helpful during and for qa edit this should be restricted to working only against staging solution make the refresh button actually force a check to the promotions endpoint cc laurenwags
| 0
|
88,020
| 25,281,481,510
|
IssuesEvent
|
2022-11-16 16:04:38
|
intel/media-driver
|
https://api.github.com/repos/intel/media-driver
|
closed
|
[Bug]: Cannot build on musl
|
Build Common
|
### Which component impacted?
Build
### Is it regression? Good in old configuration?
_No response_
### What happened?
Using musl as libc, I am not able to build because execinfo.h is not found.
### What's the usage scenario when you are seeing the problem?
Others
### What impacted?
_No response_
### Debug Information
Here is the output of the build :
media_softlet/linux/common/os/osservice/mos_utilities_specific.cpp:37:10: fatal error: execinfo.h: No such file or directory
### Do you want to contribute a patch to fix the issue?
[Here](https://raw.githubusercontent.com/void-linux/void-packages/master/srcpkgs/intel-media-driver/patches/execinfo.patch) is a patch from void linux. It doesn't work on master, but I'll try to edit it.
|
1.0
|
[Bug]: Cannot build on musl - ### Which component impacted?
Build
### Is it regression? Good in old configuration?
_No response_
### What happened?
Using musl as libc, I am not able to build because execinfo.h is not found.
### What's the usage scenario when you are seeing the problem?
Others
### What impacted?
_No response_
### Debug Information
Here is the output of the build :
media_softlet/linux/common/os/osservice/mos_utilities_specific.cpp:37:10: fatal error: execinfo.h: No such file or directory
### Do you want to contribute a patch to fix the issue?
[Here](https://raw.githubusercontent.com/void-linux/void-packages/master/srcpkgs/intel-media-driver/patches/execinfo.patch) is a patch from void linux. It doesn't work on master, but I'll try to edit it.
|
non_process
|
cannot build on musl which component impacted build is it regression good in old configuration no response what happened using musl as libc i am not able to build because execinfo h is not found what s the usage scenario when you are seeing the problem others what impacted no response debug information here is the output of the build media softlet linux common os osservice mos utilities specific cpp fatal error execinfo h no such file or directory do you want to contribute a patch to fix the issue is a patch from void linux it doesn t work on master but i ll try to edit it
| 0
|
260,592
| 8,212,150,250
|
IssuesEvent
|
2018-09-04 15:32:26
|
opentargets/genetics
|
https://api.github.com/repos/opentargets/genetics
|
closed
|
n_cases is missing from raw data
|
Kind: Bug Priority: Medium Status: New
|
The column `n_cases` comes empty from studies raw data file for `v2d` data.
```
SELECT count(*)
FROM ot.v2d
WHERE isNotNull(n_cases)
┌─count()─┐
│ 0 │
└─────────┘
1 rows in set. Elapsed: 0.006 sec. Processed 3.08 million rows, 15.40 MB (555.66 million rows/s., 2.78 GB/s.)
clickhouse-node01.c.open-targets-genetics.internal :) select count(*) from ot.v2d where n_cases is null
SELECT count(*)
FROM ot.v2d
WHERE isNull(n_cases)
┌─count()─┐
│ 3079964 │
└─────────┘
1 rows in set. Elapsed: 0.005 sec. Processed 3.08 million rows, 15.40 MB (578.40 million rows/s., 2.89 GB/s.)
```
|
1.0
|
n_cases is missing from raw data - The column `n_cases` comes empty from studies raw data file for `v2d` data.
```
SELECT count(*)
FROM ot.v2d
WHERE isNotNull(n_cases)
┌─count()─┐
│ 0 │
└─────────┘
1 rows in set. Elapsed: 0.006 sec. Processed 3.08 million rows, 15.40 MB (555.66 million rows/s., 2.78 GB/s.)
clickhouse-node01.c.open-targets-genetics.internal :) select count(*) from ot.v2d where n_cases is null
SELECT count(*)
FROM ot.v2d
WHERE isNull(n_cases)
┌─count()─┐
│ 3079964 │
└─────────┘
1 rows in set. Elapsed: 0.005 sec. Processed 3.08 million rows, 15.40 MB (578.40 million rows/s., 2.89 GB/s.)
```
|
non_process
|
n cases is missing from raw data the column n cases comes empty from studies raw data file for data select count from ot where isnotnull n cases ┌─count ─┐ │ │ └─────────┘ rows in set elapsed sec processed million rows mb million rows s gb s clickhouse c open targets genetics internal select count from ot where n cases is null select count from ot where isnull n cases ┌─count ─┐ │ │ └─────────┘ rows in set elapsed sec processed million rows mb million rows s gb s
| 0
|
12,348
| 14,884,130,509
|
IssuesEvent
|
2021-01-20 14:13:26
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Support for different layouts in different tasks
|
area/process kind/user-story solution/app-frontend
|
## Description
After implementation of #5234, it is clear that there is a need for specifying which form layouts belong to which task.
For example, with 2 different data models, one for each data task, we would need to specify the form layouts relating to each of the tasks, so that only the relevant form layouts are loaded.
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
We already have a Settings.json file where we could set this.
- Should we specify taskID, or should we specify the related data model instead?
- Could we specify multiple taskIDs, f.ex. for forms with multiple data tasks using the same data model?
## Acceptance criteria
- It is possible to have 2 or more data tasks, with different data model and form layouts for each task
- Only the layouts relevant for the given task are loaded and shown to the user
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
> Add tasks here
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this userstory are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
1.0
|
Support for different layouts in different tasks - ## Description
After implementation of #5234, it is clear that there is a need for specifying which form layouts belong to which task.
For example, with 2 different data models, one for each data task, we would need to specify the form layouts relating to each of the tasks, so that only the relevant form layouts are loaded.
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
We already have a Settings.json file where we could set this.
- Should we specify taskID, or should we specify the related data model instead?
- Could we specify multiple taskIDs, f.ex. for forms with multiple data tasks using the same data model?
## Acceptance criteria
- It is possible to have 2 or more data tasks, with different data model and form layouts for each task
- Only the layouts relevant for the given task are loaded and shown to the user
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
> Add tasks here
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this userstory are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
process
|
support for different layouts in different tasks description after implementation of it is clear that there is a need for specifying which form layouts belong to which task for example with different data models one for each data task we would need to specify the form layouts relating to each of the tasks so that only the relevant form layouts are loaded screenshots screenshots or links to figma make sure your sketch is public considerations we already have a settings json file where we could set this should we specify taskid or should we specify the related data model instead could we specify multiple taskids f ex for forms with multiple data tasks using the same data model acceptance criteria it is possible to have or more data tasks with different data model and form layouts for each task only the layouts relevant for the given task are loaded and shown to the user specification tasks development tasks are defined test design decide test need development tasks add tasks here definition of done verify that this issue meets only for project members before closing documentation is updated if relevant technical documentation docs altinn studio user documentation altinn github io docs qa manual test is complete if relevant automated test is implemented if relevant all tasks in this userstory are closed i e remaining tasks are moved to other user stories or marked obsolete
| 1
|
6,359
| 9,416,063,527
|
IssuesEvent
|
2019-04-10 13:57:39
|
brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine
|
https://api.github.com/repos/brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine
|
opened
|
Write a function that performs the actions in the rules.
|
C++ Work in process
|
* ### A function is needed that will modify the string in accordance with the rules.
* ### Most likely a few more functions will be needed: printing the string at a given state, a function to move the pointer to a symbol, and a function to replace a symbol.
|
1.0
|
Write a function that performs the actions in the rules. - * ### A function is needed that will modify the string in accordance with the rules.
* ### Most likely a few more functions will be needed: printing the string at a given state, a function to move the pointer to a symbol, and a function to replace a symbol.
|
process
|
write a function that performs the actions in the rules a function is needed that will modify the string in accordance with the rules most likely a few more functions will be needed printing the string at a given state a function to move the pointer to a symbol a function to replace a symbol
| 1
|
9,092
| 12,158,213,929
|
IssuesEvent
|
2020-04-26 02:34:29
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
Support stream aggregator push down.
|
component/coprocessor priority/P2
|
After #4481 , we have supported stream aggregator in TiDB layer. But if the aggregate key is a prefix of some index, we can push this stream aggregator down to coprocessor. The similar case will always happen after we support clustered index.
|
1.0
|
Support stream aggregator push down. - After #4481 , we have supported stream aggregator in TiDB layer. But if the aggregate key is a prefix of some index, we can push this stream aggregator down to coprocessor. The similar case will always happen after we support clustered index.
|
process
|
support stream aggregator push down after we have supported stream aggregator in tidb layer but if the aggregate key is a prefix of some index we can push this stream aggregator down to coprocessor the similar case will always happen after we support clustered index
| 1
|
22,150
| 30,692,156,359
|
IssuesEvent
|
2023-07-26 15:51:13
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
Wrong spacing in lists with suppressed labels
|
bug postprocessing minor
|
If you suppress a label in an `enumerate` or `itemize` environment with `\item[]` the resulting item has incorrect spacing.

```
\documentclass{article}
\begin{document}
No suppressed labels:
\begin{itemize}
\item Test
\item Test
\item Test
\end{itemize}
Second label suppressed:
\begin{itemize}
\item Test
\item[] Test
\item Test
\end{itemize}
\end{document}
```
|
1.0
|
Wrong spacing in lists with suppressed labels - If you suppress a label in an `enumerate` or `itemize` environment with `\item[]` the resulting item has incorrect spacing.

```
\documentclass{article}
\begin{document}
No suppressed labels:
\begin{itemize}
\item Test
\item Test
\item Test
\end{itemize}
Second label suppressed:
\begin{itemize}
\item Test
\item[] Test
\item Test
\end{itemize}
\end{document}
```
|
process
|
wrong spacing in lists with suppressed labels if you suppress a label in an enumerate or itemize environment with item the resulting item has incorrect spacing documentclass article begin document no suppressed labels begin itemize item test item test item test end itemize second label suppressed begin itemize item test item test item test end itemize end document
| 1
|
802,441
| 28,962,558,309
|
IssuesEvent
|
2023-05-10 04:44:43
|
kubermatic/dashboard
|
https://api.github.com/repos/kubermatic/dashboard
|
closed
|
dark.css is being cached and will lead to a suboptimal experience after KKP updates where it contained changes
|
kind/bug priority/low sig/ui
|
### What happened
Upgraded from KKP 2.20 to 2.21.4, then all logos on buttons in dark theme were displayed in a tiny size, see #5576. This was most likely due to a cached `dark.css`, while `styles.*.css` was not cached due to the `*` (version/random string) part.
### Expected behavior
No CSS should be cached, i.e. measures should be taken to make sure a new `dark.css` is loaded after there were changed.
### Affected user persona
admins, end users
|
1.0
|
dark.css is being cached and will lead to a suboptimal experience after KKP updates where it contained changes - ### What happened
Upgraded from KKP 2.20 to 2.21.4, then all logos on buttons in dark theme were displayed in a tiny size, see #5576. This was most likely due to a cached `dark.css`, while `styles.*.css` was not cached due to the `*` (version/random string) part.
### Expected behavior
No CSS should be cached, i.e. measures should be taken to make sure a new `dark.css` is loaded after there were changed.
### Affected user persona
admins, end users
|
non_process
|
dark css is being cached and will lead to a suboptimal experience after kkp updates where it contained changes what happened upgraded from kkp to then all logos on buttons in dark theme were displayed in a tiny size see this was most likely due to a cached dark css while styles css was not cached due to the version random string part expected behavior no css should be cached i e measures should be taken to make sure a new dark css is loaded after there were changed affected user persona admins end users
| 0
|
15,359
| 19,530,571,594
|
IssuesEvent
|
2021-12-30 16:00:48
|
MikeKSmith/The_Lazy_Producer
|
https://api.github.com/repos/MikeKSmith/The_Lazy_Producer
|
opened
|
Discuss sound design
|
process
|
Linked to #9 . How to use automation within sound design to keep the sounds interesting and evolving.
Discuss “good properties” of sounds e.g. overtones and “richness” of sounds where automation on filters, wave table “scrolling”, modulation on sound envelope can add evolution to the sound.
Also discuss use of effects on sounds as part of sound design.
|
1.0
|
Discuss sound design - Linked to #9 . How to use automation within sound design to keep the sounds interesting and evolving.
Discuss “good properties” of sounds e.g. overtones and “richness” of sounds where automation on filters, wave table “scrolling”, modulation on sound envelope can add evolution to the sound.
Also discuss use of effects on sounds as part of sound design.
|
process
|
discuss sound design linked to how to use automation within sound design to keep the sounds interesting and evolving discuss “good properties” of sounds e g overtones and “richness” of sounds where automation on filters wave table “scrolling” modulation on sound envelope can add evolution to the sound also discuss use of effects on sounds as part of sound design
| 1
|
398,939
| 11,742,531,206
|
IssuesEvent
|
2020-03-12 01:08:23
|
omni-compiler/omni-compiler
|
https://api.github.com/repos/omni-compiler/omni-compiler
|
closed
|
Accessibility attribute of distributed arrays
|
Kind: Bug Module: F_Trans Priority: Middle
|
If a distributed array has an accessibility attribute (`public` or `private`) and an option that implies `implicit none` (for gfortran, `-fimplicit-none`) is specified, an error occurs.
```fortran
module mmm
!$xmp nodes p(2)
!$xmp template t(100)
!$xmp distribute t(block) onto p
real, private :: a(100)
!$xmp align a(i) with t(i)
end module mmm
```
```
$ xmpf90 -c -fimplicit-none foo.f90
/tmp/__omni_tmp__12682/foo_f90_in.F90:7:13:
INTEGER :: XMP_DESC_a_size_0
1
Error: Symbol ‘a’ at (1) has no IMPLICIT type
/tmp/__omni_tmp__12682/foo_f90_in.F90:12:5:
1
Fatal Error: Can't open module file ‘mmm.mod’ for reading at (1): そのようなファイルやディレクトリはありません
compilation terminated.
```
|
1.0
|
Accessibility attribute of distributed arrays - If a distributed array has an accessibility attribute (`public` or `private`) and an option that implies `implicit none` (for gfortran, `-fimplicit-none`) is specified, an error occurs.
```fortran
module mmm
!$xmp nodes p(2)
!$xmp template t(100)
!$xmp distribute t(block) onto p
real, private :: a(100)
!$xmp align a(i) with t(i)
end module mmm
```
```
$ xmpf90 -c -fimplicit-none foo.f90
/tmp/__omni_tmp__12682/foo_f90_in.F90:7:13:
INTEGER :: XMP_DESC_a_size_0
1
Error: Symbol ‘a’ at (1) has no IMPLICIT type
/tmp/__omni_tmp__12682/foo_f90_in.F90:12:5:
1
Fatal Error: Can't open module file ‘mmm.mod’ for reading at (1): そのようなファイルやディレクトリはありません
compilation terminated.
```
|
non_process
|
accessibility attribute of distributed arrays if a distributed array has an accessibility attribute public or private and an option that implies implicit none for gfortran fimplicit none is specified an error occurs fortran module mmm xmp nodes p xmp template t xmp distribute t block onto p real private a xmp align a i with t i end module mmm c fimplicit none foo tmp omni tmp foo in integer xmp desc a size error symbol ‘a’ at has no implicit type tmp omni tmp foo in fatal error can t open module file ‘mmm mod’ for reading at no such file or directory compilation terminated
| 0
|
8,812
| 11,912,842,480
|
IssuesEvent
|
2020-03-31 10:57:07
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
QGIS 3.10.3 Some GDAL functions not working. Example gdal_merge
|
Bug Feedback Processing
|
GDAL tools are not working via QGIS Desktop 3.10.3 on Windows 10 using the Python Module syntax.
This is similar to issues [33386](https://github.com/qgis/QGIS/issues/33386) and [34001](https://github.com/qgis/QGIS/issues/34001). That issue has been closed but it seems to have returned.
For instance GDAL edit and GDAL merge do not work:
```
C:\PROGRA~1\QGIS3~1.10\bin\python3.exe: No module named gdal_merge
C:\PROGRA~1\QGIS3~1.10\bin\python3.exe: No module named gdal_edit
```
The command syntax is as follows:
```
python3 -m gdal_edit -a_srs EPSG:27700 D:/qgis/test.tif
```
These tools work correctly on Linux but not on Windows (they are called in a different way)
Steps to reproduce:
Example 1) Attempt to use the Merge command under Raster > Miscellaneous, on any input raster

Example 2) Attempt to use the Assign Projection command under Raster > Projections, on any input raster

Installation executable used:
QGIS-OSGeo4W-3.10.3-2-Setup-x86_64.exe
OS Info:
Windows 10 Pro
Version 1903
OS Build 18362.693
QGIS About Info:
QGIS version
3.10.3-A Coruña
QGIS code revision
0e1f846438
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.0.4
Running against GDAL/OGR
3.0.4
Compiled against GEOS
3.8.0-CAPI-1.13.1
Running against GEOS
3.8.0-CAPI-1.13.1
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.1
Running against PROJ
Rel. 6.3.1, February 10th, 2020
OS Version
Windows 10 (10.0)
Active python plugins
envsysCoastal3;
envsysSend2google_earth;
GroupStats;
mapswipetool_plugin;
mmqgis;
OSTranslatorII;
pointsamplingtool;
qdraw;
qfieldsync;
Qgis2threejs;
SemiAutomaticClassificationPlugin;
splitmultipart;
db_manager;
MetaSearch;
processing
|
1.0
|
QGIS 3.10.3 Some GDAL functions not working. Example gdal_merge - GDAL tools are not working via QGIS Desktop 3.10.3 on Windows 10 using the Python Module syntax.
This is similar to issues [33386](https://github.com/qgis/QGIS/issues/33386) and [34001](https://github.com/qgis/QGIS/issues/34001). That issue has been closed but it seems to have returned.
For instance GDAL edit and GDAL merge do not work:
```
C:\PROGRA~1\QGIS3~1.10\bin\python3.exe: No module named gdal_merge
C:\PROGRA~1\QGIS3~1.10\bin\python3.exe: No module named gdal_edit
```
The command syntax is as follows:
```
python3 -m gdal_edit -a_srs EPSG:27700 D:/qgis/test.tif
```
These tools work correctly on Linux but not on Windows (they are called in a different way)
Steps to reproduce:
Example 1) Attempt to use the Merge command under Raster > Miscellaneous, on any input raster

Example 2) Attempt to use the Assign Projection command under Raster > Projections, on any input raster

Installation executable used:
QGIS-OSGeo4W-3.10.3-2-Setup-x86_64.exe
OS Info:
Windows 10 Pro
Version 1903
OS Build 18362.693
QGIS About Info:
QGIS version
3.10.3-A Coruña
QGIS code revision
0e1f846438
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.0.4
Running against GDAL/OGR
3.0.4
Compiled against GEOS
3.8.0-CAPI-1.13.1
Running against GEOS
3.8.0-CAPI-1.13.1
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.1
Running against PROJ
Rel. 6.3.1, February 10th, 2020
OS Version
Windows 10 (10.0)
Active python plugins
envsysCoastal3;
envsysSend2google_earth;
GroupStats;
mapswipetool_plugin;
mmqgis;
OSTranslatorII;
pointsamplingtool;
qdraw;
qfieldsync;
Qgis2threejs;
SemiAutomaticClassificationPlugin;
splitmultipart;
db_manager;
MetaSearch;
processing
|
process
|
qgis some gdal functions not working example gdal merge gdal tools are not working via qgis desktop on windows using the python module syntax this is similar to issues and that issue has been closed but it seems to have returned for instance gdal edit and gdal merge do not work c progra bin exe no module named gdal merge c progra bin exe no module named gdal edit the command syntax is as follows m gdal edit a srs epsg d qgis test tif these tools work correctly on linux but not on windows they are called in a different way steps to reproduce example attempt to use the merge command under raster miscellaneous on any input raster example attempt to use the assign projection command under raster projections on any input raster installation executable used qgis setup exe os info windows pro version os build qgis about info qgis version a coruña qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel february os version windows active python plugins earth groupstats mapswipetool plugin mmqgis ostranslatorii pointsamplingtool qdraw qfieldsync semiautomaticclassificationplugin splitmultipart db manager metasearch processing
| 1
|
16,493
| 21,467,371,928
|
IssuesEvent
|
2022-04-26 06:04:58
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
CockroachDB: re-add Oid native type
|
kind/regression process/candidate team/schema topic: cockroachdb
|
It was removed because not-documented, but it does exist. We should re-add it.
|
1.0
|
CockroachDB: re-add Oid native type - It was removed because not-documented, but it does exist. We should re-add it.
|
process
|
cockroachdb re add oid native type it was removed because not documented but it does exist we should re add it
| 1
|
626,398
| 19,822,581,138
|
IssuesEvent
|
2022-01-20 00:21:37
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
opened
|
release-trigger(python): Release PR was not opened after a docs change was merged.
|
type: bug priority: p2 bot: release-trigger
|
Thanks for stopping by to let us know something could be better!
**PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response.
1) Is this a client library issue or a product issue?
This issue was found in python-contact-center-insights repo.
If the support paths suggested above still do not result in a resolution, please provide the following details.
#### Steps to reproduce
1. Merge a docs change in a python repo in the googleapis org
2. A release PR is not opened.
This docs change was merged in [python-contact-center-insights](https://github.com/googleapis/python-contact-center-insights/pull/123) however a release PR was not opened. In the past, release PRs were opened after a docs change. For example, see https://github.com/googleapis/python-dialogflow/pull/412
Making sure to follow these steps will guarantee the quickest resolution possible.
Thanks!
|
1.0
|
release-trigger(python): Release PR was not opened after a docs change was merged. - Thanks for stopping by to let us know something could be better!
**PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response.
1) Is this a client library issue or a product issue?
This issue was found in python-contact-center-insights repo.
If the support paths suggested above still do not result in a resolution, please provide the following details.
#### Steps to reproduce
1. Merge a docs change in a python repo in the googleapis org
2. A release PR is not opened.
This docs change was merged in [python-contact-center-insights](https://github.com/googleapis/python-contact-center-insights/pull/123) however a release PR was not opened. In the past, release PRs were opened after a docs change. For example, see https://github.com/googleapis/python-dialogflow/pull/412
Making sure to follow these steps will guarantee the quickest resolution possible.
Thanks!
|
non_process
|
release trigger python release pr was not opened after a docs change was merged thanks for stopping by to let us know something could be better please read if you have a support contract with google please create an issue in the instead of filing on github this will ensure a timely response is this a client library issue or a product issue this issue was found in python contact center insights repo if the support paths suggested above still do not result in a resolution please provide the following details steps to reproduce merge a docs change in a python repo in the googleapis org a release pr is not opened this docs change was merged in however a release pr was not opened in the past release prs were opened after a docs change for example see making sure to follow these steps will guarantee the quickest resolution possible thanks
| 0
|
5,683
| 2,793,435,343
|
IssuesEvent
|
2015-05-11 10:59:12
|
Mobicents/RestComm
|
https://api.github.com/repos/Mobicents/RestComm
|
closed
|
Provide SSO services for all restcomm tools
|
1. Bug Admin UI Visual App Designer
|
Found in 7.2.0 candidate.
To reproduce:
1. Login to /restcomm-management
2. Go to Visual Designer
3. See that login is still required
|
1.0
|
Provide SSO services for all restcomm tools - Found in 7.2.0 candidate.
To reproduce:
1. Login to /restcomm-management
2. Go to Visual Designer
3. See that login is still required
|
non_process
|
provide sso services for all restcomm tools found in candidate to reproduce login to restcomm management go to visual designer see that login is still required
| 0
|
10,230
| 13,094,508,246
|
IssuesEvent
|
2020-08-03 12:32:35
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
gdal:proximity doesn't output raster file in output location in PyQGIS standalone application
|
Bug Feedback Processing
|
**Describe the bug**
gdal:proximity does not save output file in output directory
**How to Reproduce**
Example code using python 3.7:
```
def publicgreen_qgis(self):
params = {'INPUT':'test/file/rasterized.tif','BAND':1,
'VALUES':'','UNITS':0,'MAX_DISTANCE':0,'REPLACE':0,
'NODATA':0,'OPTIONS':'','EXTRA':'','DATA_TYPE':5,'OUTPUT':"test/file/proximity.tif"}
res2 = processing.run("gdal:proximity", params)
```
UPDATED EDIT:
Added more information
```
import sys
# Import only what you're going to use
from qgis.core import QgsApplication, QgsRasterLayer
# You can add this path to your system's environment so you don't have to type it here
sys.path.append('C:/OSGeo4W64/apps/qgis/python/plugins')
import processing
# This is a 'fake' iface since we are not using a interface here
class DummyIface(object):
def __init__(self):
self.destCrs = None
# Pass QgsApplication to an object and you can use it just like a QApplication
app = QgsApplication([], True)
app.initQgis()
print("QGIS successfully Initialised")
# I took the freedom to change the way you used processing here, just to be sure it will work
processing.classFactory(DummyIface())
res = processing.run("gdal:rasterize", {
'INPUT': 'C:\\Users\\seteg\\Desktop\\LVI System\\Vector Files\\National Park SG\\National Park SG.shp',
'FIELD': '1', 'BURN': 1, 'UNITS': 1, 'WIDTH': 10, 'HEIGHT': 10,
'EXTENT': '2667.53800002985,56396.4399999964,15748.72099999,50256.3342999838 [EPSG:3414]', 'NODATA': 0,
'OPTIONS': '', 'DATA_TYPE': 5, 'INIT': None, 'INVERT': False, 'EXTRA': '', 'OUTPUT': 'C:/Users/seteg/Desktop/rasterize.tif'})
processing.run("gdal:proximity",
{'INPUT': res['OUTPUT'], 'BAND': 1, 'VALUES': '', 'UNITS': 0, 'MAX_DISTANCE': 0, 'REPLACE': 0,
'NODATA': 0, 'OPTIONS': '', 'EXTRA': '', 'DATA_TYPE': 5,
'OUTPUT': 'C:/Users/seteg/Desktop/proximity.tif'})
app.exit()
```
code structure taken from: https://gis.stackexchange.com/questions/231702/using-pyqgis-in-standalone-scripts-without-crashing
Code runs without errors but does not produce the output file in the directory.
**QGIS and OS versions**
QGIS version
3.12.2-București
QGIS code revision
8a1fb33634
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.0.4
Running against GDAL/OGR
3.0.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.1
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
db_manager;
MetaSearch;
processing
**Additional context**
Other GDAL functions such as rasterize and slope works
Have tried using the processing toolbox to run the process and it works as intended.
Input parameters above were copied from Processing > History from QGIS Desktop and worked as intended in QGIS Desktop.
EDIT:
More additional information:
-Additional code can be found above
-I'm using PyQt5 for GUI if that matters, the process should run after a button is pressed. (the button works for other processes)
-IDE used is Pycharm community edition 2018.3.5
-Have uploaded vector files I was working with
[National Park SG.zip](https://github.com/qgis/QGIS/files/4679340/National.Park.SG.zip)
|
1.0
|
gdal:proximity doesn't output raster file in output location in PyQGIS standalone application - **Describe the bug**
gdal:proximity does not save output file in output directory
**How to Reproduce**
Example code using python 3.7:
```
def publicgreen_qgis(self):
params = {'INPUT':'test/file/rasterized.tif','BAND':1,
'VALUES':'','UNITS':0,'MAX_DISTANCE':0,'REPLACE':0,
'NODATA':0,'OPTIONS':'','EXTRA':'','DATA_TYPE':5,'OUTPUT':"test/file/proximity.tif"}
res2 = processing.run("gdal:proximity", params)
```
UPDATED EDIT:
Added more information
```
import sys
# Import only what you're going to use
from qgis.core import QgsApplication, QgsRasterLayer
# You can add this path to your system's environment so you don't have to type it here
sys.path.append('C:/OSGeo4W64/apps/qgis/python/plugins')
import processing
# This is a 'fake' iface since we are not using a interface here
class DummyIface(object):
def __init__(self):
self.destCrs = None
# Pass QgsApplication to an object and you can use it just like a QApplication
app = QgsApplication([], True)
app.initQgis()
print("QGIS successfully Initialised")
# I took the freedom to change the way you used processing here, just to be sure it will work
processing.classFactory(DummyIface())
res = processing.run("gdal:rasterize", {
'INPUT': 'C:\\Users\\seteg\\Desktop\\LVI System\\Vector Files\\National Park SG\\National Park SG.shp',
'FIELD': '1', 'BURN': 1, 'UNITS': 1, 'WIDTH': 10, 'HEIGHT': 10,
'EXTENT': '2667.53800002985,56396.4399999964,15748.72099999,50256.3342999838 [EPSG:3414]', 'NODATA': 0,
'OPTIONS': '', 'DATA_TYPE': 5, 'INIT': None, 'INVERT': False, 'EXTRA': '', 'OUTPUT': 'C:/Users/seteg/Desktop/rasterize.tif'})
processing.run("gdal:proximity",
{'INPUT': res['OUTPUT'], 'BAND': 1, 'VALUES': '', 'UNITS': 0, 'MAX_DISTANCE': 0, 'REPLACE': 0,
'NODATA': 0, 'OPTIONS': '', 'EXTRA': '', 'DATA_TYPE': 5,
'OUTPUT': 'C:/Users/seteg/Desktop/proximity.tif'})
app.exit()
```
code structure taken from: https://gis.stackexchange.com/questions/231702/using-pyqgis-in-standalone-scripts-without-crashing
Code runs without errors but does not produce the output file in the directory.
**QGIS and OS versions**
QGIS version
3.12.2-București
QGIS code revision
8a1fb33634
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.0.4
Running against GDAL/OGR
3.0.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.1
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
db_manager;
MetaSearch;
processing
**Additional context**
Other GDAL functions such as rasterize and slope works
Have tried using the processing toolbox to run the process and it works as intended.
Input parameters above were copied from Processing > History from QGIS Desktop and worked as intended in QGIS Desktop.
EDIT:
More additional information:
-Additional code can be found above
-I'm using PyQt5 for GUI if that matters, the process should run after a button is pressed. (the button works for other processes)
-IDE used is Pycharm community edition 2018.3.5
-Have uploaded vector files I was working with
[National Park SG.zip](https://github.com/qgis/QGIS/files/4679340/National.Park.SG.zip)
|
process
|
gdal proximity doesn t output raster file in output location in pyqgis standalone application describe the bug gdal proximity does not save output file in output directory how to reproduce example code using python def publicgreen qgis self params input test file rasterized tif band values units max distance replace nodata options extra data type output test file proximity tif processing run gdal proximity params updated edit added more information import sys import only what you re going to use from qgis core import qgsapplication qgsrasterlayer you can add this path to your system s environment so you don t have to type it here sys path append c apps qgis python plugins import processing this is a fake iface since we are not using a interface here class dummyiface object def init self self destcrs none pass qgsapplication to an object and you can use it just like a qapplication app qgsapplication true app initqgis print qgis successfully initialised i took the freedom to change the way you used processing here just to be sure it will work processing classfactory dummyiface res processing run gdal rasterize input c users seteg desktop lvi system vector files national park sg national park sg shp field burn units width height extent nodata options data type init none invert false extra output c users seteg desktop rasterize tif processing run gdal proximity input res band values units max distance replace nodata options extra data type output c users seteg desktop proximity tif app exit code structure taken from code runs without errors but does not produce the output file in the directory qgis and os versions qgis version bucurești qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version 
windows active python plugins db manager metasearch processing additional context other gdal functions such as rasterize and slope works have tried using the processing toolbox to run the process and it works as intended input parameters above were copied from processing history from qgis desktop and worked as intended in qgis desktop edit more additional information additional code can be found above i m using for gui if that matters the process should run after a button is pressed the button works for other processes ide used is pycharm community edition have uploaded vector files i was working with
| 1
|
101,727
| 11,255,547,927
|
IssuesEvent
|
2020-01-12 10:10:39
|
raurodmen1997/decide
|
https://api.github.com/repos/raurodmen1997/decide
|
closed
|
[Doc] - Gestión de liberaciones, despliegue y entregas.
|
documentation
|
**Describir la funcionalidad/test**
Descripción de la gestión de desplegables y liberaciones.
Configuración de Travis, Codacy, etc
**Módulos involucrados**
Post-Procesado
**Justificación**
|
1.0
|
[Doc] - Gestión de liberaciones, despliegue y entregas. - **Describir la funcionalidad/test**
Descripción de la gestión de desplegables y liberaciones.
Configuración de Travis, Codacy, etc
**Módulos involucrados**
Post-Procesado
**Justificación**
|
non_process
|
gestión de liberaciones despliegue y entregas describir la funcionalidad test descripción de la gestión de desplegables y liberaciones configuración de travis codacy etc módulos involucrados post procesado justificación
| 0
|
17,691
| 23,537,268,600
|
IssuesEvent
|
2022-08-19 22:58:40
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
dates on split-off pages appear even if not in the document (before XSLT!)
|
bug postprocessing
|
Follow up on #1602: the postprocessor is adding `<ltx:date>` to the split files, even if no date is specified in the original document. Could this be the original cause of #1602?
Test:
```
latexmlpost --dest=mwe-post.xml --split mwe.xml
```
on any document without the date, e.g.
```latex
\documentclass{article}
\begin{document}
\section{A section}
\end{document}
```
|
1.0
|
dates on split-off pages appear even if not in the document (before XSLT!) - Follow up on #1602: the postprocessor is adding `<ltx:date>` to the split files, even if no date is specified in the original document. Could this be the original cause of #1602?
Test:
```
latexmlpost --dest=mwe-post.xml --split mwe.xml
```
on any document without the date, e.g.
```latex
\documentclass{article}
\begin{document}
\section{A section}
\end{document}
```
|
process
|
dates on split off pages appear even if not in the document before xslt follow up on the postprocessor is adding to the split files even if no date is specified in the original document could this be the original cause of test latexmlpost dest mwe post xml split mwe xml on any document without the date e g latex documentclass article begin document section a section end document
| 1
|
20,104
| 26,639,255,265
|
IssuesEvent
|
2023-01-25 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Wed, 25 Jan 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### PowerQuant: Automorphism Search for Non-Uniform Quantization
- **Authors:** Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09858
- **Pdf link:** https://arxiv.org/pdf/2301.09858
- **Abstract**
Deep neural networks (DNNs) are nowadays ubiquitous in many domains such as computer vision. However, due to their high latency, the deployment of DNNs hinges on the development of compression techniques such as quantization which consists in lowering the number of bits used to encode the weights and activations. Growing concerns for privacy and security have motivated the development of data-free techniques, at the expanse of accuracy. In this paper, we identity the uniformity of the quantization operator as a limitation of existing approaches, and propose a data-free non-uniform method. More specifically, we argue that to be readily usable without dedicated hardware and implementation, non-uniform quantization shall not change the nature of the mathematical operations performed by the DNN. This leads to search among the continuous automorphisms of $(\mathbb{R}_+^*,\times)$, which boils down to the power functions defined by their exponent. To find this parameter, we propose to optimize the reconstruction error of each layer: in particular, we show that this procedure is locally convex and admits a unique solution. At inference time, we show that our approach, dubbed PowerQuant, only require simple modifications in the quantized DNN activation functions. As such, with only negligible overhead, it significantly outperforms existing methods in a variety of configurations.
### K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
- **Authors:** Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.10241
- **Pdf link:** https://arxiv.org/pdf/2301.10241
- **Abstract**
We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses d choose 2 planes to represent a d-dimensional scene, providing a seamless way to go from static (d=3) to dynamic (d=4) scenes. This planar factorization makes adding dimension-specific priors easy, e.g. temporal smoothness and multi-resolution spatial structure, and induces a natural decomposition of static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed and varying appearance scenes, k-planes yields competitive and often state-of-the-art reconstruction fidelity with low memory usage, achieving 1000x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see sarafridov.github.io/K-Planes.
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Wed, 25 Jan 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### PowerQuant: Automorphism Search for Non-Uniform Quantization
- **Authors:** Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09858
- **Pdf link:** https://arxiv.org/pdf/2301.09858
- **Abstract**
Deep neural networks (DNNs) are nowadays ubiquitous in many domains such as computer vision. However, due to their high latency, the deployment of DNNs hinges on the development of compression techniques such as quantization which consists in lowering the number of bits used to encode the weights and activations. Growing concerns for privacy and security have motivated the development of data-free techniques, at the expanse of accuracy. In this paper, we identity the uniformity of the quantization operator as a limitation of existing approaches, and propose a data-free non-uniform method. More specifically, we argue that to be readily usable without dedicated hardware and implementation, non-uniform quantization shall not change the nature of the mathematical operations performed by the DNN. This leads to search among the continuous automorphisms of $(\mathbb{R}_+^*,\times)$, which boils down to the power functions defined by their exponent. To find this parameter, we propose to optimize the reconstruction error of each layer: in particular, we show that this procedure is locally convex and admits a unique solution. At inference time, we show that our approach, dubbed PowerQuant, only require simple modifications in the quantized DNN activation functions. As such, with only negligible overhead, it significantly outperforms existing methods in a variety of configurations.
### K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
- **Authors:** Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.10241
- **Pdf link:** https://arxiv.org/pdf/2301.10241
- **Abstract**
We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses d choose 2 planes to represent a d-dimensional scene, providing a seamless way to go from static (d=3) to dynamic (d=4) scenes. This planar factorization makes adding dimension-specific priors easy, e.g. temporal smoothness and multi-resolution spatial structure, and induces a natural decomposition of static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed and varying appearance scenes, k-planes yields competitive and often state-of-the-art reconstruction fidelity with low memory usage, achieving 1000x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see sarafridov.github.io/K-Planes.
## Keyword: RAW
There is no result
## Keyword: raw image
There is no result
|
process
|
new submissions for wed jan keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression powerquant automorphism search for non uniform quantization authors edouard yvinec arnaud dapogny matthieu cord kevin bailly subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract deep neural networks dnns are nowadays ubiquitous in many domains such as computer vision however due to their high latency the deployment of dnns hinges on the development of compression techniques such as quantization which consists in lowering the number of bits used to encode the weights and activations growing concerns for privacy and security have motivated the development of data free techniques at the expanse of accuracy in this paper we identity the uniformity of the quantization operator as a limitation of existing approaches and propose a data free non uniform method more specifically we argue that to be readily usable without dedicated hardware and implementation non uniform quantization shall not change the nature of the mathematical operations performed by the dnn this leads to search among the continuous automorphisms of mathbb r times which boils down to the power functions defined by their exponent to find this parameter we propose to optimize the reconstruction error of each layer in particular we show that this procedure is locally convex and admits a unique solution at inference time we show that our approach dubbed powerquant only require simple modifications in the quantized dnn activation functions as such with only negligible overhead it significantly outperforms existing methods in a variety of configurations k planes explicit radiance 
fields in space time and appearance authors sara fridovich keil giacomo meanti frederik warburg benjamin recht angjoo kanazawa subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we introduce k planes a white box model for radiance fields in arbitrary dimensions our model uses d choose planes to represent a d dimensional scene providing a seamless way to go from static d to dynamic d scenes this planar factorization makes adding dimension specific priors easy e g temporal smoothness and multi resolution spatial structure and induces a natural decomposition of static and dynamic components of a scene we use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black box mlp decoder across a range of synthetic and real static and dynamic fixed and varying appearance scenes k planes yields competitive and often state of the art reconstruction fidelity with low memory usage achieving compression over a full grid and fast optimization with a pure pytorch implementation for video results and code please see sarafridov github io k planes keyword raw there is no result keyword raw image there is no result
| 1
|
773
| 10,258,249,967
|
IssuesEvent
|
2019-08-21 22:19:51
|
microsoft/botframework-sdk
|
https://api.github.com/repos/microsoft/botframework-sdk
|
opened
|
Generate architecture maps/poster(s) of the Bot Framework for CSS
|
4.6 ridealong supportability
|
To aid CSS in understanding the parts for the Framework, we need to produce one or more architecture maps that outline the architecture of the framework
|
True
|
Generate architecture maps/poster(s) of the Bot Framework for CSS - To aid CSS in understanding the parts for the Framework, we need to produce one or more architecture maps that outline the architecture of the framework
|
non_process
|
generate architecture maps poster s of the bot framework for css to aid css in understanding the parts for the framework we need to produce one or more architecture maps that outline the architecture of the framework
| 0
Unnamed: 0: 7,061
id: 10,218,946,069
type: IssuesEvent
created_at: 2019-08-15 17:16:37
repo: HumanCellAtlas/dcp-community
repo_url: https://api.github.com/repos/HumanCellAtlas/dcp-community
action: opened
title: RFC(s) should reference related Roadmap Objectives in ZenHub
labels: rfc-process
body: During the August 15 Refinement meeting, @diekhans recommended that RFC(s) include a reference to their related spike in ZenHub, when the RFC is required to address a Roadmap Objective. This will improve visibility and encourage reviewers to also update the ZenHub issue as required.
index: 1.0
text_combine: RFC(s) should reference related Roadmap Objectives in ZenHub - During the August 15 Refinement meeting, @diekhans recommended that RFC(s) include a reference to their related spike in ZenHub, when the RFC is required to address a Roadmap Objective. This will improve visibility and encourage reviewers to also update the ZenHub issue as required.
label: process
text: rfc s should reference related roadmap objectives in zenhub during the august refinement meeting diekhans recommended that rfc s include a reference to their related spike in zenhub when the rfc is required to address a roadmap objective this will improve visibility and encourage reviewers to also update the zenhub issue as required
binary_label: 1
Unnamed: 0: 578,684
id: 17,150,020,293
type: IssuesEvent
created_at: 2021-07-13 19:13:33
repo: vrchatapi/specification
repo_url: https://api.github.com/repos/vrchatapi/specification
action: closed
title: Document File API
labels: Priority: High Status: In Progress Type: Enhancement
body: Add `/file` and `/files` endpoints. Some of the information exists in Markdown, but some of it is missing or outdated so need to double-check.
index: 1.0
text_combine: Document File API - Add `/file` and `/files` endpoints. Some of the information exists in Markdown, but some of it is missing or outdated so need to double-check.
label: non_process
text: document file api add file and files endpoints some of the information exists in markdown but some of it is missing or outdated so need to double check
binary_label: 0
Unnamed: 0: 753
id: 3,227,548,762
type: IssuesEvent
created_at: 2015-10-11 09:33:22
repo: pwittchen/ReactiveBeacons
repo_url: https://api.github.com/repos/pwittchen/ReactiveBeacons
action: closed
title: Release 0.2.0
labels: release process
body:
**Initial release notes**:
- decreased min SDK version to 9
- added `isBleSupported()` method to the public API
- if BLE is not supported by the device, library emits an empty Observable
- updated exemplary app
- updated documentation in `README.md` file
**Things to be done**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close & release repository on Nexus
- [x] update `CHANGELOG.md` file
- [x] update version in `README.md` file
- [x] create new GitHub release
index: 1.0
text_combine:
Release 0.2.0 - **Initial release notes**:
- decreased min SDK version to 9
- added `isBleSupported()` method to the public API
- if BLE is not supported by the device, library emits an empty Observable
- updated exemplary app
- updated documentation in `README.md` file
**Things to be done**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close & release repository on Nexus
- [x] update `CHANGELOG.md` file
- [x] update version in `README.md` file
- [x] create new GitHub release
label: process
text: release initial release notes decreased min sdk version to added isblesupported method to the public api if ble is not supported by the device library emits an empty observable updated exemplary app updated documentation in readme md file things to be done bump library version upload archives to maven central close release repository on nexus update changelog md file update version in readme md file create new github release
binary_label: 1
Unnamed: 0: 7,902
id: 11,089,229,462
type: IssuesEvent
created_at: 2019-12-14 17:00:58
repo: dita-ot/dita-ot
repo_url: https://api.github.com/repos/dita-ot/dita-ot
action: closed
title: select-topic children of chunking-generated title-only topic do not have links to child topics
labels: bug preprocess/chunking stale
body:
## Expected Behavior
Title-only topic generated from topichead by @chunk is rendered identically to equivalent literal title-only topic. In particular, in HTML output there are links to the child topics.
## Actual Behavior
Generated title-only topic does not have links to child topics.
## Possible Solution
None identified.
## Steps to Reproduce
1. Apply HTML or XHTML transform to the root topic https://github.com/dita-community/dita-test-cases/blob/master/topichead-chunking/topichead-chunking-test-02.ditamap
2. Observe that title-only topic generated from `<topichead navtitle="Topic Head 2.2" chunk="to-content select-topic">` does not have links to the child topics
## Copy of the error message, log file or stack trace
N/A. No unexpected messages in the log for the chunk step.
## Environment
* DITA-OT version: 2.5.3
* Operating system and version: macOS
* How did you run DITA-OT? oXygen
* Transformation type: html5
<!--
Before submitting, check the Preview tab above to verify the XML markup appears
correctly and remember you can edit the description later to add information.
-->
index: 1.0
text_combine:
select-topic children of chunking-generated title-only topic do not have links to child topics - ## Expected Behavior
Title-only topic generated from topichead by @chunk is rendered identically to equivalent literal title-only topic. In particular, in HTML output there are links to the child topics.
## Actual Behavior
Generated title-only topic does not have links to child topics.
## Possible Solution
None identified.
## Steps to Reproduce
1. Apply HTML or XHTML transform to the root topic https://github.com/dita-community/dita-test-cases/blob/master/topichead-chunking/topichead-chunking-test-02.ditamap
2. Observe that title-only topic generated from `<topichead navtitle="Topic Head 2.2" chunk="to-content select-topic">` does not have links to the child topics
## Copy of the error message, log file or stack trace
N/A. No unexpected messages in the log for the chunk step.
## Environment
* DITA-OT version: 2.5.3
* Operating system and version: macOS
* How did you run DITA-OT? oXygen
* Transformation type: html5
<!--
Before submitting, check the Preview tab above to verify the XML markup appears
correctly and remember you can edit the description later to add information.
-->
label: process
text: select topic children of chunking generated title only topic do not have links to child topics expected behavior title only topic generated from topichead by chunk is rendered identically to equivalent literal title only topic in particular in html output there are links to the child topics actual behavior generated title only topic does not have links to child topics possible solution none identified steps to reproduce apply html or xhtml transform to the root topic observe that title only topic generated from does not have links to the child topics copy of the error message log file or stack trace n a no unexpected messages in the log for the chunk step environment dita ot version operating system and version macos how did you run dita ot oxygen transformation type before submitting check the preview tab above to verify the xml markup appears correctly and remember you can edit the description later to add information
binary_label: 1