Dataset schema (15 columns):

Column          Dtype           Stats
Unnamed: 0      int64           min 0, max 832k
id              float64         min 2.49B, max 32.1B
type            stringclasses   1 value
created_at      stringlengths   min 19, max 19
repo            stringlengths   min 7, max 112
repo_url        stringlengths   min 36, max 141
action          stringclasses   3 values
title           stringlengths   min 1, max 744
labels          stringlengths   min 4, max 574
body            stringlengths   min 9, max 211k
index           stringclasses   10 values
text_combine    stringlengths   min 96, max 211k
label           stringclasses   2 values
text            stringlengths   min 96, max 188k
binary_label    int64           min 0, max 1
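The schema pairs a string `label` column (two classes: `process`, `non_process`) with an integer `binary_label` column (0 or 1), and the records that follow show the pairing directly. A minimal sketch of that mapping (the function name is hypothetical, not part of the dataset):

```python
# Map the dataset's string label to its binary form, mirroring the
# label -> binary_label pairing visible in the records:
# "process" -> 1, "non_process" -> 0.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

def to_binary_label(label: str) -> int:
    """Return the binary_label value for a given label string."""
    return LABEL_TO_BINARY[label]
```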
---
Unnamed: 0: 19,214
id: 25,347,319,052
type: IssuesEvent
created_at: 2022-11-19 10:57:40
repo: geneontology/go-ontology
repo_url: https://api.github.com/repos/geneontology/go-ontology
action: closed
title: Remove all single-step BP classes
labels: mini-project GOC meeting editors-discussion cellular processes MF_in_BP
body: Redundant with the one function they are linked to. They add no value and actually create noise in enrichment analyses.
index: 1.0
text_combine: Remove all single-step BP classes - Redundant with the one function they are linked to. They add no value and actually create noise in enrichment analyses.
label: process
text: remove all single step bp classes redundant with the one function they are linked to they add no value and actually create noise in enrichment analyses
binary_label: 1
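In each record, the `text` field appears to be a normalised version of `text_combine`: lowercased, with URLs, markdown, punctuation, and digits stripped, and whitespace collapsed. A rough reconstruction of that transform (the actual preprocessing pipeline is not given in the source; `clean_text` and its regexes are assumptions inferred from the visible input/output pairs):

```python
import re

def clean_text(text_combine: str) -> str:
    """Approximate the text_combine -> text normalisation seen in the records:
    lowercase, strip URLs, keep only ASCII letters, collapse whitespace."""
    t = text_combine.lower()
    t = re.sub(r"https?://\S+", " ", t)    # drop URLs before punctuation stripping
    t = re.sub(r"[^a-z\s]", " ", t)        # punctuation, digits, markdown -> spaces
    return re.sub(r"\s+", " ", t).strip()  # collapse runs of whitespace
```

Applied to the first record's `text_combine`, this reproduces its `text` field exactly; on records containing markdown links or code, it is only an approximation.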
---
Unnamed: 0: 11,610
id: 14,478,958,646
type: IssuesEvent
created_at: 2020-12-10 09:10:00
repo: decidim/decidim
repo_url: https://api.github.com/repos/decidim/decidim
action: closed
title: Show process on cards of the Participatory Group
labels: contract: process-groups
body: Ref.: PG06 **Is your feature request related to a problem? Please describe.** As a visitor when I see a Meeting or a Proposal (or any other component) that's inside of a PG, the information of which participatory process this component belongs to doesn't appear. **Describe the solution you'd like** For consistency, this should be implemented as we already have on other cards in the [general search](https://www.decidim.barcelona/search?utf8=%E2%9C%93&term=sant+marti&locale=es). **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** *Now* ![](https://i.imgur.com/n1C5dX0.png) *Proposal (based on search)* ![](https://i.imgur.com/wanY86f.png) **Does this issue could impact on users private data?** No **Acceptance criteria** - [x] As a visitor I can see which Participatory Process a given Resource (ie Meeting, Proposal, etc) belongs to on a Participatory Process Group page
index: 1.0
text_combine: Show process on cards of the Participatory Group - Ref.: PG06 **Is your feature request related to a problem? Please describe.** As a visitor when I see a Meeting or a Proposal (or any other component) that's inside of a PG, the information of which participatory process this component belongs to doesn't appear. **Describe the solution you'd like** For consistency, this should be implemented as we already have on other cards in the [general search](https://www.decidim.barcelona/search?utf8=%E2%9C%93&term=sant+marti&locale=es). **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** *Now* ![](https://i.imgur.com/n1C5dX0.png) *Proposal (based on search)* ![](https://i.imgur.com/wanY86f.png) **Does this issue could impact on users private data?** No **Acceptance criteria** - [x] As a visitor I can see which Participatory Process a given Resource (ie Meeting, Proposal, etc) belongs to on a Participatory Process Group page
label: process
text: show process on cards of the participatory group ref is your feature request related to a problem please describe as a visitor when i see a meeting or a proposal or any other component that s inside of a pg the information of which participatory process this component belongs to doesn t appear describe the solution you d like for consistency this should be implemented as we already have on other cards in the describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context now proposal based on search does this issue could impact on users private data no acceptance criteria as a visitor i can see which participatory process a given resource ie meeting proposal etc belongs to on a participatory process group page
binary_label: 1
---
Unnamed: 0: 145,903
id: 11,713,182,769
type: IssuesEvent
created_at: 2020-03-09 09:51:05
repo: trezor/trezor-firmware
repo_url: https://api.github.com/repos/trezor/trezor-firmware
action: closed
title: test suite timeouts fail with on-device tests
labels: ci enhancement feature tests
body: The test suite is currently set up to have a time limit of 60 seconds per test case. This is to prevent the CI being stuck when for some reason a testcase freezes the emulator. The timeout is unsuitable for running on real hardware, because the device is much slower. It is possible to override the timeout from the command line or via an environment variable, but this will not work for test cases that are explicitly marked with a bigger timeout. --- It seems that using the timeouts this way isn't appropriate. A potential solution is to not configure any timeouts in the test suite itself, and set up a timeout via an argument/envvar just in the CI. Local runs would not have this timeout, but that should not matter, as in a local run the developer can notice that the suite is stuck and shut it down by hand. Alternately anyone can set up an envvar for their liking. A drawback of this solution is that we have tests that take longer than others. While 60 seconds is a reasonable timeout for "normal" tests, certain others (signing 100-input tx; testing 16-of-16-of-16 Shamir recovery) might need a longer time limit even in the emulator. One option is to figure out a time limit appropriate for all tests. At a preliminary guess, 5 minutes per test case should be enough for the long test to finish, and not blocking the CI for too long in the normal cases. Another option is to extend use of `pytest.mark.slow` to all relevant test, and have one test run with `pytest -m "not slow" --timeout=60` and another with `pytest -m slow --timeout=600`. For now I'd go with the first option, setting 5 minutes in CI and seeing if that works OK. cc @onvej-sl
index: 1.0
text_combine: test suite timeouts fail with on-device tests - The test suite is currently set up to have a time limit of 60 seconds per test case. This is to prevent the CI being stuck when for some reason a testcase freezes the emulator. The timeout is unsuitable for running on real hardware, because the device is much slower. It is possible to override the timeout from the command line or via an environment variable, but this will not work for test cases that are explicitly marked with a bigger timeout. --- It seems that using the timeouts this way isn't appropriate. A potential solution is to not configure any timeouts in the test suite itself, and set up a timeout via an argument/envvar just in the CI. Local runs would not have this timeout, but that should not matter, as in a local run the developer can notice that the suite is stuck and shut it down by hand. Alternately anyone can set up an envvar for their liking. A drawback of this solution is that we have tests that take longer than others. While 60 seconds is a reasonable timeout for "normal" tests, certain others (signing 100-input tx; testing 16-of-16-of-16 Shamir recovery) might need a longer time limit even in the emulator. One option is to figure out a time limit appropriate for all tests. At a preliminary guess, 5 minutes per test case should be enough for the long test to finish, and not blocking the CI for too long in the normal cases. Another option is to extend use of `pytest.mark.slow` to all relevant test, and have one test run with `pytest -m "not slow" --timeout=60` and another with `pytest -m slow --timeout=600`. For now I'd go with the first option, setting 5 minutes in CI and seeing if that works OK. cc @onvej-sl
label: non_process
text: test suite timeouts fail with on device tests the test suite is currently set up to have a time limit of seconds per test case this is to prevent the ci being stuck when for some reason a testcase freezes the emulator the timeout is unsuitable for running on real hardware because the device is much slower it is possible to override the timeout from the command line or via an environment variable but this will not work for test cases that are explicitly marked with a bigger timeout it seems that using the timeouts this way isn t appropriate a potential solution is to not configure any timeouts in the test suite itself and set up a timeout via an argument envvar just in the ci local runs would not have this timeout but that should not matter as in a local run the developer can notice that the suite is stuck and shut it down by hand alternately anyone can set up an envvar for their liking a drawback of this solution is that we have tests that take longer than others while seconds is a reasonable timeout for normal tests certain others signing input tx testing of of shamir recovery might need a longer time limit even in the emulator one option is to figure out a time limit appropriate for all tests at a preliminary guess minutes per test case should be enough for the long test to finish and not blocking the ci for too long in the normal cases another option is to extend use of pytest mark slow to all relevant test and have one test run with pytest m not slow timeout and another with pytest m slow timeout for now i d go with the first option setting minutes in ci and seeing if that works ok cc onvej sl
binary_label: 0
---
Unnamed: 0: 13,901
id: 16,662,207,841
type: IssuesEvent
created_at: 2021-06-06 14:33:06
repo: Leviatan-Analytics/LA-data-processing
repo_url: https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
action: closed
title: Merge ui-configuration and recording with video split into image script [2]
labels: Data Processing Sprint 2 Week 2
body: Combine previous week unified script with video to image converter script.
index: 1.0
text_combine: Merge ui-configuration and recording with video split into image script [2] - Combine previous week unified script with video to image converter script.
label: process
text: merge ui configuration and recording with video split into image script combine previous week unified script with video to image converter script
binary_label: 1
---
Unnamed: 0: 18,655
id: 24,581,300,163
type: IssuesEvent
created_at: 2022-10-13 15:48:34
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [Consent API] Issue related to consentContentVersion in consentArtifacts method
labels: Bug P2 Process: Fixed Process: Tested QA Process: Tested dev
body: Publish the study for the fourth time and Verify the consentContentVersion in consentArtifacts method. **Note:** 1. Issue is observed only when published the study for fourth time. 2. Issue is not observed for other published version. AR: consentContentVersion in consentArtifacts method is getting dispalyed as "1.3000001" ER: consentContentVersion in consentArtifacts method should be dispalyed as "1.3" ![s1](https://user-images.githubusercontent.com/86007179/149951376-f816fad6-e504-4c53-b971-6b75ce234263.png)
index: 3.0
text_combine: [Consent API] Issue related to consentContentVersion in consentArtifacts method - Publish the study for the fourth time and Verify the consentContentVersion in consentArtifacts method. **Note:** 1. Issue is observed only when published the study for fourth time. 2. Issue is not observed for other published version. AR: consentContentVersion in consentArtifacts method is getting dispalyed as "1.3000001" ER: consentContentVersion in consentArtifacts method should be dispalyed as "1.3" ![s1](https://user-images.githubusercontent.com/86007179/149951376-f816fad6-e504-4c53-b971-6b75ce234263.png)
label: process
text: issue related to consentcontentversion in consentartifacts method publish the study for the fourth time and verify the consentcontentversion in consentartifacts method note issue is observed only when published the study for fourth time issue is not observed for other published version ar consentcontentversion in consentartifacts method is getting dispalyed as er consentcontentversion in consentartifacts method should be dispalyed as
binary_label: 1
---
Unnamed: 0: 4,862
id: 7,746,880,982
type: IssuesEvent
created_at: 2018-05-29 23:52:08
repo: dotnet/corefx
repo_url: https://api.github.com/repos/dotnet/corefx
action: closed
title: Process.WaitForExit does not always return, even if process has exited [macOS/Linux]
labels: Servicing-Approved-2.1.1 area-System.Diagnostics.Process bug os-linux
body: It appears Process.WaitForExit does not actually complete if called from certain threads, even if the process it waits on has exited. I've only been able to reproduce this on macOS and Linux, not Windows. **Repro** This issue was originally reported as a bug in dotnet-watch (see https://github.com/aspnet/DotNetTools/issues/410) I created a minimal repro here: https://gist.github.com/natemcmaster/b6229d8c923f9305f2f71ffa1686ac8b. To run it, ``` $ dotnet build $ dotnet bin/Debug/netcoreapp2.1/App.dll (Wait for a few seconds and press CTRL + C) ``` The actual code that has encounters the issue is here: https://github.com/aspnet/Common/blob/2.1.0-rc1-final/shared/Microsoft.Extensions.Process.Sources/ProcessHelper.cs#L48 **Expected** 1. dotnet App.dll starts child process A ("sleep 60") 2. CTRL+C event fires 3. The app invokes `Process.Start("pgrep", $"-P {processA.Id}")` and reads the output of pgrep to find any child processes 4. The app reads the output of p kills child process A and its entire process tree 5. App exists **Actual** 1. dotnet App.dll starts child process A ("sleep 60") 2. CTRL+C event fires 3. The app invokes `Process.Start("pgrep", $"-P {processA.Id}")`, but the WaitForExit call on the process object never returns, even after pgrep has exited. **Details** I investigated a little. From what I can tell, the indefinite hang appears to happen when Process.WaitForExit is invoked on certain threads. The hang consistently appears if called when Thread.CurrentThread.IsThreadPoolThread == false. If I dispatch the cancelling the cancellation token via ThreadPool.QueueUserWorkItem, Process.WaitForExit works as expected.
index: 1.0
text_combine: Process.WaitForExit does not always return, even if process has exited [macOS/Linux] - It appears Process.WaitForExit does not actually complete if called from certain threads, even if the process it waits on has exited. I've only been able to reproduce this on macOS and Linux, not Windows. **Repro** This issue was originally reported as a bug in dotnet-watch (see https://github.com/aspnet/DotNetTools/issues/410) I created a minimal repro here: https://gist.github.com/natemcmaster/b6229d8c923f9305f2f71ffa1686ac8b. To run it, ``` $ dotnet build $ dotnet bin/Debug/netcoreapp2.1/App.dll (Wait for a few seconds and press CTRL + C) ``` The actual code that has encounters the issue is here: https://github.com/aspnet/Common/blob/2.1.0-rc1-final/shared/Microsoft.Extensions.Process.Sources/ProcessHelper.cs#L48 **Expected** 1. dotnet App.dll starts child process A ("sleep 60") 2. CTRL+C event fires 3. The app invokes `Process.Start("pgrep", $"-P {processA.Id}")` and reads the output of pgrep to find any child processes 4. The app reads the output of p kills child process A and its entire process tree 5. App exists **Actual** 1. dotnet App.dll starts child process A ("sleep 60") 2. CTRL+C event fires 3. The app invokes `Process.Start("pgrep", $"-P {processA.Id}")`, but the WaitForExit call on the process object never returns, even after pgrep has exited. **Details** I investigated a little. From what I can tell, the indefinite hang appears to happen when Process.WaitForExit is invoked on certain threads. The hang consistently appears if called when Thread.CurrentThread.IsThreadPoolThread == false. If I dispatch the cancelling the cancellation token via ThreadPool.QueueUserWorkItem, Process.WaitForExit works as expected.
label: process
text: process waitforexit does not always return even if process has exited it appears process waitforexit does not actually complete if called from certain threads even if the process it waits on has exited i ve only been able to reproduce this on macos and linux not windows repro this issue was originally reported as a bug in dotnet watch see i created a minimal repro here to run it dotnet build dotnet bin debug app dll wait for a few seconds and press ctrl c the actual code that has encounters the issue is here expected dotnet app dll starts child process a sleep ctrl c event fires the app invokes process start pgrep p processa id and reads the output of pgrep to find any child processes the app reads the output of p kills child process a and its entire process tree app exists actual dotnet app dll starts child process a sleep ctrl c event fires the app invokes process start pgrep p processa id but the waitforexit call on the process object never returns even after pgrep has exited details i investigated a little from what i can tell the indefinite hang appears to happen when process waitforexit is invoked on certain threads the hang consistently appears if called when thread currentthread isthreadpoolthread false if i dispatch the cancelling the cancellation token via threadpool queueuserworkitem process waitforexit works as expected
binary_label: 1
---
Unnamed: 0: 21,893
id: 30,342,331,248
type: IssuesEvent
created_at: 2023-07-11 13:28:57
repo: metabase/metabase
repo_url: https://api.github.com/repos/metabase/metabase
action: closed
title: Error message about GA driver instead of CSV upload settings
labels: Type:Bug .Backend .Team/QueryProcessor :hammer_and_wrench:
body: (from Kyle) Just tried going to the CSV upload settings on stats and got this fun message. ![image](https://github.com/metabase/metabase/assets/125455699/ec9f6d45-27d3-45b1-9d18-02acf43544db) [Slack Message](https://metaboat.slack.com/archives/C04S696LRUM/p1687457373030259?thread_ts=1687457373.030259&cid=C04S696LRUM)
index: 1.0
text_combine: Error message about GA driver instead of CSV upload settings - (from Kyle) Just tried going to the CSV upload settings on stats and got this fun message. ![image](https://github.com/metabase/metabase/assets/125455699/ec9f6d45-27d3-45b1-9d18-02acf43544db) [Slack Message](https://metaboat.slack.com/archives/C04S696LRUM/p1687457373030259?thread_ts=1687457373.030259&cid=C04S696LRUM)
label: process
text: error message about ga driver instead of csv upload settings from kyle just tried going to the csv upload settings on stats and got this fun message
binary_label: 1
---
Unnamed: 0: 1,523
id: 4,116,501,790
type: IssuesEvent
created_at: 2016-06-08 00:58:42
repo: metabase/metabase
repo_url: https://api.github.com/repos/metabase/metabase
action: reopened
title: Feature Request: Prepend Query Metadata in SQL Comment
labels: Enhancement Help Wanted Query Processor
body: Hey, I'm back with a new feature request. Metabase is a very powerful tool, but in the hands of the wrong people it can do more harm than good. We had the case this week that one of our colleagues put up a dashboard with fairly intensive queries in a loop where it would refresh every 30 seconds. That put a heavy load on the database as you can imagine. It was kind of difficult for us to find the culprit and what was going on because we could see the queries and that they were originating from the user designated for metabase, but it was difficult to find out what questions it were and who was executing them. Re:Dash has a nice solution to this they prepend a comment to every SQL Query stating among other things the question id and user like this: `/* Username: Scheduled, Task ID: 66c28896-d5a0-4230-be02-ea0da7c8e92d, Query ID: 15, Queue: scheduled_queries, Query Hash: 59ad1137f86748f72eec0bd0a9acb8c1 */` With this information I can quickly identify a query that is causing performance issues and also why/by whom it is being executed. It would be really nice to have the same on Metabase as well. Thanks for your awesome work :)
index: 1.0
text_combine: Feature Request: Prepend Query Metadata in SQL Comment - Hey, I'm back with a new feature request. Metabase is a very powerful tool, but in the hands of the wrong people it can do more harm than good. We had the case this week that one of our colleagues put up a dashboard with fairly intensive queries in a loop where it would refresh every 30 seconds. That put a heavy load on the database as you can imagine. It was kind of difficult for us to find the culprit and what was going on because we could see the queries and that they were originating from the user designated for metabase, but it was difficult to find out what questions it were and who was executing them. Re:Dash has a nice solution to this they prepend a comment to every SQL Query stating among other things the question id and user like this: `/* Username: Scheduled, Task ID: 66c28896-d5a0-4230-be02-ea0da7c8e92d, Query ID: 15, Queue: scheduled_queries, Query Hash: 59ad1137f86748f72eec0bd0a9acb8c1 */` With this information I can quickly identify a query that is causing performance issues and also why/by whom it is being executed. It would be really nice to have the same on Metabase as well. Thanks for your awesome work :)
label: process
text: feature request prepend query metadata in sql comment hey i m back with a new feature request metabase is a very powerful tool but in the hands of the wrong people it can do more harm than good we had the case this week that one of our colleagues put up a dashboard with fairly intensive queries in a loop where it would refresh every seconds that put a heavy load on the database as you can imagine it was kind of difficult for us to find the culprit and what was going on because we could see the queries and that they were originating from the user designated for metabase but it was difficult to find out what questions it were and who was executing them re dash has a nice solution to this they prepend a comment to every sql query stating among other things the question id and user like this username scheduled task id query id queue scheduled queries query hash with this information i can quickly identify a query that is causing performance issues and also why by whom it is being executed it would be really nice to have the same on metabase as well thanks for your awesome work
binary_label: 1
---
Unnamed: 0: 123,273
id: 16,471,797,060
type: IssuesEvent
created_at: 2021-05-23 15:07:43
repo: ngs-lang/ngs
repo_url: https://api.github.com/repos/ngs-lang/ngs
action: closed
title: Implement default limit in pmap threads
labels: aspect/threads needs-design question
body: Would make sense to implement some kind of limit to the number of threads when none is defined Feedback from meetup: `Limit by number of virtual cores.` - Daniel Eiband `Usually most tools default to the number or cores/cpus available` - Fulvio Scapin
index: 1.0
text_combine: Implement default limit in pmap threads - Would make sense to implement some kind of limit to the number of threads when none is defined Feedback from meetup: `Limit by number of virtual cores.` - Daniel Eiband `Usually most tools default to the number or cores/cpus available` - Fulvio Scapin
label: non_process
text: implement default limit in pmap threads would make sense to implement some kind of limit to the number of threads when none is defined feedback from meetup limit by number of virtual cores daniel eiband usually most tools default to the number or cores cpus available fulvio scapin
binary_label: 0
---
Unnamed: 0: 50,874
id: 6,130,702,494
type: IssuesEvent
created_at: 2017-06-24 08:07:33
repo: kubernetes/kubeadm
repo_url: https://api.github.com/repos/kubernetes/kubeadm
action: closed
title: kubeadm e2e targeted at HEAD should use control plane images from master
labels: area/testing help-wanted kind/enhancement priority/important-soon
body: ## Is this a BUG REPORT or FEATURE REQUEST? Choose one: BUG REPORT or FEATURE REQUEST TESTING REQUEST Using something as simple as https://gist.github.com/luxas/7515d1e1fb94cff1327b83ae4affd7a4, it's possible to download and `docker load` images from the latest CI builds. We should use this instead of relying on `dl.k8s.io/release/latest.txt`, which has been a major obstacle for us in the v1.7 cycle. It's fairly straightforward to implement this functionality in kubernetes-anywhere now that https://github.com/kubernetes/test-infra/pull/2761 is merged. We should just make the `master` or `HEAD` keyword of kubernetes-version special and when kubernetes-anywhere picks that up it will use the latest images from the CI builds. @pipejakob Can you fix this?
index: 1.0
text_combine: kubeadm e2e targeted at HEAD should use control plane images from master - ## Is this a BUG REPORT or FEATURE REQUEST? Choose one: BUG REPORT or FEATURE REQUEST TESTING REQUEST Using something as simple as https://gist.github.com/luxas/7515d1e1fb94cff1327b83ae4affd7a4, it's possible to download and `docker load` images from the latest CI builds. We should use this instead of relying on `dl.k8s.io/release/latest.txt`, which has been a major obstacle for us in the v1.7 cycle. It's fairly straightforward to implement this functionality in kubernetes-anywhere now that https://github.com/kubernetes/test-infra/pull/2761 is merged. We should just make the `master` or `HEAD` keyword of kubernetes-version special and when kubernetes-anywhere picks that up it will use the latest images from the CI builds. @pipejakob Can you fix this?
label: non_process
text: kubeadm targeted at head should use control plane images from master is this a bug report or feature request choose one bug report or feature request testing request using something as simple as it s possible to download and docker load images from the latest ci builds we should use this instead of relying on dl io release latest txt which has been a major obstacle for us in the cycle it s fairly straightforward to implement this functionality in kubernetes anywhere now that is merged we should just make the master or head keyword of kubernetes version special and when kubernetes anywhere picks that up it will use the latest images from the ci builds pipejakob can you fix this
binary_label: 0
---
Unnamed: 0: 187,370
id: 14,427,589,699
type: IssuesEvent
created_at: 2020-12-06 05:00:40
repo: kalexmills/github-vet-tests-dec2020
repo_url: https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
action: closed
title: chiradeep/kube-policy-manager: vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go; 30 LoC
labels: fresh small test vendored
body: Found a possible issue in [chiradeep/kube-policy-manager](https://www.github.com/chiradeep/kube-policy-manager) at [vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go](https://github.com/chiradeep/kube-policy-manager/blob/46abeb066091c32177aff94dd4cc03d8e8f01dc9/vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go#L933-L962) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to svc at line 936 may start a goroutine [Click here to see the code in its original context.](https://github.com/chiradeep/kube-policy-manager/blob/46abeb066091c32177aff94dd4cc03d8e8f01dc9/vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go#L933-L962) <details> <summary>Click here to show the 30 line(s) of Go which triggered the analyzer.</summary> ```go for _, svc := range tests { for _, wide := range []bool{false, true} { buff := bytes.Buffer{} printService(&svc, &buff, PrintOptions{false, false, false, wide, false, false, false, "", []string{}}) output := string(buff.Bytes()) ip := svc.Spec.ClusterIP if !strings.Contains(output, ip) { t.Errorf("expected to contain ClusterIP %s, but doesn't: %s", ip, output) } for n, ingress := range svc.Status.LoadBalancer.Ingress { ip = ingress.IP // For non-wide output, we only guarantee the first IP to be printed if (n == 0 || wide) && !strings.Contains(output, ip) { t.Errorf("expected to contain ingress ip %s with wide=%v, but doesn't: %s", ip, wide, output) } } for _, port := range svc.Spec.Ports { portSpec := fmt.Sprintf("%d/%s", port.Port, port.Protocol) if !strings.Contains(output, portSpec) { t.Errorf("expected to contain port: %s, but doesn't: %s", portSpec, output) } } // Each service should print on one line if 1 != strings.Count(output, "\n") { t.Errorf("expected a single newline, found %d", strings.Count(output, "\n")) } } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 46abeb066091c32177aff94dd4cc03d8e8f01dc9
index: 1.0
text_combine: chiradeep/kube-policy-manager: vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go; 30 LoC - Found a possible issue in [chiradeep/kube-policy-manager](https://www.github.com/chiradeep/kube-policy-manager) at [vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go](https://github.com/chiradeep/kube-policy-manager/blob/46abeb066091c32177aff94dd4cc03d8e8f01dc9/vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go#L933-L962) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to svc at line 936 may start a goroutine [Click here to see the code in its original context.](https://github.com/chiradeep/kube-policy-manager/blob/46abeb066091c32177aff94dd4cc03d8e8f01dc9/vendor/k8s.io/kubernetes/pkg/kubectl/resource_printer_test.go#L933-L962) <details> <summary>Click here to show the 30 line(s) of Go which triggered the analyzer.</summary> ```go for _, svc := range tests { for _, wide := range []bool{false, true} { buff := bytes.Buffer{} printService(&svc, &buff, PrintOptions{false, false, false, wide, false, false, false, "", []string{}}) output := string(buff.Bytes()) ip := svc.Spec.ClusterIP if !strings.Contains(output, ip) { t.Errorf("expected to contain ClusterIP %s, but doesn't: %s", ip, output) } for n, ingress := range svc.Status.LoadBalancer.Ingress { ip = ingress.IP // For non-wide output, we only guarantee the first IP to be printed if (n == 0 || wide) && !strings.Contains(output, ip) { t.Errorf("expected to contain ingress ip %s with wide=%v, but doesn't: %s", ip, wide, output) } } for _, port := range svc.Spec.Ports { portSpec := fmt.Sprintf("%d/%s", port.Port, port.Protocol) if !strings.Contains(output, portSpec) { t.Errorf("expected to contain port: %s, but doesn't: %s", portSpec, output) } } // Each service should print on one line if 1 != strings.Count(output, "\n") { t.Errorf("expected a single newline, found %d", strings.Count(output, "\n")) } } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 46abeb066091c32177aff94dd4cc03d8e8f01dc9
label: non_process
text: chiradeep kube policy manager vendor io kubernetes pkg kubectl resource printer test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to svc at line may start a goroutine click here to show the line s of go which triggered the analyzer go for svc range tests for wide range bool false true buff bytes buffer printservice svc buff printoptions false false false wide false false false string output string buff bytes ip svc spec clusterip if strings contains output ip t errorf expected to contain clusterip s but doesn t s ip output for n ingress range svc status loadbalancer ingress ip ingress ip for non wide output we only guarantee the first ip to be printed if n wide strings contains output ip t errorf expected to contain ingress ip s with wide v but doesn t s ip wide output for port range svc spec ports portspec fmt sprintf d s port port port protocol if strings contains output portspec t errorf expected to contain port s but doesn t s portspec output each service should print on one line if strings count output n t errorf expected a single newline found d strings count output n leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
binary_label: 0
17,069
22,534,244,441
IssuesEvent
2022-06-25 01:47:25
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
logging: integrate jsonlog-preview into structured logging version
api: logging type: process
Review the printing logs in the format of the Json strings to stdout/stderr implemented in the [jsonlog-preview](https://github.com/googleapis/google-cloud-go/tree/jsonlog-preview) branch and integrate it into the main branch for the next release of the library. The branch was previously released as the [preview](https://pkg.go.dev/cloud.google.com/go/logging@v1.5.0-jsonlog-preview/jsonlog) version.
1.0
logging: integrate jsonlog-preview into structured logging version - Review the printing logs in the format of the Json strings to stdout/stderr implemented in the [jsonlog-preview](https://github.com/googleapis/google-cloud-go/tree/jsonlog-preview) branch and integrate it into the main branch for the next release of the library. The branch was previously released as the [preview](https://pkg.go.dev/cloud.google.com/go/logging@v1.5.0-jsonlog-preview/jsonlog) version.
process
logging integrate jsonlog preview into structured logging version review the printing logs in the format of the json strings to stdout stderr implemented in the branch and integrate it into the main branch for the next release of the library the branch was previously released as the version
1
78,233
10,053,884,969
IssuesEvent
2019-07-21 20:35:31
Kavuti/python-italy-telegram-bot
https://api.github.com/repos/Kavuti/python-italy-telegram-bot
closed
[Refactoring] documentazione
documentation
In giornata modificherò il file readme con una veste più elegante e una sezione dedicata alla configurazione sia su heroku che su vps. Volete che aggiunga qualcosa di particolare? Come template pensavo di riciclare la stessa utilizzata sulla repo di [Jelly](https://github.com/MattiaFailla/Jelly)
1.0
[Refactoring] documentazione - In giornata modificherò il file readme con una veste più elegante e una sezione dedicata alla configurazione sia su heroku che su vps. Volete che aggiunga qualcosa di particolare? Come template pensavo di riciclare la stessa utilizzata sulla repo di [Jelly](https://github.com/MattiaFailla/Jelly)
non_process
documentazione in giornata modificherò il file readme con una veste più elegante e una sezione dedicata alla configurazione sia su heroku che su vps volete che aggiunga qualcosa di particolare come template pensavo di riciclare la stessa utilizzata sulla repo di
0
22,503
31,552,179,898
IssuesEvent
2023-09-02 07:17:28
NomaDamas/KoPrivateGPT
https://api.github.com/repos/NomaDamas/KoPrivateGPT
closed
Add pdf link loader
Preprocess
Download pdf from link => save it to tempfile => ingest PDF => delete tempfile (options to not delete)
1.0
Add pdf link loader - Download pdf from link => save it to tempfile => ingest PDF => delete tempfile (options to not delete)
process
add pdf link loader download pdf from link save it to tempfile ingest pdf delete tempfile options to not delete
1
8,105
11,300,014,729
IssuesEvent
2020-01-17 12:39:13
prisma/lift
https://api.github.com/repos/prisma/lift
closed
Fix help messages in CLI
bug/2-confirmed kind/bug process/next-milestone
Hi. Found some misleading/wrong messages in prisma2 CLI. You may want to fix it. It asks me to use `prisma2 lift create` but there is no such command - I guess you meant `prisma2 lift save` ![Screenshot from 2019-11-04 15-47-05](https://user-images.githubusercontent.com/1165845/68113864-ba385f80-ff1a-11e9-9d9c-4c18fdaae9d0.png) In the below example, it gives me `prisma2 lift` as one of the suggested commands but it is missing an option. I guess you meant `prisma lift up` ![Screenshot from 2019-11-04 15-47-49](https://user-images.githubusercontent.com/1165845/68113947-eeac1b80-ff1a-11e9-8ecd-e6fc93698dde.png) Thanks.
1.0
Fix help messages in CLI - Hi. Found some misleading/wrong messages in prisma2 CLI. You may want to fix it. It asks me to use `prisma2 lift create` but there is no such command - I guess you meant `prisma2 lift save` ![Screenshot from 2019-11-04 15-47-05](https://user-images.githubusercontent.com/1165845/68113864-ba385f80-ff1a-11e9-9d9c-4c18fdaae9d0.png) In the below example, it gives me `prisma2 lift` as one of the suggested commands but it is missing an option. I guess you meant `prisma lift up` ![Screenshot from 2019-11-04 15-47-49](https://user-images.githubusercontent.com/1165845/68113947-eeac1b80-ff1a-11e9-8ecd-e6fc93698dde.png) Thanks.
process
fix help messages in cli hi found some misleading wrong messages in cli you may want to fix it it asks me to use lift create but there is no such command i guess you meant lift save in the below example it gives me lift as one of the suggested commands but it is missing an option i guess you meant prisma lift up thanks
1
10,569
13,369,121,339
IssuesEvent
2020-09-01 08:22:20
jgraley/inferno-cpp2v
https://api.github.com/repos/jgraley/inferno-cpp2v
opened
SystemicConstraint may be too strict on couplings
Constraint Processing
`SystemicConstraint` for a coupled pattern node will require all parent X nodes to point to the same child X _by address_ because it resolves for exactly one choice of X node across the whole problem. This removes ambiguity about _which_ parent(s) is/are subject to subtree matching. OTOH the `AndRuleEngine` requires only the first reaching of the coupled pattern node to sub-tree match in-place and then all the rest are compared using SimpleCompare. If we want to couple nodes at different addresses (because they are identical modulo `SimpleCompare`) then we'll need new variables, and therefore, according to our rules, new pattern nodes/agents. These would probably be eg `ModuloSimpleCompare` and reduce their subtree to the first-found match in X using SimpleCompare. This is a hassle, so seriously consider switching the rule for couplings from "must match modulo `SimpleCompare`" to "must be identical by address". Compare #93 although that won't be enough to solve this, it's worth keeping in mind.
1.0
SystemicConstraint may be too strict on couplings - `SystemicConstraint` for a coupled pattern node will require all parent X nodes to point to the same child X _by address_ because it resolves for exactly one choice of X node across the whole problem. This removes ambiguity about _which_ parent(s) is/are subject to subtree matching. OTOH the `AndRuleEngine` requires only the first reaching of the coupled pattern node to sub-tree match in-place and then all the rest are compared using SimpleCompare. If we want to couple nodes at different addresses (because they are identical modulo `SimpleCompare`) then we'll need new variables, and therefore, according to our rules, new pattern nodes/agents. These would probably be eg `ModuloSimpleCompare` and reduce their subtree to the first-found match in X using SimpleCompare. This is a hassle, so seriously consider switching the rule for couplings from "must match modulo `SimpleCompare`" to "must be identical by address". Compare #93 although that won't be enough to solve this, it's worth keeping in mind.
process
systemicconstraint may be too strict on couplings systemicconstraint for a coupled pattern node will require all parent x nodes to point to the same child x by address because it resolves for exactly one choice of x node across the whole problem this removes ambiguity about which parent s is are subject to subtree matching otoh the andruleengine requires only the first reaching of the coupled pattern node to sub tree match in place and then all the rest are compared using simplecompare if we want to couple nodes at different addresses because they are identical modulo simplecompare then we ll need new variables and therefore according to our rules new pattern nodes agents these would probably be eg modulosimplecompare and reduce their subtree to the first found match in x using simplecompare this is a hassle so seriously consider switching the rule for couplings from must match modulo simplecompare to must be identical by address compare although that won t be enough to solve this it s worth keeping in mind
1
13,580
16,115,255,470
IssuesEvent
2021-04-28 06:26:21
unicode-org/icu4x
https://api.github.com/repos/unicode-org/icu4x
opened
Build cargo docs in PR
C-process S-small T-docs-tests
It would be helpful to reviewers if cargo doc could be built for every PR so that it can be reviewed for quality.
1.0
Build cargo docs in PR - It would be helpful to reviewers if cargo doc could be built for every PR so that it can be reviewed for quality.
process
build cargo docs in pr it would be helpful to reviewers if cargo doc could be built for every pr so that it can be reviewed for quality
1
28,815
5,383,853,265
IssuesEvent
2017-02-24 08:37:00
opencaching/opencaching-pl
https://api.github.com/repos/opencaching/opencaching-pl
closed
viewcache and hidden, non-spoiler, pictures
Component_Cache Priority_Low Type_Defect
Take a look at: http://www.opencaching.ro/viewcache.php?cacheid=295 This cache has 3 pictures loaded, all of which are marked "do not display". The pictures are used in description, but they should not appear in the Pictures section. One can see the pictures section with no pictures. The pictures section in this case should not be shown at all.
1.0
viewcache and hidden, non-spoiler, pictures - Take a look at: http://www.opencaching.ro/viewcache.php?cacheid=295 This cache has 3 pictures loaded, all of which are marked "do not display". The pictures are used in description, but they should not appear in the Pictures section. One can see the pictures section with no pictures. The pictures section in this case should not be shown at all.
non_process
viewcache and hidden non spoiler pictures take a look at this cache has pictures loaded all of which are marked do not display the pictures are used in description but they should not appear in the pictures section one can see the pictures section with no pictures the pictures section in this case should not be shown at all
0
5,160
7,933,550,372
IssuesEvent
2018-07-08 07:56:31
arthurphilippe/CC-1000_bornes
https://api.github.com/repos/arthurphilippe/CC-1000_bornes
closed
Mettre en place le traitement des messages dans la classe MilleBornes
message processor server
## Messages serveur Le serveur doit formater les informations envoyées à chaque joueur. Parmi les informations à envoyer on retrouve : - Au moment de la connexion le joueur reçois son identifiant unique `id [id du joueur]` - La liste des joueurs connectés `lsplayers [id1] [id2] [id3] ...` - Les informations concernant un joueur avec `playerstate [id] [dist] [carte incident] [booléen limité ?] [booléen as du volant] [booléen citerne] [booléen increvable] [booléen prioritaire]` - Avant de jouer, le joueur reçoit le contenu de sa main `lscards [carte pos 1] [carte pos 2] ... [carte pos n]` - Au moment ou un joueur est prié de joueur, il reçoit le prompt suivant `your_turn` - A la fin de la partie, quel joueur à gagné `winner [id gagant]` - A la fin de la partie, au cas où il ne reste qu'un joueur connecté, il reçoit `forfeit` ## Messages joueur Le joueur n'a besoin que de répondre au serveur lorsque c'est à son tour de jouer. Pour se faire le joueur dispose des commandes suivantes : - Pour attaquer un joueur `use [position de la carte dans la main] [id du joueur à attaquer]` - Pour utiliser une carte qui n'est pas une attaque `use [position de la carte dans la main]` - Pour jeter une carte `discard [position de la carte dans la main]`
1.0
Mettre en place le traitement des messages dans la classe MilleBornes - ## Messages serveur Le serveur doit formater les informations envoyées à chaque joueur. Parmi les informations à envoyer on retrouve : - Au moment de la connexion le joueur reçois son identifiant unique `id [id du joueur]` - La liste des joueurs connectés `lsplayers [id1] [id2] [id3] ...` - Les informations concernant un joueur avec `playerstate [id] [dist] [carte incident] [booléen limité ?] [booléen as du volant] [booléen citerne] [booléen increvable] [booléen prioritaire]` - Avant de jouer, le joueur reçoit le contenu de sa main `lscards [carte pos 1] [carte pos 2] ... [carte pos n]` - Au moment ou un joueur est prié de joueur, il reçoit le prompt suivant `your_turn` - A la fin de la partie, quel joueur à gagné `winner [id gagant]` - A la fin de la partie, au cas où il ne reste qu'un joueur connecté, il reçoit `forfeit` ## Messages joueur Le joueur n'a besoin que de répondre au serveur lorsque c'est à son tour de jouer. Pour se faire le joueur dispose des commandes suivantes : - Pour attaquer un joueur `use [position de la carte dans la main] [id du joueur à attaquer]` - Pour utiliser une carte qui n'est pas une attaque `use [position de la carte dans la main]` - Pour jeter une carte `discard [position de la carte dans la main]`
process
mettre en place le traitement des messages dans la classe millebornes messages serveur le serveur doit formater les informations envoyées à chaque joueur parmi les informations à envoyer on retrouve au moment de la connexion le joueur reçois son identifiant unique id la liste des joueurs connectés lsplayers les informations concernant un joueur avec playerstate avant de jouer le joueur reçoit le contenu de sa main lscards au moment ou un joueur est prié de joueur il reçoit le prompt suivant your turn a la fin de la partie quel joueur à gagné winner a la fin de la partie au cas où il ne reste qu un joueur connecté il reçoit forfeit messages joueur le joueur n a besoin que de répondre au serveur lorsque c est à son tour de jouer pour se faire le joueur dispose des commandes suivantes pour attaquer un joueur use pour utiliser une carte qui n est pas une attaque use pour jeter une carte discard
1
19,398
25,539,288,237
IssuesEvent
2022-11-29 14:16:31
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Multi-model after time average
preprocessor
Hey, I'm hoping that someone can help me figure out whats going wrong here. I'm trying produce a multi-model mean of a 2D (x-z dimensional) field. It's a fairly complex preprocessor, several of the stages can be quite slow, and I'll need to run it over lots (dozens?) of model datasets. With that in mind, I'm trying to keep it lightweight: ``` prep_transect: # For extracting a transect custom_order: true time_average: regrid: target_grid: 1x1 scheme: linear zonal_means: coordinate: longitude mean_type: mean extract_levels: levels: [0.1, 0.5, 1, 10, 20, 40, 80, 120, 160, 200, 240, 280, 320, 360, 400, 440, 480, 520, 560, 600, 640, 680, 720, 760, 800, 840, 880, 920, 960, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000, 5200, 5400, 5600, 5800] scheme: linear multi_model_statistics: span: full statistics: [mean, ] ``` (The extract_levels field is a bit silly, please don't worry about it too much.) The problem that I'm seeing now is that the `multi_model_statistics` part doesn't produce any results. I think that this is because it can't find a time overlap between the files: ``` 2019-03-12 15:56:35,921 UTC [29013] DEBUG esmvaltool.preprocessor._multimodel:304 Multimodel statistics: computing: ['mean'] 2019-03-12 15:56:35,923 UTC [29013] INFO esmvaltool.preprocessor._multimodel:313 Time overlap between cubes is none or a single point.check datasets: will not compute statistics. ``` The first step of the preprocessor is to take a time average, as this reduces the workload of the function by an order of magnitude or more. However, I suspect that this is the reason why it can't find any overlap in the time range between the models. Perhaps people can suggest a better way to do this - or perhaps a way to get the multi-model mean function to ignore the time overlap? Cheers!
1.0
Multi-model after time average - Hey, I'm hoping that someone can help me figure out whats going wrong here. I'm trying produce a multi-model mean of a 2D (x-z dimensional) field. It's a fairly complex preprocessor, several of the stages can be quite slow, and I'll need to run it over lots (dozens?) of model datasets. With that in mind, I'm trying to keep it lightweight: ``` prep_transect: # For extracting a transect custom_order: true time_average: regrid: target_grid: 1x1 scheme: linear zonal_means: coordinate: longitude mean_type: mean extract_levels: levels: [0.1, 0.5, 1, 10, 20, 40, 80, 120, 160, 200, 240, 280, 320, 360, 400, 440, 480, 520, 560, 600, 640, 680, 720, 760, 800, 840, 880, 920, 960, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000, 5200, 5400, 5600, 5800] scheme: linear multi_model_statistics: span: full statistics: [mean, ] ``` (The extract_levels field is a bit silly, please don't worry about it too much.) The problem that I'm seeing now is that the `multi_model_statistics` part doesn't produce any results. I think that this is because it can't find a time overlap between the files: ``` 2019-03-12 15:56:35,921 UTC [29013] DEBUG esmvaltool.preprocessor._multimodel:304 Multimodel statistics: computing: ['mean'] 2019-03-12 15:56:35,923 UTC [29013] INFO esmvaltool.preprocessor._multimodel:313 Time overlap between cubes is none or a single point.check datasets: will not compute statistics. ``` The first step of the preprocessor is to take a time average, as this reduces the workload of the function by an order of magnitude or more. However, I suspect that this is the reason why it can't find any overlap in the time range between the models. Perhaps people can suggest a better way to do this - or perhaps a way to get the multi-model mean function to ignore the time overlap? Cheers!
process
multi model after time average hey i m hoping that someone can help me figure out whats going wrong here i m trying produce a multi model mean of a x z dimensional field it s a fairly complex preprocessor several of the stages can be quite slow and i ll need to run it over lots dozens of model datasets with that in mind i m trying to keep it lightweight prep transect for extracting a transect custom order true time average regrid target grid scheme linear zonal means coordinate longitude mean type mean extract levels levels scheme linear multi model statistics span full statistics the extract levels field is a bit silly please don t worry about it too much the problem that i m seeing now is that the multi model statistics part doesn t produce any results i think that this is because it can t find a time overlap between the files utc debug esmvaltool preprocessor multimodel multimodel statistics computing utc info esmvaltool preprocessor multimodel time overlap between cubes is none or a single point check datasets will not compute statistics the first step of the preprocessor is to take a time average as this reduces the workload of the function by an order of magnitude or more however i suspect that this is the reason why it can t find any overlap in the time range between the models perhaps people can suggest a better way to do this or perhaps a way to get the multi model mean function to ignore the time overlap cheers
1
74,572
9,087,222,180
IssuesEvent
2019-02-18 13:11:28
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
closed
Drag and drop content within blocks
Drag and Drop Needs Design Feedback [Block] Paragraph [Component] Raw Handling
**Issue** When writing text in a paragraph it would be wonderful to be able to highlight a sentence and drag it to another part of my paragraph. Currently I can highlight text, but if I hold on my cursor and try to move it it wont go. To rearrange content I have to cut and paste by keyboard shortcuts or through the menu on my mouse. ![trying to move text](https://cld.wthms.co/nom0jD+) (Image link: https://cld.wthms.co/nom0jD ) **Solution** When text is highlighted in a paragraph it can be dragged to another part of that same paragraph or a different paragraph block, much the same as how I can in a text editor on my computer. ![moving text](https://cld.wthms.co/MOCqhf+) (Image link: https://cld.wthms.co/MOCqhf )
1.0
Drag and drop content within blocks - **Issue** When writing text in a paragraph it would be wonderful to be able to highlight a sentence and drag it to another part of my paragraph. Currently I can highlight text, but if I hold on my cursor and try to move it it wont go. To rearrange content I have to cut and paste by keyboard shortcuts or through the menu on my mouse. ![trying to move text](https://cld.wthms.co/nom0jD+) (Image link: https://cld.wthms.co/nom0jD ) **Solution** When text is highlighted in a paragraph it can be dragged to another part of that same paragraph or a different paragraph block, much the same as how I can in a text editor on my computer. ![moving text](https://cld.wthms.co/MOCqhf+) (Image link: https://cld.wthms.co/MOCqhf )
non_process
drag and drop content within blocks issue when writing text in a paragraph it would be wonderful to be able to highlight a sentence and drag it to another part of my paragraph currently i can highlight text but if i hold on my cursor and try to move it it wont go to rearrange content i have to cut and paste by keyboard shortcuts or through the menu on my mouse image link solution when text is highlighted in a paragraph it can be dragged to another part of that same paragraph or a different paragraph block much the same as how i can in a text editor on my computer image link
0
5,099
7,880,726,420
IssuesEvent
2018-06-26 16:45:26
amarbajric/EBUSA-AIM17
https://api.github.com/repos/amarbajric/EBUSA-AIM17
closed
Test S-BPM Modeller v2
SWD business processes frontend review
# ToDo: - Install new version of SBPM Modeller from mkolody and test it - `https://github.com/mkolodiy/s-bpm-modeler` - Check, if Processes modelled with the new version can be imported into our platform - If **NO**, consult @amarbajric for further discussion - If **YES**, model the three example processes already modelled with the old modeller with the new one (this time in english) and UPDATE the corresponding process description which will be the example description available in the ProcessStore - Check, If old SBPM Modeller can be replaced with the new one (since the old one is not working correctly)
1.0
Test S-BPM Modeller v2 - # ToDo: - Install new version of SBPM Modeller from mkolody and test it - `https://github.com/mkolodiy/s-bpm-modeler` - Check, if Processes modelled with the new version can be imported into our platform - If **NO**, consult @amarbajric for further discussion - If **YES**, model the three example processes already modelled with the old modeller with the new one (this time in english) and UPDATE the corresponding process description which will be the example description available in the ProcessStore - Check, If old SBPM Modeller can be replaced with the new one (since the old one is not working correctly)
process
test s bpm modeller todo install new version of sbpm modeller from mkolody and test it check if processes modelled with the new version can be imported into our platform if no consult amarbajric for further discussion if yes model the three example processes already modelled with the old modeller with the new one this time in english and update the corresponding process description which will be the example description available in the processstore check if old sbpm modeller can be replaced with the new one since the old one is not working correctly
1
118,044
25,238,849,186
IssuesEvent
2022-11-15 04:55:57
WordPress/openverse-frontend
https://api.github.com/repos/WordPress/openverse-frontend
closed
Create `useUiStateCookie` composable
🟧 priority: high ✨ goal: improvement 💻 aspect: code
## Problem <!-- Describe a problem solved by this feature; or delete the section entirely. --> As part of https://github.com/WordPress/openverse/pull/164 we need a `useUiStateCookie` that reveals a readonly `reactive` object representing the UI state cookie. We should implement this as a Pinia store to avoid global Ref issues with SSR (extremely important for this one otherwise cookie state could easily be leaked across requests). The store should grab default state variables from the cookie if it exists. Otherwise, it should create the cookie and set the appropriate defaults. Use UA sniffing to set the correct default breakpoint (`sm` for mobile UAs, `md` otherwise). The store should use `useEventListener` to watch for resize changes and update the cookie breakpoint value appropriately. (Please debounce this function so that resizes aren't updating the cookie until resizing has stopped). ## Description <!-- Describe the feature and how it solves the problem. --> The store should expose `set` handles for updating the store that also update the stored cookie value. Blocked by Pinia infrastructure being added. ## Additional context <!-- Add any other context about the feature here; or delete the section entirely. --> ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
1.0
Create `useUiStateCookie` composable - ## Problem <!-- Describe a problem solved by this feature; or delete the section entirely. --> As part of https://github.com/WordPress/openverse/pull/164 we need a `useUiStateCookie` that reveals a readonly `reactive` object representing the UI state cookie. We should implement this as a Pinia store to avoid global Ref issues with SSR (extremely important for this one otherwise cookie state could easily be leaked across requests). The store should grab default state variables from the cookie if it exists. Otherwise, it should create the cookie and set the appropriate defaults. Use UA sniffing to set the correct default breakpoint (`sm` for mobile UAs, `md` otherwise). The store should use `useEventListener` to watch for resize changes and update the cookie breakpoint value appropriately. (Please debounce this function so that resizes aren't updating the cookie until resizing has stopped). ## Description <!-- Describe the feature and how it solves the problem. --> The store should expose `set` handles for updating the store that also update the stored cookie value. Blocked by Pinia infrastructure being added. ## Additional context <!-- Add any other context about the feature here; or delete the section entirely. --> ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
non_process
create useuistatecookie composable problem as part of we need a useuistatecookie that reveals a readonly reactive object representing the ui state cookie we should implement this as a pinia store to avoid global ref issues with ssr extremely important for this one otherwise cookie state could easily be leaked across requests the store should grab default state variables from the cookie if it exists otherwise it should create the cookie and set the appropriate defaults use ua sniffing to set the correct default breakpoint sm for mobile uas md otherwise the store should use useeventlistener to watch for resize changes and update the cookie breakpoint value appropriately please debounce this function so that resizes aren t updating the cookie until resizing has stopped description the store should expose set handles for updating the store that also update the stored cookie value blocked by pinia infrastructure being added additional context implementation 🙋 i would be interested in implementing this feature
0
319,188
23,759,862,623
IssuesEvent
2022-09-01 07:58:22
Unity-Technologies/com.unity.multiplayer.docs
https://api.github.com/repos/Unity-Technologies/com.unity.multiplayer.docs
closed
Feedback for ersioned_docs/version-1.0.0/advanced-topics/network-update-loop-system/index.md
documentation IN JIRA
Page does not describe how to add any kind of events to it. Please add an example.
1.0
Feedback for ersioned_docs/version-1.0.0/advanced-topics/network-update-loop-system/index.md - Page does not describe how to add any kind of events to it. Please add an example.
non_process
feedback for ersioned docs version advanced topics network update loop system index md page does not describe how to add any kind of events to it please add an example
0
4,815
7,702,756,043
IssuesEvent
2018-05-21 04:48:26
log2timeline/plaso
https://api.github.com/repos/log2timeline/plaso
closed
Linux pre-processor fails on mounted xfs directory of RHEL installation
bug preprocessing
While testing plaso on RHEL ``` Traceback (most recent call last): File "plaso/tools/log2timeline.py", line 68, in <module> if not Main(): File "plaso/tools/log2timeline.py", line 54, in Main tool.ExtractEventsFromSources() File "/plaso/cli/log2timeline_tool.py", line 410, in ExtractEventsFromSources self._PreprocessSources(extraction_engine) File "/plaso/cli/extraction_tool.py", line 171, in _PreprocessSources resolver_context=self._resolver_context) File "/plaso/engine/engine.py", line 254, in PreprocessSources artifacts_registry, file_system, mount_point, self.knowledge_base) File "/plaso/preprocessors/manager.py", line 276, in RunPlugins artifacts_registry, knowledge_base, searcher, file_system) File "/plaso/preprocessors/manager.py", line 146, in CollectFromFileSystem knowledge_base, artifact_definition, searcher, file_system) File "/plaso/preprocessors/interface.py", line 82, in Collect source.separator) File "/plaso/preprocessors/interface.py", line 135, in _ParsePathSpecification self._ParseFileEntry(knowledge_base, file_entry) File "/plaso/preprocessors/interface.py", line 171, in _ParseFileEntry self._ParseFileData(knowledge_base, file_object) File "/plaso/preprocessors/linux.py", line 158, in _ParseFileData key, value = line.split('=') ValueError: need more than 1 value to unpack ```
1.0
Linux pre-processor fails on mounted xfs directory of RHEL installation - While testing plaso on RHEL ``` Traceback (most recent call last): File "plaso/tools/log2timeline.py", line 68, in <module> if not Main(): File "plaso/tools/log2timeline.py", line 54, in Main tool.ExtractEventsFromSources() File "/plaso/cli/log2timeline_tool.py", line 410, in ExtractEventsFromSources self._PreprocessSources(extraction_engine) File "/plaso/cli/extraction_tool.py", line 171, in _PreprocessSources resolver_context=self._resolver_context) File "/plaso/engine/engine.py", line 254, in PreprocessSources artifacts_registry, file_system, mount_point, self.knowledge_base) File "/plaso/preprocessors/manager.py", line 276, in RunPlugins artifacts_registry, knowledge_base, searcher, file_system) File "/plaso/preprocessors/manager.py", line 146, in CollectFromFileSystem knowledge_base, artifact_definition, searcher, file_system) File "/plaso/preprocessors/interface.py", line 82, in Collect source.separator) File "/plaso/preprocessors/interface.py", line 135, in _ParsePathSpecification self._ParseFileEntry(knowledge_base, file_entry) File "/plaso/preprocessors/interface.py", line 171, in _ParseFileEntry self._ParseFileData(knowledge_base, file_object) File "/plaso/preprocessors/linux.py", line 158, in _ParseFileData key, value = line.split('=') ValueError: need more than 1 value to unpack ```
process
linux pre processor fails on mounted xfs directory of rhel installation while testing plaso on rhel traceback most recent call last file plaso tools py line in if not main file plaso tools py line in main tool extracteventsfromsources file plaso cli tool py line in extracteventsfromsources self preprocesssources extraction engine file plaso cli extraction tool py line in preprocesssources resolver context self resolver context file plaso engine engine py line in preprocesssources artifacts registry file system mount point self knowledge base file plaso preprocessors manager py line in runplugins artifacts registry knowledge base searcher file system file plaso preprocessors manager py line in collectfromfilesystem knowledge base artifact definition searcher file system file plaso preprocessors interface py line in collect source separator file plaso preprocessors interface py line in parsepathspecification self parsefileentry knowledge base file entry file plaso preprocessors interface py line in parsefileentry self parsefiledata knowledge base file object file plaso preprocessors linux py line in parsefiledata key value line split valueerror need more than value to unpack
1
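The crash in the record above comes from splitting an os-release-style `KEY=value` line on `'='` and unpacking into exactly two names, which raises `ValueError` on any line without an `'='` (or silently misbehaves with more than one). A minimal, hedged sketch of tolerant parsing — the helper name is illustrative, not plaso's actual API:

```python
def parse_key_values(data):
    """Parse os-release-style "KEY=value" lines tolerantly.

    Lines without an '=' are skipped instead of raising ValueError,
    and values containing '=' stay intact because we split only once.
    """
    result = {}
    for line in data.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, separator, value = line.partition('=')
        if not separator:
            # No '=' at all -- the naive `key, value = line.split('=')`
            # unpack from the traceback would crash here.
            continue
        result[key.strip()] = value.strip().strip('"')
    return result
```

`str.partition` always returns a 3-tuple, so the unpack can never fail regardless of the input line.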
317,268
9,662,622,517
IssuesEvent
2019-05-20 21:21:26
wevote/WebApp
https://api.github.com/repos/wevote/WebApp
closed
Ballot Search Box: On mac in mobile mode, extra horizontal line appears
Difficulty: Easy Priority: 2
This only happens on Mac/Chrome. Related to: https://github.com/wevote/WebApp/issues/2058 ![Screen Shot 2019-05-07 at 9 08 31 AM](https://user-images.githubusercontent.com/7756031/57315190-f3952d00-70a7-11e9-973f-0c7ee2b8b550.png)
1.0
Ballot Search Box: On mac in mobile mode, extra horizontal line appears - This only happens on Mac/Chrome. Related to: https://github.com/wevote/WebApp/issues/2058 ![Screen Shot 2019-05-07 at 9 08 31 AM](https://user-images.githubusercontent.com/7756031/57315190-f3952d00-70a7-11e9-973f-0c7ee2b8b550.png)
non_process
ballot search box on mac in mobile mode extra horizontal line appears this only happens on mac chrome related to
0
19,267
25,455,950,626
IssuesEvent
2022-11-24 14:12:26
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
AttributeProcessor Action Builder
Stale processor/attributes priority:needed
**Is your feature request related to a problem? Please describe.** I would like to add actions to the AttributeProcessor with the Config package, because I would like to create the pipelines within runtime. **Describe the solution you'd like** I have the code here, I did it before to validate a POC that I am working in integrating the opentelemetry-collector with a project that I work with. This an example of my suggestion ```golang baseCfg.AddInsertActionKeyValue("attribute1", 123) baseCfg.AddInsertActionFromContext("attribute2", "sinkId") baseCfg.AddInsertActionFromAttribute("attribute3", "attribute2") baseCfg.AddConvertAction("attribute4", "string") baseCfg.AddHashAction("attribute5") baseCfg.AddUpdateActionKeyValue("attribute6", "789") baseCfg.AddUpdateActionFromContext("attribute7", "sinkId") baseCfg.AddUpdateActionFromAttribute("attribute8", "attribute2") baseCfg.AddDeleteActionKey("attribute9") baseCfg.AddUpsertActionKeyValue("attribute10", 123) baseCfg.AddUpsertActionFromContext("attribute11", "sinkId") baseCfg.AddUpsertActionFromAttribute("attribute12", "attribute2") ``` **Describe alternatives you've considered** Without making the attraction package public, there is no other alternative except accessing its features using methods, Factory could also be used here, but I really like the approach of using Config, since it is more direct since config.Actions is the subject to change. I talked with people in Slack and they suggested me this approach. **Additional context** I am working with ns1labs/orb project and adding this will enhance a lot the usage and adoption of opentelemetry.
1.0
AttributeProcessor Action Builder - **Is your feature request related to a problem? Please describe.** I would like to add actions to the AttributeProcessor with the Config package, because I would like to create the pipelines within runtime. **Describe the solution you'd like** I have the code here, I did it before to validate a POC that I am working in integrating the opentelemetry-collector with a project that I work with. This an example of my suggestion ```golang baseCfg.AddInsertActionKeyValue("attribute1", 123) baseCfg.AddInsertActionFromContext("attribute2", "sinkId") baseCfg.AddInsertActionFromAttribute("attribute3", "attribute2") baseCfg.AddConvertAction("attribute4", "string") baseCfg.AddHashAction("attribute5") baseCfg.AddUpdateActionKeyValue("attribute6", "789") baseCfg.AddUpdateActionFromContext("attribute7", "sinkId") baseCfg.AddUpdateActionFromAttribute("attribute8", "attribute2") baseCfg.AddDeleteActionKey("attribute9") baseCfg.AddUpsertActionKeyValue("attribute10", 123) baseCfg.AddUpsertActionFromContext("attribute11", "sinkId") baseCfg.AddUpsertActionFromAttribute("attribute12", "attribute2") ``` **Describe alternatives you've considered** Without making the attraction package public, there is no other alternative except accessing its features using methods, Factory could also be used here, but I really like the approach of using Config, since it is more direct since config.Actions is the subject to change. I talked with people in Slack and they suggested me this approach. **Additional context** I am working with ns1labs/orb project and adding this will enhance a lot the usage and adoption of opentelemetry.
process
attributeprocessor action builder is your feature request related to a problem please describe i would like to add actions to the attributeprocessor with the config package because i would like to create the pipelines within runtime describe the solution you d like i have the code here i did it before to validate a poc that i am working in integrating the opentelemetry collector with a project that i work with this an example of my suggestion golang basecfg addinsertactionkeyvalue basecfg addinsertactionfromcontext sinkid basecfg addinsertactionfromattribute basecfg addconvertaction string basecfg addhashaction basecfg addupdateactionkeyvalue basecfg addupdateactionfromcontext sinkid basecfg addupdateactionfromattribute basecfg adddeleteactionkey basecfg addupsertactionkeyvalue basecfg addupsertactionfromcontext sinkid basecfg addupsertactionfromattribute describe alternatives you ve considered without making the attraction package public there is no other alternative except accessing its features using methods factory could also be used here but i really like the approach of using config since it is more direct since config actions is the subject to change i talked with people in slack and they suggested me this approach additional context i am working with orb project and adding this will enhance a lot the usage and adoption of opentelemetry
1
14,471
17,580,347,023
IssuesEvent
2021-08-16 06:23:13
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
A class that extends `Function` is not getting the correct prototype when running under Hammerhead
TYPE: bug AREA: client FREQUENCY: level 2 SYSTEM: client side processing
### What is the Current behavior? A class that extends `Function` is not getting the correct prototype. To reproduce, create a minimal app that executes the following: ```js class Test extends Function { extend() {} } console.log(new Test().__proto__); ``` Running it in Chrome shows the following in dev tools: ![image](https://user-images.githubusercontent.com/15754/92962676-4a9ad700-f43f-11ea-8ceb-90d526fd01a0.png) When running under Testcafe, the following is shown: ![image](https://user-images.githubusercontent.com/15754/92962632-38b93400-f43f-11ea-8b2a-7d317c534128.png) In Testcafe, instantiated classes that extend `Function` no longer have access to their methods. Presumably this is due to the native `Function` [being overwritten](https://github.com/DevExpress/testcafe-hammerhead/blob/f7c0cc4cc53fa4fbc0c05fe7ae9801d4c330576c/src/client/sandbox/node/window.ts#L726-L728) but I didn't do much other digging once I could create a test case. ### Your Environment details: * node.js version: v14.4.0 * browser name and version: Chrome Version 85.0.4183.102 (Official Build) (64-bit) * platform and version: Observed on OSX and Windows
1.0
A class that extends `Function` is not getting the correct prototype when running under Hammerhead - ### What is the Current behavior? A class that extends `Function` is not getting the correct prototype. To reproduce, create a minimal app that executes the following: ```js class Test extends Function { extend() {} } console.log(new Test().__proto__); ``` Running it in Chrome shows the following in dev tools: ![image](https://user-images.githubusercontent.com/15754/92962676-4a9ad700-f43f-11ea-8ceb-90d526fd01a0.png) When running under Testcafe, the following is shown: ![image](https://user-images.githubusercontent.com/15754/92962632-38b93400-f43f-11ea-8b2a-7d317c534128.png) In Testcafe, instantiated classes that extend `Function` no longer have access to their methods. Presumably this is due to the native `Function` [being overwritten](https://github.com/DevExpress/testcafe-hammerhead/blob/f7c0cc4cc53fa4fbc0c05fe7ae9801d4c330576c/src/client/sandbox/node/window.ts#L726-L728) but I didn't do much other digging once I could create a test case. ### Your Environment details: * node.js version: v14.4.0 * browser name and version: Chrome Version 85.0.4183.102 (Official Build) (64-bit) * platform and version: Observed on OSX and Windows
process
a class that extends function is not getting the correct prototype when running under hammerhead what is the current behavior a class that extends function is not getting the correct prototype to reproduce create a minimal app that executes the following js class test extends function extend console log new test proto running it in chrome shows the following in dev tools when running under testcafe the following is shown in testcafe instantiated classes that extend function no longer have access to their methods presumably this is due to the native function but i didn t do much other digging once i could create a test case your environment details node js version browser name and version chrome version official build bit platform and version observed on osx and windows
1
10,631
13,441,577,339
IssuesEvent
2020-09-08 04:35:00
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Unhandled java error: java.lang.IndexOutOfBoundsException
bug needs reproduction preprocess
## Expected Behavior I expect the toolkit to rescue this Java error and return a DITA-OT error telling me that the file could not be transformed. The DITA build should either continue without this file, or exit at this point. The issue I came across is that this failed silently in our build pipeline and ended up uploading an empty folder. Admittedly, the file that's causing the issue is a total mess. Cleaning up the many DITA validation warnings in the file solved the issue. I've attached a map with the original file and a fixed version. Because there are so many issues with the file, it was difficult to ascertain which of the issues is causing the java error. Our previous pipeline used 3.3.3 and did not produce the error. ## Actual Behavior Running this map with a clean version of 3.5.3 (no plugins) causes a java error: ``` Error: java.lang.IndexOutOfBoundsException: Index 3 out-of-bounds for length 3 ``` The build exits with a 0, so if I'm running multiple builds in a script, there is no indication that it failed without consulting the DITA log. ## Steps to Reproduce 1. Unzip the attached archive 2. Run an HTML transformation on the file: `path/to/dita -i test.ditamap -o html -f html5 -d` * DITA-OT version: 3.5.3 * Operating system and version: MacOS 10.15.6 * How did you run DITA-OT? `path/to/dita -i test.ditamap -o html -f html5 -d` Attachement: [java_error.zip](https://github.com/dita-ot/dita-ot/files/5152581/java_error.zip) <!-- Before submitting, check the Preview tab above to verify the XML markup appears correctly and remember you can edit the description later to add information. -->
1.0
Unhandled java error: java.lang.IndexOutOfBoundsException - ## Expected Behavior I expect the toolkit to rescue this Java error and return a DITA-OT error telling me that the file could not be transformed. The DITA build should either continue without this file, or exit at this point. The issue I came across is that this failed silently in our build pipeline and ended up uploading an empty folder. Admittedly, the file that's causing the issue is a total mess. Cleaning up the many DITA validation warnings in the file solved the issue. I've attached a map with the original file and a fixed version. Because there are so many issues with the file, it was difficult to ascertain which of the issues is causing the java error. Our previous pipeline used 3.3.3 and did not produce the error. ## Actual Behavior Running this map with a clean version of 3.5.3 (no plugins) causes a java error: ``` Error: java.lang.IndexOutOfBoundsException: Index 3 out-of-bounds for length 3 ``` The build exits with a 0, so if I'm running multiple builds in a script, there is no indication that it failed without consulting the DITA log. ## Steps to Reproduce 1. Unzip the attached archive 2. Run an HTML transformation on the file: `path/to/dita -i test.ditamap -o html -f html5 -d` * DITA-OT version: 3.5.3 * Operating system and version: MacOS 10.15.6 * How did you run DITA-OT? `path/to/dita -i test.ditamap -o html -f html5 -d` Attachement: [java_error.zip](https://github.com/dita-ot/dita-ot/files/5152581/java_error.zip) <!-- Before submitting, check the Preview tab above to verify the XML markup appears correctly and remember you can edit the description later to add information. -->
process
unhandled java error java lang indexoutofboundsexception expected behavior i expect the toolkit to rescue this java error and return a dita ot error telling me that the file could not be transformed the dita build should either continue without this file or exit at this point the issue i came across is that this failed silently in our build pipeline and ended up uploading an empty folder admittedly the file that s causing the issue is a total mess cleaning up the many dita validation warnings in the file solved the issue i ve attached a map with the original file and a fixed version because there are so many issues with the file it was difficult to ascertain which of the issues is causing the java error our previous pipeline used and did not produce the error actual behavior running this map with a clean version of no plugins causes a java error error java lang indexoutofboundsexception index out of bounds for length the build exits with a so if i m running multiple builds in a script there is no indication that it failed without consulting the dita log steps to reproduce unzip the attached archive run an html transformation on the file path to dita i test ditamap o html f d dita ot version operating system and version macos how did you run dita ot path to dita i test ditamap o html f d attachement before submitting check the preview tab above to verify the xml markup appears correctly and remember you can edit the description later to add information
1
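The record above notes that the build exits with status 0 even when an unhandled Java error occurs, so a pipeline cannot trust the exit code alone. One defensive pattern is to also scan the captured output for exception markers; this is a sketch under assumptions (the marker patterns are illustrative, not an exhaustive list of DITA-OT failure signatures):

```python
import re

# Markers suggesting an unhandled Java error leaked through a build that
# still exited 0. These patterns are illustrative, not exhaustive.
ERROR_PATTERNS = [
    re.compile(r'java\.lang\.\w*Exception'),
    re.compile(r'^Error:', re.MULTILINE),
]

def build_looks_failed(log_text, exit_code):
    """Treat a build as failed if the exit code is non-zero OR the
    captured output contains a Java exception marker."""
    if exit_code != 0:
        return True
    return any(pattern.search(log_text) for pattern in ERROR_PATTERNS)
```

In a CI script this check would run on the combined stdout/stderr of the `dita` invocation before deciding whether to upload the output folder.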
292,124
25,202,083,488
IssuesEvent
2022-11-13 08:22:31
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: transfer-leases/drain-other-node failed
C-test-failure O-robot O-roachtest release-blocker branch-release-22.2.0
roachtest.transfer-leases/drain-other-node [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7478551?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7478551?buildTab=artifacts#/transfer-leases/drain-other-node) on release-22.2.0 @ [234c9295cc02150f919cfa96b09ee2fa07b68ace](https://github.com/cockroachdb/cockroach/commits/234c9295cc02150f919cfa96b09ee2fa07b68ace): ``` test artifacts and logs in: /artifacts/transfer-leases/drain-other-node/run_1 quit.go:72,quit.go:324,soon.go:69,retry.go:208,soon.go:75,soon.go:48,quit.go:228,quit.go:95,quit.go:154,context.go:91,quit.go:153,quit.go:95,quit.go:54,quit.go:361,test_runner.go:930: (1) ranges with no lease outside of node 3: []string{"37"} ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*transfer-leases/drain-other-node.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: transfer-leases/drain-other-node failed - roachtest.transfer-leases/drain-other-node [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7478551?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7478551?buildTab=artifacts#/transfer-leases/drain-other-node) on release-22.2.0 @ [234c9295cc02150f919cfa96b09ee2fa07b68ace](https://github.com/cockroachdb/cockroach/commits/234c9295cc02150f919cfa96b09ee2fa07b68ace): ``` test artifacts and logs in: /artifacts/transfer-leases/drain-other-node/run_1 quit.go:72,quit.go:324,soon.go:69,retry.go:208,soon.go:75,soon.go:48,quit.go:228,quit.go:95,quit.go:154,context.go:91,quit.go:153,quit.go:95,quit.go:54,quit.go:361,test_runner.go:930: (1) ranges with no lease outside of node 3: []string{"37"} ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*transfer-leases/drain-other-node.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
non_process
roachtest transfer leases drain other node failed roachtest transfer leases drain other node with on release test artifacts and logs in artifacts transfer leases drain other node run quit go quit go soon go retry go soon go soon go quit go quit go quit go context go quit go quit go quit go quit go test runner go ranges with no lease outside of node string parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb kv triage
0
18,893
24,833,461,764
IssuesEvent
2022-10-26 06:49:02
didi/mpx
https://api.github.com/repos/didi/mpx
closed
可否提供以百度小程序为母版构建其他小程序的方法?
Could you provide a way to build other mini programs using Baidu mini program as the base?
processing
Our company mainly targets Baidu mini programs, and we now need to expand to other platforms, so we need a cross-platform solution like MPX. However, it currently only supports building other mini programs with WeChat as the base; we especially need support for using Baidu as the base. I have looked into your @mpxjs/webpack-plugin plugin, but not in depth. If we modify it ourselves, please advise which parts need to be changed to meet this requirement. Or could the team provide this officially? Thanks!!
1.0
Could you provide a way to build other mini programs using Baidu mini program as the base? - Our company mainly targets Baidu mini programs, and we now need to expand to other platforms, so we need a cross-platform solution like MPX. However, it currently only supports building other mini programs with WeChat as the base; we especially need support for using Baidu as the base. I have looked into your @mpxjs/webpack-plugin plugin, but not in depth. If we modify it ourselves, please advise which parts need to be changed to meet this requirement. Or could the team provide this officially? Thanks!!
process
could you provide a way to build other mini programs using baidu mini program as the base our company mainly targets baidu mini programs and now needs to expand so we need a cross platform solution like mpx but currently only wechat is supported as the base for building other mini programs we especially need support for baidu as the base i have looked into your mpxjs webpack plugin plugin but not in depth if we modify it ourselves please advise which parts need to be changed to meet this requirement or could the team provide one officially thanks
1
6,926
10,084,619,574
IssuesEvent
2019-07-25 16:03:36
dry-python/dependencies
https://api.github.com/repos/dry-python/dependencies
opened
Support setup and teardown processes using @value and context manager.
asyncio injection-process
Value decorator can be generator or an async generator. This is very similar to the [contextlib.contextmanager](https://docs.python.org/3/library/contextlib.html#contextlib.contextmanager). It can be synchronous: ```python class Container(Injector): app = App @value def db_connection(pool): connection = pool.aquire() yield connection connection.release() with Container as initialized: initialized.app.process() ``` Or it can be asynchronous: ```python class Container(Injector): app = App @value async def db_connection(pool): connection = await pool.aquire() yield connection await connection.release() async with Container as initialized: await initialized.app.process() ``` In the case of `asyncio` explicit loop should be added to the injection scope and required by value decorated function.
1.0
Support setup and teardown processes using @value and context manager. - Value decorator can be generator or an async generator. This is very similar to the [contextlib.contextmanager](https://docs.python.org/3/library/contextlib.html#contextlib.contextmanager). It can be synchronous: ```python class Container(Injector): app = App @value def db_connection(pool): connection = pool.aquire() yield connection connection.release() with Container as initialized: initialized.app.process() ``` Or it can be asynchronous: ```python class Container(Injector): app = App @value async def db_connection(pool): connection = await pool.aquire() yield connection await connection.release() async with Container as initialized: await initialized.app.process() ``` In the case of `asyncio` explicit loop should be added to the injection scope and required by value decorated function.
process
support setup and teardown processes using value and context manager value decorator can be generator or an async generator this is very similar to the it can be synchronous python class container injector app app value def db connection pool connection pool aquire yield connection connection release with container as initialized initialized app process or it can be asynchronous python class container injector app app value async def db connection pool connection await pool aquire yield connection await connection release async with container as initialized await initialized app process in the case of asyncio explicit loop should be added to the injection scope and required by value decorated function
1
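The proposal in the record above is essentially `contextlib.contextmanager` semantics applied to injected values: code before `yield` is setup, code after is teardown. A minimal stand-alone sketch of that pattern — this is not the `dependencies` library's API, and the `Pool` class is a toy stand-in:

```python
from contextlib import contextmanager

class Pool:
    """Toy connection pool used only for this sketch."""
    def __init__(self):
        self.released = False
    def acquire(self):
        return self
    def release(self):
        self.released = True

# Generator provider: setup before `yield`, teardown after, mirroring
# the @value semantics proposed in the issue.
@contextmanager
def db_connection(pool):
    connection = pool.acquire()
    try:
        yield connection
    finally:
        connection.release()
```

Entering the context (`with db_connection(pool) as connection:`) runs the setup half; leaving it, even on an exception, runs the teardown half — which is the lifecycle the issue wants the container's `with`/`async with` blocks to drive.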
7,422
7,925,452,309
IssuesEvent
2018-07-05 20:41:20
aws/aws-sdk-js
https://api.github.com/repos/aws/aws-sdk-js
closed
Pinpoint SNS topics?
Service / API
Why are there no topics in pinpoints like in SNS? It is very difficult to send a push notification to a dynamically changing limited audience. sendUsersMessages has a limit of 15 recipients. I can't use segments because they are statically loaded from an s3 bucket. I guess I'll have to send my messages in 15 user chunks or maybe I'll switch to firebase
1.0
Pinpoint SNS topics? - Why are there no topics in pinpoints like in SNS? It is very difficult to send a push notification to a dynamically changing limited audience. sendUsersMessages has a limit of 15 recipients. I can't use segments because they are statically loaded from an s3 bucket. I guess I'll have to send my messages in 15 user chunks or maybe I'll switch to firebase
non_process
pinpoint sns topics why are there no topics in pinpoints like in sns it is very difficult to send a push notification to a dynamically changing limited audience sendusersmessages has a limit of recipients i can t use segments because they are statically loaded from an bucket i guess i ll have to send my messages in user chunks or maybe i ll switch to firebase
0
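The workaround the record above mentions — "send my messages in 15 user chunks" — can be sketched as a simple batching loop. The `send_batch` callable here is a placeholder for the actual Pinpoint `SendUsersMessages` call, and the limit of 15 is taken from the issue text, not verified against current service quotas:

```python
def chunked(items, size):
    """Yield consecutive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def send_to_all(user_ids, send_batch, batch_size=15):
    """Send a push message to every user in batches no larger than the
    per-call recipient limit. `send_batch` stands in for the real
    Pinpoint SendUsersMessages request."""
    for batch in chunked(user_ids, batch_size):
        send_batch(batch)
```

Each batch maps to one API request, so the dynamically changing audience is covered without a segment or topic.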
126,776
17,970,709,234
IssuesEvent
2021-09-14 01:21:31
dundermifflin0/struts-examples
https://api.github.com/repos/dundermifflin0/struts-examples
opened
CVE-2021-39140 (Medium) detected in xstream-1.4.11.1.jar
security vulnerability
## CVE-2021-39140 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.11.1.jar</b></p></summary> <p>XStream is a serialization library from Java objects to XML and back.</p> <p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p> <p>Path to dependency file: struts-examples/rest-angular/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.11.1/xstream-1.4.11.1.jar</p> <p> Dependency Hierarchy: - struts2-rest-plugin-2.5.25.jar (Root Library) - :x: **xstream-1.4.11.1.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to allocate 100% CPU time on the target system depending on CPU type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose. 
<p>Publish Date: 2021-08-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39140>CVE-2021-39140</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-6wf9-jmg9-vxcc">https://github.com/x-stream/xstream/security/advisories/GHSA-6wf9-jmg9-vxcc</a></p> <p>Release Date: 2021-08-23</p> <p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-39140 (Medium) detected in xstream-1.4.11.1.jar - ## CVE-2021-39140 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.11.1.jar</b></p></summary> <p>XStream is a serialization library from Java objects to XML and back.</p> <p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p> <p>Path to dependency file: struts-examples/rest-angular/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.11.1/xstream-1.4.11.1.jar</p> <p> Dependency Hierarchy: - struts2-rest-plugin-2.5.25.jar (Root Library) - :x: **xstream-1.4.11.1.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to allocate 100% CPU time on the target system depending on CPU type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose. 
<p>Publish Date: 2021-08-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39140>CVE-2021-39140</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-6wf9-jmg9-vxcc">https://github.com/x-stream/xstream/security/advisories/GHSA-6wf9-jmg9-vxcc</a></p> <p>Release Date: 2021-08-23</p> <p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in xstream jar cve medium severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back library home page a href path to dependency file struts examples rest angular pom xml path to vulnerable library home wss scanner repository com thoughtworks xstream xstream xstream jar dependency hierarchy rest plugin jar root library x xstream jar vulnerable library found in base branch master vulnerability details xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to allocate cpu time on the target system depending on cpu type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types xstream uses no longer a blacklist by default since it cannot be secured for general purpose publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream step up your open source security game with whitesource
0
19,612
25,962,646,063
IssuesEvent
2022-12-19 02:00:07
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Mon, 19 Dec 22
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Location-aware Adaptive Denormalization: A Deep Learning Approach For Wildfire Danger Forecasting - **Authors:** Mohamad Hakam Shams Eddin, Ribana Roscher, Juergen Gall - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08208 - **Pdf link:** https://arxiv.org/pdf/2212.08208 - **Abstract** Climate change is expected to intensify and increase extreme events in the weather cycle. Since this has a significant impact on various sectors of our life, recent works are concerned with identifying and predicting such extreme events from Earth observations. This paper proposes a 2D/3D two-branch convolutional neural network (CNN) for wildfire danger forecasting. To use a unified framework, previous approaches duplicate static variables along the time dimension and neglect the intrinsic differences between static and dynamic variables. Furthermore, most existing multi-branch architectures lose the interconnections between the branches during the feature learning stage. To address these issues, we propose a two-branch architecture with a Location-aware Adaptive Denormalization layer (LOADE). Using LOADE as a building block, we can modulate the dynamic features conditional on their geographical location. Thus, our approach considers feature properties as a unified yet compound 2D/3D model. Besides, we propose using an absolute temporal encoding for time-related forecasting problems. Our experimental results show a better performance of our approach than other baselines on the challenging FireCube dataset. 
### Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation - **Authors:** Sungsu Hur, Inkyu Shin, Kwanyong Park, Sanghyun Woo, In So Kweon - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08355 - **Pdf link:** https://arxiv.org/pdf/2212.08355 - **Abstract** Universal Domain Adaptation aims to transfer the knowledge between the datasets by handling two shifts: domain-shift and category-shift. The main challenge is correctly distinguishing the unknown target samples while adapting the distribution of known class knowledge from source to target. Most existing methods approach this problem by first training the target adapted known classifier and then relying on a single threshold to distinguish unknown target samples. However, this simple threshold-based approach prevents the model from considering the underlying complexities existing between the known and unknown samples in the high-dimensional feature space. In this paper, we propose a new approach in which we use two sets of feature points, namely dual Classifiers for Prototypes and Reciprocals (CPR). Our key idea is to associate each prototype with corresponding known class features while pushing the reciprocals apart from these prototypes to locate them in the potential unknown feature space. The target samples are then classified as unknown if they fall near any reciprocals at test time. To successfully train our framework, we collect the partial, confident target samples that are classified as known or unknown through our proposed multi-criteria selection. We then additionally apply the entropy loss regularization to them. For further adaptation, we also apply standard consistency regularization that matches the predictions of two different views of the input to make a more compact target feature space. 
We evaluate our proposal, CPR, on three standard benchmarks and achieve comparable or new state-of-the-art results. We also provide extensive ablation experiments to verify our main design choices in our framework. ### Fast-moving object counting with an event camera - **Authors:** Kamil Bialik, Marcin Kowalczyk, Krzysztof Blachut, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Systems and Control (eess.SY) - **Arxiv link:** https://arxiv.org/abs/2212.08384 - **Pdf link:** https://arxiv.org/pdf/2212.08384 - **Abstract** This paper proposes the use of an event camera as a component of a vision system that enables counting of fast-moving objects - in this case, falling corn grains. These types of cameras transmit information about the change in brightness of individual pixels and are characterised by low latency, no motion blur, correct operation in different lighting conditions, as well as very low power consumption. The proposed counting algorithm processes events in real time. The operation of the solution was demonstrated on a stand consisting of a chute with a vibrating feeder, which allowed the number of grains falling to be adjusted. The objective of the control system with a PID controller was to maintain a constant average number of falling objects. The proposed solution was subjected to a series of tests to determine the correctness of the developed method operation. On their basis, the validity of using an event camera to count small, fast-moving objects and the associated wide range of potential industrial applications can be confirmed. 
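The closed-loop setup described in the abstract above (a PID controller driving the vibrating feeder so that the average number of falling objects stays constant) can be sketched as follows. This is a minimal illustration only: the gains, setpoint, and control period are assumptions, not values from the paper.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not from the paper)."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint   # target average object count per window
        self.dt = dt               # control period in seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured):
        """Return a new feeder drive value from the measured object count."""
        error = self.setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical use: the measured value would come from the event-based
# counting algorithm, and the output would modulate the feeder amplitude.
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=10.0, dt=1.0)
drive = pid.update(8.0)  # counted fewer grains than the setpoint -> drive up
```

In the paper's setup the measurement side is the real-time event-stream counter; here it is simply a number passed to `update`.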
### Traffic sign detection and recognition using event camera image reconstruction - **Authors:** Kamil Jeziorek, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.08387 - **Pdf link:** https://arxiv.org/pdf/2212.08387 - **Abstract** This paper presents a method for detection and recognition of traffic signs based on information extracted from an event camera. The solution used a FireNet deep convolutional neural network to reconstruct events into greyscale frames. Two YOLOv4 network models were trained, one based on greyscale images and the other on colour images. The best result was achieved for the model trained on the basis of greyscale images, achieving an efficiency of 87.03%. ## Keyword: event camera ### Fast-moving object counting with an event camera - **Authors:** Kamil Bialik, Marcin Kowalczyk, Krzysztof Blachut, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Systems and Control (eess.SY) - **Arxiv link:** https://arxiv.org/abs/2212.08384 - **Pdf link:** https://arxiv.org/pdf/2212.08384 - **Abstract** This paper proposes the use of an event camera as a component of a vision system that enables counting of fast-moving objects - in this case, falling corn grains. These types of cameras transmit information about the change in brightness of individual pixels and are characterised by low latency, no motion blur, correct operation in different lighting conditions, as well as very low power consumption. The proposed counting algorithm processes events in real time. The operation of the solution was demonstrated on a stand consisting of a chute with a vibrating feeder, which allowed the number of grains falling to be adjusted. The objective of the control system with a PID controller was to maintain a constant average number of falling objects. 
The proposed solution was subjected to a series of tests to determine the correctness of the developed method operation. On their basis, the validity of using an event camera to count small, fast-moving objects and the associated wide range of potential industrial applications can be confirmed. ### Traffic sign detection and recognition using event camera image reconstruction - **Authors:** Kamil Jeziorek, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.08387 - **Pdf link:** https://arxiv.org/pdf/2212.08387 - **Abstract** This paper presents a method for detection and recognition of traffic signs based on information extracted from an event camera. The solution used a FireNet deep convolutional neural network to reconstruct events into greyscale frames. Two YOLOv4 network models were trained, one based on greyscale images and the other on colour images. The best result was achieved for the model trained on the basis of greyscale images, achieving an efficiency of 87.03%. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast ### On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study - **Authors:** Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, Xuezhe Ma - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08650 - **Pdf link:** https://arxiv.org/pdf/2212.08650 - **Abstract** It is well established in neuroscience that color vision plays an essential part in the human visual perception system. Meanwhile, many novel designs for computer vision inspired by human vision have achieved success in a wide range of tasks and applications. Nonetheless, how color differences affect machine vision has not been well explored. 
Our work tries to bridge this gap between the human color vision aspect of visual recognition and that of the machine. To achieve this, we curate two datasets: CIFAR10-F and CIFAR100-F, which are based on the foreground colors of the popular CIFAR datasets. Together with CIFAR10-B and CIFAR100-B, the existing counterpart datasets with information on the background colors of CIFAR test sets, we assign each image based on its color contrast level per its foreground and background color labels and use this as a proxy to study how color contrast affects machine vision. We first conduct a proof-of-concept study, showing the effect of color difference and validate our datasets. Furthermore, on a broader level, an important characteristic of human vision is its robustness against ambient changes; therefore, drawing inspirations from ophthalmology and the robustness literature, we analogize contrast sensitivity from the human visual aspect to machine vision and complement the current robustness study using corrupted images with our CIFAR-CoCo datasets. In summary, motivated by neuroscience and equipped with the datasets we curate, we devise a new framework in two dimensions to perform extensive analyses on the effect of color contrast and corrupted images: (1) model architecture, (2) model size, to measure the perception ability of machine vision beyond total accuracy. We also explore how task complexity and data augmentation play a role in this setup. Our results call attention to new evaluation approaches for human-like machine perception. 
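The abstract above does not state how the per-image "color contrast level" is computed from the foreground and background color labels; one common, standards-based proxy is the WCAG relative-luminance contrast ratio, sketched below. The choice of metric is an assumption for illustration, not the paper's definition.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color with channels in [0, 255]."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; ranges from 1 (identical) to 21."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)


# Hypothetical use: bin each image by the ratio between its foreground
# and background color labels to get a coarse "contrast level".
level = contrast_ratio((255, 255, 255), (0, 0, 0))  # maximal: white on black
```

Under this proxy, an image whose foreground and background labels are identical would sit at ratio 1, and white-on-black at the maximum of 21; the CIFAR-F/B pairs described above supply the two color labels per image.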
## Keyword: AWB There is no result ## Keyword: ISP ### An annotated instance segmentation XXL-CT dataset from a historic airplane - **Authors:** Roland Gruber (1 and 2), Nils Reims (1), Andreas Hempfer (3), Stefan Gerth (1), Michael Salamon (1), Thomas Wittenberg (1 and 2) ((1) Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS (2) Friedrich-Alexander-Universität Erlangen-Nürnberg, (3) Deutsches Museum, München) - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08639 - **Pdf link:** https://arxiv.org/pdf/2212.08639 - **Abstract** The Me 163 was a Second World War fighter airplane and a result of the German air force's secret developments. One of these airplanes is currently owned and displayed in the historic aircraft exhibition of the Deutsches Museum in Munich, Germany. To gain insights with respect to its history, design and state of preservation, a complete CT scan was obtained using an industrial XXL-computer tomography scanner. Using the CT data from the Me 163, all its details can visually be examined at various levels, ranging from the complete hull down to single sprockets and rivets. However, while a trained human observer can identify and interpret the volumetric data with all its parts and connections, a virtual dissection of the airplane and all its different parts would be quite desirable. Nevertheless, this means that an instance segmentation of all components and objects of interest into disjoint entities from the CT data is necessary. As currently no adequate computer-assisted tools for automated or semi-automated segmentation of such XXL-airplane data are available, in a first step an interactive data annotation and object labeling process has been established. 
So far, seven 512 x 512 x 512 voxel sub-volumes from the Me 163 airplane have been annotated and labeled, whose results can potentially be used for various new applications in the field of digital heritage, non-destructive testing, or machine-learning. This work describes the data acquisition process of the airplane using an industrial XXL-CT scanner, outlines the interactive segmentation and labeling scheme to annotate sub-volumes of the airplane's CT data, describes and discusses various challenges with respect to interpreting and handling the annotated and labeled data. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers - **Authors:** Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2212.08254 - **Pdf link:** https://arxiv.org/pdf/2212.08254 - **Abstract** Post-training quantization (PTQ), which only requires a tiny dataset for calibration without end-to-end retraining, is a light and practical model compression technique. Recently, several PTQ schemes for vision transformers (ViTs) have been presented; unfortunately, they typically suffer from non-trivial accuracy degradation, especially in low-bit cases. In this paper, we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale reparameterization, to address the above issues. RepQ-ViT decouples the quantization and inference processes, where the former employs complex quantizers and the latter employs scale-reparameterized simplified quantizers. This ensures both accurate quantization and efficient inference, which distinguishes it from existing approaches that sacrifice quantization performance to meet the target hardware. 
More specifically, we focus on two components with extreme distributions: post-LayerNorm activations with severe inter-channel variation and post-Softmax activations with power-law features, and initially apply channel-wise quantization and log$\sqrt{2}$ quantization, respectively. Then, we reparameterize the scales to hardware-friendly layer-wise quantization and log2 quantization for inference, with only slight accuracy or computational costs. Extensive experiments are conducted on multiple vision tasks with different model variants, proving that RepQ-ViT, without hyperparameters and expensive reconstruction procedures, can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level. ### Can We Find Strong Lottery Tickets in Generative Models? - **Authors:** Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2212.08311 - **Pdf link:** https://arxiv.org/pdf/2212.08311 - **Abstract** Yes. In this paper, we investigate strong lottery tickets in generative models, the subnetworks that achieve good generative performance without any weight update. Neural network pruning is considered the main cornerstone of model compression for reducing the costs of computation and memory. Unfortunately, pruning a generative model has not been extensively explored, and all existing pruning algorithms suffer from excessive weight-training costs, performance degradation, limited generalizability, or complicated training. To address these problems, we propose to find a strong lottery ticket via moment-matching scores. Our experimental results show that the discovered subnetwork can perform similarly or better than the trained dense model even when only 10% of the weights remain. 
To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and provide an algorithm to find it stably. Our code and supplementary materials are publicly available. ## Keyword: RAW ### Neural Enhanced Belief Propagation for Multiobject Tracking - **Authors:** Mingchao Liang, Florian Meyer - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Signal Processing (eess.SP) - **Arxiv link:** https://arxiv.org/abs/2212.08340 - **Pdf link:** https://arxiv.org/pdf/2212.08340 - **Abstract** Algorithmic solutions for multi-object tracking (MOT) are a key enabler for applications in autonomous navigation and applied ocean sciences. State-of-the-art MOT methods fully rely on a statistical model and typically use preprocessed sensor data as measurements. In particular, measurements are produced by a detector that extracts potential object locations from the raw sensor data collected for a discrete time step. This preparatory processing step reduces data flow and computational complexity but may result in a loss of information. State-of-the-art Bayesian MOT methods that are based on belief propagation (BP) systematically exploit graph structures of the statistical model to reduce computational complexity and improve scalability. However, as a fully model-based approach, BP can only provide suboptimal estimates when there is a mismatch between the statistical model and the true data-generating process. Existing BP-based MOT methods can further only make use of preprocessed measurements. In this paper, we introduce a variant of BP that combines model-based with data-driven MOT. The proposed neural enhanced belief propagation (NEBP) method complements the statistical model of BP by information learned from raw sensor data. 
This approach conjectures that the learned information can reduce model mismatch and thus improve data association and false alarm rejection. Our NEBP method improves tracking performance compared to model-based methods. At the same time, it inherits the advantages of BP-based MOT, i.e., it scales only quadratically in the number of objects, and it can thus generate and maintain a large number of object tracks. We evaluate the performance of our NEBP approach for MOT on the nuScenes autonomous driving dataset and demonstrate that it has state-of-the-art performance. ### Free-form 3D Scene Inpainting with Dual-stream GAN - **Authors:** Ru-Fen Jheng, Tsung-Han Wu, Jia-Fong Yeh, Winston H. Hsu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08464 - **Pdf link:** https://arxiv.org/pdf/2212.08464 - **Abstract** Nowadays, the need for user editing in a 3D scene has rapidly increased due to the development of AR and VR technology. However, the existing 3D scene completion task (and datasets) cannot suit the need because the missing regions in scenes are generated by the sensor limitation or object occlusion. Thus, we present a novel task named free-form 3D scene inpainting. Unlike scenes in previous 3D completion datasets preserving most of the main structures and hints of detailed shapes around missing regions, the proposed inpainting dataset, FF-Matterport, contains large and diverse missing regions formed by our free-form 3D mask generation algorithm that can mimic human drawing trajectories in 3D space. Moreover, prior 3D completion methods cannot perform well on this challenging yet practical task, simply interpolating nearby geometry and color context. Thus, a tailored dual-stream GAN method is proposed. First, our dual-stream generator, fusing both geometry and color information, produces distinct semantic boundaries and solves the interpolation issue. 
To further enhance the details, our lightweight dual-stream discriminator regularizes the geometry and color edges of the predicted scenes to be realistic and sharp. We conducted experiments with the proposed FF-Matterport dataset. Qualitative and quantitative results validate the superiority of our approach over existing scene completion methods and the efficacy of all proposed components. ### On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study - **Authors:** Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, Xuezhe Ma - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08650 - **Pdf link:** https://arxiv.org/pdf/2212.08650 - **Abstract** It is well established in neuroscience that color vision plays an essential part in the human visual perception system. Meanwhile, many novel designs for computer vision inspired by human vision have achieved success in a wide range of tasks and applications. Nonetheless, how color differences affect machine vision has not been well explored. Our work tries to bridge this gap between the human color vision aspect of visual recognition and that of the machine. To achieve this, we curate two datasets: CIFAR10-F and CIFAR100-F, which are based on the foreground colors of the popular CIFAR datasets. Together with CIFAR10-B and CIFAR100-B, the existing counterpart datasets with information on the background colors of CIFAR test sets, we assign each image based on its color contrast level per its foreground and background color labels and use this as a proxy to study how color contrast affects machine vision. We first conduct a proof-of-concept study, showing the effect of color difference and validate our datasets. 
Furthermore, on a broader level, an important characteristic of human vision is its robustness against ambient changes; therefore, drawing inspirations from ophthalmology and the robustness literature, we analogize contrast sensitivity from the human visual aspect to machine vision and complement the current robustness study using corrupted images with our CIFAR-CoCo datasets. In summary, motivated by neuroscience and equipped with the datasets we curate, we devise a new framework in two dimensions to perform extensive analyses on the effect of color contrast and corrupted images: (1) model architecture, (2) model size, to measure the perception ability of machine vision beyond total accuracy. We also explore how task complexity and data augmentation play a role in this setup. Our results call attention to new evaluation approaches for human-like machine perception. ## Keyword: raw image There is no result
2.0
New submissions for Mon, 19 Dec 22 - ## Keyword: events ### Location-aware Adaptive Denormalization: A Deep Learning Approach For Wildfire Danger Forecasting - **Authors:** Mohamad Hakam Shams Eddin, Ribana Roscher, Juergen Gall - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08208 - **Pdf link:** https://arxiv.org/pdf/2212.08208 - **Abstract** Climate change is expected to intensify and increase extreme events in the weather cycle. Since this has a significant impact on various sectors of our life, recent works are concerned with identifying and predicting such extreme events from Earth observations. This paper proposes a 2D/3D two-branch convolutional neural network (CNN) for wildfire danger forecasting. To use a unified framework, previous approaches duplicate static variables along the time dimension and neglect the intrinsic differences between static and dynamic variables. Furthermore, most existing multi-branch architectures lose the interconnections between the branches during the feature learning stage. To address these issues, we propose a two-branch architecture with a Location-aware Adaptive Denormalization layer (LOADE). Using LOADE as a building block, we can modulate the dynamic features conditional on their geographical location. Thus, our approach considers feature properties as a unified yet compound 2D/3D model. Besides, we propose using an absolute temporal encoding for time-related forecasting problems. Our experimental results show a better performance of our approach than other baselines on the challenging FireCube dataset. 
### Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation - **Authors:** Sungsu Hur, Inkyu Shin, Kwanyong Park, Sanghyun Woo, In So Kweon - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08355 - **Pdf link:** https://arxiv.org/pdf/2212.08355 - **Abstract** Universal Domain Adaptation aims to transfer the knowledge between the datasets by handling two shifts: domain-shift and category-shift. The main challenge is correctly distinguishing the unknown target samples while adapting the distribution of known class knowledge from source to target. Most existing methods approach this problem by first training the target adapted known classifier and then relying on the single threshold to distinguish unknown target samples. However, this simple threshold-based approach prevents the model from considering the underlying complexities existing between the known and unknown samples in the high-dimensional feature space. In this paper, we propose a new approach in which we use two sets of feature points, namely dual Classifiers for Prototypes and Reciprocals (CPR). Our key idea is to associate each prototype with corresponding known class features while pushing the reciprocals apart from these prototypes to locate them in the potential unknown feature space. The target samples are then classified as unknown if they fall near any reciprocals at test time. To successfully train our framework, we collect the partial, confident target samples that are classified as known or unknown through on our proposed multi-criteria selection. We then additionally apply the entropy loss regularization to them. For further adaptation, we also apply standard consistency regularization that matches the predictions of two different views of the input to make more compact target feature space. 
We evaluate our proposal, CPR, on three standard benchmarks and achieve comparable or new state-of-the-art results. We also provide extensive ablation experiments to verify our main design choices in our framework. ### Fast-moving object counting with an event camera - **Authors:** Kamil Bialik, Marcin Kowalczyk, Krzysztof Blachut, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Systems and Control (eess.SY) - **Arxiv link:** https://arxiv.org/abs/2212.08384 - **Pdf link:** https://arxiv.org/pdf/2212.08384 - **Abstract** This paper proposes the use of an event camera as a component of a vision system that enables counting of fast-moving objects - in this case, falling corn grains. These type of cameras transmit information about the change in brightness of individual pixels and are characterised by low latency, no motion blur, correct operation in different lighting conditions, as well as very low power consumption. The proposed counting algorithm processes events in real time. The operation of the solution was demonstrated on a stand consisting of a chute with a vibrating feeder, which allowed the number of grains falling to be adjusted. The objective of the control system with a PID controller was to maintain a constant average number of falling objects. The proposed solution was subjected to a series of tests to determine the correctness of the developed method operation. On their basis, the validity of using an event camera to count small, fast-moving objects and the associated wide range of potential industrial applications can be confirmed. 
### Traffic sign detection and recognition using event camera image reconstruction - **Authors:** Kamil Jeziorek, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.08387 - **Pdf link:** https://arxiv.org/pdf/2212.08387 - **Abstract** This paper presents a method for detection and recognition of traffic signs based on information extracted from an event camera. The solution used a FireNet deep convolutional neural network to reconstruct events into greyscale frames. Two YOLOv4 network models were trained, one based on greyscale images and the other on colour images. The best result was achieved for the model trained on the basis of greyscale images, achieving an efficiency of 87.03%. ## Keyword: event camera ### Fast-moving object counting with an event camera - **Authors:** Kamil Bialik, Marcin Kowalczyk, Krzysztof Blachut, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Systems and Control (eess.SY) - **Arxiv link:** https://arxiv.org/abs/2212.08384 - **Pdf link:** https://arxiv.org/pdf/2212.08384 - **Abstract** This paper proposes the use of an event camera as a component of a vision system that enables counting of fast-moving objects - in this case, falling corn grains. These type of cameras transmit information about the change in brightness of individual pixels and are characterised by low latency, no motion blur, correct operation in different lighting conditions, as well as very low power consumption. The proposed counting algorithm processes events in real time. The operation of the solution was demonstrated on a stand consisting of a chute with a vibrating feeder, which allowed the number of grains falling to be adjusted. The objective of the control system with a PID controller was to maintain a constant average number of falling objects. 
The proposed solution was subjected to a series of tests to determine the correctness of the developed method operation. On their basis, the validity of using an event camera to count small, fast-moving objects and the associated wide range of potential industrial applications can be confirmed. ### Traffic sign detection and recognition using event camera image reconstruction - **Authors:** Kamil Jeziorek, Tomasz Kryjak - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2212.08387 - **Pdf link:** https://arxiv.org/pdf/2212.08387 - **Abstract** This paper presents a method for detection and recognition of traffic signs based on information extracted from an event camera. The solution used a FireNet deep convolutional neural network to reconstruct events into greyscale frames. Two YOLOv4 network models were trained, one based on greyscale images and the other on colour images. The best result was achieved for the model trained on the basis of greyscale images, achieving an efficiency of 87.03%. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast ### On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study - **Authors:** Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, Xuezhe Ma - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08650 - **Pdf link:** https://arxiv.org/pdf/2212.08650 - **Abstract** It is well established in neuroscience that color vision plays an essential part in the human visual perception system. Meanwhile, many novel designs for computer vision inspired by human vision have achieved success in a wide range of tasks and applications. Nonetheless, how color differences affect machine vision has not been well explored. 
Our work tries to bridge this gap between the human color vision aspect of visual recognition and that of the machine. To achieve this, we curate two datasets: CIFAR10-F and CIFAR100-F, which are based on the foreground colors of the popular CIFAR datasets. Together with CIFAR10-B and CIFAR100-B, the existing counterpart datasets with information on the background colors of CIFAR test sets, we assign each image based on its color contrast level per its foreground and background color labels and use this as a proxy to study how color contrast affects machine vision. We first conduct a proof-of-concept study, showing the effect of color difference and validate our datasets. Furthermore, on a broader level, an important characteristic of human vision is its robustness against ambient changes; therefore, drawing inspirations from ophthalmology and the robustness literature, we analogize contrast sensitivity from the human visual aspect to machine vision and complement the current robustness study using corrupted images with our CIFAR-CoCo datasets. In summary, motivated by neuroscience and equipped with the datasets we curate, we devise a new framework in two dimensions to perform extensive analyses on the effect of color contrast and corrupted images: (1) model architecture, (2) model size, to measure the perception ability of machine vision beyond total accuracy. We also explore how task complexity and data augmentation play a role in this setup. Our results call attention to new evaluation approaches for human-like machine perception. 
## Keyword: AWB There is no result ## Keyword: ISP ### An annotated instance segmentation XXL-CT dataset from a historic airplane - **Authors:** Roland Gruber (1 and 2), Nils Reims (1), Andreas Hempfer (3), Stefan Gerth (1), Michael Salamon (1), Thomas Wittenberg (1 and 2) ((1) Fraunhofer IIS, Fraunhofer Institute for Integrated Circuits IIS (2) Friedrich-Alexander-Universität Erlangen-Nürnberg, (3) Deutsches Museum, München) - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2212.08639 - **Pdf link:** https://arxiv.org/pdf/2212.08639 - **Abstract** The Me 163 was a Second World War fighter airplane and a result of the German air force secret developments. One of these airplanes is currently owned and displayed in the historic aircraft exhibition of the Deutsches Museum in Munich, Germany. To gain insights with respect to its history, design and state of preservation, a complete CT scan was obtained using an industrial XXL-computer tomography scanner. Using the CT data from the Me 163, all its details can visually be examined at various levels, ranging from the complete hull down to single sprockets and rivets. However, while a trained human observer can identify and interpret the volumetric data with all its parts and connections, a virtual dissection of the airplane and all its different parts would be quite desirable. Nevertheless, this means, that an instance segmentation of all components and objects of interest into disjoint entities from the CT data is necessary. As of currently, no adequate computer-assisted tools for automated or semi-automated segmentation of such XXL-airplane data are available, in a first step, an interactive data annotation and object labeling process has been established. 
So far, seven 512 x 512 x 512 voxel sub-volumes from the Me 163 airplane have been annotated and labeled; the results can potentially be used for various new applications in the fields of digital heritage, non-destructive testing, and machine learning. This work describes the data acquisition process of the airplane using an industrial XXL-CT scanner, outlines the interactive segmentation and labeling scheme used to annotate sub-volumes of the airplane's CT data, and discusses various challenges with respect to interpreting and handling the annotated and labeled data.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers

- **Authors:** Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.08254
- **Pdf link:** https://arxiv.org/pdf/2212.08254
- **Abstract** Post-training quantization (PTQ), which only requires a tiny dataset for calibration without end-to-end retraining, is a light and practical model compression technique. Recently, several PTQ schemes for vision transformers (ViTs) have been presented; unfortunately, they typically suffer from non-trivial accuracy degradation, especially in low-bit cases. In this paper, we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale reparameterization, to address the above issues. RepQ-ViT decouples the quantization and inference processes, where the former employs complex quantizers and the latter employs scale-reparameterized simplified quantizers. This ensures both accurate quantization and efficient inference, which distinguishes it from existing approaches that sacrifice quantization performance to meet the target hardware.
More specifically, we focus on two components with extreme distributions: post-LayerNorm activations with severe inter-channel variation and post-Softmax activations with power-law features, and initially apply channel-wise quantization and log$\sqrt{2}$ quantization, respectively. Then, we reparameterize the scales to hardware-friendly layer-wise quantization and log2 quantization for inference, with only slight accuracy or computational costs. Extensive experiments are conducted on multiple vision tasks with different model variants, proving that RepQ-ViT, without hyperparameters and expensive reconstruction procedures, can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level.

### Can We Find Strong Lottery Tickets in Generative Models?

- **Authors:** Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.08311
- **Pdf link:** https://arxiv.org/pdf/2212.08311
- **Abstract** Yes. In this paper, we investigate strong lottery tickets in generative models, the subnetworks that achieve good generative performance without any weight update. Neural network pruning is considered the main cornerstone of model compression for reducing the costs of computation and memory. Unfortunately, pruning a generative model has not been extensively explored, and all existing pruning algorithms suffer from excessive weight-training costs, performance degradation, limited generalizability, or complicated training. To address these problems, we propose to find a strong lottery ticket via moment-matching scores. Our experimental results show that the discovered subnetwork can perform similarly or better than the trained dense model even when only 10% of the weights remain.
To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm to find them stably. Our code and supplementary materials are publicly available.

## Keyword: RAW

### Neural Enhanced Belief Propagation for Multiobject Tracking

- **Authors:** Mingchao Liang, Florian Meyer
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2212.08340
- **Pdf link:** https://arxiv.org/pdf/2212.08340
- **Abstract** Algorithmic solutions for multi-object tracking (MOT) are a key enabler for applications in autonomous navigation and applied ocean sciences. State-of-the-art MOT methods fully rely on a statistical model and typically use preprocessed sensor data as measurements. In particular, measurements are produced by a detector that extracts potential object locations from the raw sensor data collected for a discrete time step. This preparatory processing step reduces data flow and computational complexity but may result in a loss of information. State-of-the-art Bayesian MOT methods that are based on belief propagation (BP) systematically exploit the graph structure of the statistical model to reduce computational complexity and improve scalability. However, as a fully model-based approach, BP can only provide suboptimal estimates when there is a mismatch between the statistical model and the true data-generating process. Existing BP-based MOT methods can furthermore only make use of preprocessed measurements. In this paper, we introduce a variant of BP that combines model-based with data-driven MOT. The proposed neural enhanced belief propagation (NEBP) method complements the statistical model of BP with information learned from raw sensor data.
This approach conjectures that the learned information can reduce model mismatch and thus improve data association and false-alarm rejection. Our NEBP method improves tracking performance compared to model-based methods. At the same time, it inherits the advantages of BP-based MOT, i.e., it scales only quadratically in the number of objects, and it can thus generate and maintain a large number of object tracks. We evaluate the performance of our NEBP approach for MOT on the nuScenes autonomous driving dataset and demonstrate that it has state-of-the-art performance.

### Free-form 3D Scene Inpainting with Dual-stream GAN

- **Authors:** Ru-Fen Jheng, Tsung-Han Wu, Jia-Fong Yeh, Winston H. Hsu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.08464
- **Pdf link:** https://arxiv.org/pdf/2212.08464
- **Abstract** Nowadays, the need for user editing in a 3D scene has rapidly increased due to the development of AR and VR technology. However, the existing 3D scene completion task (and its datasets) cannot meet this need because the missing regions in those scenes are generated by sensor limitations or object occlusion. Thus, we present a novel task named free-form 3D scene inpainting. Unlike scenes in previous 3D completion datasets, which preserve most of the main structures and hints of detailed shapes around missing regions, the proposed inpainting dataset, FF-Matterport, contains large and diverse missing regions formed by our free-form 3D mask generation algorithm, which can mimic human drawing trajectories in 3D space. Moreover, prior 3D completion methods cannot perform well on this challenging yet practical task, as they simply interpolate nearby geometry and color context. Thus, a tailored dual-stream GAN method is proposed. First, our dual-stream generator, fusing both geometry and color information, produces distinct semantic boundaries and solves the interpolation issue.
To further enhance the details, our lightweight dual-stream discriminator regularizes the geometry and color edges of the predicted scenes to be realistic and sharp. We conducted experiments with the proposed FF-Matterport dataset. Qualitative and quantitative results validate the superiority of our approach over existing scene completion methods and the efficacy of all proposed components.

### On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study

- **Authors:** Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, Xuezhe Ma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.08650
- **Pdf link:** https://arxiv.org/pdf/2212.08650
- **Abstract** It is well established in neuroscience that color vision plays an essential part in the human visual perception system. Meanwhile, many novel designs for computer vision inspired by human vision have achieved success in a wide range of tasks and applications. Nonetheless, how color differences affect machine vision has not been well explored. Our work tries to bridge this gap between the human color vision aspect of visual recognition and that of the machine. To achieve this, we curate two datasets: CIFAR10-F and CIFAR100-F, which are based on the foreground colors of the popular CIFAR datasets. Together with CIFAR10-B and CIFAR100-B, the existing counterpart datasets with information on the background colors of CIFAR test sets, we assign each image a color contrast level based on its foreground and background color labels and use this as a proxy to study how color contrast affects machine vision. We first conduct a proof-of-concept study showing the effect of color difference and validate our datasets.
Furthermore, on a broader level, an important characteristic of human vision is its robustness against ambient changes; therefore, drawing inspiration from ophthalmology and the robustness literature, we analogize contrast sensitivity from the human visual system to machine vision and complement the current robustness study of corrupted images with our CIFAR-CoCo datasets. In summary, motivated by neuroscience and equipped with the datasets we curate, we devise a new framework in two dimensions, (1) model architecture and (2) model size, to perform extensive analyses of the effect of color contrast and corrupted images and to measure the perception ability of machine vision beyond total accuracy. We also explore how task complexity and data augmentation play a role in this setup. Our results call attention to new evaluation approaches for human-like machine perception.

## Keyword: raw image

There is no result
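The lottery-ticket entry above describes selecting a subnetwork from a randomly initialized model purely by per-weight scores, with no weight update. The sketch below illustrates only that selection step, under stated assumptions: the paper's moment-matching score is not reproduced here, so plain weight magnitude stands in for it, while the 10% keep ratio follows the abstract.

```python
import numpy as np

# Score-based "strong lottery ticket" selection: keep a fixed fraction of a
# frozen, randomly initialized weight tensor, chosen by a per-weight score.
# Magnitude is a stand-in score; the paper uses moment-matching scores.

def top_k_mask(scores, keep_ratio):
    """Binary mask keeping the highest-scoring `keep_ratio` fraction of entries."""
    flat = scores.ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    # np.partition places the k-th largest value at index size-k.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return (scores >= threshold).astype(scores.dtype)

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64))      # frozen random init, never trained
scores = np.abs(weights)                     # stand-in for moment-matching scores
mask = top_k_mask(scores, keep_ratio=0.10)   # ~10% of weights remain
subnetwork = weights * mask                  # the "ticket": selection, no training
```

The key property is that `subnetwork` is obtained by masking alone; every surviving weight still holds its initialization value.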
label: process
binary_label: 1
Unnamed: 0: 3,471
id: 6,551,325,652
type: IssuesEvent
created_at: 2017-09-05 14:25:16
repo: Jumpscale/developer
repo_url: https://api.github.com/repos/Jumpscale/developer
action: closed
title: Errors, missing packagrs when starting JS9
labels: process_wontfix
body: #### Installation information - jumpscale version: js9 - operating system: Ubuntu 16.04 <img width="747" alt="screen shot 2017-09-04 at 10 23 37" src="https://user-images.githubusercontent.com/13795109/30017915-36c1daf2-915b-11e7-9f0d-9b9c90582078.png">
index: 1.0
label: process
binary_label: 1
Unnamed: 0: 736,806
id: 25,488,838,015
type: IssuesEvent
created_at: 2022-11-26 19:48:04
repo: cloudflare/cloudflared
repo_url: https://api.github.com/repos/cloudflare/cloudflared
action: closed
title: 🐛 Private networking not working
labels: Type: Bug Priority: Normal
**Describe the bug** 1. All ports are open, and able to complete the TCP-Tree-way handshake, despite receiving no data after connection completion 2. Unable to ping private IPv4 (172.16.0.1) 3. Even the ports which are open like 80/TCP, 3389/TCP are not getting any packets back i.e. connection is established after tcp tree way handhshake but I don't get any response **To Reproduce** Steps to reproduce the behavior: 1. Configure the means of enrollment in the Cloudflare dashboard 2. Add a means of login in the authentication tab (Ex, Github or Google identity provider) 3. Install cloudflared on server: cloudflared.exe service install [tooken] 4. Configure team name 5. Install Cloudflare WARP on the client (in this case my personal computer) 6. Proceed to enter cloudflare Zero-Trust configured earlier 7. Try to access any application that is on the server with its local IP, which in this case is: (172.16.0.1), note that this IP is the local IP that is present on the server, the local IP of my personal network is : 192.168.0.211, so there is no conflict If it's an issue with Cloudflare Tunnel: 8. Tunnel ID : 17a7f944-3407-4e1b-ad50-294fa23f2077 ![image](https://user-images.githubusercontent.com/40347728/204100286-fc156742-8b43-421a-b010-b1698cb2f00e.png) **Expected behavior** That the ping (ICMP) and application response are correctly obtained, even if the TCP-Tree-Way handhshake is a false positive. **Environment and versions** - OS: Windows Server 2019 - Architecture: Intel x64 - Version: [e.g. 2022.11.0] **Logs and errors** No logs **Additional context** ![image](https://user-images.githubusercontent.com/40347728/204100346-9ee39d12-7384-43b5-9c56-865cb6727f79.png) ![image](https://user-images.githubusercontent.com/40347728/204100359-90c212c3-9f35-426d-909d-ffa9578fb566.png) ![image](https://user-images.githubusercontent.com/40347728/204100375-479d9917-9825-4335-8993-5b2e3c36395f.png)
1.0
🐛 Private networking not working - **Describe the bug** 1. All ports are open, and able to complete the TCP-Tree-way handshake, despite receiving no data after connection completion 2. Unable to ping private IPv4 (172.16.0.1) 3. Even the ports which are open like 80/TCP, 3389/TCP are not getting any packets back i.e. connection is established after tcp tree way handhshake but I don't get any response **To Reproduce** Steps to reproduce the behavior: 1. Configure the means of enrollment in the Cloudflare dashboard 2. Add a means of login in the authentication tab (Ex, Github or Google identity provider) 3. Install cloudflared on server: cloudflared.exe service install [tooken] 4. Configure team name 5. Install Cloudflare WARP on the client (in this case my personal computer) 6. Proceed to enter cloudflare Zero-Trust configured earlier 7. Try to access any application that is on the server with its local IP, which in this case is: (172.16.0.1), note that this IP is the local IP that is present on the server, the local IP of my personal network is : 192.168.0.211, so there is no conflict If it's an issue with Cloudflare Tunnel: 8. Tunnel ID : 17a7f944-3407-4e1b-ad50-294fa23f2077 ![image](https://user-images.githubusercontent.com/40347728/204100286-fc156742-8b43-421a-b010-b1698cb2f00e.png) **Expected behavior** That the ping (ICMP) and application response are correctly obtained, even if the TCP-Tree-Way handhshake is a false positive. **Environment and versions** - OS: Windows Server 2019 - Architecture: Intel x64 - Version: [e.g. 2022.11.0] **Logs and errors** No logs **Additional context** ![image](https://user-images.githubusercontent.com/40347728/204100346-9ee39d12-7384-43b5-9c56-865cb6727f79.png) ![image](https://user-images.githubusercontent.com/40347728/204100359-90c212c3-9f35-426d-909d-ffa9578fb566.png) ![image](https://user-images.githubusercontent.com/40347728/204100375-479d9917-9825-4335-8993-5b2e3c36395f.png)
non_process
🐛 private networking not working describe the bug all ports are open and able to complete the tcp tree way handshake despite receiving no data after connection completion unable to ping private even the ports which are open like tcp tcp are not getting any packets back i e connection is established after tcp tree way handhshake but i don t get any response to reproduce steps to reproduce the behavior configure the means of enrollment in the cloudflare dashboard add a means of login in the authentication tab ex github or google identity provider install cloudflared on server cloudflared exe service install configure team name install cloudflare warp on the client in this case my personal computer proceed to enter cloudflare zero trust configured earlier try to access any application that is on the server with its local ip which in this case is note that this ip is the local ip that is present on the server the local ip of my personal network is so there is no conflict if it s an issue with cloudflare tunnel tunnel id expected behavior that the ping icmp and application response are correctly obtained even if the tcp tree way handhshake is a false positive environment and versions os windows server architecture intel version logs and errors no logs additional context
0
5,515
8,379,179,365
IssuesEvent
2018-10-06 22:04:47
zotero/zotero
https://api.github.com/repos/zotero/zotero
opened
In-text citations with author outside parentheses
Enhancement Word Processor Integration
“According to Smith (2018)” and similar patterns This has been requested for [many years](https://forums.zotero.org/discussion/5282/multiple-in-text-citation-patterns). It would be nice to put this to rest. See [this post](https://forums.zotero.org/discussion/comment/314840/#Comment_314840) and down for the latest summary. (Read earlier posts at your own risk.) It seems like citeproc-js already largely [supports what we need](https://citeproc-js.readthedocs.io/en/latest/running.html#partial-suppression-of-citation-content). As I understand it, we could add an "Author-Only" option now that would work for most author-date styles, but 1) it would be incorrect for APA and any other styles that wanted different styling inside and outside of parentheses ("&" vs. "and") and 2) it wouldn't work for numeric styles that don't have an in-text author format defined. To fix (1) properly, we would have to extend CSL (@bwiernik: "The basic extension to CSL needed for full support would be to have a new citation position value to indicate in-text position."), but as a stopgap measure maybe there's some way we can hard-code the formatting for APA? Not sure whether that would need to go in citeproc-js or if it could be in Zotero. To fix (2), @bwiernik and @adam3smith suggest a default author macro: > In absence of specifying an “out-of-parentheses” format for numeric styles, there could be a default author macro used with author, substitute editor, then translator, and et-al-min="3" et-al-use-first="1". This would need to be in citeproc-js. @fbennett, any guidance you can provide here on how best to proceed?
1.0
In-text citations with author outside parentheses - “According to Smith (2018)” and similar patterns This has been requested for [many years](https://forums.zotero.org/discussion/5282/multiple-in-text-citation-patterns). It would be nice to put this to rest. See [this post](https://forums.zotero.org/discussion/comment/314840/#Comment_314840) and down for the latest summary. (Read earlier posts at your own risk.) It seems like citeproc-js already largely [supports what we need](https://citeproc-js.readthedocs.io/en/latest/running.html#partial-suppression-of-citation-content). As I understand it, we could add an "Author-Only" option now that would work for most author-date styles, but 1) it would be incorrect for APA and any other styles that wanted different styling inside and outside of parentheses ("&" vs. "and") and 2) it wouldn't work for numeric styles that don't have an in-text author format defined. To fix (1) properly, we would have to extend CSL (@bwiernik: "The basic extension to CSL needed for full support would be to have a new citation position value to indicate in-text position."), but as a stopgap measure maybe there's some way we can hard-code the formatting for APA? Not sure whether that would need to go in citeproc-js or if it could be in Zotero. To fix (2), @bwiernik and @adam3smith suggest a default author macro: > In absence of specifying an “out-of-parentheses” format for numeric styles, there could be a default author macro used with author, substitute editor, then translator, and et-al-min="3" et-al-use-first="1". This would need to be in citeproc-js. @fbennett, any guidance you can provide here on how best to proceed?
process
in text citations with author outside parentheses “according to smith ” and similar patterns this has been requested for it would be nice to put this to rest see and down for the latest summary read earlier posts at your own risk it seems like citeproc js already largely as i understand it we could add an author only option now that would work for most author date styles but it would be incorrect for apa and any other styles that wanted different styling inside and outside of parentheses vs and and it wouldn t work for numeric styles that don t have an in text author format defined to fix properly we would have to extend csl bwiernik the basic extension to csl needed for full support would be to have a new citation position value to indicate in text position but as a stopgap measure maybe there s some way we can hard code the formatting for apa not sure whether that would need to go in citeproc js or if it could be in zotero to fix bwiernik and suggest a default author macro in absence of specifying an “out of parentheses” format for numeric styles there could be a default author macro used with author substitute editor then translator and et al min et al use first this would need to be in citeproc js fbennett any guidance you can provide here on how best to proceed
1
123,664
16,522,409,330
IssuesEvent
2021-05-26 15:50:25
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
Update the Contributing to Formation page in the documentation section of design.va.gov
vsp-design-system-team
## Issue Description Update the Contributing to Formation page in the documentation section of design.va.gov --- ## Tasks - [x] Review all content in this section - [x] Draft content as needed - [x] Have content changes reviewed by team member - [x] Update content categorization and structure as needed - [x] Post updates on design.va.gov ## Acceptance Criteria - [x] Contributing page of the documentation section on design.va.gov is updated --- Notes from discussion: - We want to add some experimental stuff in here. - Ryan’s recommendation - basically rip off gov.uk from a structural standpoint on how they organize their community section - steal their subnav - Need to put in the new flowchart - Backlog needs to be a subpage Updating this site - What about adding people to the repo? - Give us your documentation and we will put it in the design system? - V1. experimental system is up and running. It’ll be up to the design system team to add the first 5 components to the official system, after that we’ll give you templates on how to add to the design system - Move this under contribution- look at whatever the subnav of is contributing to the design system.
1.0
Update the Contributing to Formation page in the documentation section of design.va.gov - ## Issue Description Update the Contributing to Formation page in the documentation section of design.va.gov --- ## Tasks - [x] Review all content in this section - [x] Draft content as needed - [x] Have content changes reviewed by team member - [x] Update content categorization and structure as needed - [x] Post updates on design.va.gov ## Acceptance Criteria - [x] Contributing page of the documentation section on design.va.gov is updated --- Notes from discussion: - We want to add some experimental stuff in here. - Ryan’s recommendation - basically rip off gov.uk from a structural standpoint on how they organize their community section - steal their subnav - Need to put in the new flowchart - Backlog needs to be a subpage Updating this site - What about adding people to the repo? - Give us your documentation and we will put it in the design system? - V1. experimental system is up and running. It’ll be up to the design system team to add the first 5 components to the official system, after that we’ll give you templates on how to add to the design system - Move this under contribution- look at whatever the subnav of is contributing to the design system.
non_process
update the contributing to formation page in the documentation section of design va gov issue description update the contributing to formation page in the documentation section of design va gov tasks review all content in this section draft content as needed have content changes reviewed by team member update content categorization and structure as needed post updates on design va gov acceptance criteria contributing page of the documentation section on design va gov is updated notes from discussion we want to add some experimental stuff in here ryan’s recommendation basically rip off gov uk from a structural standpoint on how they organize their community section steal their subnav need to put in the new flowchart backlog needs to be a subpage updating this site what about adding people to the repo give us your documentation and we will put it in the design system experimental system is up and running it’ll be up to the design system team to add the first components to the official system after that we’ll give you templates on how to add to the design system move this under contribution look at whatever the subnav of is contributing to the design system
0
39,053
9,187,510,286
IssuesEvent
2019-03-06 03:12:39
UIOWA5830SP19/SPP300
https://api.github.com/repos/UIOWA5830SP19/SPP300
closed
[Bug] File upload in production
S2 - High Sprint3 T1 - Defect
**Describe the bug** When pushed to production, file upload cannot load image url as well as link to the pdf viewer. It looks like the problem with Google drive permission and how the link generated from the website **To Reproduce** - Push the application to heroku - Log in with valid credentials - Go to profile page and try to upload picture - The app will crash and heroku log will be show this error ![image](https://user-images.githubusercontent.com/12748234/53709004-9b1d8700-3dfb-11e9-9f4b-e5b6544f828d.png) **Expected behavior** Image thumbnail should be able to generated
1.0
[Bug] File upload in production - **Describe the bug** When pushed to production, file upload cannot load image url as well as link to the pdf viewer. It looks like the problem with Google drive permission and how the link generated from the website **To Reproduce** - Push the application to heroku - Log in with valid credentials - Go to profile page and try to upload picture - The app will crash and heroku log will be show this error ![image](https://user-images.githubusercontent.com/12748234/53709004-9b1d8700-3dfb-11e9-9f4b-e5b6544f828d.png) **Expected behavior** Image thumbnail should be able to generated
non_process
file upload in production describe the bug when pushed to production file upload cannot load image url as well as link to the pdf viewer it looks like the problem with google drive permission and how the link generated from the website to reproduce push the application to heroku log in with valid credentials go to profile page and try to upload picture the app will crash and heroku log will be show this error expected behavior image thumbnail should be able to generated
0
281,114
21,315,375,304
IssuesEvent
2022-04-16 07:13:27
lzf834/pe
https://api.github.com/repos/lzf834/pe
opened
UG missing fullstop.
type.DocumentationBug severity.VeryLow
![image.png](https://raw.githubusercontent.com/lzf834/pe/main/files/c40c9db6-cfc7-4cb0-a57a-655be107782f.png) Very minor issue in consistency where there is a missing fullstop for the "Adding a property to sell" bulletpoint. <!--session: 1650088698312-8545e95d-9d11-4161-b28d-2647bfa8ebe8--> <!--Version: Web v3.4.2-->
1.0
UG missing fullstop. - ![image.png](https://raw.githubusercontent.com/lzf834/pe/main/files/c40c9db6-cfc7-4cb0-a57a-655be107782f.png) Very minor issue in consistency where there is a missing fullstop for the "Adding a property to sell" bulletpoint. <!--session: 1650088698312-8545e95d-9d11-4161-b28d-2647bfa8ebe8--> <!--Version: Web v3.4.2-->
non_process
ug missing fullstop very minor issue in consistency where there is a missing fullstop for the adding a property to sell bulletpoint
0
19,422
25,569,815,224
IssuesEvent
2022-11-30 16:48:08
temporalio/sdk-typescript
https://api.github.com/repos/temporalio/sdk-typescript
closed
Run clippy as part of CI
good first issue CICD processes
I just your pipeline doesn't run clippy which would be a nice thing to do. Probably there's a handful of suggestions in here that'd be useful. You can check out the buildkite definition in Core for how we run clippy in pipelines. _Originally posted by @Sushisource in https://github.com/temporalio/sdk-node/pull/114#discussion_r645034429_
1.0
Run clippy as part of CI - I just your pipeline doesn't run clippy which would be a nice thing to do. Probably there's a handful of suggestions in here that'd be useful. You can check out the buildkite definition in Core for how we run clippy in pipelines. _Originally posted by @Sushisource in https://github.com/temporalio/sdk-node/pull/114#discussion_r645034429_
process
run clippy as part of ci i just your pipeline doesn t run clippy which would be a nice thing to do probably there s a handful of suggestions in here that d be useful you can check out the buildkite definition in core for how we run clippy in pipelines originally posted by sushisource in
1
15,548
19,703,502,173
IssuesEvent
2022-01-12 19:07:55
googleapis/java-resource-settings
https://api.github.com/repos/googleapis/java-resource-settings
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'resource-settings' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'resource-settings' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname resource settings invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
140,251
18,900,728,475
IssuesEvent
2021-11-16 00:24:10
pustovitDmytro/cottus
https://api.github.com/repos/pustovitDmytro/cottus
opened
CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz
security vulnerability
## CVE-2021-3918 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary> <p>JSON Schema validation and specifications</p> <p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p> <p>Path to dependency file: cottus/package.json</p> <p>Path to vulnerable library: cottus/node_modules/npm/node_modules/json-schema/package.json,cottus/node_modules/json-schema/package.json</p> <p> Dependency Hierarchy: - coveralls-3.1.1.tgz (Root Library) - request-2.88.2.tgz - http-signature-1.2.0.tgz - jsprim-1.4.1.tgz - :x: **json-schema-0.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/pustovitDmytro/cottus/commit/8f640ed14f71e2046729c0b4210ab5a3591a2681">8f640ed14f71e2046729c0b4210ab5a3591a2681</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') <p>Publish Date: 2021-11-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - ## CVE-2021-3918 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary> <p>JSON Schema validation and specifications</p> <p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p> <p>Path to dependency file: cottus/package.json</p> <p>Path to vulnerable library: cottus/node_modules/npm/node_modules/json-schema/package.json,cottus/node_modules/json-schema/package.json</p> <p> Dependency Hierarchy: - coveralls-3.1.1.tgz (Root Library) - request-2.88.2.tgz - http-signature-1.2.0.tgz - jsprim-1.4.1.tgz - :x: **json-schema-0.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/pustovitDmytro/cottus/commit/8f640ed14f71e2046729c0b4210ab5a3591a2681">8f640ed14f71e2046729c0b4210ab5a3591a2681</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') <p>Publish Date: 2021-11-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file cottus package json path to vulnerable library cottus node modules npm node modules json schema package json cottus node modules json schema package json dependency hierarchy coveralls tgz root library request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in head commit a href found in base branch master vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
0
646,752
21,074,822,585
IssuesEvent
2022-04-02 02:04:25
open-telemetry/opentelemetry-cpp
https://api.github.com/repos/open-telemetry/opentelemetry-cpp
closed
Optimize OTLP exporter
area:exporter:otlp priority:p2 Stale do-not-stale
### Problem Currently, the OTLP exporter is passed `Recordable` objects at export time. The span protobuf in the `Recordable` needs to be moved into the request that will be exported (see [`otlp_exporter.cc`](https://github.com/open-telemetry/opentelemetry-cpp/blob/ec6f70d76bd2b4c8fb8198ad852facd18afe20b6/exporters/otlp/src/otlp_exporter.cc#L29)). Although we use `std::move` to move the span protobuf from the `Recordable` to the request, benchmark tests indicate that the protobuf is likely being *copied*, not *moved*. This is due to the way `std::move` handles const lvalue references (see this [link](https://www.nextptr.com/tutorial/ta1211389378/beware-of-using-stdmove-on-a-const-lvalue) for more information). To see the difference between move and copy, I did the following test: **Current code in [`otlp_exporter.cc`](https://github.com/open-telemetry/opentelemetry-cpp/blob/ec6f70d76bd2b4c8fb8198ad852facd18afe20b6/exporters/otlp/src/otlp_exporter.cc#L29)** (probably copying, even though we're using `std::move`): ``` *instrumentation_lib->add_spans() = std::move(rec->span()); ``` **Current benchmarks**: ![benchmark_copy](https://user-images.githubusercontent.com/14475923/91239332-1e1f5500-e6f4-11ea-867a-fb5f47c30204.png) **Modified code** (swap, effectively moving): ``` instrumentation_lib->add_spans()->Swap( const_cast<opentelemetry::proto::trace::v1::Span *>(&rec->span())); ``` **Modified benchmarks**: ![benchmark_move](https://user-images.githubusercontent.com/14475923/91239337-237c9f80-e6f4-11ea-8c1d-0f894555a481.png) We can see that the benchmarks are roughly half for the modified code (500 ns vs. 1000 ns for dense spans). ### Solution The modified code above is unsafe, since it uses a `const_cast`. Another solution is to modify the `Recordable` class to store a pointer to a span protobuf, instead of a span protobuf object. The exporter could maintain a collection of the actual span protobufs. When it becomes time to export, the exporter could associate each given `Recordable` with its span protobuf using the span id. Storing span protobufs in the exporter, rather than in `Recordable`, would likely allow the protobufs to be moved into the requests, rather than copied.
1.0
Optimize OTLP exporter - ### Problem Currently, the OTLP exporter is passed `Recordable` objects at export time. The span protobuf in the `Recordable` needs to be moved into the request that will be exported (see [`otlp_exporter.cc`](https://github.com/open-telemetry/opentelemetry-cpp/blob/ec6f70d76bd2b4c8fb8198ad852facd18afe20b6/exporters/otlp/src/otlp_exporter.cc#L29)). Although we use `std::move` to move the span protobuf from the `Recordable` to the request, benchmark tests indicate that the protobuf is likely being *copied*, not *moved*. This is due to the way `std::move` handles const lvalue references (see this [link](https://www.nextptr.com/tutorial/ta1211389378/beware-of-using-stdmove-on-a-const-lvalue) for more information). To see the difference between move and copy, I did the following test: **Current code in [`otlp_exporter.cc`](https://github.com/open-telemetry/opentelemetry-cpp/blob/ec6f70d76bd2b4c8fb8198ad852facd18afe20b6/exporters/otlp/src/otlp_exporter.cc#L29)** (probably copying, even though we're using `std::move`): ``` *instrumentation_lib->add_spans() = std::move(rec->span()); ``` **Current benchmarks**: ![benchmark_copy](https://user-images.githubusercontent.com/14475923/91239332-1e1f5500-e6f4-11ea-867a-fb5f47c30204.png) **Modified code** (swap, effectively moving): ``` instrumentation_lib->add_spans()->Swap( const_cast<opentelemetry::proto::trace::v1::Span *>(&rec->span())); ``` **Modified benchmarks**: ![benchmark_move](https://user-images.githubusercontent.com/14475923/91239337-237c9f80-e6f4-11ea-8c1d-0f894555a481.png) We can see that the benchmarks are roughly half for the modified code (500 ns vs. 1000 ns for dense spans). ### Solution The modified code above is unsafe, since it uses a `const_cast`. Another solution is to modify the `Recordable` class to store a pointer to a span protobuf, instead of a span protobuf object. The exporter could maintain a collection of the actual span protobufs. When it becomes time to export, the exporter could associate each given `Recordable` with its span protobuf using the span id. Storing span protobufs in the exporter, rather than in `Recordable`, would likely allow the protobufs to be moved into the requests, rather than copied.
non_process
optimize otlp exporter problem currently the otlp exporter is passed recordable objects at export time the span protobuf in the recordable needs to be moved into the request that will be exported see although we use std move to move the span protobuf from the recordable to the request benchmark tests indicate that the protobuf is likely being copied not moved this is due to the way std move handles const lvalue references see this for more information to see the difference between move and copy i did the following test current code in probably copying even though we re using std move instrumentation lib add spans std move rec span current benchmarks modified code swap effectively moving instrumentation lib add spans swap const cast rec span modified benchmarks we can see that the benchmarks are roughly half for the modified code ns vs ns for dense spans solution the modified code above is unsafe since it uses a const cast another solution is to modify the recordable class to store a pointer to a span protobuf instead of a span protobuf object the exporter could maintain a collection of the actual span protobufs when it becomes time to export the exporter could associate each given recordable with its span protobuf using the span id storing span protobufs in the exporter rather than in recordable would likely allow the protobufs to be moved into the requests rather than copied
0
20,185
26,745,591,139
IssuesEvent
2023-01-30 15:47:06
tokio-rs/tokio
https://api.github.com/repos/tokio-rs/tokio
closed
Documentation of tokio::process::Command::arg is misleading
C-bug E-help-wanted T-docs E-easy A-tokio M-process
**Version** Using tokio 1.24.2, rust 1.67 **Platform** Windows 10, 64 bit **Description** The example of [`tokio::process::Command::arg`](https://docs.rs/tokio/1.24.2/tokio/process/struct.Command.html#method.arg) (and other methods of `Command`) suggests the following code: ```rust use tokio::process::Command; let command = Command::new("ls").arg("-l").arg("-a"); ``` which naturally extends to ```rust #[tokio::test] async fn wontcompile() { use tokio::process::Command; let command = Command::new("ls").arg("-l").arg("-a"); command.output(); } ``` This snippet won't compile, however, with the error: ``` error[E0716]: temporary value dropped while borrowed --> taumada-rusty-runner\src\lib.rs:80:19 | 80 | let command = Command::new("ls").arg("-l").arg("-a"); | ^^^^^^^^^^^^^^^^^^ - temporary value is freed at the end of this statement | | | creates a temporary value which is freed while still in use 81 | command.output(); | ---------------- borrow later used here | = note: consider using a `let` binding to create a longer lived value For more information about this error, try `rustc --explain E0716`. ``` **Resolution** I suggest to either, like [`std::process::Command::arg`](https://doc.rust-lang.org/1.67.0/std/process/struct.Command.html#method.arg), add a `.spawn().unwrap()`, or the remove the let binding named `command`.
1.0
Documentation of tokio::process::Command::arg is misleading - **Version** Using tokio 1.24.2, rust 1.67 **Platform** Windows 10, 64 bit **Description** The example of [`tokio::process::Command::arg`](https://docs.rs/tokio/1.24.2/tokio/process/struct.Command.html#method.arg) (and other methods of `Command`) suggests the following code: ```rust use tokio::process::Command; let command = Command::new("ls").arg("-l").arg("-a"); ``` which naturally extends to ```rust #[tokio::test] async fn wontcompile() { use tokio::process::Command; let command = Command::new("ls").arg("-l").arg("-a"); command.output(); } ``` This snippet won't compile, however, with the error: ``` error[E0716]: temporary value dropped while borrowed --> taumada-rusty-runner\src\lib.rs:80:19 | 80 | let command = Command::new("ls").arg("-l").arg("-a"); | ^^^^^^^^^^^^^^^^^^ - temporary value is freed at the end of this statement | | | creates a temporary value which is freed while still in use 81 | command.output(); | ---------------- borrow later used here | = note: consider using a `let` binding to create a longer lived value For more information about this error, try `rustc --explain E0716`. ``` **Resolution** I suggest to either, like [`std::process::Command::arg`](https://doc.rust-lang.org/1.67.0/std/process/struct.Command.html#method.arg), add a `.spawn().unwrap()`, or remove the let binding named `command`.
process
documentation of tokio process command arg is misleading version using tokio rust platform windows bit description the example of and other methods of command suggests the following code rust use tokio process command let command command new ls arg l arg a which naturally extends to rust async fn wontcompile use tokio process command let command command new ls arg l arg a command output this snippet won t compile however with the error error temporary value dropped while borrowed taumada rusty runner src lib rs let command command new ls arg l arg a temporary value is freed at the end of this statement creates a temporary value which is freed while still in use command output borrow later used here note consider using a let binding to create a longer lived value for more information about this error try rustc explain resolution i suggest to either like add a spawn unwrap or remove the let binding named command
1
21,318
28,597,120,569
IssuesEvent
2023-04-23 01:02:38
serai-dex/serai
https://api.github.com/repos/serai-dex/serai
opened
DoS by too big batches
bug processor
A malicious ETH validator can create a block flooding Serai with InInstructions. If the encoded size of the SignedBatch exceeds the block size, the batch will fail to be published. This will effectively halt all activity on ETH until a hard fork occurs. If a SignedBatch exceeds some size, we need to split it into multiple batches.
1.0
DoS by too big batches - A malicious ETH validator can create a block flooding Serai with InInstructions. If the encoded size of the SignedBatch exceeds the block size, the batch will fail to be published. This will effectively halt all activity on ETH until a hard fork occurs. If a SignedBatch exceeds some size, we need to split it into multiple batches.
process
dos by too big batches a malicious eth validator can create a block flooding serai with ininstructions if the encoded size of the signedbatch exceeds the block size the batch will fail to be published this will effectively halt all activity on eth until a hard fork occurs if a signedbatch exceeds some size we need to split it into multiple batches
1
4,879
7,758,282,004
IssuesEvent
2018-05-31 19:03:41
GoogleCloudPlatform/google-cloud-java
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-java
closed
Merge without code review allowed
type: process
See https://github.com/GoogleCloudPlatform/google-cloud-java/pull/3193 for example. "Review has been requested on this pull request. It is not required to merge." I suspect the repo should be configured to require review before merge.
1.0
Merge without code review allowed - See https://github.com/GoogleCloudPlatform/google-cloud-java/pull/3193 for example. "Review has been requested on this pull request. It is not required to merge." I suspect the repo should be configured to require review before merge.
process
merge without code review allowed see for example review has been requested on this pull request it is not required to merge i suspect the repo should be configured to require review before merge
1
162,867
20,254,366,916
IssuesEvent
2022-02-14 21:20:03
timf-app-test/ng1
https://api.github.com/repos/timf-app-test/ng1
opened
async-2.6.1.tgz: 4 vulnerabilities (highest severity is: 9.1)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-2.6.1.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2019-10744](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | | [CVE-2020-8203](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.4 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | | [CVE-2021-23337](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.2 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | | [CVE-2020-28500](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-10744</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home 
page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload. <p>Publish Date: 2019-07-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.1</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p> <p>Release Date: 2019-07-26</p> <p>Fix Resolution (lodash): 4.17.12</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-8203</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20. <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.4</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution (lodash): 4.17.19</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23337</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.2</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution (lodash): 4.17.21</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-28500</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash. 
<p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution (lodash): 4.17.21</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-10744","vulnerabilityDetails":"Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. 
The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template 
function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.\n WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
True
async-2.6.1.tgz: 4 vulnerabilities (highest severity is: 9.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-2.6.1.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2019-10744](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | | [CVE-2020-8203](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.4 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | | [CVE-2021-23337](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.2 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | | [CVE-2020-28500](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | lodash-4.17.11.tgz | Transitive | 2.6.2 | ✅ | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-10744</summary> ### Vulnerable Library - 
<b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload. <p>Publish Date: 2019-07-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.1</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p> <p>Release Date: 2019-07-26</p> <p>Fix Resolution (lodash): 4.17.12</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-8203</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20. <p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.4</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution (lodash): 4.17.19</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23337</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.2</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution (lodash): 4.17.21</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-28500</summary> ### Vulnerable Library - <b>lodash-4.17.11.tgz</b></p> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - async-2.6.1.tgz (Root Library) - :x: **lodash-4.17.11.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/timf-app-test/ng1/commit/3c6c8b1083ad63c98e2306891044400e62b9545f">3c6c8b1083ad63c98e2306891044400e62b9545f</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash. 
<p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution (lodash): 4.17.21</p> <p>Direct dependency fix Resolution (async): 2.6.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> <!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-10744","vulnerabilityDetails":"Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. 
The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template 
function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"async","packageVersion":"2.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"async:2.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.\n WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> -->
non_process
async tgz vulnerabilities highest severity is vulnerable library async tgz path to dependency file package json path to vulnerable library node modules lodash package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high lodash tgz transitive ✅ high lodash tgz transitive ✅ high lodash tgz transitive ✅ medium lodash tgz transitive ✅ details cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high 
privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions whitesource note after conducting further research whitesource has determined that cve only affects environments with versions to of lodash publish date url a href cvss score details base score metrics 
exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue istransitivedependency false dependencytree async isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload vulnerabilityurl istransitivedependency false dependencytree async isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails prototype pollution attack when using zipobjectdeep in lodash before vulnerabilityurl istransitivedependency false dependencytree async isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to command injection via the template function vulnerabilityurl istransitivedependency false dependencytree async isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions n whitesource note after conducting further research whitesource has determined that cve only affects environments with versions to of lodash vulnerabilityurl
0
3,902
6,823,000,717
IssuesEvent
2017-11-07 22:04:52
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
whenBlock --verbose --list should show all ethName blocks / dates
status-inprocess tools-whenBlock type-enhancement
It would be very simple, if in list mode, if --verbose was on, to read and display all the start dates of the famous token launches.
1.0
whenBlock --verbose --list should show all ethName blocks / dates - It would be very simple, if in list mode, if --verbose was on, to read and display all the start dates of the famous token launches.
process
whenblock verbose list should show all ethname blocks dates it would be very simple if in list mode if verbose was on to read and display all the start dates of the famous token launches
1
15,046
18,762,569,321
IssuesEvent
2021-11-05 18:19:54
ORNL-AMO/AMO-Tools-Suite
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Suite
closed
Cascade Heat
Needs Verification Process Heating Calculator
Issue overview -------------- Need to update how heat cascade works ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/101b3995-0248-4a03-b943-8b80cdaea117) ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/78707bc8-d81f-4424-9fe0-a3628bb793ad) ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/c6a36213-092a-4ca6-bb6f-641d0e50cb42) I'll send the excel via teams.
1.0
Cascade Heat - Issue overview -------------- Need to update how heat cascade works ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/101b3995-0248-4a03-b943-8b80cdaea117) ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/78707bc8-d81f-4424-9fe0-a3628bb793ad) ![image.png](https://images.zenhubusercontent.com/5cd48a2af8cffa5a19122d27/c6a36213-092a-4ca6-bb6f-641d0e50cb42) I'll send the excel via teams.
process
cascade heat issue overview need to update how heat cascade works i ll send the excel via teams
1
18,158
24,193,351,180
IssuesEvent
2022-09-23 20:07:01
hashgraph/hedera-json-rpc-relay
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
opened
Add automated prettier command
enhancement P3 process
### Problem Files continue to have inconsistent formatting since not all PRs apply the prettier logic. ### Solution Automate the prettier command run. Consider something like [husky precommit](https://prettier.io/docs/en/precommit.html) ### Alternatives _No response_
1.0
Add automated prettier command - ### Problem Files continue to have inconsistent formatting since not all PRs apply the prettier logic. ### Solution Automate the prettier command run. Consider something like [husky precommit](https://prettier.io/docs/en/precommit.html) ### Alternatives _No response_
process
add automated prettier command problem files continue to have inconsistent formatting since not all prs apply the prettier logic solution automate the prettier command run consider something like alternatives no response
1
22,727
32,045,245,145
IssuesEvent
2023-09-23 00:37:17
h4sh5/npm-auto-scanner
https://api.github.com/repos/h4sh5/npm-auto-scanner
opened
nx 16.9.0 has 3 guarddog issues
npm-install-script shady-links npm-silent-process-execution
```{"npm-install-script":[{"code":" \"postinstall\": \"node ./bin/post-install\"","location":"package/package.json:12","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const p = (0, child_process_1.spawn)('node', [scriptPath, `\"${this.cachePath}\"`], {\n stdio: 'ignore',\n detached: true,\n shell: false,\n });","location":"package/src/tasks-runner/cache.js:28","message":"This package is silently executing another executable"}],"shady-links":[{"code":"(self.webpackChunk=self.webpackChunk||[]).push([[179],{13148:(e,t,n)=\u003e{\"use strict\";var r=n(33286),o=function(){return o=Object.assign||function(e){for(var t,n=1,r=arguments.length;n\u003cr;n++)for(var o in t=arguments[n])Object.prototype.hasOwn...e.s=t)}]);","location":"package/src/core/graph/main.js:1","message":"This package contains an URL to a domain with a suspicious extension"}]}```
1.0
nx 16.9.0 has 3 guarddog issues - ```{"npm-install-script":[{"code":" \"postinstall\": \"node ./bin/post-install\"","location":"package/package.json:12","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const p = (0, child_process_1.spawn)('node', [scriptPath, `\"${this.cachePath}\"`], {\n stdio: 'ignore',\n detached: true,\n shell: false,\n });","location":"package/src/tasks-runner/cache.js:28","message":"This package is silently executing another executable"}],"shady-links":[{"code":"(self.webpackChunk=self.webpackChunk||[]).push([[179],{13148:(e,t,n)=\u003e{\"use strict\";var r=n(33286),o=function(){return o=Object.assign||function(e){for(var t,n=1,r=arguments.length;n\u003cr;n++)for(var o in t=arguments[n])Object.prototype.hasOwn...e.s=t)}]);","location":"package/src/core/graph/main.js:1","message":"This package contains an URL to a domain with a suspicious extension"}]}```
process
nx has guarddog issues npm install script npm silent process execution n stdio ignore n detached true n shell false n location package src tasks runner cache js message this package is silently executing another executable shady links push e t n use strict var r n o function return o object assign function e for var t n r arguments length n n for var o in t arguments object prototype hasown e s t location package src core graph main js message this package contains an url to a domain with a suspicious extension
1
18,350
24,475,164,304
IssuesEvent
2022-10-08 04:20:02
eosnetworkfoundation/devrel
https://api.github.com/repos/eosnetworkfoundation/devrel
closed
Github training 3 - working with Github Web
Process
Train project manager on using web-based Github functions for: - create a PR - review a PR - update a PR Training [Part 1](https://github.com/eosnetworkfoundation/devrel/issues/23), [Part 2](https://github.com/eosnetworkfoundation/devrel/issues/24)
1.0
Github training 3 - working with Github Web - Train project manager on using web-based Github functions for: - create a PR - review a PR - update a PR Training [Part 1](https://github.com/eosnetworkfoundation/devrel/issues/23), [Part 2](https://github.com/eosnetworkfoundation/devrel/issues/24)
process
github training working with github web train project manager on using web based github functions for create a pr review a pr update a pr training
1
17,988
24,009,796,413
IssuesEvent
2022-09-14 17:44:16
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
icon, title, and resize error
bug terminal-persistence terminal-process
``` ERR Could not find pty on pty host: CodeExpectedError: Could not find pty on pty host at h._throwIfNoPty (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:36849) at h.resize (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:33720) at Object.call (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:16:8386) at u.onPromise (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5833) at u.onRawMessage (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5216) at /Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:4502 at _.invoke (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:145) at A.deliver (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:2275) at F.fire (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:1853) at process.X (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:9:20708) at process.emit (node:events:526:28) at emit (node:internal/child_process:938:14) at process.processTicksAndRejections (node:internal/process/task_queues:84:21) workbench.desktop.main.js:606 ERR Could not find pty on pty host: CodeExpectedError: Could not find pty on pty host at h._throwIfNoPty (/Users/meganrogge/Applications/Visual Studio Code - 
Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:36849) at h.updateIcon (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:32643) at Object.call (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:16:8386) at u.onPromise (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5833) at u.onRawMessage (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5216) at /Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:4502 at _.invoke (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:145) at A.deliver (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:2275) at F.fire (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:1853) at process.X (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:9:20708) at process.emit (node:events:526:28) at emit (node:internal/child_process:938:14) at process.processTicksAndRejections (node:internal/process/task_queues:84:21) workbench.desktop.main.js:606 ERR Could not find pty on pty host: CodeExpectedError: Could not find pty on pty host at h._throwIfNoPty (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:36849) at h.updateTitle (/Users/meganrogge/Applications/Visual Studio Code 
- Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:32581) at Object.call (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:16:8386) at u.onPromise (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5833) at u.onRawMessage (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5216) at /Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:4502 at _.invoke (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:145) at A.deliver (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:2275) at F.fire (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:1853) at process.X (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:9:20708) at process.emit (node:events:526:28) at emit (node:internal/child_process:938:14) at process.processTicksAndRejections (node:internal/process/task_queues:84:21) ```
1.0
icon, title, and resize error - ``` ERR Could not find pty on pty host: CodeExpectedError: Could not find pty on pty host at h._throwIfNoPty (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:36849) at h.resize (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:33720) at Object.call (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:16:8386) at u.onPromise (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5833) at u.onRawMessage (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5216) at /Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:4502 at _.invoke (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:145) at A.deliver (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:2275) at F.fire (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:1853) at process.X (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:9:20708) at process.emit (node:events:526:28) at emit (node:internal/child_process:938:14) at process.processTicksAndRejections (node:internal/process/task_queues:84:21) workbench.desktop.main.js:606 ERR Could not find pty on pty host: CodeExpectedError: Could not find pty on pty host at h._throwIfNoPty 
(/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:36849) at h.updateIcon (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:32643) at Object.call (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:16:8386) at u.onPromise (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5833) at u.onRawMessage (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5216) at /Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:4502 at _.invoke (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:145) at A.deliver (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:2275) at F.fire (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:1853) at process.X (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:9:20708) at process.emit (node:events:526:28) at emit (node:internal/child_process:938:14) at process.processTicksAndRejections (node:internal/process/task_queues:84:21) workbench.desktop.main.js:606 ERR Could not find pty on pty host: CodeExpectedError: Could not find pty on pty host at h._throwIfNoPty (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:36849) at 
h.updateTitle (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:23:32581) at Object.call (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:16:8386) at u.onPromise (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5833) at u.onRawMessage (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:5216) at /Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:15:4502 at _.invoke (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:145) at A.deliver (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:2275) at F.fire (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:11:1853) at process.X (/Users/meganrogge/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/platform/terminal/node/ptyHostMain.js:9:20708) at process.emit (node:events:526:28) at emit (node:internal/child_process:938:14) at process.processTicksAndRejections (node:internal/process/task_queues:84:21) ```
process
icon title and resize error err could not find pty on pty host codeexpectederror could not find pty on pty host at h throwifnopty users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at h resize users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at object call users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at u onpromise users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at u onrawmessage users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at invoke users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at a deliver users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at f fire users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at process x users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at process emit node events at emit node internal child process at process processticksandrejections node internal process task queues workbench desktop main js err could not find pty on pty host codeexpectederror could not find pty on pty host at h throwifnopty users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at h updateicon users meganrogge applications visual studio code 
insiders app contents resources app out vs platform terminal node ptyhostmain js at object call users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at u onpromise users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at u onrawmessage users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at invoke users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at a deliver users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at f fire users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at process x users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at process emit node events at emit node internal child process at process processticksandrejections node internal process task queues workbench desktop main js err could not find pty on pty host codeexpectederror could not find pty on pty host at h throwifnopty users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at h updatetitle users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at object call users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at u onpromise users meganrogge applications visual studio code insiders app contents resources 
app out vs platform terminal node ptyhostmain js at u onrawmessage users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at invoke users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at a deliver users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at f fire users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at process x users meganrogge applications visual studio code insiders app contents resources app out vs platform terminal node ptyhostmain js at process emit node events at emit node internal child process at process processticksandrejections node internal process task queues
1
22,043
30,566,391,390
IssuesEvent
2023-07-20 18:07:35
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Add a method to get query's database ID
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
We only allow joining tables from the same database as the source table. The FE needs the database ID to filter out other databases from the join data picker
1.0
[MLv2] Add a method to get query's database ID - We only allow joining tables from the same database as the source table. The FE needs the database ID to filter out other databases from the join data picker
process
add a method to get query s database id we only allow joining tables from the same database as the source table the fe needs the database id to filter out other databases from the join data picker
1
944
3,410,647,198
IssuesEvent
2015-12-04 21:11:09
MaretEngineering/MROV
https://api.github.com/repos/MaretEngineering/MROV
closed
Make these four lines into a rectagle
enhancement Processing
line 182 ```Java line(500, 300, 750, 300); line(750, 300, 750, 550); line(750, 550, 500, 550); line(500, 550, 500, 300); ```
1.0
Make these four lines into a rectagle - line 182 ```Java line(500, 300, 750, 300); line(750, 300, 750, 550); line(750, 550, 500, 550); line(500, 550, 500, 300); ```
process
make these four lines into a rectagle line java line line line line
1
34,814
7,460,637,296
IssuesEvent
2018-03-30 20:39:05
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
Täpsem otsing. Puuduvad piirdaatumite järgi ja mitmed muud otsinguväljad
C: AIS P: highest R: fixed T: defect
**Reported by aadikaljuvee on 17 Mar 2017 08:45 UTC** Lähteülesandes näpuga järge ajades: - Saab otsida eeldefineeritud Perioodide järgi, aga ei saa praegu otsida isedefineeritava Algus- ja Lõppdaatumite järgi. - Perioodi puhul ei saa täpsustada, kas otsitakse perioodis Sisalduvat või perioodiga Kattuvate daatumitega ainest. - Valdkonna järgi otsing ei võimalda täpset/alamvaldkondadega otsingu valikut - Valdkonna järgi otsing ei võimalda valdkonda hierarhia-puust valida - Ainese liigi järgi otsing puudub - Isiku järgi otsing otsib täpselt, puudub valikuvõimalus samanimeliste ja otsisõna sisaldumise järgi hajusotsinguks - Kohanime, kohahierarhiast ja Koht_täpsemalt järgi otsing puuduvad. - Puudub KÜ sisestamisaja järgi otsing
1.0
Täpsem otsing. Puuduvad piirdaatumite järgi ja mitmed muud otsinguväljad - **Reported by aadikaljuvee on 17 Mar 2017 08:45 UTC** Lähteülesandes näpuga järge ajades: - Saab otsida eeldefineeritud Perioodide järgi, aga ei saa praegu otsida isedefineeritava Algus- ja Lõppdaatumite järgi. - Perioodi puhul ei saa täpsustada, kas otsitakse perioodis Sisalduvat või perioodiga Kattuvate daatumitega ainest. - Valdkonna järgi otsing ei võimalda täpset/alamvaldkondadega otsingu valikut - Valdkonna järgi otsing ei võimalda valdkonda hierarhia-puust valida - Ainese liigi järgi otsing puudub - Isiku järgi otsing otsib täpselt, puudub valikuvõimalus samanimeliste ja otsisõna sisaldumise järgi hajusotsinguks - Kohanime, kohahierarhiast ja Koht_täpsemalt järgi otsing puuduvad. - Puudub KÜ sisestamisaja järgi otsing
non_process
täpsem otsing puuduvad piirdaatumite järgi ja mitmed muud otsinguväljad reported by aadikaljuvee on mar utc lähteülesandes näpuga järge ajades saab otsida eeldefineeritud perioodide järgi aga ei saa praegu otsida isedefineeritava algus ja lõppdaatumite järgi perioodi puhul ei saa täpsustada kas otsitakse perioodis sisalduvat või perioodiga kattuvate daatumitega ainest valdkonna järgi otsing ei võimalda täpset alamvaldkondadega otsingu valikut valdkonna järgi otsing ei võimalda valdkonda hierarhia puust valida ainese liigi järgi otsing puudub isiku järgi otsing otsib täpselt puudub valikuvõimalus samanimeliste ja otsisõna sisaldumise järgi hajusotsinguks kohanime kohahierarhiast ja koht täpsemalt järgi otsing puuduvad puudub kü sisestamisaja järgi otsing
0
20,102
26,637,215,313
IssuesEvent
2023-01-24 23:17:45
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
closed
[C++] Nightly Integration Testing Report for Firestore
type: process nightly-testing
<hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431 Last updated: Tue Jan 24 04:03 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995022157)** <hidden value="integration-test-status-comment"></hidden> *** ### ❌&nbsp; [build against SDK] Integration test FAILED Requested by @firebase-workflow-trigger[bot] on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431 Last updated: Tue Jan 24 05:59 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3996121210)** | Failures | Configs | |----------|---------| | missing_log | [BUILD] [ERROR] [MacOS] [1/2 ssl_lib: arm64] [boringssl]<br/>[TEST] [ERROR] [MacOS] [1/2 ssl_lib: arm64] [boringssl]<br/> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431 Last updated: Tue Jan 24 03:47 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995489492)**
1.0
[C++] Nightly Integration Testing Report for Firestore - <hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431 Last updated: Tue Jan 24 04:03 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995022157)** <hidden value="integration-test-status-comment"></hidden> *** ### ❌&nbsp; [build against SDK] Integration test FAILED Requested by @firebase-workflow-trigger[bot] on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431 Last updated: Tue Jan 24 05:59 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3996121210)** | Failures | Configs | |----------|---------| | missing_log | [BUILD] [ERROR] [MacOS] [1/2 ssl_lib: arm64] [boringssl]<br/>[TEST] [ERROR] [MacOS] [1/2 ssl_lib: arm64] [boringssl]<br/> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit 9c494f708c5918d1b82e9624460b7ab4bbaf8431 Last updated: Tue Jan 24 03:47 PST 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3995489492)**
process
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst ❌ nbsp integration test failed requested by firebase workflow trigger on commit last updated tue jan pst failures configs missing log add flaky tests to ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst
1
14,182
17,089,923,496
IssuesEvent
2021-07-08 16:05:02
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
The examples on this page would be 200% better if they included the output after the pipeline runs.
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
[Enter feedback here] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
The examples on this page would be 200% better if they included the output after the pipeline runs. - [Enter feedback here] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
the examples on this page would be better if they included the output after the pipeline runs document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
20,096
26,629,090,268
IssuesEvent
2023-01-24 16:29:59
googleapis/gapic-generator-java
https://api.github.com/repos/googleapis/gapic-generator-java
opened
Stop relying on googleapis/gax-java repository in WORKSPACE file
type: process priority: p3
Stop relying on googleapis/gax-java repository in WORKSPACE file, https://github.com/googleapis/gapic-generator-java/blob/92984db73edfea2cbb9ab84244da791e6bc1bd2d/WORKSPACE#L15
1.0
Stop relying on googleapis/gax-java repository in WORKSPACE file - Stop relying on googleapis/gax-java repository in WORKSPACE file, https://github.com/googleapis/gapic-generator-java/blob/92984db73edfea2cbb9ab84244da791e6bc1bd2d/WORKSPACE#L15
process
stop relying on googleapis gax java repository in workspace file stop relying on googleapis gax java repository in workspace file
1
16,088
20,255,961,780
IssuesEvent
2022-02-14 23:16:15
varabyte/kobweb
https://api.github.com/repos/varabyte/kobweb
opened
Remove deprecated task listener code in Kobweb Gradle Application plugin
process
This stuff: https://github.com/varabyte/kobweb/blob/92b2fb729054038eae2788a580d25ef4e078ecf5/gradle-plugins/application/src/main/kotlin/com/varabyte/kobweb/gradle/application/KobwebApplicationPlugin.kt#L132 needs to be migrated over to using a build service. Something to do with an upcoming "configuration cache" change for Gradle 8.0
1.0
Remove deprecated task listener code in Kobweb Gradle Application plugin - This stuff: https://github.com/varabyte/kobweb/blob/92b2fb729054038eae2788a580d25ef4e078ecf5/gradle-plugins/application/src/main/kotlin/com/varabyte/kobweb/gradle/application/KobwebApplicationPlugin.kt#L132 needs to be migrated over to using a build service. Something to do with an upcoming "configuration cache" change for Gradle 8.0
process
remove deprecated task listener code in kobweb gradle application plugin this stuff needs to be migrated over to using a build service something to do with an upcoming configuration cache change for gradle
1
1,034
3,489,705,370
IssuesEvent
2016-01-04 02:25:54
osresearch/vst
https://api.github.com/repos/osresearch/vst
closed
3D shapes in processing
processing
Add support for generating 3D wireframes in processing. This can't use the p3d calls since we need vectors before they go to the OpenGL renderer.
1.0
3D shapes in processing - Add support for generating 3D wireframes in processing. This can't use the p3d calls since we need vectors before they go to the OpenGL renderer.
process
shapes in processing add support for generating wireframes in processing this can t use the calls since we need vectors before they go to the opengl renderer
1
14,536
17,632,526,504
IssuesEvent
2021-08-19 09:45:21
KI-Vorlesung/kitest
https://api.github.com/repos/KI-Vorlesung/kitest
closed
Erzeugen von Abbildungen aus Tex-Artefakten
WEB SLIDES PRE-PROCESSING
Code-Umgebungen mit Math-Escape als eigenständige TeX-Dateien in Images-Ordner ablegen. Im Vorverarbeitungsschritt alle `images/*.tex`-Dateien finden und mit LaTeX zu `.png` übersetzen. Danach dann die Slides und/oder die Webseiten erzeugen: In den jeweiligen Seiten (`index.md`) werden von vornherein die Bilder eingebunden.
1.0
Erzeugen von Abbildungen aus Tex-Artefakten - Code-Umgebungen mit Math-Escape als eigenständige TeX-Dateien in Images-Ordner ablegen. Im Vorverarbeitungsschritt alle `images/*.tex`-Dateien finden und mit LaTeX zu `.png` übersetzen. Danach dann die Slides und/oder die Webseiten erzeugen: In den jeweiligen Seiten (`index.md`) werden von vornherein die Bilder eingebunden.
process
erzeugen von abbildungen aus tex artefakten code umgebungen mit math escape als eigenständige tex dateien in images ordner ablegen im vorverarbeitungsschritt alle images tex dateien finden und mit latex zu png übersetzen danach dann die slides und oder die webseiten erzeugen in den jeweiligen seiten index md werden von vornherein die bilder eingebunden
1
776,423
27,259,838,065
IssuesEvent
2023-02-22 14:11:48
mui/mui-toolpad
https://api.github.com/repos/mui/mui-toolpad
opened
Support inline editable markdown
priority: low enhancement
### Duplicates - [X] I have searched the existing issues ### Latest version - [X] I have tested the latest version ### Summary 💡 Extend the inline editing functionality for text components to markdown: https://github.com/mui/mui-toolpad/pull/1694 ### Examples 🌈 _No response_ ### Motivation 🔦 _No response_
1.0
Support inline editable markdown - ### Duplicates - [X] I have searched the existing issues ### Latest version - [X] I have tested the latest version ### Summary 💡 Extend the inline editing functionality for text components to markdown: https://github.com/mui/mui-toolpad/pull/1694 ### Examples 🌈 _No response_ ### Motivation 🔦 _No response_
non_process
support inline editable markdown duplicates i have searched the existing issues latest version i have tested the latest version summary 💡 extend the inline editing functionality for text components to markdown examples 🌈 no response motivation 🔦 no response
0
716,901
24,652,617,619
IssuesEvent
2022-10-17 20:02:11
AuthGuard/AuthGuard
https://api.github.com/repos/AuthGuard/AuthGuard
opened
A generic error is returned when an email address or phone number is updated
bug high priority
Instead of proper descriptive errors like those returned with POST requests, PATCH requests return generic errors which don't indicate that an email is already registered for example.
1.0
A generic error is returned when an email address or phone number is updated - Instead of proper descriptive errors like those returned with POST requests, PATCH requests return generic errors which don't indicate that an email is already registered for example.
non_process
a generic error is returned when an email address or phone number is updated instead of proper descriptive errors like those returned with post requests patch requests return generic errors which don t indicate that an email is already registered for example
0
18,990
24,980,474,990
IssuesEvent
2022-11-02 11:16:22
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
closed
`InputPort`: allow explicitly passed `None` for ports that are not required
priority/nice-to-have type/enhancement topic/processes
Currently, when passing `None` to an input of a `Process` will raise a validation exception, even if that input is not required. This can be kind of surprising for a user. Consider the following: ```python def accepts_none(a=None): pass accepts_none() accepts_none(None) accepts_none(a=None) ``` All of these invocations work just fine, as one would expect. The same holds if `accepts_none` is decorated with `calcfunction`. Process functions will also allow `None` for functions that define ``None`` as the default. The following, however, will raise: ``` class AcceptNoneProcess(Process): """Simple process with dynamic input namespace.""" _node_class = orm.WorkflowNode @classmethod def define(cls, spec): super().define(spec) spec.input('not_required', valid_type=orm.Int, required=False) run(AcceptNoneProcess) # This works just fine run(AcceptNoneProcess, not_required=None) # But this will except ``` This is not consistent. We should consider making `None` an automatically accepted type for non-required input ports.
1.0
`InputPort`: allow explicitly passed `None` for ports that are not required - Currently, when passing `None` to an input of a `Process` will raise a validation exception, even if that input is not required. This can be kind of surprising for a user. Consider the following: ```python def accepts_none(a=None): pass accepts_none() accepts_none(None) accepts_none(a=None) ``` All of these invocations work just fine, as one would expect. The same holds if `accepts_none` is decorated with `calcfunction`. Process functions will also allow `None` for functions that define ``None`` as the default. The following, however, will raise: ``` class AcceptNoneProcess(Process): """Simple process with dynamic input namespace.""" _node_class = orm.WorkflowNode @classmethod def define(cls, spec): super().define(spec) spec.input('not_required', valid_type=orm.Int, required=False) run(AcceptNoneProcess) # This works just fine run(AcceptNoneProcess, not_required=None) # But this will except ``` This is not consistent. We should consider making `None` an automatically accepted type for non-required input ports.
process
inputport allow explicitly passed none for ports that are not required currently when passing none to an input of a process will raise a validation exception even if that input is not required this can be kind of surprising for a user consider the following python def accepts none a none pass accepts none accepts none none accepts none a none all of these invocations work just fine as one would expect the same holds if accepts none is decorated with calcfunction process functions will also allow none for functions that define none as the default the following however will raise class acceptnoneprocess process simple process with dynamic input namespace node class orm workflownode classmethod def define cls spec super define spec spec input not required valid type orm int required false run acceptnoneprocess this works just fine run acceptnoneprocess not required none but this will except this is not consistent we should consider making none an automatically accepted type for non required input ports
1
9,245
12,277,590,073
IssuesEvent
2020-05-08 08:13:44
bazelbuild/rules_python
https://api.github.com/repos/bazelbuild/rules_python
opened
Stakeholder Reach-out [Maintainer Communication]
type: process
Myself (Jonathon Belotti) and @andyscott (Andy Scott) recently became maintainers of rules_python. We hail from [Canva](https://www.canva.com/) and [Stripe](https://stripe.com/), respectively. Within our companies Bazel is used extensively and supporting Python within Bazel is important. We know the needs of our respective companies well. Before we make any changes to rules_python we'd also like to understand the needs of the community. This way we can better guide the development of the rules. Specifically, we'd love to know: - Which companies are using Python with Bazel? - Which rule sets are you using for Python? - rules_python, [rules_python_external](https://github.com/dillon-giacoppo/rules_python_external), [ali5h/rules_pip](https://github.com/soniaai/rules_poetry), [soniaa/rules_poetry](https://github.com/soniaai/rules_poetry)? - Are you using any internal custom rules? - Are you willing to contribute? Feel free to respond directly on this issue, or reach out to either of us on Slack. We are both available available there and should be reasonably responsive to messages. To join the Slack, go to [https://slack.bazel.build/](https://slack.bazel.build/). It's also worth noting that one of our long term goals includes establishing a healthy group of maintainers with a stake in the success of the project. If you would like to be involved, please let us know. We won't be adding additional maintainers immediately. However, we are interested in establishing relationships now.
1.0
Stakeholder Reach-out [Maintainer Communication] - Myself (Jonathon Belotti) and @andyscott (Andy Scott) recently became maintainers of rules_python. We hail from [Canva](https://www.canva.com/) and [Stripe](https://stripe.com/), respectively. Within our companies Bazel is used extensively and supporting Python within Bazel is important. We know the needs of our respective companies well. Before we make any changes to rules_python we'd also like to understand the needs of the community. This way we can better guide the development of the rules. Specifically, we'd love to know: - Which companies are using Python with Bazel? - Which rule sets are you using for Python? - rules_python, [rules_python_external](https://github.com/dillon-giacoppo/rules_python_external), [ali5h/rules_pip](https://github.com/soniaai/rules_poetry), [soniaa/rules_poetry](https://github.com/soniaai/rules_poetry)? - Are you using any internal custom rules? - Are you willing to contribute? Feel free to respond directly on this issue, or reach out to either of us on Slack. We are both available available there and should be reasonably responsive to messages. To join the Slack, go to [https://slack.bazel.build/](https://slack.bazel.build/). It's also worth noting that one of our long term goals includes establishing a healthy group of maintainers with a stake in the success of the project. If you would like to be involved, please let us know. We won't be adding additional maintainers immediately. However, we are interested in establishing relationships now.
process
stakeholder reach out myself jonathon belotti and andyscott andy scott recently became maintainers of rules python we hail from and respectively within our companies bazel is used extensively and supporting python within bazel is important we know the needs of our respective companies well before we make any changes to rules python we d also like to understand the needs of the community this way we can better guide the development of the rules specifically we d love to know which companies are using python with bazel which rule sets are you using for python rules python are you using any internal custom rules are you willing to contribute feel free to respond directly on this issue or reach out to either of us on slack we are both available available there and should be reasonably responsive to messages to join the slack go to it s also worth noting that one of our long term goals includes establishing a healthy group of maintainers with a stake in the success of the project if you would like to be involved please let us know we won t be adding additional maintainers immediately however we are interested in establishing relationships now
1
14,035
16,833,097,113
IssuesEvent
2021-06-18 08:22:16
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
opened
fx_data not preserved as 'cell_measures' after iris aggregated_by and 'extract_levels' processor
bug iris preprocessor
Running a recipe with time aggregation (any using `cube.aggregated_by()`) followed by area statistics crashes as the latter doesn't find the 'cell_measures' variable in the cube. Note that this extends also to `cube.collapsed()` operation. I've been already discussing this with @schlunma in #1096 and after digging a bit of testing I realized that the example we looked in there (https://github.com/ESMValGroup/ESMValCore/issues/1096#issuecomment-848674115) was not compliant with the `cell_measures` assigned by the code to cubes, as `time` coord was missing. As `cell_measures` depend also on the `time` coordinate, by extending @schlunma example with time also in the `area` variable, the problem finally come up (see details) <details> ```py >>> time_coord = iris.coords.DimCoord([0, 1], var_name='time', units='day') >>> lat_coord = iris.coords.DimCoord([0.0], var_name='lat', units='rad') >>> year_coord = iris.coords.AuxCoord([1900, 1900], var_name='year') >>> x = iris.cube.Cube([[1.0], [2.0]], var_name='x', dim_coords_and_dims=[(time_coord, 0), (lat_coord, 1)]) >>> x.add_aux_coord(year_coord, 0) >>> area = iris.coords.CellMeasure([[50.], [100.0]], var_name='areacella') >>> x.add_cell_measure(area, [0,1]) >>> print(x) x / (unknown) (time: 2; lat: 1) Dimension coordinates: time x - lat - x Auxiliary coordinates: year x - Cell measures: areacella x x >>> new_x = x.aggregated_by('year', iris.analysis.MEAN) >>> print(new_x) x / (unknown) (time: 1; lat: 1) Dimension coordinates: time x - lat - x Auxiliary coordinates: year x - Cell methods: mean: year >>> col_x = x.collapsed('time', iris.analysis.MEAN) >>> print(col_x) x / (unknown) (lat: 1) Dimension coordinates: lat x Scalar coordinates: time: 0 day, bound=(0, 1) day year: 1900, bound=(1900, 1900) Cell methods: mean: time >>> ``` </details> This relates to the choice made in iris to discard `cell_measures` when these are time dependent as I guess it will be very tricky to handle it in the correct way (or maybe it is simply 
a bug!). **Second point**, but for a different reason, `cell_measures` are lost also in the `extract_levels` processor, where a new cube is generated by scratch and this property is not propagated (a practical example is I want to compute global average of a specific layer from a 3D variable, e.g. seawater oxygen). In this case the 2D `cell_measures` is the area and should be associated to the cube, while in the case of a 3D `cell_measures` it should be coherently extracted as the variable data. [main_log_debug.txt](https://github.com/ESMValGroup/ESMValCore/files/6675662/main_log_debug.txt)
1.0
fx_data not preserved as 'cell_measures' after iris aggregated_by and 'extract_levels' processor - Running a recipe with time aggregation (any using `cube.aggregated_by()`) followed by area statistics crashes as the latter doesn't find the 'cell_measures' variable in the cube. Note that this extends also to `cube.collapsed()` operation. I've been already discussing this with @schlunma in #1096 and after digging a bit of testing I realized that the example we looked in there (https://github.com/ESMValGroup/ESMValCore/issues/1096#issuecomment-848674115) was not compliant with the `cell_measures` assigned by the code to cubes, as `time` coord was missing. As `cell_measures` depend also on the `time` coordinate, by extending @schlunma example with time also in the `area` variable, the problem finally come up (see details) <details> ```py >>> time_coord = iris.coords.DimCoord([0, 1], var_name='time', units='day') >>> lat_coord = iris.coords.DimCoord([0.0], var_name='lat', units='rad') >>> year_coord = iris.coords.AuxCoord([1900, 1900], var_name='year') >>> x = iris.cube.Cube([[1.0], [2.0]], var_name='x', dim_coords_and_dims=[(time_coord, 0), (lat_coord, 1)]) >>> x.add_aux_coord(year_coord, 0) >>> area = iris.coords.CellMeasure([[50.], [100.0]], var_name='areacella') >>> x.add_cell_measure(area, [0,1]) >>> print(x) x / (unknown) (time: 2; lat: 1) Dimension coordinates: time x - lat - x Auxiliary coordinates: year x - Cell measures: areacella x x >>> new_x = x.aggregated_by('year', iris.analysis.MEAN) >>> print(new_x) x / (unknown) (time: 1; lat: 1) Dimension coordinates: time x - lat - x Auxiliary coordinates: year x - Cell methods: mean: year >>> col_x = x.collapsed('time', iris.analysis.MEAN) >>> print(col_x) x / (unknown) (lat: 1) Dimension coordinates: lat x Scalar coordinates: time: 0 day, bound=(0, 1) day year: 1900, bound=(1900, 1900) Cell methods: mean: time >>> ``` </details> This relates to the choice made in iris to discard `cell_measures` when these are time 
dependent as I guess it will be very tricky to handle it in the correct way (or maybe it is simply a bug!). **Second point**, but for a different reason, `cell_measures` are lost also in the `extract_levels` processor, where a new cube is generated by scratch and this property is not propagated (a practical example is I want to compute global average of a specific layer from a 3D variable, e.g. seawater oxygen). In this case the 2D `cell_measures` is the area and should be associated to the cube, while in the case of a 3D `cell_measures` it should be coherently extracted as the variable data. [main_log_debug.txt](https://github.com/ESMValGroup/ESMValCore/files/6675662/main_log_debug.txt)
process
fx data not preserved as cell measures after iris aggregated by and extract levels processor running a recipe with time aggregation any using cube aggregated by followed by area statistics crashes as the latter doesn t find the cell measures variable in the cube note that this extends also to cube collapsed operation i ve been already discussing this with schlunma in and after digging a bit of testing i realized that the example we looked in there was not compliant with the cell measures assigned by the code to cubes as time coord was missing as cell measures depend also on the time coordinate by extending schlunma example with time also in the area variable the problem finally come up see details py time coord iris coords dimcoord var name time units day lat coord iris coords dimcoord var name lat units rad year coord iris coords auxcoord var name year x iris cube cube var name x dim coords and dims x add aux coord year coord area iris coords cellmeasure var name areacella x add cell measure area print x x unknown time lat dimension coordinates time x lat x auxiliary coordinates year x cell measures areacella x x new x x aggregated by year iris analysis mean print new x x unknown time lat dimension coordinates time x lat x auxiliary coordinates year x cell methods mean year col x x collapsed time iris analysis mean print col x x unknown lat dimension coordinates lat x scalar coordinates time day bound day year bound cell methods mean time this relates to the choice made in iris to discard cell measures when these are time dependent as i guess it will be very tricky to handle it in the correct way or maybe it is simply a bug second point but for a different reason cell measures are lost also in the extract levels processor where a new cube is generated by scratch and this property is not propagated a practical example is i want to compute global average of a specific layer from a variable e g seawater oxygen in this case the cell measures is the area and should be 
associated to the cube while in the case of a cell measures it should be coherently extracted as the variable data
1
20,191
26,757,270,130
IssuesEvent
2023-01-31 02:00:08
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Mon, 30 Jan 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Multimodal Event Transformer for Image-guided Story Ending Generation - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11357 - **Pdf link:** https://arxiv.org/pdf/2301.11357 - **Abstract** Image-guided story ending generation (IgSEG) is to generate a story ending based on given story plots and ending image. Existing methods focus on cross-modal feature fusion but overlook reasoning and mining implicit information from story plots and ending image. To tackle this drawback, we propose a multimodal event transformer, an event-based reasoning framework for IgSEG. Specifically, we construct visual and semantic event graphs from story plots and ending image, and leverage event-based reasoning to reason and mine implicit information in a single modality. Next, we connect visual and semantic event graphs and utilize cross-modal fusion to integrate different-modality features. In addition, we propose a multimodal injector to adaptive pass essential information to decoder. Besides, we present an incoherence detection to enhance the understanding context of a story plot and the robustness of graph modeling for our model. Experimental results show that our method achieves state-of-the-art performance for the image-guided story ending generation. 
### Style-Aware Contrastive Learning for Multi-Style Image Captioning - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11367 - **Pdf link:** https://arxiv.org/pdf/2301.11367 - **Abstract** Existing multi-style image captioning methods show promising results in generating a caption with accurate visual content and desired linguistic style. However, existing methods overlook the relationship between linguistic style and visual content. To overcome this drawback, we propose style-aware contrastive learning for multi-style image captioning. First, we present a style-aware visual encoder with contrastive learning to mine potential visual content relevant to style. Moreover, we propose a style-aware triplet contrast objective to distinguish whether the image, style and caption matched. To provide positive and negative samples for contrastive learning, we present three retrieval schemes: object-based retrieval, RoI-based retrieval and triplet-based retrieval, and design a dynamic trade-off function to calculate retrieval scores. Experimental results demonstrate that our approach achieves state-of-the-art performance. In addition, we conduct an extensive analysis to verify the effectiveness of our method. ### Accelerating Guided Diffusion Sampling with Splitting Numerical Methods - **Authors:** Suttisak Wizadwongsa, Supasorn Suwajanakorn - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.11558 - **Pdf link:** https://arxiv.org/pdf/2301.11558 - **Abstract** Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. One drawback of diffusion models, however, is their slow sampling process. 
Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. On the contrary, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution. ## Keyword: ISP ### RMSim: Controlled Respiratory Motion Simulation on Static Patient Scans - **Authors:** Donghoon Lee, Ellen Yorke, Masoud Zarepisheh, Saad Nadeem, Yu-Chi Hu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.11422 - **Pdf link:** https://arxiv.org/pdf/2301.11422 - **Abstract** This work aims to generate realistic anatomical deformations from static patient scans. Specifically, we present a method to generate these deformations/augmentations via deep learning driven respiratory motion simulation that provides the ground truth for validating deformable image registration (DIR) algorithms and driving more accurate deep learning based DIR. We present a novel 3D Seq2Seq deep learning respiratory motion simulator (RMSim) that learns from 4D-CT images and predicts future breathing phases given a static CT image. 
The predicted respiratory patterns, represented by time-varying displacement vector fields (DVFs) at different breathing phases, are modulated through auxiliary inputs of 1D breathing traces so that a larger amplitude in the trace results in more significant predicted deformation. Stacked 3D-ConvLSTMs are used to capture the spatial-temporal respiration patterns. Training loss includes a smoothness loss in the DVF and mean-squared error between the predicted and ground truth phase images. A spatial transformer deforms the static CT with the predicted DVF to generate the predicted phase image. 10-phase 4D-CTs of 140 internal patients were used to train and test RMSim. The trained RMSim was then used to augment a public DIR challenge dataset for training VoxelMorph to show the effectiveness of RMSim-generated deformation augmentation. We validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients). The proposed approach can be used for validating DIR algorithms as well as for patient-specific augmentations to improve deep learning DIR algorithms. The code, pretrained models, and augmented DIR validation datasets will be released at https://github.com/nadeemlab/SeqX2Y. ### Boundary Aware U-Net for Glacier Segmentation - **Authors:** Bibek Aryal, Katie E. Miles, Sergio A. Vargas Zesati, Olac Fuentes - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2301.11454 - **Pdf link:** https://arxiv.org/pdf/2301.11454 - **Abstract** Large-scale study of glaciers improves our understanding of global glacier change and is imperative for monitoring the ecological environment, preventing disasters, and studying the effects of global climate change. Glaciers in the Hindu Kush Himalaya (HKH) are particularly interesting as the HKH is one of the world's most sensitive regions for climate change. 
In this work, we: (1) propose a modified version of the U-Net for large-scale, spatially non-overlapping, clean glacial ice, and debris-covered glacial ice segmentation; (2) introduce a novel self-learning boundary-aware loss to improve debris-covered glacial ice segmentation performance; and (3) propose a feature-wise saliency score to understand the contribution of each feature in the multispectral Landsat 7 imagery for glacier mapping. Our results show that the debris-covered glacial ice segmentation model trained using self-learning boundary-aware loss outperformed the model trained using dice loss. Furthermore, we conclude that red, shortwave infrared, and near-infrared bands have the highest contribution toward debris-covered glacial ice segmentation from Landsat 7 images. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Multimodal Event Transformer for Image-guided Story Ending Generation - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11357 - **Pdf link:** https://arxiv.org/pdf/2301.11357 - **Abstract** Image-guided story ending generation (IgSEG) is to generate a story ending based on given story plots and ending image. Existing methods focus on cross-modal feature fusion but overlook reasoning and mining implicit information from story plots and ending image. To tackle this drawback, we propose a multimodal event transformer, an event-based reasoning framework for IgSEG. Specifically, we construct visual and semantic event graphs from story plots and ending image, and leverage event-based reasoning to reason and mine implicit information in a single modality. Next, we connect visual and semantic event graphs and utilize cross-modal fusion to integrate different-modality features. 
In addition, we propose a multimodal injector to adaptive pass essential information to decoder. Besides, we present an incoherence detection to enhance the understanding context of a story plot and the robustness of graph modeling for our model. Experimental results show that our method achieves state-of-the-art performance for the image-guided story ending generation. ### Style-Aware Contrastive Learning for Multi-Style Image Captioning - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11367 - **Pdf link:** https://arxiv.org/pdf/2301.11367 - **Abstract** Existing multi-style image captioning methods show promising results in generating a caption with accurate visual content and desired linguistic style. However, existing methods overlook the relationship between linguistic style and visual content. To overcome this drawback, we propose style-aware contrastive learning for multi-style image captioning. First, we present a style-aware visual encoder with contrastive learning to mine potential visual content relevant to style. Moreover, we propose a style-aware triplet contrast objective to distinguish whether the image, style and caption matched. To provide positive and negative samples for contrastive learning, we present three retrieval schemes: object-based retrieval, RoI-based retrieval and triplet-based retrieval, and design a dynamic trade-off function to calculate retrieval scores. Experimental results demonstrate that our approach achieves state-of-the-art performance. In addition, we conduct an extensive analysis to verify the effectiveness of our method. 
### 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models - **Authors:** Biao Zhang, Jiapeng Tang, Matthias Niessner, Peter Wonka - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2301.11445 - **Pdf link:** https://arxiv.org/pdf/2301.11445 - **Abstract** We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models. Our shape representation can encode 3D shapes given as surface models or point clouds, and represents them as neural fields. The concept of neural fields has previously been combined with a global latent vector, a regular grid of latent vectors, or an irregular grid of latent vectors. Our new representation encodes neural fields on top of a set of vectors. We draw from multiple concepts, such as the radial basis function representation and the cross attention and self-attention function, to design a learnable representation that is especially suitable for processing with transformers. Our results show improved performance in 3D shape encoding and 3D shape generative modeling tasks. We demonstrate a wide variety of generative applications: unconditioned generation, category-conditioned generation, text-conditioned generation, point-cloud completion, and image-conditioned generation. ### Accelerating Guided Diffusion Sampling with Splitting Numerical Methods - **Authors:** Suttisak Wizadwongsa, Supasorn Suwajanakorn - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.11558 - **Pdf link:** https://arxiv.org/pdf/2301.11558 - **Abstract** Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. One drawback of diffusion models, however, is their slow sampling process. 
Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. On the contrary, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution. ## Keyword: raw image There is no result
2.0
New submissions for Mon, 30 Jan 23 - ## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Multimodal Event Transformer for Image-guided Story Ending Generation - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11357 - **Pdf link:** https://arxiv.org/pdf/2301.11357 - **Abstract** Image-guided story ending generation (IgSEG) is to generate a story ending based on given story plots and ending image. Existing methods focus on cross-modal feature fusion but overlook reasoning and mining implicit information from story plots and ending image. To tackle this drawback, we propose a multimodal event transformer, an event-based reasoning framework for IgSEG. Specifically, we construct visual and semantic event graphs from story plots and ending image, and leverage event-based reasoning to reason and mine implicit information in a single modality. Next, we connect visual and semantic event graphs and utilize cross-modal fusion to integrate different-modality features. In addition, we propose a multimodal injector to adaptive pass essential information to decoder. Besides, we present an incoherence detection to enhance the understanding context of a story plot and the robustness of graph modeling for our model. Experimental results show that our method achieves state-of-the-art performance for the image-guided story ending generation. 
### Style-Aware Contrastive Learning for Multi-Style Image Captioning - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11367 - **Pdf link:** https://arxiv.org/pdf/2301.11367 - **Abstract** Existing multi-style image captioning methods show promising results in generating a caption with accurate visual content and desired linguistic style. However, existing methods overlook the relationship between linguistic style and visual content. To overcome this drawback, we propose style-aware contrastive learning for multi-style image captioning. First, we present a style-aware visual encoder with contrastive learning to mine potential visual content relevant to style. Moreover, we propose a style-aware triplet contrast objective to distinguish whether the image, style and caption matched. To provide positive and negative samples for contrastive learning, we present three retrieval schemes: object-based retrieval, RoI-based retrieval and triplet-based retrieval, and design a dynamic trade-off function to calculate retrieval scores. Experimental results demonstrate that our approach achieves state-of-the-art performance. In addition, we conduct an extensive analysis to verify the effectiveness of our method. ### Accelerating Guided Diffusion Sampling with Splitting Numerical Methods - **Authors:** Suttisak Wizadwongsa, Supasorn Suwajanakorn - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.11558 - **Pdf link:** https://arxiv.org/pdf/2301.11558 - **Abstract** Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. One drawback of diffusion models, however, is their slow sampling process. 
Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. On the contrary, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution. ## Keyword: ISP ### RMSim: Controlled Respiratory Motion Simulation on Static Patient Scans - **Authors:** Donghoon Lee, Ellen Yorke, Masoud Zarepisheh, Saad Nadeem, Yu-Chi Hu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.11422 - **Pdf link:** https://arxiv.org/pdf/2301.11422 - **Abstract** This work aims to generate realistic anatomical deformations from static patient scans. Specifically, we present a method to generate these deformations/augmentations via deep learning driven respiratory motion simulation that provides the ground truth for validating deformable image registration (DIR) algorithms and driving more accurate deep learning based DIR. We present a novel 3D Seq2Seq deep learning respiratory motion simulator (RMSim) that learns from 4D-CT images and predicts future breathing phases given a static CT image. 
The predicted respiratory patterns, represented by time-varying displacement vector fields (DVFs) at different breathing phases, are modulated through auxiliary inputs of 1D breathing traces so that a larger amplitude in the trace results in more significant predicted deformation. Stacked 3D-ConvLSTMs are used to capture the spatial-temporal respiration patterns. Training loss includes a smoothness loss in the DVF and mean-squared error between the predicted and ground truth phase images. A spatial transformer deforms the static CT with the predicted DVF to generate the predicted phase image. 10-phase 4D-CTs of 140 internal patients were used to train and test RMSim. The trained RMSim was then used to augment a public DIR challenge dataset for training VoxelMorph to show the effectiveness of RMSim-generated deformation augmentation. We validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients). The proposed approach can be used for validating DIR algorithms as well as for patient-specific augmentations to improve deep learning DIR algorithms. The code, pretrained models, and augmented DIR validation datasets will be released at https://github.com/nadeemlab/SeqX2Y. ### Boundary Aware U-Net for Glacier Segmentation - **Authors:** Bibek Aryal, Katie E. Miles, Sergio A. Vargas Zesati, Olac Fuentes - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2301.11454 - **Pdf link:** https://arxiv.org/pdf/2301.11454 - **Abstract** Large-scale study of glaciers improves our understanding of global glacier change and is imperative for monitoring the ecological environment, preventing disasters, and studying the effects of global climate change. Glaciers in the Hindu Kush Himalaya (HKH) are particularly interesting as the HKH is one of the world's most sensitive regions for climate change. 
In this work, we: (1) propose a modified version of the U-Net for large-scale, spatially non-overlapping, clean glacial ice, and debris-covered glacial ice segmentation; (2) introduce a novel self-learning boundary-aware loss to improve debris-covered glacial ice segmentation performance; and (3) propose a feature-wise saliency score to understand the contribution of each feature in the multispectral Landsat 7 imagery for glacier mapping. Our results show that the debris-covered glacial ice segmentation model trained using self-learning boundary-aware loss outperformed the model trained using dice loss. Furthermore, we conclude that red, shortwave infrared, and near-infrared bands have the highest contribution toward debris-covered glacial ice segmentation from Landsat 7 images. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Multimodal Event Transformer for Image-guided Story Ending Generation - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11357 - **Pdf link:** https://arxiv.org/pdf/2301.11357 - **Abstract** Image-guided story ending generation (IgSEG) is to generate a story ending based on given story plots and ending image. Existing methods focus on cross-modal feature fusion but overlook reasoning and mining implicit information from story plots and ending image. To tackle this drawback, we propose a multimodal event transformer, an event-based reasoning framework for IgSEG. Specifically, we construct visual and semantic event graphs from story plots and ending image, and leverage event-based reasoning to reason and mine implicit information in a single modality. Next, we connect visual and semantic event graphs and utilize cross-modal fusion to integrate different-modality features. 
In addition, we propose a multimodal injector to adaptive pass essential information to decoder. Besides, we present an incoherence detection to enhance the understanding context of a story plot and the robustness of graph modeling for our model. Experimental results show that our method achieves state-of-the-art performance for the image-guided story ending generation. ### Style-Aware Contrastive Learning for Multi-Style Image Captioning - **Authors:** Yucheng Zhou, Guodong Long - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2301.11367 - **Pdf link:** https://arxiv.org/pdf/2301.11367 - **Abstract** Existing multi-style image captioning methods show promising results in generating a caption with accurate visual content and desired linguistic style. However, existing methods overlook the relationship between linguistic style and visual content. To overcome this drawback, we propose style-aware contrastive learning for multi-style image captioning. First, we present a style-aware visual encoder with contrastive learning to mine potential visual content relevant to style. Moreover, we propose a style-aware triplet contrast objective to distinguish whether the image, style and caption matched. To provide positive and negative samples for contrastive learning, we present three retrieval schemes: object-based retrieval, RoI-based retrieval and triplet-based retrieval, and design a dynamic trade-off function to calculate retrieval scores. Experimental results demonstrate that our approach achieves state-of-the-art performance. In addition, we conduct an extensive analysis to verify the effectiveness of our method. 
### 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models - **Authors:** Biao Zhang, Jiapeng Tang, Matthias Niessner, Peter Wonka - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2301.11445 - **Pdf link:** https://arxiv.org/pdf/2301.11445 - **Abstract** We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models. Our shape representation can encode 3D shapes given as surface models or point clouds, and represents them as neural fields. The concept of neural fields has previously been combined with a global latent vector, a regular grid of latent vectors, or an irregular grid of latent vectors. Our new representation encodes neural fields on top of a set of vectors. We draw from multiple concepts, such as the radial basis function representation and the cross attention and self-attention function, to design a learnable representation that is especially suitable for processing with transformers. Our results show improved performance in 3D shape encoding and 3D shape generative modeling tasks. We demonstrate a wide variety of generative applications: unconditioned generation, category-conditioned generation, text-conditioned generation, point-cloud completion, and image-conditioned generation. ### Accelerating Guided Diffusion Sampling with Splitting Numerical Methods - **Authors:** Suttisak Wizadwongsa, Supasorn Suwajanakorn - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.11558 - **Pdf link:** https://arxiv.org/pdf/2301.11558 - **Abstract** Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. One drawback of diffusion models, however, is their slow sampling process. 
Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. On the contrary, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution. ## Keyword: raw image There is no result
process
new submissions for mon jan keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb multimodal event transformer for image guided story ending generation authors yucheng zhou guodong long subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract image guided story ending generation igseg is to generate a story ending based on given story plots and ending image existing methods focus on cross modal feature fusion but overlook reasoning and mining implicit information from story plots and ending image to tackle this drawback we propose a multimodal event transformer an event based reasoning framework for igseg specifically we construct visual and semantic event graphs from story plots and ending image and leverage event based reasoning to reason and mine implicit information in a single modality next we connect visual and semantic event graphs and utilize cross modal fusion to integrate different modality features in addition we propose a multimodal injector to adaptive pass essential information to decoder besides we present an incoherence detection to enhance the understanding context of a story plot and the robustness of graph modeling for our model experimental results show that our method achieves state of the art performance for the image guided story ending generation style aware contrastive learning for multi style image captioning authors yucheng zhou guodong long subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract existing multi style image captioning methods show promising results in generating a caption with accurate visual content and desired linguistic style however existing methods overlook the relationship between linguistic style and visual content to overcome this drawback we 
propose style aware contrastive learning for multi style image captioning first we present a style aware visual encoder with contrastive learning to mine potential visual content relevant to style moreover we propose a style aware triplet contrast objective to distinguish whether the image style and caption matched to provide positive and negative samples for contrastive learning we present three retrieval schemes object based retrieval roi based retrieval and triplet based retrieval and design a dynamic trade off function to calculate retrieval scores experimental results demonstrate that our approach achieves state of the art performance in addition we conduct an extensive analysis to verify the effectiveness of our method accelerating guided diffusion sampling with splitting numerical methods authors suttisak wizadwongsa supasorn suwajanakorn subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task one drawback of diffusion models however is their slow sampling process recent techniques can accelerate unguided sampling by applying high order numerical methods to the sampling process when viewed as differential equations on the contrary we discover that the same techniques do not work for guided sampling and little has been explored about its acceleration this paper explores the culprit of this problem and provides a solution based on operator splitting methods motivated by our key finding that classical high order numerical methods are unsuitable for the conditional function our proposed method can re utilize the high order methods for guided sampling and can generate images with the same quality as a step ddim baseline using less sampling time on we also demonstrate usage on a wide variety of conditional generation tasks such as text to image generation colorization inpainting and 
super resolution keyword isp rmsim controlled respiratory motion simulation on static patient scans authors donghoon lee ellen yorke masoud zarepisheh saad nadeem yu chi hu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this work aims to generate realistic anatomical deformations from static patient scans specifically we present a method to generate these deformations augmentations via deep learning driven respiratory motion simulation that provides the ground truth for validating deformable image registration dir algorithms and driving more accurate deep learning based dir we present a novel deep learning respiratory motion simulator rmsim that learns from ct images and predicts future breathing phases given a static ct image the predicted respiratory patterns represented by time varying displacement vector fields dvfs at different breathing phases are modulated through auxiliary inputs of breathing traces so that a larger amplitude in the trace results in more significant predicted deformation stacked convlstms are used to capture the spatial temporal respiration patterns training loss includes a smoothness loss in the dvf and mean squared error between the predicted and ground truth phase images a spatial transformer deforms the static ct with the predicted dvf to generate the predicted phase image phase cts of internal patients were used to train and test rmsim the trained rmsim was then used to augment a public dir challenge dataset for training voxelmorph to show the effectiveness of rmsim generated deformation augmentation we validated our rmsim output with both private and public benchmark datasets healthy and cancer patients the proposed approach can be used for validating dir algorithms as well as for patient specific augmentations to improve deep learning dir algorithms the code pretrained models and augmented dir validation datasets will be released at boundary aware u net for glacier segmentation authors bibek aryal 
katie e miles sergio a vargas zesati olac fuentes subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract large scale study of glaciers improves our understanding of global glacier change and is imperative for monitoring the ecological environment preventing disasters and studying the effects of global climate change glaciers in the hindu kush himalaya hkh are particularly interesting as the hkh is one of the world s most sensitive regions for climate change in this work we propose a modified version of the u net for large scale spatially non overlapping clean glacial ice and debris covered glacial ice segmentation introduce a novel self learning boundary aware loss to improve debris covered glacial ice segmentation performance and propose a feature wise saliency score to understand the contribution of each feature in the multispectral landsat imagery for glacier mapping our results show that the debris covered glacial ice segmentation model trained using self learning boundary aware loss outperformed the model trained using dice loss furthermore we conclude that red shortwave infrared and near infrared bands have the highest contribution toward debris covered glacial ice segmentation from landsat images keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw multimodal event transformer for image guided story ending generation authors yucheng zhou guodong long subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract image guided story ending generation igseg is to generate a story ending based on given story plots and ending image existing methods focus on cross modal feature fusion but overlook reasoning and mining implicit information from story plots and ending image to tackle this drawback we propose a multimodal event transformer an event based 
reasoning framework for igseg specifically we construct visual and semantic event graphs from story plots and ending image and leverage event based reasoning to reason and mine implicit information in a single modality next we connect visual and semantic event graphs and utilize cross modal fusion to integrate different modality features in addition we propose a multimodal injector to adaptive pass essential information to decoder besides we present an incoherence detection to enhance the understanding context of a story plot and the robustness of graph modeling for our model experimental results show that our method achieves state of the art performance for the image guided story ending generation style aware contrastive learning for multi style image captioning authors yucheng zhou guodong long subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract existing multi style image captioning methods show promising results in generating a caption with accurate visual content and desired linguistic style however existing methods overlook the relationship between linguistic style and visual content to overcome this drawback we propose style aware contrastive learning for multi style image captioning first we present a style aware visual encoder with contrastive learning to mine potential visual content relevant to style moreover we propose a style aware triplet contrast objective to distinguish whether the image style and caption matched to provide positive and negative samples for contrastive learning we present three retrieval schemes object based retrieval roi based retrieval and triplet based retrieval and design a dynamic trade off function to calculate retrieval scores experimental results demonstrate that our approach achieves state of the art performance in addition we conduct an extensive analysis to verify the effectiveness of our method a shape representation for neural fields and generative diffusion 
models authors biao zhang jiapeng tang matthias niessner peter wonka subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract we introduce a novel shape representation for neural fields designed for generative diffusion models our shape representation can encode shapes given as surface models or point clouds and represents them as neural fields the concept of neural fields has previously been combined with a global latent vector a regular grid of latent vectors or an irregular grid of latent vectors our new representation encodes neural fields on top of a set of vectors we draw from multiple concepts such as the radial basis function representation and the cross attention and self attention function to design a learnable representation that is especially suitable for processing with transformers our results show improved performance in shape encoding and shape generative modeling tasks we demonstrate a wide variety of generative applications unconditioned generation category conditioned generation text conditioned generation point cloud completion and image conditioned generation accelerating guided diffusion sampling with splitting numerical methods authors suttisak wizadwongsa supasorn suwajanakorn subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task one drawback of diffusion models however is their slow sampling process recent techniques can accelerate unguided sampling by applying high order numerical methods to the sampling process when viewed as differential equations on the contrary we discover that the same techniques do not work for guided sampling and little has been explored about its acceleration this paper explores the culprit of this problem and provides a solution based on operator splitting methods motivated by our key finding 
that classical high order numerical methods are unsuitable for the conditional function our proposed method can re utilize the high order methods for guided sampling and can generate images with the same quality as a step ddim baseline using less sampling time on we also demonstrate usage on a wide variety of conditional generation tasks such as text to image generation colorization inpainting and super resolution keyword raw image there is no result
1
10,099
13,044,162,096
IssuesEvent
2020-07-29 03:47:29
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `AddStringAndDuration` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `AddStringAndDuration` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
2.0
UCP: Migrate scalar function `AddStringAndDuration` from TiDB - ## Description Port the scalar function `AddStringAndDuration` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @breeswish ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
process
ucp migrate scalar function addstringandduration from tidb description port the scalar function addstringandduration from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
1
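The issue above asks for a Rust port inside TiKV's coprocessor, but the behavior the port must reproduce is MySQL `ADDTIME`-style addition of duration strings. The sketch below (Python, purely illustrative; none of these helper names exist in TiKV) shows the core arithmetic the expression computes:

```python
# Illustrative sketch of the arithmetic `AddStringAndDuration` must reproduce
# (ADDTIME-style addition of "HH:MM:SS" duration strings). This is NOT the
# TiKV code; the real port is written in Rust against the RPN expression
# framework linked in the issue.

def parse_duration_secs(s: str) -> int:
    """Parse a (possibly negative) 'HH:MM:SS' duration into seconds."""
    sign = -1 if s.startswith("-") else 1
    h, m, sec = (int(p) for p in s.lstrip("-").split(":"))
    return sign * (h * 3600 + m * 60 + sec)

def format_duration(total: int) -> str:
    """Format a signed number of seconds back into 'HH:MM:SS'."""
    sign = "-" if total < 0 else ""
    total = abs(total)
    return f"{sign}{total // 3600:02d}:{(total % 3600) // 60:02d}:{total % 60:02d}"

def add_string_and_duration(lhs: str, rhs: str) -> str:
    """Add two duration strings, e.g. '10:00:00' + '01:30:00' -> '11:30:00'."""
    return format_duration(parse_duration_secs(lhs) + parse_duration_secs(rhs))
```

The real implementation additionally has to handle fractional seconds, NULL propagation, and TiDB's duration type limits, which this sketch omits.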
225,618
17,272,038,078
IssuesEvent
2021-07-22 21:17:18
jonrau1/ElectricEye
https://api.github.com/repos/jonrau1/ElectricEye
opened
Parameter Validation Failed while writing output to Security Hub
bug documentation
**Describe the bug** When the scan is completed the issues are pushed to Security Hub but recently there are errors thrown around parameter validation failure, which leads to none of the issues being sent to Security Hub. **To Reproduce** Steps to reproduce the behavior: 1. Set up ECS task (Fargate) or run via Cloudshell - python3 eeauditor/controller.py 2. When the scan finishes the writing of results to Security Hub initiates 3. Error is thrown with parameter validation failed **Expected behavior** Successful scan results pushed to Security Hub **Logs** Writing 376 results to SecurityHub Error writing output: Parameter validation failed: Unknown parameter in Findings[9].Resources[0].Details.AwsEc2Instance: "**AmiAge**", must be one of: Type, ImageId, IpV4Addresses, IpV6Addresses, KeyName, IamInstanceProfileArn, VpcId, SubnetId, LaunchedAt, NetworkInterfaces Unknown parameter in Findings[10].Resources[0].Details.AwsEc2Instance: "**AmiStatus**", must be one of: Type, ImageId, IpV4Addresses, IpV6Addresses, KeyName, IamInstanceProfileArn, VpcId, SubnetId, LaunchedAt, NetworkInterfaces Done. **Additional context** For testing purposes I removed Amazon_EC2_Auditor but again the error is thrown for the **"Other"** parameter (must be one of: Type, Id, Partition, Region, ResourceRole, Tags, DataClassification, Details) which is present in almost all auditors. The scan outputs are successful for csv and json but the error occurs only for Security Hub
1.0
Parameter Validation Failed while writing output to Security Hub - **Describe the bug** When the scan is completed the issues are pushed to Security Hub but recently there are errors thrown around parameter validation failure, which leads to none of the issues being sent to Security Hub. **To Reproduce** Steps to reproduce the behavior: 1. Set up ECS task (Fargate) or run via Cloudshell - python3 eeauditor/controller.py 2. When the scan finishes the writing of results to Security Hub initiates 3. Error is thrown with parameter validation failed **Expected behavior** Successful scan results pushed to Security Hub **Logs** Writing 376 results to SecurityHub Error writing output: Parameter validation failed: Unknown parameter in Findings[9].Resources[0].Details.AwsEc2Instance: "**AmiAge**", must be one of: Type, ImageId, IpV4Addresses, IpV6Addresses, KeyName, IamInstanceProfileArn, VpcId, SubnetId, LaunchedAt, NetworkInterfaces Unknown parameter in Findings[10].Resources[0].Details.AwsEc2Instance: "**AmiStatus**", must be one of: Type, ImageId, IpV4Addresses, IpV6Addresses, KeyName, IamInstanceProfileArn, VpcId, SubnetId, LaunchedAt, NetworkInterfaces Done. **Additional context** For testing purposes I removed Amazon_EC2_Auditor but again the error is thrown for the **"Other"** parameter (must be one of: Type, Id, Partition, Region, ResourceRole, Tags, DataClassification, Details) which is present in almost all auditors. The scan outputs are successful for csv and json but the error occurs only for Security Hub
non_process
parameter validation failed while writing output to security hub describe the bug when the scan is completed the issues are pushed to security hub but recently there are errors thrown around parameter validation failure which leads to none of the issues sent to security hub to reproduce steps to reproduce the behavior setup ecs task fargate or run via cloudshell eeauditor controller py when the scan finishes the writing of results to security hub initiates error is thrown with parameter validation failed expected behavior successful scan results pushed to security hub logs writing results to securityhub error writing output parameter validation failed unknown parameter in findings resources details amiage must be one of type imageid keyname iaminstanceprofilearn vpcid subnetid launchedat networkinterfaces unknown parameter in findings resources details amistatus must be one of type imageid keyname iaminstanceprofilearn vpcid subnetid launchedat networkinterfaces done additional context for testing purpose i removed amazon auditor but again error is thrown for other parameter must be one of type id partition region resourcerole tags dataclassification details which is present in almost all auditors the scan outputs are successful for csv and json but error occurs only for security hub
0
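The failure reported above is Boto3's client-side parameter validation rejecting keys (`AmiAge`, `AmiStatus`, `Other`) that the Security Hub finding model does not define. A defensive workaround, sketched here in Python (the allowed-key set is copied from the error message in the issue; this is not ElectricEye's actual fix, which would be to stop emitting the unsupported keys upstream), is to strip unknown keys before calling `batch_import_findings`:

```python
# Sketch: drop keys that the Security Hub AwsEc2Instance detail model does not
# accept, so boto3's client-side parameter validation no longer rejects the
# finding. The allowed-key set is taken verbatim from the error in the issue.

ALLOWED_AWS_EC2_INSTANCE_KEYS = {
    "Type", "ImageId", "IpV4Addresses", "IpV6Addresses", "KeyName",
    "IamInstanceProfileArn", "VpcId", "SubnetId", "LaunchedAt",
    "NetworkInterfaces",
}

def sanitize_ec2_details(details: dict) -> dict:
    """Return a copy of AwsEc2Instance details with unknown keys removed."""
    return {k: v for k, v in details.items() if k in ALLOWED_AWS_EC2_INSTANCE_KEYS}
```

Applied to each finding's `Resources[0]["Details"]["AwsEc2Instance"]` before the Security Hub call, this silences the validation error at the cost of discarding the extra fields.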
94,467
10,824,977,550
IssuesEvent
2019-11-09 13:02:20
PokeNavBot/issue-tracker
https://api.github.com/repos/PokeNavBot/issue-tracker
closed
Have help text from commands link in subtext to the documentation website
bot documentation enhancement
Request: Add a link to [The documentation website](http://docs.pokenavbot.com) to the subtext of every help command response, and command syntax error response. Reason: [The documentation website](http://docs.pokenavbot.com) is a super valuable resource for users who are confused and for those who may not be familiar with commands in general, and can be easier to understand than the `$help` commands for some users.
1.0
Have help text from commands link in subtext to the documentation website - Request: Add a link to [The documentation website](http://docs.pokenavbot.com) to the subtext of every help command response, and command syntax error response. Reason: [The documentation website](http://docs.pokenavbot.com) is a super valuable resource for users who are confused and for those who may not be familiar with commands in general, and can be easier to understand than the `$help` commands for some users.
non_process
have help text from commands link in subtext to the documentation website request add a link to to the subtext of every help command response and command syntax error response reason is a super valuable resource for users who are confused and for those who may not be familiar with commands in general can be easier to understand than the help commands for some users
0
69,289
17,618,338,531
IssuesEvent
2021-08-18 12:37:56
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
warnings (please do not import '@grpc//third_party/nanopb:pb_common.c' directly ; depends on deprecated target ; ...) during build from source
stat:awaiting response type:build/install subtype: ubuntu/linux
<em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.2 LTS - Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A - TensorFlow installed from (source or binary): source - TensorFlow version: r1.12 - Python version: Python 3.6.8 :: Anaconda, Inc. - Installed using virtualenv? pip? conda?: conda - Bazel version (if compiling from source): 0.19.2 - GCC/Compiler version (if compiling from source): 6.5.0 - CUDA/cuDNN version: 9.0 / 7.5.0 - GPU model and memory: GeForce GT 650M / 2 GB **Describe the problem** I try to compile TF from source to get compute capability 3.0 support. Therefore I followed more or less the guide on [this site](https://medium.com/@mccann.matt/compiling-tensorflow-with-cuda-3-0-support-42d8fe0bf3b5). But I get warnings and the compilation failed. **Provide the exact sequence of commands / steps that you executed before running into the problem** ./configure bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package (see details below) **Any other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Not sure if important but I added `import /home/jonathan/tensorflow/tools/bazel.rc` on top line of(hide file) "/home/jonathan/tensorflow/.bazelrc " as stated [here](https://github.com/tensorflow/tensorflow/issues/23401#issuecomment-435827786). ``` $ ./configure WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown". 
You have bazel 0.19.2 installed. Please specify the location of python. [Default is /home/jonathan/anaconda2/envs/tf_cu90/bin/python]: Found possible Python library paths: /home/jonathan/anaconda2/envs/tf_cu90/lib/python3.6/site-packages Please input the desired Python library path to use. Default is [/home/jonathan/anaconda2/envs/tf_cu90/lib/python3.6/site-packages] Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: Apache Ignite support will be enabled for TensorFlow. Do you wish to build TensorFlow with XLA JIT support? [Y/n]: XLA JIT support will be enabled for TensorFlow. Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: No OpenCL SYCL support will be enabled for TensorFlow. Do you wish to build TensorFlow with ROCm support? [y/N]: No ROCm support will be enabled for TensorFlow. Do you wish to build TensorFlow with CUDA support? [y/N]: y CUDA support will be enabled for TensorFlow. Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-9.0 Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.0]: /usr/lib/x86_64-linux-gnu Do you wish to build TensorFlow with TensorRT support? [y/N]: No TensorRT support will be enabled for TensorFlow. Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3 Please specify a list of comma-separated Cuda compute capabilities you want to build with. You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. 
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.0]: Do you want to use clang as CUDA compiler? [y/N]: nvcc will be used as CUDA compiler. Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: /usr/bin/gcc-6 Do you wish to build TensorFlow with MPI support? [y/N]: No MPI support will be enabled for TensorFlow. Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: Not configuring the WORKSPACE for Android builds. Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details. --config=mkl # Build with MKL support. --config=monolithic # Config for mostly static monolithic build. --config=gdr # Build with GDR support. --config=verbs # Build with libverbs support. --config=ngraph # Build with Intel nGraph support. Configuration finished ``` ``` $ bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package Loading: Loading: 0 packages loaded Analyzing: target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured) WARNING: /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_common.c' directly. You should either move the file to this package or depend on an appropriate rule there. 
Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/bazel/grpc_build_system.bzl:172:12 WARNING: /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_decode.c' directly. You should either move the file to this package or depend on an appropriate rule there. Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/bazel/grpc_build_system.bzl:172:12 WARNING: /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_encode.c' directly. You should either move the file to this package or depend on an appropriate rule there. Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/bazel/grpc_build_system.bzl:172:12 WARNING: /home/jonathan/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately. 
WARNING: /home/jonathan/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/timeseries/python/timeseries/BUILD:354:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries:ar_model: target '//tensorflow/contrib/timeseries/python/timeseries:ar_model' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/timeseries/python/timeseries/state_space_models/BUILD:73:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries/state_space_models:kalman_filter: target '//tensorflow/contrib/timeseries/python/timeseries/state_space_models:kalman_filter' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. 
WARNING: /home/jonathan/tensorflow/tensorflow/contrib/timeseries/python/timeseries/state_space_models/BUILD:230:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries/state_space_models:filtering_postprocessor: target '//tensorflow/contrib/timeseries/python/timeseries/state_space_models:filtering_postprocessor' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/bayesflow/BUILD:17:1: in py_library rule //tensorflow/contrib/bayesflow:bayesflow_py: target '//tensorflow/contrib/bayesflow:bayesflow_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/seq2seq/BUILD:23:1: in py_library rule //tensorflow/contrib/seq2seq:seq2seq_py: target '//tensorflow/contrib/seq2seq:seq2seq_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. 
WARNING: /home/jonathan/tensorflow/tensorflow/contrib/BUILD:13:1: in py_library rule //tensorflow/contrib:contrib_py: target '//tensorflow/contrib:contrib_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. INFO: Analysed target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured). INFO: Found 1 target... [0 / 4] [-----] ProtoCompile tensorflow/core/example/example_pb2.py [5 / 21] Compiling tensorflow/core/ops/nn_ops.cc [for host]; 2s local ... (8 actions running) [6 / 22] Compiling tensorflow/core/ops/nn_ops.cc [for host]; 6s local ... (7 actions running) [13 / 35] Compiling tensorflow/core/ops/nn_ops.cc [for host]; 9s local ... (8 actions running) [17 / 37] Compiling tensorflow/contrib/tensor_forest/hybrid/core/ops/routing_gradient_op.cc; 6s local ... (8 actions running) [24 / 46] Compiling tensorflow/contrib/tensor_forest/hybrid/core/ops/k_feature_gradient_op.cc [for host]; 6s local ... (8 actions running) [38 / 73] Compiling tensorflow/python/framework/python_op_gen_internal.cc [for host]; 7s local ... 
(8 actions, 7 running) INFO: From Compiling tensorflow/python/framework/python_op_gen_internal.cc [for host]: tensorflow/python/framework/python_op_gen_internal.cc: In member function 'virtual std::__cxx11::string tensorflow::python_op_gen_internal::GenPythonOp::Code()': tensorflow/python/framework/python_op_gen_internal.cc:542:44: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = op_def_.input_arg_size(); i < params_no_default.size(); ++i) { ~~^~~~~~~~~~~~~~~~~~~~~~~~~~ tensorflow/python/framework/python_op_gen_internal.cc:545:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < params_with_default.size(); ++i) { ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~ INFO: From Compiling tensorflow/python/framework/python_op_gen.cc [for host]: tensorflow/python/framework/python_op_gen.cc: In function 'std::__cxx11::string tensorflow::{anonymous}::VectorToTuple(const std::vector<std::__cxx11::basic_string<char> >&)': tensorflow/python/framework/python_op_gen.cc:65:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < l.size(); ++i) { ~~^~~~~~~~~~ tensorflow/python/framework/python_op_gen.cc: In function 'void tensorflow::{anonymous}::Unflatten(const string&, const std::vector<std::__cxx11::basic_string<char> >&, const string&, std::__cxx11::string*)': tensorflow/python/framework/python_op_gen.cc:77:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < output_sizes.size(); ++i) { ~~^~~~~~~~~~~~~~~~~~~~~ [...] 
ERROR: /home/jonathan/tensorflow/tensorflow/core/kernels/BUILD:2951:1: output 'tensorflow/core/kernels/_objs/cwise_op_gpu/cwise_op_gpu_bitwise_and.cu.pic.o' was not created INFO: From Compiling tensorflow/core/kernels/cwise_op_gpu_ceil.cu.cc [for host]: ./tensorflow/core/kernels/cwise_ops.h(190): warning: __host__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(190): warning: __device__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __host__ annotation on a defaulted function("scalar_right") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __device__ annotation on a defaulted function("scalar_right") is ignored ./tensorflow/core/kernels/cwise_ops.h(190): warning: __host__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(190): warning: __device__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __host__ annotation on a defaulted function("scalar_right") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __device__ annotation on a defaulted function("scalar_right") is ignored ERROR: /home/jonathan/tensorflow/tensorflow/core/kernels/BUILD:2951:1: output 'tensorflow/core/kernels/_objs/cwise_op_gpu/cwise_op_gpu_mul.cu.pic.o' was not created ERROR: /home/jonathan/tensorflow/tensorflow/core/kernels/BUILD:2951:1: not all outputs were created or valid Target //tensorflow/tools/pip_package:build_pip_package failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 10303.024s, Critical Path: 9226.48s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%] INFO: 1813 processes: 1813 local. 
FAILED: Build did NOT complete successfully FAILED: Build did NOT complete successfully ``` for full log-file see [log.txt](https://github.com/tensorflow/tensorflow/files/2913016/log.txt)
1.0
warnings (please do not import '@grpc//third_party/nanopb:pb_common.c' directly ; depends on deprecated target ; ...) during build from source - <em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.2 LTS - Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A - TensorFlow installed from (source or binary): source - TensorFlow version: r1.12 - Python version: Python 3.6.8 :: Anaconda, Inc. - Installed using virtualenv? pip? conda?: conda - Bazel version (if compiling from source): 0.19.2 - GCC/Compiler version (if compiling from source): 6.5.0 - CUDA/cuDNN version: 9.0 / 7.5.0 - GPU model and memory: GeForce GT 650M / 2 GB **Describe the problem** I try to compile TF from source to get compute capability 3.0 support. Therefore I followed more or less the guide on [this site](https://medium.com/@mccann.matt/compiling-tensorflow-with-cuda-3-0-support-42d8fe0bf3b5). But I get warnings and the compilation failed. **Provide the exact sequence of commands / steps that you executed before running into the problem** ./configure bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package (see details below) **Any other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Not sure if important but I added `import /home/jonathan/tensorflow/tools/bazel.rc` on top line of(hide file) "/home/jonathan/tensorflow/.bazelrc " as stated [here](https://github.com/tensorflow/tensorflow/issues/23401#issuecomment-435827786). 
``` $ ./configure WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown". You have bazel 0.19.2 installed. Please specify the location of python. [Default is /home/jonathan/anaconda2/envs/tf_cu90/bin/python]: Found possible Python library paths: /home/jonathan/anaconda2/envs/tf_cu90/lib/python3.6/site-packages Please input the desired Python library path to use. Default is [/home/jonathan/anaconda2/envs/tf_cu90/lib/python3.6/site-packages] Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: Apache Ignite support will be enabled for TensorFlow. Do you wish to build TensorFlow with XLA JIT support? [Y/n]: XLA JIT support will be enabled for TensorFlow. Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: No OpenCL SYCL support will be enabled for TensorFlow. Do you wish to build TensorFlow with ROCm support? [y/N]: No ROCm support will be enabled for TensorFlow. Do you wish to build TensorFlow with CUDA support? [y/N]: y CUDA support will be enabled for TensorFlow. Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-9.0 Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.0]: /usr/lib/x86_64-linux-gnu Do you wish to build TensorFlow with TensorRT support? [y/N]: No TensorRT support will be enabled for TensorFlow. Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 1.3 Please specify a list of comma-separated Cuda compute capabilities you want to build with. 
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.0]: Do you want to use clang as CUDA compiler? [y/N]: nvcc will be used as CUDA compiler. Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: /usr/bin/gcc-6 Do you wish to build TensorFlow with MPI support? [y/N]: No MPI support will be enabled for TensorFlow. Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: Not configuring the WORKSPACE for Android builds. Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details. --config=mkl # Build with MKL support. --config=monolithic # Config for mostly static monolithic build. --config=gdr # Build with GDR support. --config=verbs # Build with libverbs support. --config=ngraph # Build with Intel nGraph support. Configuration finished ``` ``` $ bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package Loading: Loading: 0 packages loaded Analyzing: target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured) WARNING: /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_common.c' directly. You should either move the file to this package or depend on an appropriate rule there. 
Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/bazel/grpc_build_system.bzl:172:12 WARNING: /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_decode.c' directly. You should either move the file to this package or depend on an appropriate rule there. Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/bazel/grpc_build_system.bzl:172:12 WARNING: /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_encode.c' directly. You should either move the file to this package or depend on an appropriate rule there. Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /home/jonathan/.cache/bazel/_bazel_jonathan/e7c09fc463511989ded3d56396c466d4/external/grpc/bazel/grpc_build_system.bzl:172:12 WARNING: /home/jonathan/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately. 
WARNING: /home/jonathan/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/timeseries/python/timeseries/BUILD:354:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries:ar_model: target '//tensorflow/contrib/timeseries/python/timeseries:ar_model' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/timeseries/python/timeseries/state_space_models/BUILD:73:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries/state_space_models:kalman_filter: target '//tensorflow/contrib/timeseries/python/timeseries/state_space_models:kalman_filter' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. 
WARNING: /home/jonathan/tensorflow/tensorflow/contrib/timeseries/python/timeseries/state_space_models/BUILD:230:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries/state_space_models:filtering_postprocessor: target '//tensorflow/contrib/timeseries/python/timeseries/state_space_models:filtering_postprocessor' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/bayesflow/BUILD:17:1: in py_library rule //tensorflow/contrib/bayesflow:bayesflow_py: target '//tensorflow/contrib/bayesflow:bayesflow_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. WARNING: /home/jonathan/tensorflow/tensorflow/contrib/seq2seq/BUILD:23:1: in py_library rule //tensorflow/contrib/seq2seq:seq2seq_py: target '//tensorflow/contrib/seq2seq:seq2seq_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. 
WARNING: /home/jonathan/tensorflow/tensorflow/contrib/BUILD:13:1: in py_library rule //tensorflow/contrib:contrib_py: target '//tensorflow/contrib:contrib_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of `tf.contrib.distributions` to `tfp.distributions`. INFO: Analysed target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured). INFO: Found 1 target... [0 / 4] [-----] ProtoCompile tensorflow/core/example/example_pb2.py [5 / 21] Compiling tensorflow/core/ops/nn_ops.cc [for host]; 2s local ... (8 actions running) [6 / 22] Compiling tensorflow/core/ops/nn_ops.cc [for host]; 6s local ... (7 actions running) [13 / 35] Compiling tensorflow/core/ops/nn_ops.cc [for host]; 9s local ... (8 actions running) [17 / 37] Compiling tensorflow/contrib/tensor_forest/hybrid/core/ops/routing_gradient_op.cc; 6s local ... (8 actions running) [24 / 46] Compiling tensorflow/contrib/tensor_forest/hybrid/core/ops/k_feature_gradient_op.cc [for host]; 6s local ... (8 actions running) [38 / 73] Compiling tensorflow/python/framework/python_op_gen_internal.cc [for host]; 7s local ... 
(8 actions, 7 running) INFO: From Compiling tensorflow/python/framework/python_op_gen_internal.cc [for host]: tensorflow/python/framework/python_op_gen_internal.cc: In member function 'virtual std::__cxx11::string tensorflow::python_op_gen_internal::GenPythonOp::Code()': tensorflow/python/framework/python_op_gen_internal.cc:542:44: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = op_def_.input_arg_size(); i < params_no_default.size(); ++i) { ~~^~~~~~~~~~~~~~~~~~~~~~~~~~ tensorflow/python/framework/python_op_gen_internal.cc:545:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < params_with_default.size(); ++i) { ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~ INFO: From Compiling tensorflow/python/framework/python_op_gen.cc [for host]: tensorflow/python/framework/python_op_gen.cc: In function 'std::__cxx11::string tensorflow::{anonymous}::VectorToTuple(const std::vector<std::__cxx11::basic_string<char> >&)': tensorflow/python/framework/python_op_gen.cc:65:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < l.size(); ++i) { ~~^~~~~~~~~~ tensorflow/python/framework/python_op_gen.cc: In function 'void tensorflow::{anonymous}::Unflatten(const string&, const std::vector<std::__cxx11::basic_string<char> >&, const string&, std::__cxx11::string*)': tensorflow/python/framework/python_op_gen.cc:77:21: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < output_sizes.size(); ++i) { ~~^~~~~~~~~~~~~~~~~~~~~ [...] 
ERROR: /home/jonathan/tensorflow/tensorflow/core/kernels/BUILD:2951:1: output 'tensorflow/core/kernels/_objs/cwise_op_gpu/cwise_op_gpu_bitwise_and.cu.pic.o' was not created INFO: From Compiling tensorflow/core/kernels/cwise_op_gpu_ceil.cu.cc [for host]: ./tensorflow/core/kernels/cwise_ops.h(190): warning: __host__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(190): warning: __device__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __host__ annotation on a defaulted function("scalar_right") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __device__ annotation on a defaulted function("scalar_right") is ignored ./tensorflow/core/kernels/cwise_ops.h(190): warning: __host__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(190): warning: __device__ annotation on a defaulted function("scalar_left") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __host__ annotation on a defaulted function("scalar_right") is ignored ./tensorflow/core/kernels/cwise_ops.h(220): warning: __device__ annotation on a defaulted function("scalar_right") is ignored ERROR: /home/jonathan/tensorflow/tensorflow/core/kernels/BUILD:2951:1: output 'tensorflow/core/kernels/_objs/cwise_op_gpu/cwise_op_gpu_mul.cu.pic.o' was not created ERROR: /home/jonathan/tensorflow/tensorflow/core/kernels/BUILD:2951:1: not all outputs were created or valid Target //tensorflow/tools/pip_package:build_pip_package failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 10303.024s, Critical Path: 9226.48s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%] INFO: 1813 processes: 1813 local. 
FAILED: Build did NOT complete successfully FAILED: Build did NOT complete successfully ``` for full log-file see [log.txt](https://github.com/tensorflow/tensorflow/files/2913016/log.txt)
non_process
warnings please do not import grpc third party nanopb pb common c directly depends on deprecated target during build from source please make sure that this is a build installation issue as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag build template system information os platform and distribution e g linux ubuntu ubuntu lts mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device n a tensorflow installed from source or binary source tensorflow version python version python anaconda inc installed using virtualenv pip conda conda bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version gpu model and memory geforce gt gb describe the problem i try to compile tf from source to get compute capability support therefore i followed more or less the guide on but i get warnings and the compilation failed provide the exact sequence of commands steps that you executed before running into the problem configure bazel build config opt cxxopt d glibcxx use abi tensorflow tools pip package build pip package see details below any other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached not sure if important but i added import home jonathan tensorflow tools bazel rc on top line of hide file home jonathan tensorflow bazelrc as stated configure warning batch mode is deprecated please instead explicitly shut down your bazel server using the command bazel shutdown you have bazel installed please specify the location of python found possible python library paths home jonathan envs tf lib site packages please input the desired python library path to use default is do you wish to build tensorflow with apache ignite support apache ignite support will be enabled for tensorflow do you wish to build tensorflow with xla 
jit support xla jit support will be enabled for tensorflow do you wish to build tensorflow with opencl sycl support no opencl sycl support will be enabled for tensorflow do you wish to build tensorflow with rocm support no rocm support will be enabled for tensorflow do you wish to build tensorflow with cuda support y cuda support will be enabled for tensorflow please specify the cuda sdk version you want to use please specify the location where cuda toolkit is installed refer to readme md for more details usr local cuda please specify the cudnn version you want to use please specify the location where cudnn library is installed refer to readme md for more details usr lib linux gnu do you wish to build tensorflow with tensorrt support no tensorrt support will be enabled for tensorflow please specify the nccl version you want to use if nccl is not installed then you can use version that can be fetched automatically but it may have worse performance with multiple gpus please specify a list of comma separated cuda compute capabilities you want to build with you can find the compute capability of your device at please note that each additional compute capability significantly increases your build time and binary size do you want to use clang as cuda compiler nvcc will be used as cuda compiler please specify which gcc should be used by nvcc as the host compiler usr bin gcc do you wish to build tensorflow with mpi support no mpi support will be enabled for tensorflow please specify optimization flags to use during compilation when bazel option config opt is specified would you like to interactively configure workspace for android builds not configuring the workspace for android builds preconfigured bazel build configs you can use any of the below by adding config to your build command see tools bazel rc for more details config mkl build with mkl support config monolithic config for mostly static monolithic build config gdr build with gdr support config verbs build with 
libverbs support config ngraph build with intel ngraph support configuration finished bazel build config opt cxxopt d glibcxx use abi tensorflow tools pip package build pip package loading loading packages loaded analyzing target tensorflow tools pip package build pip package packages loaded targets configured warning home jonathan cache bazel bazel jonathan external grpc build in srcs attribute of cc library rule grpc grpc nanopb please do not import grpc third party nanopb pb common c directly you should either move the file to this package or depend on an appropriate rule there since this rule was created by the macro grpc generate one off targets the error might have been caused by the macro implementation in home jonathan cache bazel bazel jonathan external grpc bazel grpc build system bzl warning home jonathan cache bazel bazel jonathan external grpc build in srcs attribute of cc library rule grpc grpc nanopb please do not import grpc third party nanopb pb decode c directly you should either move the file to this package or depend on an appropriate rule there since this rule was created by the macro grpc generate one off targets the error might have been caused by the macro implementation in home jonathan cache bazel bazel jonathan external grpc bazel grpc build system bzl warning home jonathan cache bazel bazel jonathan external grpc build in srcs attribute of cc library rule grpc grpc nanopb please do not import grpc third party nanopb pb encode c directly you should either move the file to this package or depend on an appropriate rule there since this rule was created by the macro grpc generate one off targets the error might have been caused by the macro implementation in home jonathan cache bazel bazel jonathan external grpc bazel grpc build system bzl warning home jonathan tensorflow tensorflow contrib learn build in py library rule tensorflow contrib learn learn target tensorflow contrib learn learn depends on deprecated target tensorflow contrib 
session bundle exporter no longer supported switch to savedmodel immediately warning home jonathan tensorflow tensorflow contrib learn build in py library rule tensorflow contrib learn learn target tensorflow contrib learn learn depends on deprecated target tensorflow contrib session bundle gc no longer supported switch to savedmodel immediately warning home jonathan tensorflow tensorflow contrib timeseries python timeseries build in py library rule tensorflow contrib timeseries python timeseries ar model target tensorflow contrib timeseries python timeseries ar model depends on deprecated target tensorflow contrib distributions distributions py tensorflow distributions has migrated to tensorflow probability deprecated copies remaining in tf contrib distributions are unmaintained unsupported and will be removed by late you should update all usage of tf contrib distributions to tfp distributions warning home jonathan tensorflow tensorflow contrib timeseries python timeseries state space models build in py library rule tensorflow contrib timeseries python timeseries state space models kalman filter target tensorflow contrib timeseries python timeseries state space models kalman filter depends on deprecated target tensorflow contrib distributions distributions py tensorflow distributions has migrated to tensorflow probability deprecated copies remaining in tf contrib distributions are unmaintained unsupported and will be removed by late you should update all usage of tf contrib distributions to tfp distributions warning home jonathan tensorflow tensorflow contrib timeseries python timeseries state space models build in py library rule tensorflow contrib timeseries python timeseries state space models filtering postprocessor target tensorflow contrib timeseries python timeseries state space models filtering postprocessor depends on deprecated target tensorflow contrib distributions distributions py tensorflow distributions has migrated to tensorflow probability 
deprecated copies remaining in tf contrib distributions are unmaintained unsupported and will be removed by late you should update all usage of tf contrib distributions to tfp distributions warning home jonathan tensorflow tensorflow contrib bayesflow build in py library rule tensorflow contrib bayesflow bayesflow py target tensorflow contrib bayesflow bayesflow py depends on deprecated target tensorflow contrib distributions distributions py tensorflow distributions has migrated to tensorflow probability deprecated copies remaining in tf contrib distributions are unmaintained unsupported and will be removed by late you should update all usage of tf contrib distributions to tfp distributions warning home jonathan tensorflow tensorflow contrib build in py library rule tensorflow contrib py target tensorflow contrib py depends on deprecated target tensorflow contrib distributions distributions py tensorflow distributions has migrated to tensorflow probability deprecated copies remaining in tf contrib distributions are unmaintained unsupported and will be removed by late you should update all usage of tf contrib distributions to tfp distributions warning home jonathan tensorflow tensorflow contrib build in py library rule tensorflow contrib contrib py target tensorflow contrib contrib py depends on deprecated target tensorflow contrib distributions distributions py tensorflow distributions has migrated to tensorflow probability deprecated copies remaining in tf contrib distributions are unmaintained unsupported and will be removed by late you should update all usage of tf contrib distributions to tfp distributions info analysed target tensorflow tools pip package build pip package packages loaded targets configured info found target protocompile tensorflow core example example py compiling tensorflow core ops nn ops cc local actions running compiling tensorflow core ops nn ops cc local actions running compiling tensorflow core ops nn ops cc local actions running 
compiling tensorflow contrib tensor forest hybrid core ops routing gradient op cc local actions running compiling tensorflow contrib tensor forest hybrid core ops k feature gradient op cc local actions running compiling tensorflow python framework python op gen internal cc local actions running info from compiling tensorflow python framework python op gen internal cc tensorflow python framework python op gen internal cc in member function virtual std string tensorflow python op gen internal genpythonop code tensorflow python framework python op gen internal cc warning comparison between signed and unsigned integer expressions for int i op def input arg size i params no default size i tensorflow python framework python op gen internal cc warning comparison between signed and unsigned integer expressions for int i i params with default size i info from compiling tensorflow python framework python op gen cc tensorflow python framework python op gen cc in function std string tensorflow anonymous vectortotuple const std vector tensorflow python framework python op gen cc warning comparison between signed and unsigned integer expressions for int i i l size i tensorflow python framework python op gen cc in function void tensorflow anonymous unflatten const string const std vector const string std string tensorflow python framework python op gen cc warning comparison between signed and unsigned integer expressions for int i i output sizes size i error home jonathan tensorflow tensorflow core kernels build output tensorflow core kernels objs cwise op gpu cwise op gpu bitwise and cu pic o was not created info from compiling tensorflow core kernels cwise op gpu ceil cu cc tensorflow core kernels cwise ops h warning host annotation on a defaulted function scalar left is ignored tensorflow core kernels cwise ops h warning device annotation on a defaulted function scalar left is ignored tensorflow core kernels cwise ops h warning host annotation on a defaulted function scalar 
right is ignored tensorflow core kernels cwise ops h warning device annotation on a defaulted function scalar right is ignored tensorflow core kernels cwise ops h warning host annotation on a defaulted function scalar left is ignored tensorflow core kernels cwise ops h warning device annotation on a defaulted function scalar left is ignored tensorflow core kernels cwise ops h warning host annotation on a defaulted function scalar right is ignored tensorflow core kernels cwise ops h warning device annotation on a defaulted function scalar right is ignored error home jonathan tensorflow tensorflow core kernels build output tensorflow core kernels objs cwise op gpu cwise op gpu mul cu pic o was not created error home jonathan tensorflow tensorflow core kernels build not all outputs were created or valid target tensorflow tools pip package build pip package failed to build use verbose failures to see the command lines of failed build steps info elapsed time critical path remote of the time info processes local failed build did not complete successfully failed build did not complete successfully for full log file see
0
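The TensorFlow build failure in the row above ends with `output 'tensorflow/core/kernels/_objs/cwise_op_gpu/...' was not created` during CUDA kernel compilation, with no compiler diagnostic — a pattern that frequently indicates the build host ran out of memory. A hedged sketch of how one might retry the reporter's `bazel build` invocation with bounded parallelism; `--jobs` and `--local_ram_resources` are standard Bazel flags, but the values `2` and `4096` are illustrative assumptions to be tuned to the machine, not part of the original report:

```shell
# Sketch (assumed values): retry the pip-package build with fewer concurrent
# compile actions. CUDA compiles of the cwise_op_gpu kernels are memory-hungry,
# so --jobs=2 serializes most of them, and --local_ram_resources=4096 tells
# Bazel to schedule as if only 4 GB of RAM were available.
bazel build --config=opt \
    --jobs=2 \
    --local_ram_resources=4096 \
    --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" \
    //tensorflow/tools/pip_package:build_pip_package
```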
8,938
12,055,021,896
IssuesEvent
2020-04-15 12:16:31
Ultimate-Hosts-Blacklist/whitelist
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
opened
[FALSE-POSITIVE?] 1drv.ms
whitelisting process
See also: mitchellkrogza/Ultimate.Hosts.Blacklist#575 **Domains or links** 1drv.ms **More Information** How did you discover your web site or domain was listed here? Not applicable, not my web site **Have you requested removal from other sources?** Not applicable, internal lists **Additional context** 1drv.ms is a redirect service for microsoft office shares
1.0
[FALSE-POSITIVE?] 1drv.ms - See also: mitchellkrogza/Ultimate.Hosts.Blacklist#575 **Domains or links** 1drv.ms **More Information** How did you discover your web site or domain was listed here? Not applicable, not my web site **Have you requested removal from other sources?** Not applicable, internal lists **Additional context** 1drv.ms is a redirect service for microsoft office shares
process
ms see also mitchellkrogza ultimate hosts blacklist domains or links ms more information how did you discover your web site or domain was listed here not applicable not my web site have you requested removal from other sources not applicable internal lists additional context ms is a redirect service for microsoft office shares
1
416,182
12,140,720,289
IssuesEvent
2020-04-23 21:01:39
department-of-veterans-affairs/caseflow
https://api.github.com/repos/department-of-veterans-affairs/caseflow
opened
Intake | Handling Stuck Claims Requiring Data Fix
Eng: Backend Work Group Priority: Low Product: caseflow-intake Stakeholder: AMO Team: Foxtrot 🦊
As a VBA Intake user, stuck claims requiring data fixes in VBMS or SHARE are emailed to an established AMO/OAR inbox to be rectified. ## Acceptance criteria - [ ] Please put this work behind the feature toggle: [toggle name] - [ ] This feature should be accessible to the following user groups: - [ ] Include screenshot(s) in the Github issue if there are front-end changes - [ ] Update documentation: [link] ## Release notes <!-- Write what should be included in release notes (Caseflow uses Headway), updated when the story is built, before it's deployed. --> <!-- The following sections can be deleted if they are not needed --> ### Out of scope <!-- Clarify what is out of scope if the designs include more or there are many tickets for this chunk of work --> ### Designs <!-- Include screenshots or links to designs if applicable. --> ### Background/context <!-- Include as needed, especially for issues that aren't part of epics. Include a value statement - why is this feature being developed? --> ### Technical notes <!-- Include notes that might help an engineer get started on this more quickly, or potential pitfalls to watch out for. --> ### Other notes ### Resources/other links <!-- E.g. links to other issues, PRs, Sentry alerts, or Slack threads, or external service requests. -->
1.0
Intake | Handling Stuck Claims Requiring Data Fix - As a VBA Intake user, stuck claims requiring data fixes in VBMS or SHARE are emailed to an established AMO/OAR inbox to be rectified. ## Acceptance criteria - [ ] Please put this work behind the feature toggle: [toggle name] - [ ] This feature should be accessible to the following user groups: - [ ] Include screenshot(s) in the Github issue if there are front-end changes - [ ] Update documentation: [link] ## Release notes <!-- Write what should be included in release notes (Caseflow uses Headway), updated when the story is built, before it's deployed. --> <!-- The following sections can be deleted if they are not needed --> ### Out of scope <!-- Clarify what is out of scope if the designs include more or there are many tickets for this chunk of work --> ### Designs <!-- Include screenshots or links to designs if applicable. --> ### Background/context <!-- Include as needed, especially for issues that aren't part of epics. Include a value statement - why is this feature being developed? --> ### Technical notes <!-- Include notes that might help an engineer get started on this more quickly, or potential pitfalls to watch out for. --> ### Other notes ### Resources/other links <!-- E.g. links to other issues, PRs, Sentry alerts, or Slack threads, or external service requests. -->
non_process
intake handling stuck claims requiring data fix as a vba intake user stuck claims requiring data fixes in vbms or share are emailed to an established amo oar inbox to be rectified acceptance criteria please put this work behind the feature toggle this feature should be accessible to the following user groups include screenshot s in the github issue if there are front end changes update documentation release notes out of scope designs background context technical notes other notes resources other links
0
113
2,546,327,812
IssuesEvent
2015-01-29 23:05:03
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
closed
Remove Neo4jGraphTraversal.
enhancement neo4j process
The only reason we need Neo4jGraphTraversal is cause of mid-traversal `cypher()`. Because of this, we have lots of overhead -- source code generation, some tests can't run because it method counts are off, etc. Also, lots of Neo4j code will go away .... e.g. Neo4jElementTraversal, Neo4jVertexTraversal, Neo4jGraphTraversals, Neo4jDefualGraphTraversal, Neo4jEdgeTravaersal, Neo4jVertexPropertyTraversal.... I'm wondering if we get rid of `g.V().out().id().cypher('MATCH...').select('a').outE()` in favor of JUST `g.cypher(MATCH).select('a')`. In short, `cypher()` is a method off `Neo4jGraph` and yields a `GraphTraversal`. Thoughts?
1.0
Remove Neo4jGraphTraversal. - The only reason we need Neo4jGraphTraversal is cause of mid-traversal `cypher()`. Because of this, we have lots of overhead -- source code generation, some tests can't run because it method counts are off, etc. Also, lots of Neo4j code will go away .... e.g. Neo4jElementTraversal, Neo4jVertexTraversal, Neo4jGraphTraversals, Neo4jDefualGraphTraversal, Neo4jEdgeTravaersal, Neo4jVertexPropertyTraversal.... I'm wondering if we get rid of `g.V().out().id().cypher('MATCH...').select('a').outE()` in favor of JUST `g.cypher(MATCH).select('a')`. In short, `cypher()` is a method off `Neo4jGraph` and yields a `GraphTraversal`. Thoughts?
process
remove the only reason we need is cause of mid traversal cypher because of this we have lots of overhead source code generation some tests can t run because it method counts are off etc also lots of code will go away e g i m wondering if we get rid of g v out id cypher match select a oute in favor of just g cypher match select a in short cypher is a method off and yields a graphtraversal thoughts
1
8,041
11,216,841,094
IssuesEvent
2020-01-07 07:40:10
AmpersandTarski/Ampersand
https://api.github.com/repos/AmpersandTarski/Ampersand
opened
Bug in Ampersand Dockerfile
deployment priority:normal software process
When I tried to test a new development version on branch `feature/fixAtlasComplete`, I was unable to produce an Ampersand image from the Dockerfile in the Ampersand repo. #### Version of ampersand that was used I worked with commit AmpersandTarski/Ampersand@1ed837d0895cb53be3e555f15687246f227f1df4. My working directory was the Ampersand clone. ``` % git status On branch feature/fixAtlasComplete Your branch is up to date with 'origin/feature/fixAtlasComplete'. ``` #### What I expected I expected the command `docker build .` to execute without mistakes. #### What happened instead This is what I got (some intermediate lines are omitted in the following log) ``` sjo00577@BA92-C02T81JCGTDY Ampersand % git status On branch feature/fixAtlasComplete Your branch is up to date with 'origin/feature/fixAtlasComplete'. nothing to commit, working tree clean sjo00577@BA92-C02T81JCGTDY Ampersand % docker build -t docker.pkg.github.com/ampersandtarski/ampersand/ampersand:latest . Sending build context to Docker daemon 3.406MB Step 1/13 : FROM haskell:8.6.5 AS buildstage ---> bae585027ddb Step 2/13 : RUN mkdir /opt/ampersand ---> Using cache ---> 14be82101078 Step 3/13 : WORKDIR /opt/ampersand ---> Using cache ---> 9fccfc7dcff7 Step 4/13 : COPY stack.yaml package.yaml /opt/ampersand/ ---> Using cache ---> c88a75e48cc6 Step 5/13 : RUN stack build --dependencies-only ---> Running in 6483d2630284 Downloading lts-14.17 build plan ... Downloaded lts-14.17 build plan. Updating package index Hackage (mirrored at https://s3.amazonaws.com/hackage.fpcomplete.com/) ... Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/ Downloading root Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/ ... 
rio-0.1.12.0: copy/register vector-algorithms-0.8.0.3: copy/register graphviz-2999.20.0.3: copy/register aeson-1.4.6.0: copy/register JuicyPixels-3.3.4: copy/register -- While building package Cabal-2.4.1.0 using: /root/.stack/setup-exe-cache/x86_64-linux/Cabal-simple_mPHDZzAJ_2.4.0.1_ghc-8.6.5 --builddir=.stack-work/dist/x86_64-linux/Cabal-2.4.0.1 build --ghc-options " -ddump-hi -ddump-to-file" Process exited with code: ExitFailure (-9) (THIS MAY INDICATE OUT OF MEMORY) Logs have been written to: /opt/ampersand/.stack-work/logs/Cabal-2.4.1.0.log Configuring Cabal-2.4.1.0... Preprocessing library for Cabal-2.4.1.0.. Building library for Cabal-2.4.1.0.. [ 1 of 220] Compiling Distribution.Compat.Binary ( Distribution/Compat/Binary.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Binary.o ) [ 2 of 220] Compiling Distribution.Compat.Directory ( Distribution/Compat/Directory.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Directory.o ) [ 3 of 220] Compiling Distribution.Compat.Exception ( Distribution/Compat/Exception.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Exception.o ) [ 4 of 220] Compiling Distribution.Compat.Internal.TempFile ( Distribution/Compat/Internal/TempFile.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Internal/TempFile.o ) ... 
[167 of 220] Compiling Distribution.PackageDescription.FieldGrammar ( Distribution/PackageDescription/FieldGrammar.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/FieldGrammar.o ) [168 of 220] Compiling Distribution.PackageDescription.PrettyPrint ( Distribution/PackageDescription/PrettyPrint.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/PrettyPrint.o ) [169 of 220] Compiling Distribution.PackageDescription.Parsec ( Distribution/PackageDescription/Parsec.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/Parsec.o ) [170 of 220] Compiling Distribution.FieldGrammar.FieldDescrs ( Distribution/FieldGrammar/FieldDescrs.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/FieldGrammar/FieldDescrs.o ) [171 of 220] Compiling Distribution.Types.InstalledPackageInfo.FieldGrammar ( Distribution/Types/InstalledPackageInfo/FieldGrammar.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Types/InstalledPackageInfo/FieldGrammar.o ) The command '/bin/sh -c stack build --dependencies-only' returned a non-zero code: 1 ``` #### Steps to reproduce 1. 2. 3. 4. #### Screenshot / Video #### Context / Source of ampersand script <!-- Optional: share your script if possible. It helps us reproduce the problem. Please try to keep the scripts tiny We'd also love to know how you found the bug: #dogfooding, #manual-testing, #automated-testing, or #user-report if applicable. If requesting a new feature, explain why you'd like to see it added. -->
1.0
Bug in Ampersand Dockerfile - When I tried to test a new development version on branch `feature/fixAtlasComplete`, I was unable to produce an Ampersand image from the Dockerfile in the Ampersand repo. #### Version of ampersand that was used I worked with commit AmpersandTarski/Ampersand@1ed837d0895cb53be3e555f15687246f227f1df4. My working directory was the Ampersand clone. ``` % git status On branch feature/fixAtlasComplete Your branch is up to date with 'origin/feature/fixAtlasComplete'. ``` #### What I expected I expected the command `docker build .` to execute without mistakes. #### What happened instead This is what I got (some intermediate lines are omitted in the following log) ``` sjo00577@BA92-C02T81JCGTDY Ampersand % git status On branch feature/fixAtlasComplete Your branch is up to date with 'origin/feature/fixAtlasComplete'. nothing to commit, working tree clean sjo00577@BA92-C02T81JCGTDY Ampersand % docker build -t docker.pkg.github.com/ampersandtarski/ampersand/ampersand:latest . Sending build context to Docker daemon 3.406MB Step 1/13 : FROM haskell:8.6.5 AS buildstage ---> bae585027ddb Step 2/13 : RUN mkdir /opt/ampersand ---> Using cache ---> 14be82101078 Step 3/13 : WORKDIR /opt/ampersand ---> Using cache ---> 9fccfc7dcff7 Step 4/13 : COPY stack.yaml package.yaml /opt/ampersand/ ---> Using cache ---> c88a75e48cc6 Step 5/13 : RUN stack build --dependencies-only ---> Running in 6483d2630284 Downloading lts-14.17 build plan ... Downloaded lts-14.17 build plan. Updating package index Hackage (mirrored at https://s3.amazonaws.com/hackage.fpcomplete.com/) ... Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/ Downloading root Selected mirror https://s3.amazonaws.com/hackage.fpcomplete.com/ ... 
rio-0.1.12.0: copy/register vector-algorithms-0.8.0.3: copy/register graphviz-2999.20.0.3: copy/register aeson-1.4.6.0: copy/register JuicyPixels-3.3.4: copy/register -- While building package Cabal-2.4.1.0 using: /root/.stack/setup-exe-cache/x86_64-linux/Cabal-simple_mPHDZzAJ_2.4.0.1_ghc-8.6.5 --builddir=.stack-work/dist/x86_64-linux/Cabal-2.4.0.1 build --ghc-options " -ddump-hi -ddump-to-file" Process exited with code: ExitFailure (-9) (THIS MAY INDICATE OUT OF MEMORY) Logs have been written to: /opt/ampersand/.stack-work/logs/Cabal-2.4.1.0.log Configuring Cabal-2.4.1.0... Preprocessing library for Cabal-2.4.1.0.. Building library for Cabal-2.4.1.0.. [ 1 of 220] Compiling Distribution.Compat.Binary ( Distribution/Compat/Binary.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Binary.o ) [ 2 of 220] Compiling Distribution.Compat.Directory ( Distribution/Compat/Directory.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Directory.o ) [ 3 of 220] Compiling Distribution.Compat.Exception ( Distribution/Compat/Exception.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Exception.o ) [ 4 of 220] Compiling Distribution.Compat.Internal.TempFile ( Distribution/Compat/Internal/TempFile.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Compat/Internal/TempFile.o ) ... 
[167 of 220] Compiling Distribution.PackageDescription.FieldGrammar ( Distribution/PackageDescription/FieldGrammar.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/FieldGrammar.o ) [168 of 220] Compiling Distribution.PackageDescription.PrettyPrint ( Distribution/PackageDescription/PrettyPrint.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/PrettyPrint.o ) [169 of 220] Compiling Distribution.PackageDescription.Parsec ( Distribution/PackageDescription/Parsec.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/PackageDescription/Parsec.o ) [170 of 220] Compiling Distribution.FieldGrammar.FieldDescrs ( Distribution/FieldGrammar/FieldDescrs.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/FieldGrammar/FieldDescrs.o ) [171 of 220] Compiling Distribution.Types.InstalledPackageInfo.FieldGrammar ( Distribution/Types/InstalledPackageInfo/FieldGrammar.hs, .stack-work/dist/x86_64-linux/Cabal-2.4.0.1/build/Distribution/Types/InstalledPackageInfo/FieldGrammar.o ) The command '/bin/sh -c stack build --dependencies-only' returned a non-zero code: 1 ``` #### Steps to reproduce 1. 2. 3. 4. #### Screenshot / Video #### Context / Source of ampersand script <!-- Optional: share your script if possible. It helps us reproduce the problem. Please try to keep the scripts tiny We'd also love to know how you found the bug: #dogfooding, #manual-testing, #automated-testing, or #user-report if applicable. If requesting a new feature, explain why you'd like to see it added. -->
process
bug in ampersand dockerfile when i tried to test a new development version on branch feature fixatlascomplete i was unable to produce an ampersand image from the dockerfile in the ampersand repo version of ampersand that was used i worked with commit ampersandtarski ampersand my working directory was the ampersand clone git status on branch feature fixatlascomplete your branch is up to date with origin feature fixatlascomplete what i expected i expected the command docker build to execute without mistakes what happened instead this is what i got some intermediate lines are omitted in the following log ampersand git status on branch feature fixatlascomplete your branch is up to date with origin feature fixatlascomplete nothing to commit working tree clean ampersand docker build t docker pkg github com ampersandtarski ampersand ampersand latest sending build context to docker daemon step from haskell as buildstage step run mkdir opt ampersand using cache step workdir opt ampersand using cache step copy stack yaml package yaml opt ampersand using cache step run stack build dependencies only running in downloading lts build plan downloaded lts build plan updating package index hackage mirrored at selected mirror downloading root selected mirror rio copy register vector algorithms copy register graphviz copy register aeson copy register juicypixels copy register while building package cabal using root stack setup exe cache linux cabal simple mphdzzaj ghc builddir stack work dist linux cabal build ghc options ddump hi ddump to file process exited with code exitfailure this may indicate out of memory logs have been written to opt ampersand stack work logs cabal log configuring cabal preprocessing library for cabal building library for cabal compiling distribution compat binary distribution compat binary hs stack work dist linux cabal build distribution compat binary o compiling distribution compat directory distribution compat directory hs stack work dist linux cabal 
build distribution compat directory o compiling distribution compat exception distribution compat exception hs stack work dist linux cabal build distribution compat exception o compiling distribution compat internal tempfile distribution compat internal tempfile hs stack work dist linux cabal build distribution compat internal tempfile o compiling distribution packagedescription fieldgrammar distribution packagedescription fieldgrammar hs stack work dist linux cabal build distribution packagedescription fieldgrammar o compiling distribution packagedescription prettyprint distribution packagedescription prettyprint hs stack work dist linux cabal build distribution packagedescription prettyprint o compiling distribution packagedescription parsec distribution packagedescription parsec hs stack work dist linux cabal build distribution packagedescription parsec o compiling distribution fieldgrammar fielddescrs distribution fieldgrammar fielddescrs hs stack work dist linux cabal build distribution fieldgrammar fielddescrs o compiling distribution types installedpackageinfo fieldgrammar distribution types installedpackageinfo fieldgrammar hs stack work dist linux cabal build distribution types installedpackageinfo fieldgrammar o the command bin sh c stack build dependencies only returned a non zero code steps to reproduce screenshot video context source of ampersand script optional share your script if possible it helps us reproduce the problem please try to keep the scripts tiny we d also love to know how you found the bug dogfooding manual testing automated testing or user report if applicable if requesting a new feature explain why you d like to see it added
1
12,119
14,740,699,362
IssuesEvent
2021-01-07 09:29:49
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Toronto Billing cycles
anc-process anp-urgent ant-bug has attachment
In GitLab by @kdjstudios on Nov 28, 2018, 12:36 **Submitted by:** "Denise Joseph" <denise.joseph@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-28-61141/conversation **Server:** Internal **Client/Site:** Toronto **Account:** NA **Issue:** Toronto called the HD and informed me that they had created their 11/24/18 Master and US billing cycles. However they had to revert and when they went to change the dates after reverting it would not reset. Now the dates are showing correctly for the Master Billing Cycle. but the US cycle now shows 4 open billing cycles at once. ![image](/uploads/633733f0faf0294457674b945b7f098c/image.png) Denise wrote: > After changing the billing cycle date to 11/24/2018 for both Master and US accounts and uploaded the billing txt filing proceeded with billing process and started to generate reports for both Master and US accounts. Realized that all US accounts report showed totals as all zeros. Reverted billing for US and saw the billing cycle date set for 02/24/2019 instead of 11/24/2018. Can you please look into this for us and please reset the billing cycle date for US accounts to 11/24/2018 so that we can proceed with billing.
1.0
Toronto Billing cycles - In GitLab by @kdjstudios on Nov 28, 2018, 12:36 **Submitted by:** "Denise Joseph" <denise.joseph@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-28-61141/conversation **Server:** Internal **Client/Site:** Toronto **Account:** NA **Issue:** Toronto called the HD and informed me that they had created their 11/24/18 Master and US billing cycles. However they had to revert and when they went to change the dates after reverting it would not reset. Now the dates are showing correctly for the Master Billing Cycle. but the US cycle now shows 4 open billing cycles at once. ![image](/uploads/633733f0faf0294457674b945b7f098c/image.png) Denise wrote: > After changing the billing cycle date to 11/24/2018 for both Master and US accounts and uploaded the billing txt filing proceeded with billing process and started to generate reports for both Master and US accounts. Realized that all US accounts report showed totals as all zeros. Reverted billing for US and saw the billing cycle date set for 02/24/2019 instead of 11/24/2018. Can you please look into this for us and please reset the billing cycle date for US accounts to 11/24/2018 so that we can proceed with billing.
process
toronto billing cycles in gitlab by kdjstudios on nov submitted by denise joseph helpdesk server internal client site toronto account na issue toronto called the hd and informed me that they had created their master and us billing cycles however they had to revert and when they went to change the dates after reverting it would not reset now the dates are showing correctly for the master billing cycle but the us cycle now shows open billing cycles at once uploads image png denise wrote after changing the billing cycle date to for both master and us accounts and uploaded the billing txt filing proceeded with billing process and started to generate reports for both master and us accounts realized that all us accounts report showed totals as all zeros reverted billing for us and saw the billing cycle date set for instead of can you please look into this for us and please reset the billing cycle date for us accounts to so that we can proceed with billing
1
6,548
7,687,153,274
IssuesEvent
2018-05-17 03:40:29
GalateaEngine/Galatea
https://api.github.com/repos/GalateaEngine/Galatea
closed
Command line arguments to override config file parameters
emotion_classifier enhancement microservices
src/microservices/modules/emotion/ Currently, we read the config file for all parameters for the server The usage of command line arguments to either override the current config file or override arguments. ideally with Argsparse
1.0
Command line arguments to override config file parameters - src/microservices/modules/emotion/ Currently, we read the config file for all parameters for the server The usage of command line arguments to either override the current config file or override arguments. ideally with Argsparse
non_process
command line arguments to override config file parameters src microservices modules emotion currently we read the config file for all parameters for the server the usage of command line arguments to either override the current config file or override arguments ideally with argsparse
0
565,137
16,749,642,477
IssuesEvent
2021-06-11 20:41:43
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2
opened
Tortollan race and counties
:books:lore:books: :grey_exclamation: priority low :question: suggestion :question: :star:new feature:new: 🌐cartography🌲 👑history📚
We need to make a tortollan race trait for putting tortollan characters on the game. Also, these counties needs to be tortollan: - c_darkwood https://wowpedia.fandom.com/wiki/Tortaka_Refuge - c_torga https://wowpedia.fandom.com/wiki/Torga - c_atalgral (Probably not just Loristically but there is a small village of Tortollan in the area so I think this county could be.)
1.0
Tortollan race and counties - We need to make a tortollan race trait for putting tortollan characters on the game. Also, these counties needs to be tortollan: - c_darkwood https://wowpedia.fandom.com/wiki/Tortaka_Refuge - c_torga https://wowpedia.fandom.com/wiki/Torga - c_atalgral (Probably not just Loristically but there is a small village of Tortollan in the area so I think this county could be.)
non_process
tortollan race and counties we need to make a tortollan race trait for putting tortollan characters on the game also these counties needs to be tortollan c darkwood c torga c atalgral probably not just loristically but there is a small village of tortollan in the area so i think this county could be
0
114,692
9,746,978,820
IssuesEvent
2019-06-03 13:29:35
Students-of-the-city-of-Kostroma/Student-timetable
https://api.github.com/repos/Students-of-the-city-of-Kostroma/Student-timetable
opened
Написать unit-тесты для конструктора MTypesOfOccupation(Model model) сущности MTypesOfOccupation
Unit test Вид занятия
Этап тестирования https://github.com/Students-of-the-city-of-Kostroma/Student-timetable/issues/842
1.0
Написать unit-тесты для конструктора MTypesOfOccupation(Model model) сущности MTypesOfOccupation - Этап тестирования https://github.com/Students-of-the-city-of-Kostroma/Student-timetable/issues/842
non_process
написать unit тесты для конструктора mtypesofoccupation model model сущности mtypesofoccupation этап тестирования
0
255,589
8,125,817,113
IssuesEvent
2018-08-16 22:28:12
aowen87/BAR
https://api.github.com/repos/aowen87/BAR
closed
Settting SSH command in host profile fails (Windows)
Bug Likelihood: 3 - Occasional Priority: Normal Severity: 3 - Major Irritation
If I set SSH command to plink.exe on windows (full path contains spaces) in the host profile settings, then connections to the remote machine fail. I tried 'unix-style' path, escaping path-delimiters and escaping spaces, quoting path, etc. At one point, viewer log indicated SSHCOMMAND something like this: {C:, Program, Files, (x86), putty, plink.exe} instead of one command. -----------------------REDMINE MIGRATION----------------------- This ticket was migrated from Redmine. As such, not all information was able to be captured in the transition. Below is a complete record of the original redmine ticket. Ticket number: 2481 Status: Resolved Project: VisIt Tracker: Bug Priority: Normal Subject: Settting SSH command in host profile fails (Windows) Assigned to: Kathleen Biagas Category: Target version: 2.10.1 Author: Kathleen Biagas Start: 12/11/2015 Due date: % Done: 100 Estimated time: Created: 12/11/2015 11:46 am Updated: 12/16/2015 05:52 pm Likelihood: 3 - Occasional Severity: 3 - Major Irritation Found in version: 2.10.0 Impact: Expected Use: OS: Windows Support Group: Any Description: If I set SSH command to plink.exe on windows (full path contains spaces) in the host profile settings, then connections to the remote machine fail. I tried 'unix-style' path, escaping path-delimiters and escaping spaces, quoting path, etc. At one point, viewer log indicated SSHCOMMAND something like this: {C:, Program, Files, (x86), putty, plink.exe} instead of one command. Comments: Ensure quoted ssh command isn't split along spaces. Only split on args past the end of of the quoted command./src/common/comm/RemoteProcess.C/src/gui/QvisHostProfileWindow.C/src/gui/QvisHostProfileWindow.h
1.0
Settting SSH command in host profile fails (Windows) - If I set SSH command to plink.exe on windows (full path contains spaces) in the host profile settings, then connections to the remote machine fail. I tried 'unix-style' path, escaping path-delimiters and escaping spaces, quoting path, etc. At one point, viewer log indicated SSHCOMMAND something like this: {C:, Program, Files, (x86), putty, plink.exe} instead of one command. -----------------------REDMINE MIGRATION----------------------- This ticket was migrated from Redmine. As such, not all information was able to be captured in the transition. Below is a complete record of the original redmine ticket. Ticket number: 2481 Status: Resolved Project: VisIt Tracker: Bug Priority: Normal Subject: Settting SSH command in host profile fails (Windows) Assigned to: Kathleen Biagas Category: Target version: 2.10.1 Author: Kathleen Biagas Start: 12/11/2015 Due date: % Done: 100 Estimated time: Created: 12/11/2015 11:46 am Updated: 12/16/2015 05:52 pm Likelihood: 3 - Occasional Severity: 3 - Major Irritation Found in version: 2.10.0 Impact: Expected Use: OS: Windows Support Group: Any Description: If I set SSH command to plink.exe on windows (full path contains spaces) in the host profile settings, then connections to the remote machine fail. I tried 'unix-style' path, escaping path-delimiters and escaping spaces, quoting path, etc. At one point, viewer log indicated SSHCOMMAND something like this: {C:, Program, Files, (x86), putty, plink.exe} instead of one command. Comments: Ensure quoted ssh command isn't split along spaces. Only split on args past the end of of the quoted command./src/common/comm/RemoteProcess.C/src/gui/QvisHostProfileWindow.C/src/gui/QvisHostProfileWindow.h
non_process
settting ssh command in host profile fails windows if i set ssh command to plink exe on windows full path contains spaces in the host profile settings then connections to the remote machine fail i tried unix style path escaping path delimiters and escaping spaces quoting path etc at one point viewer log indicated sshcommand something like this c program files putty plink exe instead of one command redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject settting ssh command in host profile fails windows assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created am updated pm likelihood occasional severity major irritation found in version impact expected use os windows support group any description if i set ssh command to plink exe on windows full path contains spaces in the host profile settings then connections to the remote machine fail i tried unix style path escaping path delimiters and escaping spaces quoting path etc at one point viewer log indicated sshcommand something like this c program files putty plink exe instead of one command comments ensure quoted ssh command isn t split along spaces only split on args past the end of of the quoted command src common comm remoteprocess c src gui qvishostprofilewindow c src gui qvishostprofilewindow h
0
26,706
4,777,613,997
IssuesEvent
2016-10-27 16:46:59
wheeler-microfluidics/microdrop
https://api.github.com/repos/wheeler-microfluidics/microdrop
closed
Fix startup behaviour (Trac #35)
defect microdrop Migrated from Trac
Fix problems with initial startup (e.g., when no device is loaded, disable protocol menu and other relevant controls, etc.). Migrated from http://microfluidics.utoronto.ca/ticket/35 ```json { "status": "closed", "changetime": "2014-04-17T19:39:01", "description": "Fix problems with initial startup (e.g., when no device is loaded,\ndisable protocol menu and other relevant controls, etc.).", "reporter": "cfobel", "cc": "", "resolution": "fixed", "_ts": "1397763541728826", "component": "microdrop", "summary": "Fix startup behaviour", "priority": "major", "keywords": "", "version": "0.1", "time": "2012-01-06T21:40:02", "milestone": "Microdrop 1.0", "owner": "cfobel", "type": "defect" } ```
1.0
Fix startup behaviour (Trac #35) - Fix problems with initial startup (e.g., when no device is loaded, disable protocol menu and other relevant controls, etc.). Migrated from http://microfluidics.utoronto.ca/ticket/35 ```json { "status": "closed", "changetime": "2014-04-17T19:39:01", "description": "Fix problems with initial startup (e.g., when no device is loaded,\ndisable protocol menu and other relevant controls, etc.).", "reporter": "cfobel", "cc": "", "resolution": "fixed", "_ts": "1397763541728826", "component": "microdrop", "summary": "Fix startup behaviour", "priority": "major", "keywords": "", "version": "0.1", "time": "2012-01-06T21:40:02", "milestone": "Microdrop 1.0", "owner": "cfobel", "type": "defect" } ```
non_process
fix startup behaviour trac fix problems with initial startup e g when no device is loaded disable protocol menu and other relevant controls etc migrated from json status closed changetime description fix problems with initial startup e g when no device is loaded ndisable protocol menu and other relevant controls etc reporter cfobel cc resolution fixed ts component microdrop summary fix startup behaviour priority major keywords version time milestone microdrop owner cfobel type defect
0
397,588
27,170,526,824
IssuesEvent
2023-02-17 18:58:51
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
DataExporter: Attribute 'visibleOnly' not documented
documentation
### Describe the bug The [online documentation](https://primefaces.github.io/primefaces/12_0_0/#/components/dataexporter) of component DataExporter has no description for attribute 'visibleOnly'. ### Reproducer _No response_ ### Expected behavior _No response_ ### PrimeFaces edition Community ### PrimeFaces version 12.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3 ### Java version 11 ### Browser(s) _No response_
1.0
DataExporter: Attribute 'visibleOnly' not documented - ### Describe the bug The [online documentation](https://primefaces.github.io/primefaces/12_0_0/#/components/dataexporter) of component DataExporter has no description for attribute 'visibleOnly'. ### Reproducer _No response_ ### Expected behavior _No response_ ### PrimeFaces edition Community ### PrimeFaces version 12.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3 ### Java version 11 ### Browser(s) _No response_
non_process
dataexporter attribute visibleonly not documented describe the bug the of component dataexporter has no description for attribute visibleonly reproducer no response expected behavior no response primefaces edition community primefaces version theme no response jsf implementation mojarra jsf version java version browser s no response
0
1,669
4,307,846,550
IssuesEvent
2016-07-21 10:33:30
e-government-ua/iBP
https://api.github.com/repos/e-government-ua/iBP
closed
Копії рішень сільської ради про надання дозволу на розроблення проекту відвідення земельної ділянки - НЕДОБОЇВСЬКА СІЛЬСЬКА РАДА, ХОТИНСЬКОГО РАЙОНУ ЧЕРНІВЕЦЬКОЇ ОБЛАСТІ
In process of testing in work test
Контактна особа для тестування: Дранчук Андрій - 0992607124, nedoboivtsi14@ukr.net 60035, Чернівецька область, Хотинський район, село Недобоївці, вулиця Головна,28-а Тел.43-1- 34, E-mail: nedoboivtsi14@ukr.net Пн-Пт 8-30 - 17-00
1.0
Копії рішень сільської ради про надання дозволу на розроблення проекту відвідення земельної ділянки - НЕДОБОЇВСЬКА СІЛЬСЬКА РАДА, ХОТИНСЬКОГО РАЙОНУ ЧЕРНІВЕЦЬКОЇ ОБЛАСТІ - Контактна особа для тестування: Дранчук Андрій - 0992607124, nedoboivtsi14@ukr.net 60035, Чернівецька область, Хотинський район, село Недобоївці, вулиця Головна,28-а Тел.43-1- 34, E-mail: nedoboivtsi14@ukr.net Пн-Пт 8-30 - 17-00
process
копії рішень сільської ради про надання дозволу на розроблення проекту відвідення земельної ділянки недобоївська сільська рада хотинського району чернівецької області контактна особа для тестування дранчук андрій ukr net чернівецька область хотинський район село недобоївці вулиця головна а тел e mail ukr net пн пт
1
20,875
3,423,155,979
IssuesEvent
2015-12-09 03:57:59
jccastillo0007/eFacturaT
https://api.github.com/repos/jccastillo0007/eFacturaT
opened
validacion xml - cambiar etiqueta de compras por recibidos
defect
pueden ser compras o gastos, por ello es mas genérico Recibidos
1.0
validacion xml - cambiar etiqueta de compras por recibidos - pueden ser compras o gastos, por ello es mas genérico Recibidos
non_process
validacion xml cambiar etiqueta de compras por recibidos pueden ser compras o gastos por ello es mas genérico recibidos
0
21,111
28,071,302,886
IssuesEvent
2023-03-29 19:17:51
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/servicegraph] making peer attributes of virtual node building configurable
enhancement processor/servicegraph
### Component(s) processor/servicegraph ### Is your feature request related to a problem? Please describe. making peer attributes of virtual node building configurable ### Describe the solution you'd like add new config items in Config: ```yaml virtual_node_peer_attributes: - db.name - rpc.service - ... ``` ### Describe alternatives you've considered _No response_ ### Additional context _No response_
1.0
[processor/servicegraph] making peer attributes of virtual node building configurable - ### Component(s) processor/servicegraph ### Is your feature request related to a problem? Please describe. making peer attributes of virtual node building configurable ### Describe the solution you'd like add new config items in Config: ```yaml virtual_node_peer_attributes: - db.name - rpc.service - ... ``` ### Describe alternatives you've considered _No response_ ### Additional context _No response_
process
making peer attributes of virtual node building configurable component s processor servicegraph is your feature request related to a problem please describe making peer attributes of virtual node building configurable describe the solution you d like add new config items in config yaml virtual node peer attributes db name rpc service describe alternatives you ve considered no response additional context no response
1
25,880
19,321,650,470
IssuesEvent
2021-12-14 06:38:56
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
[test] Rename build.sh 'excludemonofailures' option to 'mono'
wishlist area-Infrastructure untriaged in pr
I make too many typos writing 'excludemonofailures' Now that the `clr.hosts` subset can be built independently of the rest of CoreCLR, and the creation of `Core_Root` is done directly using the Mono and libraries artifacts, and the old hacky `Core_Root` patching step is gone, what we effectively have is an option to `./build.sh` to specify the runtime flavor. (related work: https://github.com/dotnet/runtime/issues/58266) We can bikeshed the details, but it would be nice if this worked: ``` ./build.sh generatelayoutonly release # builds a Core_Root with a CoreCLR release build # - or - ./build.sh generatelayoutonly mono release # builds a Core_Root with a Mono release build ``` The quickest fix is just to update this line to also add `mono|-mono|...`, but maybe we want something else? https://github.com/dotnet/runtime/blob/d8d80cd0d691b4fb1bd9e9a1519ff304cf33e1b5/src/tests/build.sh#L247
1.0
[test] Rename build.sh 'excludemonofailures' option to 'mono' - I make too many typos writing 'excludemonofailures' Now that the `clr.hosts` subset can be built independently of the rest of CoreCLR, and the creation of `Core_Root` is done directly using the Mono and libraries artifacts, and the old hacky `Core_Root` patching step is gone, what we effectively have is an option to `./build.sh` to specify the runtime flavor. (related work: https://github.com/dotnet/runtime/issues/58266) We can bikeshed the details, but it would be nice if this worked: ``` ./build.sh generatelayoutonly release # builds a Core_Root with a CoreCLR release build # - or - ./build.sh generatelayoutonly mono release # builds a Core_Root with a Mono release build ``` The quickest fix is just to update this line to also add `mono|-mono|...`, but maybe we want something else? https://github.com/dotnet/runtime/blob/d8d80cd0d691b4fb1bd9e9a1519ff304cf33e1b5/src/tests/build.sh#L247
non_process
rename build sh excludemonofailures option to mono i make too many typos writing excludemonofailures now that the clr hosts subset can be built independently of the rest of coreclr and the creation of core root is done directly using the mono and libraries artifacts and the old hacky core root patching step is gone what we effectively have is an option to build sh to specify the runtime flavor related work we can bikeshed the details but it would be nice if this worked build sh generatelayoutonly release builds a core root with a coreclr release build or build sh generatelayoutonly mono release builds a core root with a mono release build the quickest fix is just to update this line to also add mono mono but maybe we want something else
0
423,092
28,495,490,842
IssuesEvent
2023-04-18 13:57:12
robolaunch/charts
https://api.github.com/repos/robolaunch/charts
opened
Add Generic Deployment Notes
documentation
### What would you like to be updated? Since charts are created using `helmify`, no notes are generated inside the chart folder. Action that updates charts can put a generic `NOTES.txt` inside the chart folders.
1.0
Add Generic Deployment Notes - ### What would you like to be updated? Since charts are created using `helmify`, no notes are generated inside the chart folder. Action that updates charts can put a generic `NOTES.txt` inside the chart folders.
non_process
add generic deployment notes what would you like to be updated since charts are created using helmify no notes are generated inside the chart folder action that updates charts can put a generic notes txt inside the chart folders
0
15,986
20,188,188,885
IssuesEvent
2022-02-11 01:16:30
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Periodically perform external and/or internal workload security audits
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Compliance
<a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-audit#review-critical-access">Periodically perform external and/or internal workload security audits</a> <p><b>Why Consider This?</b></p> Compliance is important for several reasons. Aside from signifying levels of standards, like ISO 27001 and others, noncompliance with regulatory guidelines may bring sanctions and penalties. <p><b>Context</b></p> <p><b>Suggested Actions</b></p> <p><span>Use Azure Defender (Azure Security Center) to"nbsp; continuously assess and monitor your compliance score. </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance" target="_blank"><span>https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance</span></a><span /></p>
1.0
Periodically perform external and/or internal workload security audits - <a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-audit#review-critical-access">Periodically perform external and/or internal workload security audits</a> <p><b>Why Consider This?</b></p> Compliance is important for several reasons. Aside from signifying levels of standards, like ISO 27001 and others, noncompliance with regulatory guidelines may bring sanctions and penalties. <p><b>Context</b></p> <p><b>Suggested Actions</b></p> <p><span>Use Azure Defender (Azure Security Center) to"nbsp; continuously assess and monitor your compliance score. </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance" target="_blank"><span>https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance</span></a><span /></p>
process
periodically perform external and or internal workload security audits why consider this compliance is important for several reasons aside from signifying levels of standards like iso and others noncompliance with regulatory guidelines may bring sanctions and penalties context suggested actions use azure defender azure security center to nbsp continuously assess and monitor your compliance score learn more
1
685,512
23,458,900,374
IssuesEvent
2022-08-16 11:25:25
grpc/grpc
https://api.github.com/repos/grpc/grpc
closed
Grpc.Tools 2.30 does not generate correct client code.
kind/bug lang/C# priority/P2
Doing my first grpc greeter sample (template) and following the tutorial to create a C# client. https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-3.0&tabs=visual-studio The following code does not compile with Grpc.Tools v2.30 (`Greeter.GreeterClient` does not exist) ```csharp var client = new Greeter.GreeterClient(channel); ``` Grpc.Tools v2.29 works fine. HIH/2c
1.0
Grpc.Tools 2.30 does not generate correct client code. - Doing my first grpc greeter sample (template) and following the tutorial to create a C# client. https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-3.0&tabs=visual-studio The following code does not compile with Grpc.Tools v2.30 (`Greeter.GreeterClient` does not exist) ```csharp var client = new Greeter.GreeterClient(channel); ``` Grpc.Tools v2.29 works fine. HIH/2c
non_process
grpc tools does not generate correct client code doing my first grpc greeter sample template and following the tutorial to create a c client the following code does not compile with grpc tools greeter greeterclient does not exist csharp var client new greeter greeterclient channel grpc tools works fine hih
0
54,381
3,066,997,065
IssuesEvent
2015-08-18 07:34:21
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Feature Request: a way for a Dart library to refer to resources that are not Dart code
Area-Library Priority-High Triaged Type-Enhancement
*This issue was originally filed by @sethladd* _____ Use case: a library needs to retrieve files inside itself and copy them to the current working directory. These files are not Dart code, instead they are text files and binary images. This library should work on the command line or as a web app or Chrome App. (that is, we can't always assume dart:io is available and we can't assume that Platform.script returns a file: URI) Current workaround is to embed the files' content into .dart files as triple quoted strings of text or base64. This is less than ideal: \* It requires some sort of build step for the library \* dart2js includes all of this content in its output, greatly increasing the size of the downloaded JS
1.0
Feature Request: a way for a Dart library to refer to resources that are not Dart code - *This issue was originally filed by @sethladd* _____ Use case: a library needs to retrieve files inside itself and copy them to the current working directory. These files are not Dart code, instead they are text files and binary images. This library should work on the command line or as a web app or Chrome App. (that is, we can't always assume dart:io is available and we can't assume that Platform.script returns a file: URI) Current workaround is to embed the files' content into .dart files as triple quoted strings of text or base64. This is less than ideal: \* It requires some sort of build step for the library \* dart2js includes all of this content in its output, greatly increasing the size of the downloaded JS
non_process
feature request a way for a dart library to refer to resources that are not dart code this issue was originally filed by sethladd use case a library needs to retrieve files inside itself and copy them to the current working directory these files are not dart code instead they are text files and binary images this library should work on the command line or as a web app or chrome app that is we can t always assume dart io is available and we can t assume that platform script returns a file uri current workaround is to embed the files content into dart files as triple quoted strings of text or this is less than ideal it requires some sort of build step for the library includes all of this content in its output greatly increasing the size of the downloaded js
0
43,553
2,889,847,941
IssuesEvent
2015-06-13 20:27:16
damonkohler/sl4a
https://api.github.com/repos/damonkohler/sl4a
opened
Auto indent support in the editor
auto-migrated Priority-Medium Type-Enhancement
_From @GoogleCodeExporter on May 31, 2015 11:25_ ``` Add auto indent support to the editor. ``` Original issue reported on code.google.com by `damonkoh...@gmail.com` on 22 Mar 2010 at 4:24 _Copied from original issue: damonkohler/android-scripting#258_
1.0
Auto indent support in the editor - _From @GoogleCodeExporter on May 31, 2015 11:25_ ``` Add auto indent support to the editor. ``` Original issue reported on code.google.com by `damonkoh...@gmail.com` on 22 Mar 2010 at 4:24 _Copied from original issue: damonkohler/android-scripting#258_
non_process
auto indent support in the editor from googlecodeexporter on may add auto indent support to the editor original issue reported on code google com by damonkoh gmail com on mar at copied from original issue damonkohler android scripting
0
42,661
11,042,324,212
IssuesEvent
2019-12-09 08:55:15
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
opened
Add libraries to Single bal file
Area/BuildTools Priority/High Type/NewFeature
**Description:** In some cases like  using interops, database drivers we need to use external libraries and include them in the final executable. In a ballerina project we can add external libraries to project ballerina.toml.  ATM we do not have a way to  add them when using single bal file. We can have another flag to ballerina build and run command ( ex.--libs) to add those external libraries.
1.0
Add libraries to Single bal file - **Description:** In some cases like  using interops, database drivers we need to use external libraries and include them in the final executable. In a ballerina project we can add external libraries to project ballerina.toml.  ATM we do not have a way to  add them when using single bal file. We can have another flag to ballerina build and run command ( ex.--libs) to add those external libraries.
non_process
add libraries to single bal file description in some cases like  using interops database drivers we need to use external libraries and include them in the final executable in a ballerina project we can add external libraries to project ballerina toml   atm we do not have a way to  add them when using single bal file we can have another flag to ballerina build and run command ex libs to add those external libraries
0
19,485
25,794,233,187
IssuesEvent
2022-12-10 11:26:45
nodejs/node
https://api.github.com/repos/nodejs/node
closed
process.binding('spawn_sync') is required for spawn-wrap (and thus nyc) to work.
child_process
[spawn-wrap](https://github.com/tapjs/spawn-wrap/) works by intercepting calls to spawn and spawnSync. spawn is handled by replacing `ChildProcess.prototype.spawn`, spawnSync is handled by replacing `process.bindings('spawn_sync').spawn`. With #22160 and #22260 I assume we will need a migration path in the future. CC @isaacs @bcoe * **Version**: future * **Platform**: all * **Subsystem**: child_process
1.0
process.binding('spawn_sync') is required for spawn-wrap (and thus nyc) to work. - [spawn-wrap](https://github.com/tapjs/spawn-wrap/) works by intercepting calls to spawn and spawnSync. spawn is handled by replacing `ChildProcess.prototype.spawn`, spawnSync is handled by replacing `process.bindings('spawn_sync').spawn`. With #22160 and #22260 I assume we will need a migration path in the future. CC @isaacs @bcoe * **Version**: future * **Platform**: all * **Subsystem**: child_process
process
process binding spawn sync is required for spawn wrap and thus nyc to work works by intercepting calls to spawn and spawnsync spawn is handled by replacing childprocess prototype spawn spawnsync is handled by replacing process bindings spawn sync spawn with and i assume we will need a migration path in the future cc isaacs bcoe version future platform all subsystem child process
1
13,665
16,388,317,156
IssuesEvent
2021-05-17 13:21:37
googleapis/nodejs-iot
https://api.github.com/repos/googleapis/nodejs-iot
closed
stop skipping tests
api: cloudiot type: process
#266 has one test that's being skipped, we should come back and see if we can get this to work.
1.0
stop skipping tests - #266 has one test that's being skipped, we should come back and see if we can get this to work.
process
stop skipping tests has one test that s being skipped we should come back and see if we can get this to work
1
46,070
2,946,501,196
IssuesEvent
2015-07-04 00:54:00
mihaeu/warmshowers-ios
https://api.github.com/repos/mihaeu/warmshowers-ios
closed
Offline concept & support
enhancement high priority
The app needs to be usable without internet connection. Disabled features: - Map view (no map caching possible) Workarounds: - Disable map feature - Check regularly for connectivity (where?) - Cache messages - Cache feedback - Show favorites on startup - Show info using a custom alert
1.0
Offline concept & support - The app needs to be usable without internet connection. Disabled features: - Map view (no map caching possible) Workarounds: - Disable map feature - Check regularly for connectivity (where?) - Cache messages - Cache feedback - Show favorites on startup - Show info using a custom alert
non_process
offline concept support the app needs to be usable without internet connection disabled features map view no map caching possible workarounds disable map feature check regularly for connectivity where cache messages cache feedback show favorites on startup show info using a custom alert
0
440,645
30,754,205,046
IssuesEvent
2023-07-28 23:08:27
spdx/spdx-3-model
https://api.github.com/repos/spdx/spdx-3-model
opened
Document License Expressions
documentation Profile:Licensing
In 2.3, license expressions are documented in their own Annex. We do not have a place to document the full expression - yet. Note that we need to update the expressions to handle custom license additions as discussed in issue #208
1.0
Document License Expressions - In 2.3, license expressions are documented in their own Annex. We do not have a place to document the full expression - yet. Note that we need to update the expressions to handle custom license additions as discussed in issue #208
non_process
document license expressions in license expressions are documented in their own annex we do not have a place to document the full expression yet note that we need to update the expressions to handle custom license additions as discussed in issue
0
215,666
16,686,184,261
IssuesEvent
2021-06-08 08:17:27
r-lib/vdiffr
https://api.github.com/repos/r-lib/vdiffr
closed
Use different expectation wrapper?
feature testthat :ballot_box_with_check:
If instead of `test_that()` vdiffr used (say) `visualise_that()`, when managing cases, we'd only need to run the visual tests, not all tests
1.0
Use different expectation wrapper? - If instead of `test_that()` vdiffr used (say) `visualise_that()`, when managing cases, we'd only need to run the visual tests, not all tests
non_process
use different expectation wrapper if instead of test that vdiffr used say visualise that when managing cases we d only need to run the visual tests not all tests
0