Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
26,605 | 4,235,943,958 | IssuesEvent | 2016-07-05 16:45:15 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | stacktrace dumping shell logic is causing test flakes | area/tests kind/test-flake priority/P2 | the stack trace dumping logic is actually causing excess evaluations of commands, which means logic that loops while expect a certain result never sees that result.
in this case, we loop trying to create a key. we expect to see "key created" come back, once the app is available and responsive.
Instead, during one of the failed POST attempts (because the app isn't up yet) the stack trace logic gets invoked, and the stack trace logic itself ends up evaluating the curl command and doing another POST. That stack trace logic gets the "key created" response, which is useless.
Then the main loop continues and tries the POST again, but now it gets "key updated" and fails the test.
see bash debug output here:
https://gist.github.com/bparees/5cd82c622dbfc9e6725653102110bd64
the test logic starts in
end-to-end/core.sh:46 (POST to create key 1337)
then enters the retry loop at
hack/util.sh:283 and ends up in the stack trace logic as a result of the eval on line 284.
If you look at the gist output, you can see we enter the stack trace logic at line 11 due to an RC=7 from curl.
then line line 39-41 of the gist you can see the curl command gets evaluated (still within the stack trace handling logic) and gets the "key created" response back.
i assume the issue is faulty quoting on line 39 where the eval within the stack trace handling logic occurs.
| 2.0 | stacktrace dumping shell logic is causing test flakes - the stack trace dumping logic is actually causing excess evaluations of commands, which means logic that loops while expect a certain result never sees that result.
in this case, we loop trying to create a key. we expect to see "key created" come back, once the app is available and responsive.
Instead, during one of the failed POST attempts (because the app isn't up yet) the stack trace logic gets invoked, and the stack trace logic itself ends up evaluating the curl command and doing another POST. That stack trace logic gets the "key created" response, which is useless.
Then the main loop continues and tries the POST again, but now it gets "key updated" and fails the test.
see bash debug output here:
https://gist.github.com/bparees/5cd82c622dbfc9e6725653102110bd64
the test logic starts in
end-to-end/core.sh:46 (POST to create key 1337)
then enters the retry loop at
hack/util.sh:283 and ends up in the stack trace logic as a result of the eval on line 284.
If you look at the gist output, you can see we enter the stack trace logic at line 11 due to an RC=7 from curl.
then line line 39-41 of the gist you can see the curl command gets evaluated (still within the stack trace handling logic) and gets the "key created" response back.
i assume the issue is faulty quoting on line 39 where the eval within the stack trace handling logic occurs.
| test | stacktrace dumping shell logic is causing test flakes the stack trace dumping logic is actually causing excess evaluations of commands which means logic that loops while expect a certain result never sees that result in this case we loop trying to create a key we expect to see key created come back once the app is available and responsive instead during one of the failed post attempts because the app isn t up yet the stack trace logic gets invoked and the stack trace logic itself ends up evaluating the curl command and doing another post that stack trace logic gets the key created response which is useless then the main loop continues and tries the post again but now it gets key updated and fails the test see bash debug output here the test logic starts in end to end core sh post to create key then enters the retry loop at hack util sh and ends up in the stack trace logic as a result of the eval on line if you look at the gist output you can see we enter the stack trace logic at line due to an rc from curl then line line of the gist you can see the curl command gets evaluated still within the stack trace handling logic and gets the key created response back i assume the issue is faulty quoting on line where the eval within the stack trace handling logic occurs | 1 |
21,696 | 3,916,251,286 | IssuesEvent | 2016-04-21 00:23:38 | elastic/logstash | https://api.github.com/repos/elastic/logstash | closed | [WiP] Testing how-to docs, RSpec features listing | docs ruby style guide tests-infra work in progress | ### Background
Testing a pipeline like logstash might become a hard process, there has been a lots of improvements and simplifications for the testing of logstash, however one of the few things missing is a good document where people can refere in order to know how to write test. This issue aim to be a place to discuss and show how to write great test for logstash using the last features of rspec3, after we're happy with the content in here the objective is to generate a wiki page where everyone can go.
## RSpec 3.x features
In this section we'll do an overview of the different rspec features and might be useful for you when planning your tests suites.
### rspec-core
The rspec-core provides the structure for writing executable examples of how your code should behave.
#### describe, context and examples
```ruby
describe "something" do
context "in one context" do
it "does one thing" do
end
end
context "in another context" do
it "does another thing" do
end
end
end
```
#### Subjects
```ruby
describe "something" do
context "in one context" do
it "does one thing" do
end
end
context "in another context" do
it "does another thing" do
end
end
end
```
#### let
```ruby
$count = 0
describe "let" do
let(:count) { $count += 1 }
it "memoizes the value" do
expect(count).to eq(1)
expect(count).to eq(1)
end
it "is not cached across examples" do
expect(count).to eq(2)
end
end
```
#### Other important features
* hooks (before, around and after)
* filters
* metadata
* formaters
### rspec-mocks
A test-double framework for rspec with support for method stubs, fakes, and message expectations on generated test-doubles and real objects alike
#### Allow and expect
```ruby
describe "allow" do
it "returns nil from allowed messages" do
dbl = double("Some Collaborator")
allow(dbl).to receive(:foo)
expect(dbl.foo).to be_nil
end
end
```
#### Constraints
```ruby
expect(...).to receive(...).once
expect(...).to receive(...).twice
expect(...).to receive(...).exactly(n).times
expect(...).to receive(...).at_least(:once)
expect(...).to receive(...).at_least(:twice)
expect(...).to receive(...).at_least(n).times
expect(...).to receive(...).at_most(:once)
expect(...).to receive(...).at_most(:twice)
expect(...).to receive(...).at_most(n).times
```
#### Spies
```ruby
describe "have_received" do
it "passes when the message has been received" do
invitation = spy('invitation')
invitation.deliver
expect(invitation).to have_received(:deliver)
end
end
```
#### Dealing with legacy code
```ruby
allow_any_instance_of(Widget).to receive(:name).and_return("Wibble")
```
```ruby
expect_any_instance_of(Widget).to receive(:name).and_return("Wobble")
```
### rspec-expectations
Collection of assertions that lets you express expected outcomes on an object in an example.
#### Equality matchers
```ruby
describe "a string" do
it "is equal to another string of the same value" do
expect("this string").to eq("this string")
end
it "is not equal to another string of a different value" do
expect("this string").not_to eq("a different string")
end
end
```
#### Comparison matchers
```ruby
describe 18 do
it { is_expected.to be < 20 }
it { is_expected.to be > 15 }
it { is_expected.to be <= 19 }
it { is_expected.to be >= 17 }
# deliberate failures
it { is_expected.to be < 15 }
it { is_expected.to be > 20 }
it { is_expected.to be <= 17 }
it { is_expected.to be >= 19 }
end
```
#### Predicate matchers
```ruby
# calls 7.zero?
expect(7).not_to be_zero
# calls [].empty?
expect([]).to be_empty
# calls x.multiple_of?(3)
expect(x).to be_multiple_of(3)
```
#### Type matchers
```ruby
expect(obj).to be_a_kind_of(type)
expect([1, 3, 5]).to all( be_odd )
expect(obj).to be_truthy
expect(obj).to be_nil
expect(area_of_circle).to be_within(0.1).of(28.3)
expect { Counter.increment }.to change{Counter.count}.from(0).to(1)
expect([1, 2, 3]).to contain_exactly(2, 3, 1)
expect(1..10).to cover(5) #@ruby-1.9
expect("this string").to end_with "string"
expect(obj).to exist
expect { raise StandardError }.to raise_error
expect(10).to satisfy { |v| v % 5 == 0 }
expect(obj).to respond_to(:foo)
specify { expect { |b| MyClass.yield_once_with(1, &b) }.to yield_control }
specify { expect { print('foo') }.to output.to_stdout }
```
#### Custom matchers
#### Composing matchers
## Best practices for RSpec but also general for testing.
### Use context
```ruby
context 'when logged in' do
it { is_expected.to respond_with 200 }
end
context 'when logged out' do
it { is_expected.to respond_with 401 }
end
```
### Have short and clean descriptions
```ruby
context 'when not valid' do
it { is_expected.to respond_with 422 }
end
```
becomes
```ruby
when not valid
it should respond with 422
```
### One assertion per test
```ruby
context "when a resource is exposed" do
it { is_expected.to respond_with_content_type(:json) }
it { is_expected.to assign_to(:resource) }
end
```
### Use subjects
```ruby
subject(:hero) { Hero.first }
it "carries a sword" do
expect(hero.equipment).to include "sword"
end
```
### Use let and let!
```ruby
describe '#type_id' do
let(:resource) { FactoryGirl.create :device }
let(:type) { Type.find resource.type_id }
it 'sets the type_id field' do
expect(resource.type_id).to equal(type.id)
end
end
```
### Mock and stub but with caution!
```ruby
# simulate a not found resource
context "when not found" do
before { allow(Resource).to receive(:where).
with(created_from: params[:id]).
and_return(false)
}
it { is_expected.to respond_with 404 }
end
```
### Use descriptive factories
```ruby
config = <<-CONFIG
filter {
mutate { add_field => { "always" => "awesome" } }
if [foo] == "bar" {
mutate { add_field => { "hello" => "world" } }
} else if [bar] == "baz" {
mutate { add_field => { "fancy" => "pants" } }
} else {
mutate { add_field => { "free" => "hugs" } }
}
}
CONFIG
```
### Your test want to be independent
...
### Structure your test layout
... | 1.0 | [WiP] Testing how-to docs, RSpec features listing - ### Background
Testing a pipeline like logstash might become a hard process, there has been a lots of improvements and simplifications for the testing of logstash, however one of the few things missing is a good document where people can refere in order to know how to write test. This issue aim to be a place to discuss and show how to write great test for logstash using the last features of rspec3, after we're happy with the content in here the objective is to generate a wiki page where everyone can go.
## RSpec 3.x features
In this section we'll do an overview of the different rspec features and might be useful for you when planning your tests suites.
### rspec-core
The rspec-core provides the structure for writing executable examples of how your code should behave.
#### describe, context and examples
```ruby
describe "something" do
context "in one context" do
it "does one thing" do
end
end
context "in another context" do
it "does another thing" do
end
end
end
```
#### Subjects
```ruby
describe "something" do
context "in one context" do
it "does one thing" do
end
end
context "in another context" do
it "does another thing" do
end
end
end
```
#### let
```ruby
$count = 0
describe "let" do
let(:count) { $count += 1 }
it "memoizes the value" do
expect(count).to eq(1)
expect(count).to eq(1)
end
it "is not cached across examples" do
expect(count).to eq(2)
end
end
```
#### Other important features
* hooks (before, around and after)
* filters
* metadata
* formaters
### rspec-mocks
A test-double framework for rspec with support for method stubs, fakes, and message expectations on generated test-doubles and real objects alike
#### Allow and expect
```ruby
describe "allow" do
it "returns nil from allowed messages" do
dbl = double("Some Collaborator")
allow(dbl).to receive(:foo)
expect(dbl.foo).to be_nil
end
end
```
#### Constraints
```ruby
expect(...).to receive(...).once
expect(...).to receive(...).twice
expect(...).to receive(...).exactly(n).times
expect(...).to receive(...).at_least(:once)
expect(...).to receive(...).at_least(:twice)
expect(...).to receive(...).at_least(n).times
expect(...).to receive(...).at_most(:once)
expect(...).to receive(...).at_most(:twice)
expect(...).to receive(...).at_most(n).times
```
#### Spies
```ruby
describe "have_received" do
it "passes when the message has been received" do
invitation = spy('invitation')
invitation.deliver
expect(invitation).to have_received(:deliver)
end
end
```
#### Dealing with legacy code
```ruby
allow_any_instance_of(Widget).to receive(:name).and_return("Wibble")
```
```ruby
expect_any_instance_of(Widget).to receive(:name).and_return("Wobble")
```
### rspec-expectations
Collection of assertions that lets you express expected outcomes on an object in an example.
#### Equality matchers
```ruby
describe "a string" do
it "is equal to another string of the same value" do
expect("this string").to eq("this string")
end
it "is not equal to another string of a different value" do
expect("this string").not_to eq("a different string")
end
end
```
#### Comparison matchers
```ruby
describe 18 do
it { is_expected.to be < 20 }
it { is_expected.to be > 15 }
it { is_expected.to be <= 19 }
it { is_expected.to be >= 17 }
# deliberate failures
it { is_expected.to be < 15 }
it { is_expected.to be > 20 }
it { is_expected.to be <= 17 }
it { is_expected.to be >= 19 }
end
```
#### Predicate matchers
```ruby
# calls 7.zero?
expect(7).not_to be_zero
# calls [].empty?
expect([]).to be_empty
# calls x.multiple_of?(3)
expect(x).to be_multiple_of(3)
```
#### Type matchers
```ruby
expect(obj).to be_a_kind_of(type)
expect([1, 3, 5]).to all( be_odd )
expect(obj).to be_truthy
expect(obj).to be_nil
expect(area_of_circle).to be_within(0.1).of(28.3)
expect { Counter.increment }.to change{Counter.count}.from(0).to(1)
expect([1, 2, 3]).to contain_exactly(2, 3, 1)
expect(1..10).to cover(5) #@ruby-1.9
expect("this string").to end_with "string"
expect(obj).to exist
expect { raise StandardError }.to raise_error
expect(10).to satisfy { |v| v % 5 == 0 }
expect(obj).to respond_to(:foo)
specify { expect { |b| MyClass.yield_once_with(1, &b) }.to yield_control }
specify { expect { print('foo') }.to output.to_stdout }
```
#### Custom matchers
#### Composing matchers
## Best practices for RSpec but also general for testing.
### Use context
```ruby
context 'when logged in' do
it { is_expected.to respond_with 200 }
end
context 'when logged out' do
it { is_expected.to respond_with 401 }
end
```
### Have short and clean descriptions
```ruby
context 'when not valid' do
it { is_expected.to respond_with 422 }
end
```
becomes
```ruby
when not valid
it should respond with 422
```
### One assertion per test
```ruby
context "when a resource is exposed" do
it { is_expected.to respond_with_content_type(:json) }
it { is_expected.to assign_to(:resource) }
end
```
### Use subjects
```ruby
subject(:hero) { Hero.first }
it "carries a sword" do
expect(hero.equipment).to include "sword"
end
```
### Use let and let!
```ruby
describe '#type_id' do
let(:resource) { FactoryGirl.create :device }
let(:type) { Type.find resource.type_id }
it 'sets the type_id field' do
expect(resource.type_id).to equal(type.id)
end
end
```
### Mock and stub but with caution!
```ruby
# simulate a not found resource
context "when not found" do
before { allow(Resource).to receive(:where).
with(created_from: params[:id]).
and_return(false)
}
it { is_expected.to respond_with 404 }
end
```
### Use descriptive factories
```ruby
config = <<-CONFIG
filter {
mutate { add_field => { "always" => "awesome" } }
if [foo] == "bar" {
mutate { add_field => { "hello" => "world" } }
} else if [bar] == "baz" {
mutate { add_field => { "fancy" => "pants" } }
} else {
mutate { add_field => { "free" => "hugs" } }
}
}
CONFIG
```
### Your test want to be independent
...
### Structure your test layout
... | test | testing how to docs rspec features listing background testing a pipeline like logstash might become a hard process there has been a lots of improvements and simplifications for the testing of logstash however one of the few things missing is a good document where people can refere in order to know how to write test this issue aim to be a place to discuss and show how to write great test for logstash using the last features of after we re happy with the content in here the objective is to generate a wiki page where everyone can go rspec x features in this section we ll do an overview of the different rspec features and might be useful for you when planning your tests suites rspec core the rspec core provides the structure for writing executable examples of how your code should behave describe context and examples ruby describe something do context in one context do it does one thing do end end context in another context do it does another thing do end end end subjects ruby describe something do context in one context do it does one thing do end end context in another context do it does another thing do end end end let ruby count describe let do let count count it memoizes the value do expect count to eq expect count to eq end it is not cached across examples do expect count to eq end end other important features hooks before around and after filters metadata formaters rspec mocks a test double framework for rspec with support for method stubs fakes and message expectations on generated test doubles and real objects alike allow and expect ruby describe allow do it returns nil from allowed messages do dbl double some collaborator allow dbl to receive foo expect dbl foo to be nil end end constraints ruby expect to receive once expect to receive twice expect to receive exactly n times expect to receive at least once expect to receive at least twice expect to receive at least n times expect to receive at most once expect to receive at most twice expect to receive at most n times spies ruby describe have received do it passes when the message has been received do invitation spy invitation invitation deliver expect invitation to have received deliver end end dealing with legacy code ruby allow any instance of widget to receive name and return wibble ruby expect any instance of widget to receive name and return wobble rspec expectations collection of assertions that lets you express expected outcomes on an object in an example equality matchers ruby describe a string do it is equal to another string of the same value do expect this string to eq this string end it is not equal to another string of a different value do expect this string not to eq a different string end end comparison matchers ruby describe do it is expected to be it is expected to be it is expected to be it is expected to be deliberate failures it is expected to be it is expected to be it is expected to be it is expected to be end predicate matchers ruby calls zero expect not to be zero calls empty expect to be empty calls x multiple of expect x to be multiple of type matchers ruby expect obj to be a kind of type expect to all be odd expect obj to be truthy expect obj to be nil expect area of circle to be within of expect counter increment to change counter count from to expect to contain exactly expect to cover ruby expect this string to end with string expect obj to exist expect raise standarderror to raise error expect to satisfy v v expect obj to respond to foo specify expect b myclass yield once with b to yield control specify expect print foo to output to stdout custom matchers composing matchers best practices for rspec but also general for testing use context ruby context when logged in do it is expected to respond with end context when logged out do it is expected to respond with end have short and clean descriptions ruby context when not valid do it is expected to respond with end becomes ruby when not valid it should respond with one assertion per test ruby context when a resource is exposed do it is expected to respond with content type json it is expected to assign to resource end use subjects ruby subject hero hero first it carries a sword do expect hero equipment to include sword end use let and let ruby describe type id do let resource factorygirl create device let type type find resource type id it sets the type id field do expect resource type id to equal type id end end mock and stub but with caution ruby simulate a not found resource context when not found do before allow resource to receive where with created from params and return false it is expected to respond with end use descriptive factories ruby config config filter mutate add field always awesome if bar mutate add field hello world else if baz mutate add field fancy pants else mutate add field free hugs config your test want to be independent structure your test layout | 1 |
315,449 | 27,074,586,532 | IssuesEvent | 2023-02-14 09:41:45 | gradido/gradido | https://api.github.com/repos/gradido/gradido | closed | 🔧 [Refactor] linting of end-to-end test code | refactor test | ## 🔧 Refactor ticket
The end-to-end test code should have the same linting rules set as the rest of the repository.
| 1.0 | 🔧 [Refactor] linting of end-to-end test code - ## 🔧 Refactor ticket
The end-to-end test code should have the same linting rules set as the rest of the repository.
| test | 🔧 linting of end to end test code 🔧 refactor ticket the end to end test code should have the same linting rules set as the rest of the repository | 1 |
173,587 | 13,431,560,385 | IssuesEvent | 2020-09-07 07:11:45 | h-makoto0212/googleFormToKintone | https://api.github.com/repos/h-makoto0212/googleFormToKintone | closed | Destroyテスト | test | # 課題
`kintoneManager.destroy()`が正常に機能していることを確認する
# 手順
- `destroy.js`を作成する
- `destroy.js`と`kintoneManager.js`をGoogleApplicationScriptに登録し、実行する
- kintoneの指定したレコードが削除されていること | 1.0 | Destroyテスト - # 課題
`kintoneManager.destroy()`が正常に機能していることを確認する
# 手順
- `destroy.js`を作成する
- `destroy.js`と`kintoneManager.js`をGoogleApplicationScriptに登録し、実行する
- kintoneの指定したレコードが削除されていること | test | destroyテスト 課題 kintonemanager destroy が正常に機能していることを確認する 手順 destroy js を作成する destroy js と kintonemanager js をgoogleapplicationscriptに登録し、実行する kintoneの指定したレコードが削除されていること | 1 |
306,637 | 26,485,584,311 | IssuesEvent | 2023-01-17 17:44:07 | dart-lang/co19 | https://api.github.com/repos/dart-lang/co19 | closed | LanguageFeatures/Patterns/map_A04_t02 | bad-test | This test uses instances of class `C` as keys in the maps. Class `C` has a user-defined `operator ==`, but it doesn't define matching `hashCode`, violating the following contract:
https://github.com/dart-lang/sdk/blob/9a7d8857ec81590f9c940f1e15c6923e3ab3453a/sdk/lib/core/object.dart#L65-L71
So, map lookup for `c1` is not guaranteed to find an element which was put into the map with the key `c2` (c1 == c2, but most likely c1.hashCode != c2.hashCode).
https://github.com/dart-lang/co19/blob/780034a9476ea8e9de6d2eecf33e6e0f02a2c220/LanguageFeatures/Patterns/map_A04_t02.dart#L44-L55
https://github.com/dart-lang/co19/blob/780034a9476ea8e9de6d2eecf33e6e0f02a2c220/LanguageFeatures/Patterns/map_A04_t02.dart#L98
https://github.com/dart-lang/co19/blob/780034a9476ea8e9de6d2eecf33e6e0f02a2c220/LanguageFeatures/Patterns/map_A04_t02.dart#L63 | 1.0 | LanguageFeatures/Patterns/map_A04_t02 - This test uses instances of class `C` as keys in the maps. Class `C` has a user-defined `operator ==`, but it doesn't define matching `hashCode`, violating the following contract:
https://github.com/dart-lang/sdk/blob/9a7d8857ec81590f9c940f1e15c6923e3ab3453a/sdk/lib/core/object.dart#L65-L71
So, map lookup for `c1` is not guaranteed to find an element which was put into the map with the key `c2` (c1 == c2, but most likely c1.hashCode != c2.hashCode).
https://github.com/dart-lang/co19/blob/780034a9476ea8e9de6d2eecf33e6e0f02a2c220/LanguageFeatures/Patterns/map_A04_t02.dart#L44-L55
https://github.com/dart-lang/co19/blob/780034a9476ea8e9de6d2eecf33e6e0f02a2c220/LanguageFeatures/Patterns/map_A04_t02.dart#L98
https://github.com/dart-lang/co19/blob/780034a9476ea8e9de6d2eecf33e6e0f02a2c220/LanguageFeatures/Patterns/map_A04_t02.dart#L63 | test | languagefeatures patterns map this test uses instances of class c as keys in the maps class c has a user defined operator but it doesn t define matching hashcode violating the following contract so map lookup for is not guaranteed to find an element which was put into the map with the key but most likely hashcode hashcode | 1 |
490,925 | 14,142,167,987 | IssuesEvent | 2020-11-10 13:43:46 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.2 develop-109] Add information about minimum value for Craft resource/time multiplier | Category: UI Priority: Low | here:

and here:

| 1.0 | [0.9.2 develop-109] Add information about minimum value for Craft resource/time multiplier - here:

and here:

| non_test | add information about minimum value for craft resource time multiplier here and here | 0 |
317,611 | 27,247,920,486 | IssuesEvent | 2023-02-22 04:43:59 | harvester/harvester | https://api.github.com/repos/harvester/harvester | opened | [BUG] The preset namespace on resource edit page is not correct | kind/bug area/ui priority/2 severity/3 reproduce/always not-require/test-plan | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Steps to reproduce the behavior:
1. Go to create a Secret in `cattle-logging-system`.
2. Go to Volume create page.
3. The current namespace is `cattle-logging-system`
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
We can not create Volume in `cattle-logging-system`, we should improve the filter logic of namespace dropdown component.
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
**Environment**
- Harvester ISO version:
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):
**Additional context**
Add any other context about the problem here.
| 1.0 | [BUG] The preset namespace on resource edit page is not correct - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Steps to reproduce the behavior:
1. Go to create a Secret in `cattle-logging-system`.
2. Go to Volume create page.
3. The current namespace is `cattle-logging-system`
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
We can not create Volume in `cattle-logging-system`, we should improve the filter logic of namespace dropdown component.
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
**Environment**
- Harvester ISO version:
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):
**Additional context**
Add any other context about the problem here.
| test | the preset namespace on resource edit page is not correct describe the bug to reproduce steps to reproduce the behavior go to create a secret in cattle logging system go to volume create page the current namespace is cattle logging system expected behavior we can not create volume in cattle logging system we should improve the filter logic of namespace dropdown component support bundle you can generate a support bundle in the bottom of harvester ui it includes logs and configurations that help diagnose the issue tokens passwords and secrets are automatically removed from support bundles if you feel it s not appropriate to share the bundle files publicly please consider wait for a developer to reach you and provide the bundle file by any secure methods join our slack community to provide the bundle send the bundle to harvester support bundle suse com with the correct issue id environment harvester iso version underlying infrastructure e g baremetal with dell poweredge additional context add any other context about the problem here | 1 |
284,387 | 24,595,259,833 | IssuesEvent | 2022-10-14 07:45:56 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | v4 Beta 2 Win64 - Windows Desktop exported EXE Does Not Run On Linux WINE? | bug platform:windows platform:linuxbsd topic:porting needs testing crash | ### Godot version
v4.0.beta2.official [f8745f2f7]
### System information
Windows 11 Pro 64Bit/Linux Mint 21 Cinnamon 64Bit
### Issue description
Hi,
Version: v4.0.beta2.official [f8745f2f7] does not build
a Windows Desktop EXE that runs on Linux WINE version 7.x ?
Can this be fixed in Beta 3 ?
Jesse
### Steps to reproduce
Run Godot v4.0.beta2.official [f8745f2f7]
On Windows 11 Pro 64Bit
Then export game to Windows Desktop EXE executable.
Copy game folder to a Linux and run in WINE 6.x+.
WINE crashes:
________________________________________________________________
```
Unhandled exception: page fault on read access to 0x0000000000000060 in 64-bit code (0x000000014262bbb7).
Register dump:
rip:000000014262bbb7 rsp:000000000081f260 rbp:000000000081f8f0 eflags:00010206 ( R- -- I - -P- )
rax:0000000000000000 rbx:0000000009da85e0 rcx:0000000000000000 rdx:0000000009d40000
rsi:0000000142599690 rdi:0000000009da85f8 r8:0000000009da89f0 r9:0000000000000000 r10:0000000009f60000
r11:0000000009da8b78 r12:000000000081f320 r13:000000000081f310 r14:0000000009da85e8 r15:0000000009da89f0
Stack dump:
0x000000000081f260: 000000000081f348 000000017002a77b
0x000000000081f270: 0000000000000002 00000001429862ef
0x000000000081f280: 000000000081f338 000000014000cf99
0x000000000081f290: 0000000000000001 00000001c8dd5c5e
0x000000000081f2a0: 0000000000000002 0000000000000002
0x000000000081f2b0: 0000000000000002 0000000000000002
0x000000000081f2c0: 000000000081f328 00000001429b2f44
0x000000000081f2d0: 000000000081f8f0 000000000081f9b0
0x000000000081f2e0: 0000000143709fa0 000000000081f990
0x000000000081f2f0: 000000000081f970 00000001c8dd5c84
0x000000000081f300: 0000000009da8470 0000000000000000
0x000000000081f310: 0000000000000000 0000000000000005
Backtrace:
=>0 0x000000014262bbb7 EntryPoint+0x262a6f7() in tetristory (0x000000000081f8f0)
1 0x0000000142df67b9 EntryPoint+0x2df52f9() in tetristory (0x000000000081f8f0)
2 0x000000014006c9b6 EntryPoint+0x6b4f6() in tetristory (0x000000000085b320)
3 0x00000001430ef0d5 EntryPoint+0x30edc15() in tetristory (0x0000000000000001)
4 0x00000001400013c1 in tetristory (+0x13c1) (0x0000000000000001)
5 0x00000001400014d6 EntryPoint+0x16() in tetristory (0x0000000000000000)
6 0x000000007b62c599 ActivateActCtx+0x20a19() in kernel32 (0x0000000000000000)
7 0x0000000170059d53 A_SHAFinal+0x37e63() in ntdll (0x0000000000000000)
0x000000014262bbb7 tetristory+0x262bbb7: movq 0x0000000000000060(%r9),%rcx
Modules:
Module Address Debug info Name (36 modules)
PE 000000007a850000-000000007a854000 Deferred opengl32
PE 000000007b000000-000000007b0d5000 Deferred kernelbase
PE 000000007b600000-000000007b812000 Export kernel32
PE 0000000140000000-0000000143e03000 Export tetristory
PE 0000000170000000-000000017009a000 Export ntdll
PE 00000001c69e0000-00000001c72fc000 Deferred shell32
PE 00000001c8b40000-00000001c8b60000 Deferred msacm32
PE 00000001c8db0000-00000001c8e46000 Deferred msvcrt
PE 00000001d7cb0000-00000001d7cc1000 Deferred wsock32
PE 00000001ec2b0000-00000001ec2d5000 Deferred ws2_32
PE 00000001f51e0000-00000001f51f0000 Deferred hid
PE 000000021a7e0000-000000021a855000 Deferred setupapi
PE 0000000231ae0000-0000000231b62000 Deferred rpcrt4
PE 000000023d820000-000000023da6a000 Deferred user32
PE 0000000240030000-000000024005e000 Deferred iphlpapi
PE 000000025d740000-000000025d74e000 Deferred dwmapi
PE 000000026b4c0000-000000026b53b000 Deferred gdi32
PE 000000028dfa0000-000000028dfac000 Deferred nsi
PE 000000029cfc0000-000000029cfd6000 Deferred dnsapi
PE 00000002bb750000-00000002bb890000 Deferred comctl32
PE 00000002d4d40000-00000002d4d56000 Deferred bcrypt
PE 00000002e3540000-00000002e3591000 Deferred shlwapi
PE 00000002e8f10000-00000002e902b000 Deferred ole32
PE 00000002f1fa0000-00000002f1fad000 Deferred version
PE 00000002f2930000-00000002f293c000 Deferred avrt
PE 0000000308050000-000000030809b000 Deferred dinput8
PE 000000030a950000-000000030a9c1000 Deferred dwrite
PE 00000003126f0000-0000000312709000 Deferred shcore
PE 0000000327020000-0000000327072000 Deferred combase
PE 000000032a700000-000000032a729000 Deferred sechost
PE 0000000330260000-000000033029f000 Deferred advapi32
PE 0000000375610000-0000000375648000 Deferred win32u
PE 00000003af670000-00000003af72f000 Deferred ucrtbase
PE 00000003afd00000-00000003afd1a000 Deferred imm32
PE 00000003b8f00000-00000003b8fc1000 Deferred winmm
PE 00007f11f4080000-00007f11f4084000 Deferred winex11
Threads:
process tid prio (all id:s are in hex)
00000038 services.exe
0000003c 0
00000040 0
0000004c 0
00000050 0
00000074 0
00000098 0
000000b0 0
000000d4 0
000000d8 0
00000044 winedevice.exe
00000048 0
00000054 0
00000058 0
0000005c 0
00000060 0
000000bc 0
00000064 winedevice.exe
00000068 0
00000078 0
0000007c 0
00000080 0
00000084 0
00000088 0
0000008c 0
0000006c explorer.exe
00000070 0
000000c0 0
000000c4 0
00000090 plugplay.exe
00000094 0
0000009c 0
000000a0 0
000000a4 0
000000a8 svchost.exe
000000ac 0
000000b4 0
000000b8 0
000000cc rpcss.exe
000000d0 0
000000dc 0
000000e0 0
000000e4 0
000000e8 0
000000ec 0
000000f0 0
000000fc (D) Z:\home\jlp\Desktop\TS3-G4-Win64\TetriStory.exe
00000100 0 <==
00000104 0
00000108 0
0000010c 0
00000110 0
00000114 0
00000118 0
0000011c 0
00000120 0
00000124 0
00000128 0
0000012c 0
00000130 0
00000134 0
00000138 0
0000013c 0
00000140 0
00000144 0
00000150 0
System information:
Wine build: wine-7.0
Platform: x86_64
Version: Windows 7
Host system: Linux
Host version: 5.15.0-50-generic
```
________________________________________________________________
### Minimal reproduction project
https://github.com/BetaMaxHero/GDScript_Godot_4_T-Story | 1.0 | v4 Beta 2 Win64 - Windows Desktop exported EXE Does Not Run On Linux WINE? - ### Godot version
v4.0.beta2.official [f8745f2f7]
### System information
Windows 11 Pro 64Bit/Linux Mint 21 Cinnamon 64Bit
### Issue description
Hi,
Version: v4.0.beta2.official [f8745f2f7] does not build
a Windows Desktop EXE that runs on Linux WINE version 7.x ?
Can this be fixed in Beta 3 ?
Jesse
### Steps to reproduce
Run Godot v4.0.beta2.official [f8745f2f7]
On Windows 11 Pro 64Bit
Then export game to Windows Desktop EXE executable.
Copy game folder to a Linux and run in WINE 6.x+.
WINE crashes:
________________________________________________________________
```
Unhandled exception: page fault on read access to 0x0000000000000060 in 64-bit code (0x000000014262bbb7).
Register dump:
rip:000000014262bbb7 rsp:000000000081f260 rbp:000000000081f8f0 eflags:00010206 ( R- -- I - -P- )
rax:0000000000000000 rbx:0000000009da85e0 rcx:0000000000000000 rdx:0000000009d40000
rsi:0000000142599690 rdi:0000000009da85f8 r8:0000000009da89f0 r9:0000000000000000 r10:0000000009f60000
r11:0000000009da8b78 r12:000000000081f320 r13:000000000081f310 r14:0000000009da85e8 r15:0000000009da89f0
Stack dump:
0x000000000081f260: 000000000081f348 000000017002a77b
0x000000000081f270: 0000000000000002 00000001429862ef
0x000000000081f280: 000000000081f338 000000014000cf99
0x000000000081f290: 0000000000000001 00000001c8dd5c5e
0x000000000081f2a0: 0000000000000002 0000000000000002
0x000000000081f2b0: 0000000000000002 0000000000000002
0x000000000081f2c0: 000000000081f328 00000001429b2f44
0x000000000081f2d0: 000000000081f8f0 000000000081f9b0
0x000000000081f2e0: 0000000143709fa0 000000000081f990
0x000000000081f2f0: 000000000081f970 00000001c8dd5c84
0x000000000081f300: 0000000009da8470 0000000000000000
0x000000000081f310: 0000000000000000 0000000000000005
Backtrace:
=>0 0x000000014262bbb7 EntryPoint+0x262a6f7() in tetristory (0x000000000081f8f0)
1 0x0000000142df67b9 EntryPoint+0x2df52f9() in tetristory (0x000000000081f8f0)
2 0x000000014006c9b6 EntryPoint+0x6b4f6() in tetristory (0x000000000085b320)
3 0x00000001430ef0d5 EntryPoint+0x30edc15() in tetristory (0x0000000000000001)
4 0x00000001400013c1 in tetristory (+0x13c1) (0x0000000000000001)
5 0x00000001400014d6 EntryPoint+0x16() in tetristory (0x0000000000000000)
6 0x000000007b62c599 ActivateActCtx+0x20a19() in kernel32 (0x0000000000000000)
7 0x0000000170059d53 A_SHAFinal+0x37e63() in ntdll (0x0000000000000000)
0x000000014262bbb7 tetristory+0x262bbb7: movq 0x0000000000000060(%r9),%rcx
Modules:
Module Address Debug info Name (36 modules)
PE 000000007a850000-000000007a854000 Deferred opengl32
PE 000000007b000000-000000007b0d5000 Deferred kernelbase
PE 000000007b600000-000000007b812000 Export kernel32
PE 0000000140000000-0000000143e03000 Export tetristory
PE 0000000170000000-000000017009a000 Export ntdll
PE 00000001c69e0000-00000001c72fc000 Deferred shell32
PE 00000001c8b40000-00000001c8b60000 Deferred msacm32
PE 00000001c8db0000-00000001c8e46000 Deferred msvcrt
PE 00000001d7cb0000-00000001d7cc1000 Deferred wsock32
PE 00000001ec2b0000-00000001ec2d5000 Deferred ws2_32
PE 00000001f51e0000-00000001f51f0000 Deferred hid
PE 000000021a7e0000-000000021a855000 Deferred setupapi
PE 0000000231ae0000-0000000231b62000 Deferred rpcrt4
PE 000000023d820000-000000023da6a000 Deferred user32
PE 0000000240030000-000000024005e000 Deferred iphlpapi
PE 000000025d740000-000000025d74e000 Deferred dwmapi
PE 000000026b4c0000-000000026b53b000 Deferred gdi32
PE 000000028dfa0000-000000028dfac000 Deferred nsi
PE 000000029cfc0000-000000029cfd6000 Deferred dnsapi
PE 00000002bb750000-00000002bb890000 Deferred comctl32
PE 00000002d4d40000-00000002d4d56000 Deferred bcrypt
PE 00000002e3540000-00000002e3591000 Deferred shlwapi
PE 00000002e8f10000-00000002e902b000 Deferred ole32
PE 00000002f1fa0000-00000002f1fad000 Deferred version
PE 00000002f2930000-00000002f293c000 Deferred avrt
PE 0000000308050000-000000030809b000 Deferred dinput8
PE 000000030a950000-000000030a9c1000 Deferred dwrite
PE 00000003126f0000-0000000312709000 Deferred shcore
PE 0000000327020000-0000000327072000 Deferred combase
PE 000000032a700000-000000032a729000 Deferred sechost
PE 0000000330260000-000000033029f000 Deferred advapi32
PE 0000000375610000-0000000375648000 Deferred win32u
PE 00000003af670000-00000003af72f000 Deferred ucrtbase
PE 00000003afd00000-00000003afd1a000 Deferred imm32
PE 00000003b8f00000-00000003b8fc1000 Deferred winmm
PE 00007f11f4080000-00007f11f4084000 Deferred winex11
Threads:
process tid prio (all id:s are in hex)
00000038 services.exe
0000003c 0
00000040 0
0000004c 0
00000050 0
00000074 0
00000098 0
000000b0 0
000000d4 0
000000d8 0
00000044 winedevice.exe
00000048 0
00000054 0
00000058 0
0000005c 0
00000060 0
000000bc 0
00000064 winedevice.exe
00000068 0
00000078 0
0000007c 0
00000080 0
00000084 0
00000088 0
0000008c 0
0000006c explorer.exe
00000070 0
000000c0 0
000000c4 0
00000090 plugplay.exe
00000094 0
0000009c 0
000000a0 0
000000a4 0
000000a8 svchost.exe
000000ac 0
000000b4 0
000000b8 0
000000cc rpcss.exe
000000d0 0
000000dc 0
000000e0 0
000000e4 0
000000e8 0
000000ec 0
000000f0 0
000000fc (D) Z:\home\jlp\Desktop\TS3-G4-Win64\TetriStory.exe
00000100 0 <==
00000104 0
00000108 0
0000010c 0
00000110 0
00000114 0
00000118 0
0000011c 0
00000120 0
00000124 0
00000128 0
0000012c 0
00000130 0
00000134 0
00000138 0
0000013c 0
00000140 0
00000144 0
00000150 0
System information:
Wine build: wine-7.0
Platform: x86_64
Version: Windows 7
Host system: Linux
Host version: 5.15.0-50-generic
```
________________________________________________________________
### Minimal reproduction project
https://github.com/BetaMaxHero/GDScript_Godot_4_T-Story | test | beta windows desktop exported exe does not run on linux wine godot version official system information windows pro linux mint cinnamon issue description hi version official does not build a windows desktop exe that runs on linux wine version x can this be fixed in beta jesse steps to reproduce run godot official on windows pro then export game to windows desktop exe executable copy game folder to a linux and run in wine x wine crashes unhandled exception page fault on read access to in bit code register dump rip rsp rbp eflags r i p rax rbx rcx rdx rsi rdi stack dump backtrace entrypoint in tetristory entrypoint in tetristory entrypoint in tetristory entrypoint in tetristory in tetristory entrypoint in tetristory activateactctx in a shafinal in ntdll tetristory movq rcx modules module address debug info name modules pe deferred pe deferred kernelbase pe export pe export tetristory pe export ntdll pe deferred pe deferred pe deferred msvcrt pe deferred pe deferred pe deferred hid pe deferred setupapi pe deferred pe deferred pe deferred iphlpapi pe deferred dwmapi pe deferred pe deferred nsi pe deferred dnsapi pe deferred pe deferred bcrypt pe deferred shlwapi pe deferred pe deferred version pe deferred avrt pe deferred pe deferred dwrite pe deferred shcore pe deferred combase pe deferred sechost pe deferred pe deferred pe deferred ucrtbase pe deferred pe deferred winmm pe deferred threads process tid prio all id s are in hex services exe winedevice exe winedevice exe explorer exe plugplay exe svchost exe rpcss exe d z home jlp desktop tetristory exe system information wine build wine platform version windows host system linux host version generic minimal reproduction project | 1 |
40,161 | 5,280,250,523 | IssuesEvent | 2017-02-07 13:45:37 | leoponti/dufry | https://api.github.com/repos/leoponti/dufry | closed | Sist- Administracion- gnerarr excel de Packs-PRIORIDAD | Argentina VOLVER A TESTEAR | Hola , te adjunto el documento , esta fallando al intentar armar la hoja 2
[excel pack_enviar.docx](https://github.com/leoponti/dufry/files/729697/excel.pack_enviar.docx)
| 1.0 | Sist- Administracion- gnerarr excel de Packs-PRIORIDAD - Hola , te adjunto el documento , esta fallando al intentar armar la hoja 2
[excel pack_enviar.docx](https://github.com/leoponti/dufry/files/729697/excel.pack_enviar.docx)
| test | sist administracion gnerarr excel de packs prioridad hola te adjunto el documento esta fallando al intentar armar la hoja | 1 |
316,625 | 27,171,434,103 | IssuesEvent | 2023-02-17 19:50:52 | iho-ohi/S-101_Portrayal-Catalogue | https://api.github.com/repos/iho-ohi/S-101_Portrayal-Catalogue | closed | symbol SOUNDSC2 | enhancement test PC 1.1.0 | The symbol SOUNDSC2 for low accurate soundings inferior or equal to the safety depth does not have the correct color.
Proposal : In the definition of the symbol SOUNDSC2 Change the color SNDG1 to SNDG2. | 1.0 | symbol SOUNDSC2 - The symbol SOUNDSC2 for low accurate soundings inferior or equal to the safety depth does not have the correct color.
Proposal : In the definition of the symbol SOUNDSC2 Change the color SNDG1 to SNDG2. | test | symbol the symbol for low accurate soundings inferior or equal to the safety depth does not have the correct color proposal in the definition of the symbol change the color to | 1 |
264 | 5,105,283,853 | IssuesEvent | 2017-01-05 06:30:29 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | [FVT]Failed to run rmvm when setup SN in x86_64 for rh7.3 and rh6.8 | component:automation priority:high | Using latest daily build to install SN, when run below command, case failed.
```
RUN:if [ "x86_64" != "ppc64" -a "kvm" != "ipmi" ];then if [[ "dir:///var/lib/libvirt/images/" =~ "phy" ]]; then rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03; else rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03 -s 20G; fi;fi
[if [ "x86_64" != "ppc64" -a "kvm" != "ipmi" ];then if [[ "dir:///var/lib/libvirt/images/" =~ "phy" ]]; then rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03; else rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03 -s 20G; fi;fi] Running Time:1 sec
RETURN rc = 1
OUTPUT:
c910f04x18v03: Error: Cannot remove guest vm, no such vm found
```
| 1.0 | [FVT]Failed to run rmvm when setup SN in x86_64 for rh7.3 and rh6.8 - Using latest daily build to install SN, when run below command, case failed.
```
RUN:if [ "x86_64" != "ppc64" -a "kvm" != "ipmi" ];then if [[ "dir:///var/lib/libvirt/images/" =~ "phy" ]]; then rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03; else rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03 -s 20G; fi;fi
[if [ "x86_64" != "ppc64" -a "kvm" != "ipmi" ];then if [[ "dir:///var/lib/libvirt/images/" =~ "phy" ]]; then rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03; else rmvm c910f04x18v03 -f -p && mkvm c910f04x18v03 -s 20G; fi;fi] Running Time:1 sec
RETURN rc = 1
OUTPUT:
c910f04x18v03: Error: Cannot remove guest vm, no such vm found
```
| non_test | failed to run rmvm when setup sn in for and using latest daily build to install sn when run below command case failed run if then if then rmvm f p mkvm else rmvm f p mkvm s fi fi then if then rmvm f p mkvm else rmvm f p mkvm s fi fi running time sec return rc output error cannot remove guest vm no such vm found | 0 |
13,353 | 8,197,867,932 | IssuesEvent | 2018-08-31 14:40:29 | hyperapp/hyperapp | https://api.github.com/repos/hyperapp/hyperapp | closed | Rewrite patch algo / improve diffing performance | Feature Performance | I want to rewrite patch to incorporate some prefix/suffix trimming techniques as described in the first link posted below.
Hyperapp unkeyed and keyed fares relatively well according to [js-framework-benchmarks](https://github.com/krausest/js-framework-benchmark) (arguably the best benchmark out there and by far), but we could do a bit better and still get away with our 1 KB proposition.
- https://github.com/hyperapp/hyperapp/issues/216#issuecomment-314051357
- https://github.com/yelouafi/petit-dom/blob/master/src/vdom.js
| True | Rewrite patch algo / improve diffing performance - I want to rewrite patch to incorporate some prefix/suffix trimming techniques as described in the first link posted below.
Hyperapp unkeyed and keyed fares relatively well according to [js-framework-benchmarks](https://github.com/krausest/js-framework-benchmark) (arguably the best benchmark out there and by far), but we could do a bit better and still get away with our 1 KB proposition.
- https://github.com/hyperapp/hyperapp/issues/216#issuecomment-314051357
- https://github.com/yelouafi/petit-dom/blob/master/src/vdom.js
| non_test | rewrite patch algo improve diffing performance i want to rewrite patch to incorporate some prefix suffix trimming techniques as described in the first link posted below hyperapp unkeyed and keyed fares relatively well according to arguably the best benchmark out there and by far but we could do a bit better and still get away with our kb proposition | 0 |
181,180 | 14,855,647,085 | IssuesEvent | 2021-01-18 13:04:58 | poloz-lab/PADIVAR-Hardware | https://api.github.com/repos/poloz-lab/PADIVAR-Hardware | closed | waitingForConnection in ServerSocket has no return field | documentation | waitingForConnection function in ServerSocket class has no \return field in the doxygen documentation but it returns a ClientSocket object. | 1.0 | waitingForConnection in ServerSocket has no return field - waitingForConnection function in ServerSocket class has no \return field in the doxygen documentation but it returns a ClientSocket object. | non_test | waitingforconnection in serversocket has no return field waitingforconnection function in serversocket class has no return field in the doxygen documentation but it returns a clientsocket object | 0 |
15 | 2,490,249,351 | IssuesEvent | 2015-01-02 11:25:50 | tomchristie/mkdocs | https://api.github.com/repos/tomchristie/mkdocs | opened | Document all configuration options | Documentation | Some of the [configuration options](https://github.com/tomchristie/mkdocs/blob/0.11.1/mkdocs/config.py#L10) are not documented.
- `copyright`
- `google_analytics` (only mentioned in the release-notes)
- `repo_name`
- `extra_css` (only mentioned in the release-notes)
- `extra_javascript` (only mentioned in the release-notes)
- `include_nav`
- `include_next_prev`
- `include_search` - but this isn't valid once #222 lands
- `include_sitemap` - this feature doesn't exist next. | 1.0 | Document all configuration options - Some of the [configuration options](https://github.com/tomchristie/mkdocs/blob/0.11.1/mkdocs/config.py#L10) are not documented.
- `copyright`
- `google_analytics` (only mentioned in the release-notes)
- `repo_name`
- `extra_css` (only mentioned in the release-notes)
- `extra_javascript` (only mentioned in the release-notes)
- `include_nav`
- `include_next_prev`
- `include_search` - but this isn't valid once #222 lands
- `include_sitemap` - this feature doesn't exist next. | non_test | document all configuration options some of the are not documented copyright google analytics only mentioned in the release notes repo name extra css only mentioned in the release notes extra javascript only mentioned in the release notes include nav include next prev include search but this isn t valid once lands include sitemap this feature doesn t exist next | 0 |
94,627 | 8,507,087,940 | IssuesEvent | 2018-10-30 18:08:52 | beesEX/be | https://api.github.com/repos/beesEX/be | closed | [be/test] test controller to reset all test data POST /test/reset | BE test | Summary:
Create a controller with a handler function mapped on POST /test/reset to reset all test data in mongodb and prepair new test data for new test session.
TODO:
- create `src/test/test.controller.js` with handler function `reset` mapped on **POST /test/reset** which removes all documents of following collections: **orders**, **trades**, **transactions**, **ohlcv1m**, **ohlcv5m**, **ohlcv60m** and deposits for the called user 100000 BTC and 650000000 USDT. **POST /test/reset** is a secured route. Return 200 if everthing goes fine. | 1.0 | [be/test] test controller to reset all test data POST /test/reset - Summary:
Create a controller with a handler function mapped on POST /test/reset to reset all test data in mongodb and prepair new test data for new test session.
TODO:
- create `src/test/test.controller.js` with handler function `reset` mapped on **POST /test/reset** which removes all documents of following collections: **orders**, **trades**, **transactions**, **ohlcv1m**, **ohlcv5m**, **ohlcv60m** and deposits for the called user 100000 BTC and 650000000 USDT. **POST /test/reset** is a secured route. Return 200 if everthing goes fine. | test | test controller to reset all test data post test reset summary create a controller with a handler function mapped on post test reset to reset all test data in mongodb and prepair new test data for new test session todo create src test test controller js with handler function reset mapped on post test reset which removes all documents of following collections orders trades transactions and deposits for the called user btc and usdt post test reset is a secured route return if everthing goes fine | 1 |
298,140 | 25,792,784,135 | IssuesEvent | 2022-12-10 08:18:57 | neondatabase/neon | https://api.github.com/repos/neondatabase/neon | closed | test_remote_storage_backup_and_restore is flaky: Regex pattern 'tenant xxx already exists, state: Broken' does not match 'tenant xxx already exists, state: Attaching'. | t/bug a/test/flaky | ```
2022-12-09T16:10:58.5890037Z =================================== FAILURES ===================================
2022-12-09T16:10:58.5890555Z _______________ test_remote_storage_backup_and_restore[real_s3] ________________
2022-12-09T16:10:58.5892441Z [gw3] linux -- Python 3.9.2 /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/bin/python
2022-12-09T16:10:58.5893018Z test_runner/fixtures/neon_fixtures.py:1064: in verbose_error
2022-12-09T16:10:58.5893388Z res.raise_for_status()
2022-12-09T16:10:58.5894122Z /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/lib/python3.9/site-packages/requests/models.py:1021: in raise_for_status
2022-12-09T16:10:58.5894630Z raise HTTPError(http_error_msg, response=self)
2022-12-09T16:10:58.5895108Z E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:18077/v1/tenant/f741b9055abf7c666671447f1356d84c/attach
2022-12-09T16:10:58.5895405Z
2022-12-09T16:10:58.5895568Z The above exception was the direct cause of the following exception:
2022-12-09T16:10:58.5896020Z test_runner/regress/test_remote_storage.py:142: in test_remote_storage_backup_and_restore
2022-12-09T16:10:58.5896464Z client.tenant_attach(tenant_id)
2022-12-09T16:10:58.5896799Z test_runner/fixtures/neon_fixtures.py:1118: in tenant_attach
2022-12-09T16:10:58.5897128Z self.verbose_error(res)
2022-12-09T16:10:58.5897502Z test_runner/fixtures/neon_fixtures.py:1070: in verbose_error
2022-12-09T16:10:58.5897932Z raise PageserverApiException(msg) from e
2022-12-09T16:10:58.5898408Z E fixtures.neon_fixtures.PageserverApiException: tenant f741b9055abf7c666671447f1356d84c already exists, state: Attaching
2022-12-09T16:10:58.5901840Z
2022-12-09T16:10:58.5902075Z During handling of the above exception, another exception occurred:
2022-12-09T16:10:58.5902584Z test_runner/regress/test_remote_storage.py:142: in test_remote_storage_backup_and_restore
2022-12-09T16:10:58.5903018Z client.tenant_attach(tenant_id)
2022-12-09T16:10:58.5903932Z E AssertionError: Regex pattern 'tenant f741b9055abf7c666671447f1356d84c already exists, state: Broken' does not match 'tenant f741b9055abf7c666671447f1356d84c already exists, state: Attaching'.
```
https://github.com/neondatabase/neon/actions/runs/3658686712/jobs/6184058795 | 1.0 | test_remote_storage_backup_and_restore is flaky: Regex pattern 'tenant xxx already exists, state: Broken' does not match 'tenant xxx already exists, state: Attaching'. - ```
2022-12-09T16:10:58.5890037Z =================================== FAILURES ===================================
2022-12-09T16:10:58.5890555Z _______________ test_remote_storage_backup_and_restore[real_s3] ________________
2022-12-09T16:10:58.5892441Z [gw3] linux -- Python 3.9.2 /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/bin/python
2022-12-09T16:10:58.5893018Z test_runner/fixtures/neon_fixtures.py:1064: in verbose_error
2022-12-09T16:10:58.5893388Z res.raise_for_status()
2022-12-09T16:10:58.5894122Z /github/home/.cache/pypoetry/virtualenvs/neon-_pxWMzVK-py3.9/lib/python3.9/site-packages/requests/models.py:1021: in raise_for_status
2022-12-09T16:10:58.5894630Z raise HTTPError(http_error_msg, response=self)
2022-12-09T16:10:58.5895108Z E requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:18077/v1/tenant/f741b9055abf7c666671447f1356d84c/attach
2022-12-09T16:10:58.5895405Z
2022-12-09T16:10:58.5895568Z The above exception was the direct cause of the following exception:
2022-12-09T16:10:58.5896020Z test_runner/regress/test_remote_storage.py:142: in test_remote_storage_backup_and_restore
2022-12-09T16:10:58.5896464Z client.tenant_attach(tenant_id)
2022-12-09T16:10:58.5896799Z test_runner/fixtures/neon_fixtures.py:1118: in tenant_attach
2022-12-09T16:10:58.5897128Z self.verbose_error(res)
2022-12-09T16:10:58.5897502Z test_runner/fixtures/neon_fixtures.py:1070: in verbose_error
2022-12-09T16:10:58.5897932Z raise PageserverApiException(msg) from e
2022-12-09T16:10:58.5898408Z E fixtures.neon_fixtures.PageserverApiException: tenant f741b9055abf7c666671447f1356d84c already exists, state: Attaching
2022-12-09T16:10:58.5901840Z
2022-12-09T16:10:58.5902075Z During handling of the above exception, another exception occurred:
2022-12-09T16:10:58.5902584Z test_runner/regress/test_remote_storage.py:142: in test_remote_storage_backup_and_restore
2022-12-09T16:10:58.5903018Z client.tenant_attach(tenant_id)
2022-12-09T16:10:58.5903932Z E AssertionError: Regex pattern 'tenant f741b9055abf7c666671447f1356d84c already exists, state: Broken' does not match 'tenant f741b9055abf7c666671447f1356d84c already exists, state: Attaching'.
```
https://github.com/neondatabase/neon/actions/runs/3658686712/jobs/6184058795 | test | test remote storage backup and restore is flaky regex pattern tenant xxx already exists state broken does not match tenant xxx already exists state attaching failures test remote storage backup and restore linux python github home cache pypoetry virtualenvs neon pxwmzvk bin python test runner fixtures neon fixtures py in verbose error res raise for status github home cache pypoetry virtualenvs neon pxwmzvk lib site packages requests models py in raise for status raise httperror http error msg response self e requests exceptions httperror server error internal server error for url the above exception was the direct cause of the following exception test runner regress test remote storage py in test remote storage backup and restore client tenant attach tenant id test runner fixtures neon fixtures py in tenant attach self verbose error res test runner fixtures neon fixtures py in verbose error raise pageserverapiexception msg from e e fixtures neon fixtures pageserverapiexception tenant already exists state attaching during handling of the above exception another exception occurred test runner regress test remote storage py in test remote storage backup and restore client tenant attach tenant id e assertionerror regex pattern tenant already exists state broken does not match tenant already exists state attaching | 1 |
296,180 | 25,535,111,535 | IssuesEvent | 2022-11-29 11:21:39 | vmware-tanzu/tanzu-framework | https://api.github.com/repos/vmware-tanzu/tanzu-framework | closed | Improve the runtime for the CI check 'Main' | area/testing kind/feature needs-triage area/dx | **Describe the feature request**
Successful completion of 'Main' github workflow ranges from ~ 1h 30m to ~1h 50m.
We run two parallel jobs in this workflow:

In build step, following steps are linear and takes time:

These steps can be run in parallel.
The lint step is running in parallel anyway.
Proposal:
Run all the 4 operations into different github workflows altogether for better reporting and signals.
**Describe alternative(s) you've considered**
**Affected product area (please put an X in all that apply)**
- ( ) APIs
- ( ) Addons
- ( ) CLI
- ( ) Docs
- ( ) IAM
- ( ) Installation
- ( ) Plugin
- ( ) Security
- ( ) Test and Release
- ( ) User Experience
- (x) Developer Experience
**Additional context**
| 1.0 | Improve the runtime for the CI check 'Main' - **Describe the feature request**
Successful completion of 'Main' github workflow ranges from ~ 1h 30m to ~1h 50m.
We run two parallel jobs in this workflow:

In build step, following steps are linear and takes time:

These steps can be run in parallel.
The lint step is running in parallel anyway.
Proposal:
Run all the 4 operations into different github workflows altogether for better reporting and signals.
**Describe alternative(s) you've considered**
**Affected product area (please put an X in all that apply)**
- ( ) APIs
- ( ) Addons
- ( ) CLI
- ( ) Docs
- ( ) IAM
- ( ) Installation
- ( ) Plugin
- ( ) Security
- ( ) Test and Release
- ( ) User Experience
- (x) Developer Experience
**Additional context**
| test | improve the runtime for the ci check main describe the feature request successful completion of main github workflow ranges from to we run two parallel jobs in this workflow in build step following steps are linear and takes time these steps can be run in parallel the lint step is running in parallel anyway proposal run all the operations into different github workflows altogether for better reporting and signals describe alternative s you ve considered affected product area please put an x in all that apply apis addons cli docs iam installation plugin security test and release user experience x developer experience additional context | 1 |
48,809 | 5,970,768,143 | IssuesEvent | 2017-05-30 23:48:51 | u01jmg3/ics-parser | https://api.github.com/repos/u01jmg3/ics-parser | closed | Monthly RRULE - CET/CEST | to-be-tested | ### Description of the Issue:
I'm getting hours that are not correct taking into account CEST/CET.
Below I'm passing an event that start at `090500`, but when I request the data I get `08:05` back for winter days and `07:05` back for summer days. I'm not seeing this in daily events for example that span CEST/CET months.
### Steps to Reproduce:
iCal:
```
BEGIN:VEVENT
DTSTART;TZID=Europe/Brussels:20170101CET090500
DTEND;TZID=Europe/Brussels:20170101CET172500
RRULE:BYSETPOS=-2;BYDAY=MO;FREQ=MONTHLY;UNTIL=20170529CEST235959
UID:foo
END:VEVENT
```
Edit:
- Apparently monthly recurring `BYMONTHDAY` _does_ return correct data, `BYDAY` _does not_.
- To clarify the hour is incorrect when the dtstart is in CET/CEST and you request the hours in a CEST/CET day, so for example the days in January/February/March will be ok, from April on, you'll get an hour offset
- Yearly suffers from the same issues that monthly was facing, it appears to also move up a week? By my knowledge there's no RRULE for yearly that could use that sort of iteration.
My fix: https://github.com/weconnectdata/ics-parser/commit/d59cb319306b2ec3e8fb3f7982d3dc6a9fb715be (L1107 and following lines are the most important, the rest was linting. The hard coded `|| true` was debug code and was removed in the commit after that. This seems to fix the problem about CET/CEST, but I'm not sure this covers every single case. | 1.0 | Monthly RRULE - CET/CEST - ### Description of the Issue:
I'm getting hours that are not correct taking into account CEST/CET.
Below I'm passing an event that start at `090500`, but when I request the data I get `08:05` back for winter days and `07:05` back for summer days. I'm not seeing this in daily events for example that span CEST/CET months.
### Steps to Reproduce:
iCal:
```
BEGIN:VEVENT
DTSTART;TZID=Europe/Brussels:20170101CET090500
DTEND;TZID=Europe/Brussels:20170101CET172500
RRULE:BYSETPOS=-2;BYDAY=MO;FREQ=MONTHLY;UNTIL=20170529CEST235959
UID:foo
END:VEVENT
```
Edit:
- Apparently monthly recurring `BYMONTHDAY` _does_ return correct data, `BYDAY` _does not_.
- To clarify the hour is incorrect when the dtstart is in CET/CEST and you request the hours in a CEST/CET day, so for example the days in January/February/March will be ok, from April on, you'll get an hour offset
- Yearly suffers from the same issues that monthly was facing, it appears to also move up a week? By my knowledge there's no RRULE for yearly that could use that sort of iteration.
My fix: https://github.com/weconnectdata/ics-parser/commit/d59cb319306b2ec3e8fb3f7982d3dc6a9fb715be (L1107 and following lines are the most important, the rest was linting. The hard coded `|| true` was debug code and was removed in the commit after that. This seems to fix the problem about CET/CEST, but I'm not sure this covers every single case. | test | monthly rrule cet cest description of the issue i m getting hours that are not correct taking into account cest cet below i m passing an event that start at but when i request the data i get back for winter days and back for summer days i m not seeing this in daily events for example that span cest cet months steps to reproduce ical begin vevent dtstart tzid europe brussels dtend tzid europe brussels rrule bysetpos byday mo freq monthly until uid foo end vevent edit apparently monthly recurring bymonthday does return correct data byday does not to clarify the hour is incorrect when the dtstart is in cet cest and you request the hours in a cest cet day so for example the days in january february march will be ok from april on you ll get an hour offset yearly suffers from the same issues that monthly was facing it appears to also move up a week by my knowledge there s no rrule for yearly that could use that sort of iteration my fix and following lines are the most important the rest was linting the hard coded true was debug code and was removed in the commit after that this seems to fix the problem about cet cest but i m not sure this covers every single case | 1 |
148,655 | 13,242,865,503 | IssuesEvent | 2020-08-19 10:26:52 | olifolkerd/tabulator | https://api.github.com/repos/olifolkerd/tabulator | closed | Docs: yarn command to install Tabulator is not correct | Documentation | **Website Page**
http://tabulator.info/docs/4.7/install
**Describe the issue**
The document tells to run `yarn install tabulator-tables` to get Tabulator via the yarn package manager, but it won't work on the latest yarn (v1.22.4). The correct command is `yarn add tabulator-tables`.
```
$ docker run --rm -it node:14-alpine /bin/ash
# Prepare /tmp/project/package.json
/ # mkdir /tmp/project
/ # cd /tmp/project
/tmp/project # npm init -y
# Run yarn install tabulator-tables
/tmp/project # yarn install tabulator-tables
yarn install v1.22.4
info No lockfile found.
error `install` has been replaced with `add` to add new dependencies. Run "yarn add tabulator-tables" instead.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
# Run yarn add tabulator-tables
/tmp/project # yarn add tabulator-tables
yarn add v1.22.4
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 1 new dependency.
info Direct dependencies
└─ tabulator-tables@4.7.2
info All dependencies
└─ tabulator-tables@4.7.2
``` | 1.0 | Docs: yarn command to install Tabulator is not correct - **Website Page**
http://tabulator.info/docs/4.7/install
**Describe the issue**
The document tells to run `yarn install tabulator-tables` to get Tabulator via the yarn package manager, but it won't work on the latest yarn (v1.22.4). The correct command is `yarn add tabulator-tables`.
```
$ docker run --rm -it node:14-alpine /bin/ash
# Prepare /tmp/project/package.json
/ # mkdir /tmp/project
/ # cd /tmp/project
/tmp/project # npm init -y
# Run yarn install tabulator-tables
/tmp/project # yarn install tabulator-tables
yarn install v1.22.4
info No lockfile found.
error `install` has been replaced with `add` to add new dependencies. Run "yarn add tabulator-tables" instead.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
# Run yarn add tabulator-tables
/tmp/project # yarn add tabulator-tables
yarn add v1.22.4
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 1 new dependency.
info Direct dependencies
└─ tabulator-tables@4.7.2
info All dependencies
└─ tabulator-tables@4.7.2
``` | non_test | docs yarn command to install tabulator is not correct website page describe the issue the document tells to run yarn install tabulator tables to get tabulator via the yarn package manager but it won t work on the latest yarn the correct command is yarn add tabulator tables docker run rm it node alpine bin ash prepare tmp project package json mkdir tmp project cd tmp project tmp project npm init y run yarn install tabulator tables tmp project yarn install tabulator tables yarn install info no lockfile found error install has been replaced with add to add new dependencies run yarn add tabulator tables instead info visit for documentation about this command run yarn add tabulator tables tmp project yarn add tabulator tables yarn add info no lockfile found resolving packages fetching packages linking dependencies building fresh packages success saved lockfile success saved new dependency info direct dependencies └─ tabulator tables info all dependencies └─ tabulator tables | 0 |
109,066 | 4,369,668,285 | IssuesEvent | 2016-08-04 01:17:55 | twosigma/beaker-notebook | https://api.github.com/repos/twosigma/beaker-notebook | closed | reorganize cell menus | Bug Priority High User Interface | <img width="303" alt="screen shot 2016-07-18 at 11 01 16 pm" src="https://cloud.githubusercontent.com/assets/963093/16937213/ef164860-4d3a-11e6-9f04-d4822b73c29a.png">
to
Initialization Cell
Lock Cell
Word wrap
Options...
-horizontal-rule-
Cut
Paste (append after)
Publish...
-horizontal-rule-
Run
Show input cell
Show output cell
Move up
Move down
Delete
| 1.0 | reorganize cell menus - <img width="303" alt="screen shot 2016-07-18 at 11 01 16 pm" src="https://cloud.githubusercontent.com/assets/963093/16937213/ef164860-4d3a-11e6-9f04-d4822b73c29a.png">
to
Initialization Cell
Lock Cell
Word wrap
Options...
-horizontal-rule-
Cut
Paste (append after)
Publish...
-horizontal-rule-
Run
Show input cell
Show output cell
Move up
Move down
Delete
| non_test | reorganize cell menus img width alt screen shot at pm src to initialization cell lock cell word wrap options horizontal rule cut paste append after publish horizontal rule run show input cell show output cell move up move down delete | 0 |
382,589 | 11,308,695,762 | IssuesEvent | 2020-01-19 07:55:29 | xournalpp/xournalpp | https://api.github.com/repos/xournalpp/xournalpp | opened | Mouse wheel scrolling does not work with touch workaround enabled | Input bug difficulty:easy priority: medium | **Affects versions :**
- OS: Solus Linux
- X11
- GTK 3.24
- Version of Xournal++: 1.1.0+dev
**Describe the bug**
When the "Touch Workaround" is enabled, scrolling with the mouse wheel does not work when the cursor is above the canvas. Scrolling _does_ work when the cursor is above the scrollbar.
**To Reproduce**
Steps to reproduce the behavior:
1. Open a blank document
2. Attempt to scroll with mouse wheel while above the canvas.
**Expected behavior**
Scrolling works.
**Screenshots of Problem**
N/A
**Additional context**
Can be implemented by listening to mouse wheel inputs on the main widget when "Touch Workaround" is enabled.
| 1.0 | Mouse wheel scrolling does not work with touch workaround enabled - **Affects versions :**
- OS: Solus Linux
- X11
- GTK 3.24
- Version of Xournal++: 1.1.0+dev
**Describe the bug**
When the "Touch Workaround" is enabled, scrolling with the mouse wheel does not work when the cursor is above the canvas. Scrolling _does_ work when the cursor is above the scrollbar.
**To Reproduce**
Steps to reproduce the behavior:
1. Open a blank document
2. Attempt to scroll with mouse wheel while above the canvas.
**Expected behavior**
Scrolling works.
**Screenshots of Problem**
N/A
**Additional context**
Can be implemented by listening to mouse wheel inputs on the main widget when "Touch Workaround" is enabled.
| non_test | mouse wheel scrolling does not work with touch workaround enabled affects versions os solus linux gtk version of xournal dev describe the bug when the touch workaround is enabled scrolling with the mouse wheel does not work when the cursor is above the canvas scrolling does work when the cursor is above the scrollbar to reproduce steps to reproduce the behavior open a blank document attempt to scroll with mouse wheel while above the canvas expected behavior scrolling works screenshots of problem n a additional context can be implemented by listening to mouse wheel inputs on the main widget when touch workaround is enabled | 0 |
51,063 | 10,581,992,546 | IssuesEvent | 2019-10-08 10:25:13 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | opened | consider dropping most of symbol information (uris and names) when using dwarf stacks | type-performance vm-aot-code-size | Usually we recommend using obfuscation as a way to reduce code size because it shrinks identifiers (method names and library uri-s). The size drop is rather noticeable on large applications.
We could consider taking this one step further - most of the symbol information is completely unnecessary when running in AOT mode with dwarf-stack traces, so we could consider replacing completely dropping library uri-s and replacing string based symbol names with global numbering (represented as Smi-s).
This might allow to remove significant number of strings from AOT snapshot and thus shrink it.
/cc @mkustermann | 1.0 | consider dropping most of symbol information (uris and names) when using dwarf stacks - Usually we recommend using obfuscation as a way to reduce code size because it shrinks identifiers (method names and library uri-s). The size drop is rather noticeable on large applications.
We could consider taking this one step further - most of the symbol information is completely unnecessary when running in AOT mode with dwarf-stack traces, so we could consider replacing completely dropping library uri-s and replacing string based symbol names with global numbering (represented as Smi-s).
This might allow to remove significant number of strings from AOT snapshot and thus shrink it.
/cc @mkustermann | non_test | consider dropping most of symbol information uris and names when using dwarf stacks usually we recommend using obfuscation as a way to reduce code size because it shrinks identifiers method names and library uri s the size drop is rather noticeable on large applications we could consider taking this one step further most of the symbol information is completely unnecessary when running in aot mode with dwarf stack traces so we could consider replacing completely dropping library uri s and replacing string based symbol names with global numbering represented as smi s this might allow to remove significant number of strings from aot snapshot and thus shrink it cc mkustermann | 0 |
80,650 | 23,269,639,306 | IssuesEvent | 2022-08-04 21:12:55 | spack/spack | https://api.github.com/repos/spack/spack | closed | openblas@0.3.20 %cce@13 build fails: ftn-2307 ftn: "-m" option must be followed by 0, 1, 2, 3 or 4 | cray build-error e4s | ### Steps to reproduce the issue
`openblas@0.3.20 %cce@13.0.1` build fails using:
* `spack/develop` (2515cafb9c60851d5b218d1262212034f5606869 from `Wed Apr 27 16:08:33 2022 -0700`)
* `PrgEnv-cray/8.2.0`
* `cce/13.0.1`
* `SUSE Linux Enterprise Server 15 SP2`
Spack environment:
```
# spack/develop 2515cafb9c60851d5b218d1262212034f5606869
# Wed Apr 27 16:08:33 2022 -0700
spack:
view: false
concretization: separately
config:
concretizer: clingo
compilers:
- compiler:
spec: cce@13.0.1
paths:
cc: cc
cxx: CC
f77: ftn
fc: ftn
operating_system: sles15
target: any
modules:
- PrgEnv-cray
- cce/13.0.1
- compiler:
spec: gcc@7.5.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
operating_system: sles15
target: any
packages:
all:
compiler: [cce@13.0.1]
specs:
- openblas@0.3.20 ~bignuma~consistent_fpcsr~ilp64+locking+pic+shared symbol_suffix=none threads=openmp %cce ^ncurses%gcc
```
Concretization:
```
- elprk2s openblas@0.3.20%cce@13.0.1~bignuma~consistent_fpcsr~ilp64+locking+pic+shared symbol_suffix=none threads=openmp arch=cray-sles15-zen
- nylrh3j ^perl@5.34.1%cce@13.0.1+cpanm+shared+threads arch=cray-sles15-zen
- f4matl7 ^berkeley-db@18.1.40%cce@13.0.1+cxx~docs+stl patches=b231fcc arch=cray-sles15-zen
- ughig26 ^bzip2@1.0.8%cce@13.0.1~debug~pic+shared arch=cray-sles15-zen
- xymlw7q ^diffutils@3.8%cce@13.0.1 arch=cray-sles15-zen
- yjofnqb ^libiconv@1.16%cce@13.0.1 libs=shared,static arch=cray-sles15-zen
- ek5tefr ^gdbm@1.19%cce@13.0.1 arch=cray-sles15-zen
- j77pugq ^readline@8.1%cce@13.0.1 arch=cray-sles15-zen
- 5peuggt ^ncurses@6.2%gcc@7.5.0~symlinks+termlib abi=none arch=cray-sles15-zen
- 3javkah ^pkgconf@1.8.0%gcc@7.5.0 arch=cray-sles15-zen
- 6ln5n2o ^zlib@1.2.12%cce@13.0.1+optimize+pic+shared patches=0d38234 arch=cray-sles15-zen
```
Reproducer:
```
$> ssh perlmutter
eugene@perlmutter:login19:~> git clone https://github.com/spack/spack
eugene@perlmutter:login19:~> (cd spack && git checkout 2515cafb)
eugene@perlmutter:login19:~> export SPACK_DISABLE_LOCAL_CONFIG=1
eugene@perlmutter:login19:~> . spack/share/spack/setup-env.sh
eugene@perlmutter:login19:~> spack -e . install
...
==> Installing openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6
==> No binary for openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6 found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/84/8495c9affc536253648e942908e88e097f2ec7753ede55aca52e5dead3029e3c.tar.gz
==> No patches needed for openblas
==> openblas: Executing phase: 'edit'
==> openblas: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j12' 'CC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/cc' 'FC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/ftn' 'MAKE_NB_JOBS=0' 'ARCH=x86_64' 'TARGET=ZEN' 'USE_LOCKING=1' 'USE_OPENMP=1' 'USE_THREAD=1' 'RANLIB=ranlib' 'libs' 'netlib' 'shared'
14 errors found in build log:
2859 ftn-2307 ftn: ERROR in command line
2860 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2861 ftn-2307 ftn: ERROR in command line
2862 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2863 ftn-2307 ftn: ERROR in command line
2864 The "-m" option must be followed by 0, 1, 2, 3 or 4.
>> 2865 make[2]: *** [<builtin>: sgbrfs.o] Error 1
2866 make[2]: *** Waiting for unfinished jobs....
>> 2867 make[2]: *** [<builtin>: spotrf2.o] Error 1
>> 2868 make[2]: *** [<builtin>: sgbcon.o] Error 1
>> 2869 make[2]: *** [<builtin>: sgetrf2.o] Error 1
>> 2870 make[2]: *** [<builtin>: sgbbrd.o] Error 1
>> 2871 make[2]: *** [<builtin>: sgbequ.o] Error 1
>> 2872 make[2]: *** [<builtin>: sgbtrf.o] Error 1
>> 2873 make[2]: *** [<builtin>: sgbtf2.o] Error 1
>> 2874 make[2]: *** [<builtin>: sgbsvx.o] Error 1
>> 2875 make[2]: *** [<builtin>: sgbsv.o] Error 1
>> 2876 make[2]: *** [<builtin>: sgebak.o] Error 1
>> 2877 make[2]: *** [<builtin>: sgbtrs.o] Error 1
2878 make[2]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib/SRC'
>> 2879 make[1]: *** [Makefile:27: lapacklib] Error 2
2880 make[1]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib'
>> 2881 make: *** [Makefile:250: netlib] Error 2
```
### Error message
<details><summary>Error message</summary><pre>
==> Installing openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6
==> No binary for openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6 found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/84/8495c9affc536253648e942908e88e097f2ec7753ede55aca52e5dead3029e3c.tar.gz
==> No patches needed for openblas
==> openblas: Executing phase: 'edit'
==> openblas: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j12' 'CC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/cc' 'FC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/ftn' 'MAKE_NB_JOBS=0' 'ARCH=x86_64' 'TARGET=ZEN' 'USE_LOCKING=1' 'USE_OPENMP=1' 'USE_THREAD=1' 'RANLIB=ranlib' 'libs' 'netlib' 'shared'
14 errors found in build log:
2859 ftn-2307 ftn: ERROR in command line
2860 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2861 ftn-2307 ftn: ERROR in command line
2862 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2863 ftn-2307 ftn: ERROR in command line
2864 The "-m" option must be followed by 0, 1, 2, 3 or 4.
>> 2865 make[2]: *** [<builtin>: sgbrfs.o] Error 1
2866 make[2]: *** Waiting for unfinished jobs....
>> 2867 make[2]: *** [<builtin>: spotrf2.o] Error 1
>> 2868 make[2]: *** [<builtin>: sgbcon.o] Error 1
>> 2869 make[2]: *** [<builtin>: sgetrf2.o] Error 1
>> 2870 make[2]: *** [<builtin>: sgbbrd.o] Error 1
>> 2871 make[2]: *** [<builtin>: sgbequ.o] Error 1
>> 2872 make[2]: *** [<builtin>: sgbtrf.o] Error 1
>> 2873 make[2]: *** [<builtin>: sgbtf2.o] Error 1
>> 2874 make[2]: *** [<builtin>: sgbsvx.o] Error 1
>> 2875 make[2]: *** [<builtin>: sgbsv.o] Error 1
>> 2876 make[2]: *** [<builtin>: sgebak.o] Error 1
>> 2877 make[2]: *** [<builtin>: sgbtrs.o] Error 1
2878 make[2]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib/SRC'
>> 2879 make[1]: *** [Makefile:27: lapacklib] Error 2
2880 make[1]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib'
>> 2881 make: *** [Makefile:250: netlib] Error 2
</pre></details>
### Information on your system
* **Spack:** 0.18.0.dev0 (2515cafb9c60851d5b218d1262212034f5606869)
* **Python:** 3.6.15
* **Platform:** cray-sles15-zen3
* **Concretizer:** clingo
### Additional information
[spack-build-out.txt](https://github.com/spack/spack/files/8587159/spack-build-out.txt)
[spack-build-env.txt](https://github.com/spack/spack/files/8587161/spack-build-env.txt)
@prckent @haampie @siko1056 @lukebroskop @shahzebsiddiqui @wspear
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate | 1.0 | openblas@0.3.20 %cce@13 build fails: ftn-2307 ftn: "-m" option must be followed by 0, 1, 2, 3 or 4 - ### Steps to reproduce the issue
`openblas@0.3.20 %cce@13.0.1` build fails using:
* `spack/develop` (2515cafb9c60851d5b218d1262212034f5606869 from `Wed Apr 27 16:08:33 2022 -0700`)
* `PrgEnv-cray/8.2.0`
* `cce/13.0.1`
* `SUSE Linux Enterprise Server 15 SP2`
Spack environment:
```
# spack/develop 2515cafb9c60851d5b218d1262212034f5606869
# Wed Apr 27 16:08:33 2022 -0700
spack:
view: false
concretization: separately
config:
concretizer: clingo
compilers:
- compiler:
spec: cce@13.0.1
paths:
cc: cc
cxx: CC
f77: ftn
fc: ftn
operating_system: sles15
target: any
modules:
- PrgEnv-cray
- cce/13.0.1
- compiler:
spec: gcc@7.5.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
operating_system: sles15
target: any
packages:
all:
compiler: [cce@13.0.1]
specs:
- openblas@0.3.20 ~bignuma~consistent_fpcsr~ilp64+locking+pic+shared symbol_suffix=none threads=openmp %cce ^ncurses%gcc
```
Concretization:
```
- elprk2s openblas@0.3.20%cce@13.0.1~bignuma~consistent_fpcsr~ilp64+locking+pic+shared symbol_suffix=none threads=openmp arch=cray-sles15-zen
- nylrh3j ^perl@5.34.1%cce@13.0.1+cpanm+shared+threads arch=cray-sles15-zen
- f4matl7 ^berkeley-db@18.1.40%cce@13.0.1+cxx~docs+stl patches=b231fcc arch=cray-sles15-zen
- ughig26 ^bzip2@1.0.8%cce@13.0.1~debug~pic+shared arch=cray-sles15-zen
- xymlw7q ^diffutils@3.8%cce@13.0.1 arch=cray-sles15-zen
- yjofnqb ^libiconv@1.16%cce@13.0.1 libs=shared,static arch=cray-sles15-zen
- ek5tefr ^gdbm@1.19%cce@13.0.1 arch=cray-sles15-zen
- j77pugq ^readline@8.1%cce@13.0.1 arch=cray-sles15-zen
- 5peuggt ^ncurses@6.2%gcc@7.5.0~symlinks+termlib abi=none arch=cray-sles15-zen
- 3javkah ^pkgconf@1.8.0%gcc@7.5.0 arch=cray-sles15-zen
- 6ln5n2o ^zlib@1.2.12%cce@13.0.1+optimize+pic+shared patches=0d38234 arch=cray-sles15-zen
```
Reproducer:
```
$> ssh perlmutter
eugene@perlmutter:login19:~> git clone https://github.com/spack/spack
eugene@perlmutter:login19:~> (cd spack && git checkout 2515cafb)
eugene@perlmutter:login19:~> export SPACK_DISABLE_LOCAL_CONFIG=1
eugene@perlmutter:login19:~> . spack/share/spack/setup-env.sh
eugene@perlmutter:login19:~> spack -e . install
...
==> Installing openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6
==> No binary for openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6 found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/84/8495c9affc536253648e942908e88e097f2ec7753ede55aca52e5dead3029e3c.tar.gz
==> No patches needed for openblas
==> openblas: Executing phase: 'edit'
==> openblas: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j12' 'CC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/cc' 'FC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/ftn' 'MAKE_NB_JOBS=0' 'ARCH=x86_64' 'TARGET=ZEN' 'USE_LOCKING=1' 'USE_OPENMP=1' 'USE_THREAD=1' 'RANLIB=ranlib' 'libs' 'netlib' 'shared'
14 errors found in build log:
2859 ftn-2307 ftn: ERROR in command line
2860 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2861 ftn-2307 ftn: ERROR in command line
2862 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2863 ftn-2307 ftn: ERROR in command line
2864 The "-m" option must be followed by 0, 1, 2, 3 or 4.
>> 2865 make[2]: *** [<builtin>: sgbrfs.o] Error 1
2866 make[2]: *** Waiting for unfinished jobs....
>> 2867 make[2]: *** [<builtin>: spotrf2.o] Error 1
>> 2868 make[2]: *** [<builtin>: sgbcon.o] Error 1
>> 2869 make[2]: *** [<builtin>: sgetrf2.o] Error 1
>> 2870 make[2]: *** [<builtin>: sgbbrd.o] Error 1
>> 2871 make[2]: *** [<builtin>: sgbequ.o] Error 1
>> 2872 make[2]: *** [<builtin>: sgbtrf.o] Error 1
>> 2873 make[2]: *** [<builtin>: sgbtf2.o] Error 1
>> 2874 make[2]: *** [<builtin>: sgbsvx.o] Error 1
>> 2875 make[2]: *** [<builtin>: sgbsv.o] Error 1
>> 2876 make[2]: *** [<builtin>: sgebak.o] Error 1
>> 2877 make[2]: *** [<builtin>: sgbtrs.o] Error 1
2878 make[2]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib/SRC'
>> 2879 make[1]: *** [Makefile:27: lapacklib] Error 2
2880 make[1]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib'
>> 2881 make: *** [Makefile:250: netlib] Error 2
```
### Error message
<details><summary>Error message</summary><pre>
==> Installing openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6
==> No binary for openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6 found: installing from source
==> Fetching https://mirror.spack.io/_source-cache/archive/84/8495c9affc536253648e942908e88e097f2ec7753ede55aca52e5dead3029e3c.tar.gz
==> No patches needed for openblas
==> openblas: Executing phase: 'edit'
==> openblas: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j12' 'CC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/cc' 'FC=/global/cfs/cdirs/m3503/ParaTools/openblas/spack/lib/spack/env/cce/ftn' 'MAKE_NB_JOBS=0' 'ARCH=x86_64' 'TARGET=ZEN' 'USE_LOCKING=1' 'USE_OPENMP=1' 'USE_THREAD=1' 'RANLIB=ranlib' 'libs' 'netlib' 'shared'
14 errors found in build log:
2859 ftn-2307 ftn: ERROR in command line
2860 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2861 ftn-2307 ftn: ERROR in command line
2862 The "-m" option must be followed by 0, 1, 2, 3 or 4.
2863 ftn-2307 ftn: ERROR in command line
2864 The "-m" option must be followed by 0, 1, 2, 3 or 4.
>> 2865 make[2]: *** [<builtin>: sgbrfs.o] Error 1
2866 make[2]: *** Waiting for unfinished jobs....
>> 2867 make[2]: *** [<builtin>: spotrf2.o] Error 1
>> 2868 make[2]: *** [<builtin>: sgbcon.o] Error 1
>> 2869 make[2]: *** [<builtin>: sgetrf2.o] Error 1
>> 2870 make[2]: *** [<builtin>: sgbbrd.o] Error 1
>> 2871 make[2]: *** [<builtin>: sgbequ.o] Error 1
>> 2872 make[2]: *** [<builtin>: sgbtrf.o] Error 1
>> 2873 make[2]: *** [<builtin>: sgbtf2.o] Error 1
>> 2874 make[2]: *** [<builtin>: sgbsvx.o] Error 1
>> 2875 make[2]: *** [<builtin>: sgbsv.o] Error 1
>> 2876 make[2]: *** [<builtin>: sgebak.o] Error 1
>> 2877 make[2]: *** [<builtin>: sgbtrs.o] Error 1
2878 make[2]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib/SRC'
>> 2879 make[1]: *** [Makefile:27: lapacklib] Error 2
2880 make[1]: Leaving directory '/tmp/lpeyrala/spack-stage/spack-stage-openblas-0.3.20-elprk2sqjmxmwfzkhdpoqvd7h2cgfuf6/spack-src/lapack-netlib'
>> 2881 make: *** [Makefile:250: netlib] Error 2
</pre></details>
### Information on your system
* **Spack:** 0.18.0.dev0 (2515cafb9c60851d5b218d1262212034f5606869)
* **Python:** 3.6.15
* **Platform:** cray-sles15-zen3
* **Concretizer:** clingo
### Additional information
[spack-build-out.txt](https://github.com/spack/spack/files/8587159/spack-build-out.txt)
[spack-build-env.txt](https://github.com/spack/spack/files/8587161/spack-build-env.txt)
@prckent @haampie @siko1056 @lukebroskop @shahzebsiddiqui @wspear
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate | non_test | openblas cce build fails ftn ftn m option must be followed by or steps to reproduce the issue openblas cce build fails using spack develop from wed apr prgenv cray cce suse linux enterprise server spack environment spack develop wed apr spack view false concretization separately config concretizer clingo compilers compiler spec cce paths cc cc cxx cc ftn fc ftn operating system target any modules prgenv cray cce compiler spec gcc paths cc usr bin gcc cxx usr bin g usr bin gfortran fc usr bin gfortran operating system target any packages all compiler specs openblas bignuma consistent fpcsr locking pic shared symbol suffix none threads openmp cce ncurses gcc concretization openblas cce bignuma consistent fpcsr locking pic shared symbol suffix none threads openmp arch cray zen perl cce cpanm shared threads arch cray zen berkeley db cce cxx docs stl patches arch cray zen cce debug pic shared arch cray zen diffutils cce arch cray zen yjofnqb libiconv cce libs shared static arch cray zen gdbm cce arch cray zen readline cce arch cray zen ncurses gcc symlinks termlib abi none arch cray zen pkgconf gcc arch cray zen zlib cce optimize pic shared patches arch cray zen reproducer ssh perlmutter eugene perlmutter git clone eugene perlmutter cd spack git checkout eugene perlmutter export spack disable local config eugene perlmutter spack share spack setup env sh eugene perlmutter spack e install installing openblas no binary for openblas found installing from source fetching no patches needed for openblas openblas executing phase edit openblas executing phase build error processerror command exited with status make cc global cfs cdirs paratools openblas spack lib spack env cce cc fc global cfs cdirs paratools openblas spack lib spack env cce ftn make nb jobs arch target zen use locking use openmp use thread ranlib ranlib libs netlib shared errors found in build log ftn ftn error in command line the m option must be followed by or ftn ftn error in command line the m option must be followed by or ftn ftn error in command line the m option must be followed by or make error make waiting for unfinished jobs make error make error make error make error make error make error make error make error make error make error make error make leaving directory tmp lpeyrala spack stage spack stage openblas spack src lapack netlib src make error make leaving directory tmp lpeyrala spack stage spack stage openblas spack src lapack netlib make error error message error message installing openblas no binary for openblas found installing from source fetching no patches needed for openblas openblas executing phase edit openblas executing phase build error processerror command exited with status make cc global cfs cdirs paratools openblas spack lib spack env cce cc fc global cfs cdirs paratools openblas spack lib spack env cce ftn make nb jobs arch target zen use locking use openmp use thread ranlib ranlib libs netlib shared errors found in build log ftn ftn error in command line the m option must be followed by or ftn ftn error in command line the m option must be followed by or ftn ftn error in command line the m option must be followed by or make error make waiting for unfinished jobs make error make error make error make error make error make error make error make error make error make error make error make leaving directory tmp lpeyrala spack stage spack stage openblas spack src lapack netlib src make error make leaving directory tmp lpeyrala spack stage spack stage openblas spack src lapack netlib make error information on your system spack python platform cray concretizer clingo additional information prckent haampie lukebroskop shahzebsiddiqui wspear general information i have run spack debug report and reported the version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate | 0 |
17,040 | 2,615,129,559 | IssuesEvent | 2015-03-01 05:59:11 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | vizualization api query | auto-migrated Priority-Medium Type-Sample | ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
What format (e.g. JSON, Atom)?
What Authentation (e.g. OAuth, OAuth 2, ClientLogin)?
Java environment (e.g. Java 6, Android 2.3, App Engine)?
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `jatic...@gmail.com` on 12 Oct 2012 at 5:02
* Merged into: #627 | 1.0 | vizualization api query - ```
Which Google API and version (e.g. Google Calendar Data API version 2)?
What format (e.g. JSON, Atom)?
What Authentation (e.g. OAuth, OAuth 2, ClientLogin)?
Java environment (e.g. Java 6, Android 2.3, App Engine)?
External references, such as API reference guide?
Please provide any additional information below.
```
Original issue reported on code.google.com by `jatic...@gmail.com` on 12 Oct 2012 at 5:02
* Merged into: #627 | non_test | vizualization api query which google api and version e g google calendar data api version what format e g json atom what authentation e g oauth oauth clientlogin java environment e g java android app engine external references such as api reference guide please provide any additional information below original issue reported on code google com by jatic gmail com on oct at merged into | 0 |
656,564 | 21,767,935,276 | IssuesEvent | 2022-05-13 05:38:46 | valora-inc/wallet | https://api.github.com/repos/valora-inc/wallet | opened | [Query] Will the Cursor to be seen on next field when user deletes the last character/number and tries to enter the correct address | Priority: P3 wallet qa-report qa-query | **Frequency:** 100%
**Repro on build version:** Android Internal build V1.32.0, iOS Test flight build V1.32.0, **Repro on devices:** Google Pixel 4a(12.0), Google Pixel 2XL (11.0),iPhone 13 mini (15.1.1)
**Pre-condition:**
1. User should have multiple verified accounts with the same phone number for the send secure flow.
2.User must be on "Please confirm your contact account" screen with "Scan QR code" & "Confirm Account Number" options
**Repro Steps:**
1. Tap on Confirm account address
2. Enter invalid last 4 digits/character of your account address.
3. Delete the last digit from the account field.
4. Now again write the correct last digit and observe
**Query:** Cursor is not seen on the next field when user deletes the last character/number and tries to enter the correct address
**Expected Behavior:** Cursor should be seen on next field when user deletes the last character/number and tries to enter the correct address.
**Investigation:**
1.If user enters first two characters/digits in account field and want to remove the first two digit/characters then it should allowed to delete the filed.
**Attachment:** [account address.mp4](https://drive.google.com/file/d/1y3PqUPzeR9G4DdzD-u_qMDvAP1JqZqyg/view?usp=sharing)
| 1.0 | [Query] Will the Cursor to be seen on next field when user deletes the last character/number and tries to enter the correct address - **Frequency:** 100%
**Repro on build version:** Android Internal build V1.32.0, iOS Test flight build V1.32.0, **Repro on devices:** Google Pixel 4a(12.0), Google Pixel 2XL (11.0),iPhone 13 mini (15.1.1)
**Pre-condition:**
1. User should have multiple verified accounts with the same phone number for the send secure flow.
2.User must be on "Please confirm your contact account" screen with "Scan QR code" & "Confirm Account Number" options
**Repro Steps:**
1. Tap on Confirm account address
2. Enter invalid last 4 digits/character of your account address.
3. Delete the last digit from the account field.
4. Now again write the correct last digit and observe
**Query:** Cursor is not seen on the next field when user deletes the last character/number and tries to enter the correct address
**Expected Behavior:** Cursor should be seen on next field when user deletes the last character/number and tries to enter the correct address.
**Investigation:**
1.If user enters first two characters/digits in account field and want to remove the first two digit/characters then it should allowed to delete the filed.
**Attachment:** [account address.mp4](https://drive.google.com/file/d/1y3PqUPzeR9G4DdzD-u_qMDvAP1JqZqyg/view?usp=sharing)
| non_test | will the cursor to be seen on next field when user deletes the last character number and tries to enter the correct address frequency repro on build version android internal build ios test flight build repro on devices google pixel google pixel iphone mini pre condition user should have multiple verified accounts with the same phone number for the send secure flow user must be on please confirm your contact account screen with scan qr code confirm account number options repro steps tap on confirm account address enter invalid last digits character of your account address delete the last digit from the account field now again write the correct last digit and observe query cursor is not seen on the next field when user deletes the last character number and tries to enter the correct address expected behavior cursor should be seen on next field when user deletes the last character number and tries to enter the correct address investigation if user enters first two characters digits in account field and want to remove the first two digit characters then it should allowed to delete the filed attachment | 0 |
182,903 | 14,169,925,447 | IssuesEvent | 2020-11-12 13:53:53 | cybanjar/precheck-in | https://api.github.com/repos/cybanjar/precheck-in | opened | [MCI] Perbedaan Metode Info Message untuk Tamu yang sudah MCI dan Waiting | Feature Improvements Testing | MCI, saat search by name dgn stephen yie, kemudian pada guest list yg mumcul, pilih SY dgn resNo 28291. Maka muncul message spt di atas. Namun... Saat search by booking code dgn mengisi 28291 (res-line yg serupa dgn yg atas), akan muncul message yg berbeda. Sebaiknya dibuat sama. Dan yg ini sptnya lebih informatif. Apakah yg sebelumnya bisa dibuat spt ini juga? | 1.0 | [MCI] Perbedaan Metode Info Message untuk Tamu yang sudah MCI dan Waiting - MCI, saat search by name dgn stephen yie, kemudian pada guest list yg mumcul, pilih SY dgn resNo 28291. Maka muncul message spt di atas. Namun... Saat search by booking code dgn mengisi 28291 (res-line yg serupa dgn yg atas), akan muncul message yg berbeda. Sebaiknya dibuat sama. Dan yg ini sptnya lebih informatif. Apakah yg sebelumnya bisa dibuat spt ini juga? | test | perbedaan metode info message untuk tamu yang sudah mci dan waiting mci saat search by name dgn stephen yie kemudian pada guest list yg mumcul pilih sy dgn resno maka muncul message spt di atas namun saat search by booking code dgn mengisi res line yg serupa dgn yg atas akan muncul message yg berbeda sebaiknya dibuat sama dan yg ini sptnya lebih informatif apakah yg sebelumnya bisa dibuat spt ini juga | 1 |
434,849 | 30,471,615,543 | IssuesEvent | 2023-07-17 13:58:59 | chainguard-dev/edu | https://api.github.com/repos/chainguard-dev/edu | opened | Resource: Terraform Provider apko documentation | documentation images | **What topic are you requesting a resource about?**
- [x] Chainguard product
- [x] Open source related
**Proposed title:** Getting Started with the apko Terraform provider
We are now using Terraform to build our Chainguard Images. Several people have asked for best practices for building aspects of their images (e.g., `-dev` variants of a custom image) provided within a submodule of the apko Terraform provider.
Being able to provide a walkthrough for this process would be beneficial to people building Chainguard Images, as well as educating best practices to the community.
**Description:**
Links:
- [Chainguard Images Terraform README](https://github.com/chainguard-dev/images/blob/main/TERRAFORM.md)
- [GitHub - Terraform Provider apko](https://github.com/chainguard-dev/terraform-provider-apko)
- [Terraform Registry - apko](https://registry.terraform.io/providers/chainguard-dev/apko/latest) | 1.0 | Resource: Terraform Provider apko documentation - **What topic are you requesting a resource about?**
- [x] Chainguard product
- [x] Open source related
**Proposed title:** Getting Started with the apko Terraform provider
We are now using Terraform to build our Chainguard Images. Several people have asked for best practices for building aspects of their images (e.g., `-dev` variants of a custom image) provided within a submodule of the apko Terraform provider.
Being able to provide a walkthrough for this process would be beneficial to people building Chainguard Images, as well as educating best practices to the community.
**Description:**
Links:
- [Chainguard Images Terraform README](https://github.com/chainguard-dev/images/blob/main/TERRAFORM.md)
- [GitHub - Terraform Provider apko](https://github.com/chainguard-dev/terraform-provider-apko)
- [Terraform Registry - apko](https://registry.terraform.io/providers/chainguard-dev/apko/latest) | non_test | resource terraform provider apko documentation what topic are you requesting a resource about chainguard product open source related proposed title getting started with the apko terraform provider we are now using terraform to build our chainguard images several people have asked for best practices for building aspects of their images e g dev variants of a custom image provided within a submodule of the apko terraform provider being able to provide a walkthrough for this process would be beneficial to people building chainguard images as well as educating best practices to the community description links | 0 |
104,047 | 16,613,374,629 | IssuesEvent | 2021-06-02 14:04:40 | Thanraj/linux-4.1.15 | https://api.github.com/repos/Thanraj/linux-4.1.15 | opened | CVE-2020-29371 (Low) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2020-29371 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/romfs/storage.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/romfs/storage.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/romfs/storage.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in romfs_dev_read in fs/romfs/storage.c in the Linux kernel before 5.8.4. Uninitialized memory leaks to userspace, aka CID-bcf85fcedfdd.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-29371>CVE-2020-29371</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: v5.9-rc2,v5.8.4,v5.7.18,v5.4.61</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-29371 (Low) detected in linux-stable-rtv4.1.33 - ## CVE-2020-29371 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/romfs/storage.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/romfs/storage.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/romfs/storage.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in romfs_dev_read in fs/romfs/storage.c in the Linux kernel before 5.8.4. Uninitialized memory leaks to userspace, aka CID-bcf85fcedfdd.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-29371>CVE-2020-29371</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29371</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: v5.9-rc2,v5.8.4,v5.7.18,v5.4.61</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve low detected in linux stable cve low severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files linux fs romfs storage c linux fs romfs storage c linux fs romfs storage c vulnerability details an issue was discovered in romfs dev read in fs romfs storage c in the linux kernel before uninitialized memory leaks to userspace aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
219,893 | 17,117,537,560 | IssuesEvent | 2021-07-11 17:10:28 | CodeForPhilly/paws-data-pipeline | https://api.github.com/repos/CodeForPhilly/paws-data-pipeline | opened | Use reduced set of records by default | Importing Testing | Importing and matching are slow, but it's not usually necessary to process a full data set to find problems.
Proposal:
Select a list of IDs (100 to start) from each data source that gives us a good mix of records for testing importing and matching (e.g., people in three sets, people in SF and SL, people in Vol and SF, donations/no donations). Then in the import code for each source, we can do something like ` if contact_id in short_list['salesforcedonations']: ` unless the `USE_FULL_DATA` environment variable is set.
By having known records we can also start developing tests dependent on them.
| 1.0 | Use reduced set of records by default - Importing and matching are slow, but it's not usually necessary to process a full data set to find problems.
Proposal:
Select a list of IDs (100 to start) from each data source that gives us a good mix of records for testing importing and matching (e.g., people in three sets, people in SF and SL, people in Vol and SF, donations/no donations). Then in the import code for each source, we can do something like ` if contact_id in short_list['salesforcedonations']: ` unless the `USE_FULL_DATA` environment variable is set.
By having known records we can also start developing tests dependent on them.
| test | use reduced set of records by default importing and matching are slow but it s not usually necessary to process a full data set to find problems proposal select a list of ids to start from each data source that gives us a good mix of records for testing importing and matching e g people in three sets people in sf and sl people in vol and sf donations no donations then in the import code for each source we can do something like if contact id in short list unless the use full data environment variable is set by having known records we can also start developing tests dependent on them | 1 |
313,273 | 26,914,680,851 | IssuesEvent | 2023-02-07 04:51:46 | QubesOS/updates-status | https://api.github.com/repos/QubesOS/updates-status | closed | grub2-theme v5.14.4-4 (r4.2) | r4.2-host-cur-test | Update of grub2-theme to v5.14.4-4 for Qubes r4.2, see comments below for details and build status.
From commit: https://github.com/QubesOS/qubes-grub2-theme/commit/bb2d7367fea1d0815b7696e138a8f25dc1f7ddde
[Changes since previous version](https://github.com/QubesOS/qubes-grub2-theme/compare/v5.14.4-3...v5.14.4-4):
QubesOS/qubes-grub2-theme@bb2d736 version 5.14.4-4
QubesOS/qubes-grub2-theme@109e1f3 Replace ImageMagick with GraphicsMagick
Referenced issues:
QubesOS/qubes-issues#5009
If you're release manager, you can issue GPG-inline signed command:
* `Upload-component r4.2 grub2-theme bb2d7367fea1d0815b7696e138a8f25dc1f7ddde current all` (available 5 days from now)
* `Upload-component r4.2 grub2-theme bb2d7367fea1d0815b7696e138a8f25dc1f7ddde security-testing`
You can choose subset of distributions like:
* `Upload-component r4.2 grub2-theme bb2d7367fea1d0815b7696e138a8f25dc1f7ddde current vm-bookworm,vm-fc37` (available 5 days from now)
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
For more information on how to test this update, please take a look at https://www.qubes-os.org/doc/testing/#updates.
| 1.0 | grub2-theme v5.14.4-4 (r4.2) - Update of grub2-theme to v5.14.4-4 for Qubes r4.2, see comments below for details and build status.
From commit: https://github.com/QubesOS/qubes-grub2-theme/commit/bb2d7367fea1d0815b7696e138a8f25dc1f7ddde
[Changes since previous version](https://github.com/QubesOS/qubes-grub2-theme/compare/v5.14.4-3...v5.14.4-4):
QubesOS/qubes-grub2-theme@bb2d736 version 5.14.4-4
QubesOS/qubes-grub2-theme@109e1f3 Replace ImageMagick with GraphicsMagick
Referenced issues:
QubesOS/qubes-issues#5009
If you're release manager, you can issue GPG-inline signed command:
* `Upload-component r4.2 grub2-theme bb2d7367fea1d0815b7696e138a8f25dc1f7ddde current all` (available 5 days from now)
* `Upload-component r4.2 grub2-theme bb2d7367fea1d0815b7696e138a8f25dc1f7ddde security-testing`
You can choose subset of distributions like:
* `Upload-component r4.2 grub2-theme bb2d7367fea1d0815b7696e138a8f25dc1f7ddde current vm-bookworm,vm-fc37` (available 5 days from now)
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
For more information on how to test this update, please take a look at https://www.qubes-os.org/doc/testing/#updates.
| test | theme update of theme to for qubes see comments below for details and build status from commit qubesos qubes theme version qubesos qubes theme replace imagemagick with graphicsmagick referenced issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload component theme current all available days from now upload component theme security testing you can choose subset of distributions like upload component theme current vm bookworm vm available days from now above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it for more information on how to test this update please take a look at | 1 |
428,900 | 12,418,465,436 | IssuesEvent | 2020-05-23 00:26:26 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | Avoid committing Native binary code in GlassFish SVN workspace | ERR: Assignee Priority: Major Stale Type: Bug | Native binary files are in GlassFish SVN trunk workspace:
main/nucleus/admin/server-mgmt/src/main/resources/lib/winsw.exe
main/nucleus/admin/server-mgmt/target/classes/lib/winsw.exe
main/nucleus/cluster/cli/src/main/resources/lib/DcomConfigurator.exe
main/nucleus/cluster/cli/target/classes/lib/DcomConfigurator.exe
main/nucleus/cluster/uc/src/main/resources/lib/DcomConfigurator.exe
main/nucleus/cluster/uc/target/classes/lib/DcomConfigurator.exe
main/nucleus/cluster/vld/target/classes/lib/DcomConfigurator.exe
We should try avoiding checking-in native binaries in the source tree.
One way to avoid this is to deploy the native binaries to Maven.
See details here: [http://docs.codehaus.org/display/MAVENUSER/Using+Maven+to+manage+.NET+projects](http://docs.codehaus.org/display/MAVENUSER/Using+Maven+to+manage+.NET+projects) | 1.0 | Avoid committing Native binary code in GlassFish SVN workspace - Native binary files are in GlassFish SVN trunk workspace:
main/nucleus/admin/server-mgmt/src/main/resources/lib/winsw.exe
main/nucleus/admin/server-mgmt/target/classes/lib/winsw.exe
main/nucleus/cluster/cli/src/main/resources/lib/DcomConfigurator.exe
main/nucleus/cluster/cli/target/classes/lib/DcomConfigurator.exe
main/nucleus/cluster/uc/src/main/resources/lib/DcomConfigurator.exe
main/nucleus/cluster/uc/target/classes/lib/DcomConfigurator.exe
main/nucleus/cluster/vld/target/classes/lib/DcomConfigurator.exe
We should try avoiding checking-in native binaries in the source tree.
One way to avoid this is to deploy the native binaries to Maven.
See details here: [http://docs.codehaus.org/display/MAVENUSER/Using+Maven+to+manage+.NET+projects](http://docs.codehaus.org/display/MAVENUSER/Using+Maven+to+manage+.NET+projects) | non_test | avoid committing native binary code in glassfish svn workspace native binary files are in glassfish svn trunk workspace main nucleus admin server mgmt src main resources lib winsw exe main nucleus admin server mgmt target classes lib winsw exe main nucleus cluster cli src main resources lib dcomconfigurator exe main nucleus cluster cli target classes lib dcomconfigurator exe main nucleus cluster uc src main resources lib dcomconfigurator exe main nucleus cluster uc target classes lib dcomconfigurator exe main nucleus cluster vld target classes lib dcomconfigurator exe we should try avoiding checking in native binaries in the source tree one way to avoid this is to deploy the native binaries to maven see details here | 0 |
127,257 | 10,456,091,889 | IssuesEvent | 2019-09-19 23:33:03 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | opened | background takes too long to turn dark when opening downloaded media from WebTorrent | QA/Test-Plan-Specified QA/Yes bug feature/webtorrent | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
As per https://github.com/brave/brave-browser/issues/5326#issuecomment-532545074, the background shouldn't be taking this long to switch from `Light` -> `Dark`.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Visit this URL: https://ia800301.us.archive.org/14/items/art_of_war_librivox/art_of_war_librivox_archive.torrent
2. Start the torrent download
3. Click on file 12 "art_of_war_01-02_sun_tzu.png" and confirm that the image shows up.
You'll notice that the entire background will go `dark` after a short delay which also covers up the image.
## Actual result:
<!--Please add screenshots if needed-->

## Expected result:
Background shouldn't take this long to turn `dark` and shouldn't be covering the media that's opened.
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible using the above STR.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 0.69.128 Chromium: 77.0.3865.75 (Official Build) (64-bit)
-- | --
Revision | 201e747d032611c5f2785cae06e894cf85be7f8a-refs/branch-heads/3865@{#776}
OS | macOS Version 10.14.6 (Build 18G95)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `Yes`
- Can you reproduce this issue with the beta channel? `Yes`
- Can you reproduce this issue with the dev channel? `Yes`
- Can you reproduce this issue with the nightly channel? `Yes`
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? `N/A`
- Does the issue resolve itself when disabling Brave Rewards? `N/A`
- Is the issue reproducible on the latest version of Chrome? `N/A`
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
CCing @feross | 1.0 | background takes too long to turn dark when opening downloaded media from WebTorrent - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
As per https://github.com/brave/brave-browser/issues/5326#issuecomment-532545074, the background shouldn't be taking this long to switch from `Light` -> `Dark`.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Visit this URL: https://ia800301.us.archive.org/14/items/art_of_war_librivox/art_of_war_librivox_archive.torrent
2. Start the torrent download
3. Click on file 12 "art_of_war_01-02_sun_tzu.png" and confirm that the image shows up.
You'll notice that the entire background will go `dark` after a short delay which also covers up the image.
## Actual result:
<!--Please add screenshots if needed-->

## Expected result:
Background shouldn't take this long to turn `dark` and shouldn't be covering the media that's opened.
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible using the above STR.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 0.69.128 Chromium: 77.0.3865.75 (Official Build) (64-bit)
-- | --
Revision | 201e747d032611c5f2785cae06e894cf85be7f8a-refs/branch-heads/3865@{#776}
OS | macOS Version 10.14.6 (Build 18G95)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `Yes`
- Can you reproduce this issue with the beta channel? `Yes`
- Can you reproduce this issue with the dev channel? `Yes`
- Can you reproduce this issue with the nightly channel? `Yes`
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? `N/A`
- Does the issue resolve itself when disabling Brave Rewards? `N/A`
- Is the issue reproducible on the latest version of Chrome? `N/A`
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
CCing @feross | test | background takes too long to turn dark when opening downloaded media from webtorrent have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description as per the background shouldn t be taking this long to switch from light dark steps to reproduce visit this url start the torrent download click on file art of war sun tzu png and confirm that the image shows up you ll notice that the entire background will go dark after a short delay which also covers up the image actual result expected result background shouldn t take this long to turn dark and shouldn t be covering the media that s opened reproduces how often reproducible using the above str brave version brave version info brave chromium official build bit revision refs branch heads os macos version build version channel information can you reproduce this issue with the current release yes can you reproduce this issue with the beta channel yes can you reproduce this issue with the dev channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields n a does the issue resolve itself when disabling brave rewards n a is the issue reproducible on the latest version of chrome n a miscellaneous information ccing feross | 1 |
38,541 | 5,190,974,935 | IssuesEvent | 2017-01-21 15:36:48 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | extended test output incorrect: should run a deployment to completion and then scale to zero | component/deployments kind/test-flake priority/P1 | https://ci.openshift.redhat.com/jenkins/job/zz_origin_gce_image/57/testReport/junit/(root)/Extended/deploymentconfigs_with_test_deployments__Conformance__should_run_a_deployment_to_completion_and_then_scale_to_zero/
Extended.deploymentconfigs with test deployments [Conformance] should run a deployment to completion and then scale to zero (from (empty))
```
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:285
Expected
<string>: Flag --latest has been deprecated, use 'oc rollout latest' instead
Started deployment #3
to contain substring
<string>: deployment-test-3 up to 1
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:280
``` | 1.0 | extended test output incorrect: should run a deployment to completion and then scale to zero - https://ci.openshift.redhat.com/jenkins/job/zz_origin_gce_image/57/testReport/junit/(root)/Extended/deploymentconfigs_with_test_deployments__Conformance__should_run_a_deployment_to_completion_and_then_scale_to_zero/
Extended.deploymentconfigs with test deployments [Conformance] should run a deployment to completion and then scale to zero (from (empty))
```
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:285
Expected
<string>: Flag --latest has been deprecated, use 'oc rollout latest' instead
Started deployment #3
to contain substring
<string>: deployment-test-3 up to 1
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:280
``` | test | extended test output incorrect should run a deployment to completion and then scale to zero extended deploymentconfigs with test deployments should run a deployment to completion and then scale to zero from empty data src github com openshift origin test extended deployments deployments go expected flag latest has been deprecated use oc rollout latest instead started deployment to contain substring deployment test up to data src github com openshift origin test extended deployments deployments go | 1 |
310,522 | 23,341,357,081 | IssuesEvent | 2022-08-09 14:16:14 | obsidian-tasks-group/obsidian-tasks | https://api.github.com/repos/obsidian-tasks-group/obsidian-tasks | closed | Docs bug: Advanced Docs page "Notifications" currently incorrect about how to format tasks to work with Reminders | scope: documentation | Thanks to @aubreyz for reporting this (initially in a comment in 910 ) - aubreyz, please add or correct anything I missed!
## Expected Behavior
obsidian-tasks documentation page "Notifications" (in Advanced section) gives specific instructions for how to make Tasks work with the community plugin Reminders: https://obsidian-tasks-group.github.io/obsidian-tasks/advanced/notifications/#where-to-add-the-reminder-date
Expected: users should be able to use Reminders and Tasks without any additional major caveats and warnings than those listed on the page.
## Current Behavior
According to https://github.com/uphy/obsidian-reminder/issues/100#issuecomment-1192998712 using the "Defer" command from Reminders wipes all content in the line between the ⏰ emoji and the 📅 emoji, including priority, start date, etc. from Tasks. This is a bug in Reminders, clearly, but it means that the recommended configuration in Tasks documentation is insufficient in terms of informing users of a data-loss danger when using Reminders.
## Context (Environment)
* Obsidian version: current (0.15.x)
* Tasks version: current (1.10.0)
* Reminders version: current ?? [sorry, I have not repro'd this myself, summarizing others' posts]
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, if you have an idea -->
Since Tasks puts some fields between the description and the due date, it sounds like the documentation cannot recommend a configuration to avoid data loss due to the Reminders bug in its "Defer" command.
Therefore, I suggest adding a warning to the https://obsidian-tasks-group.github.io/obsidian-tasks/advanced/notifications/#where-to-add-the-reminder-date documentation page, suggesting that users check the status of the [Reminders issue around Defer](https://github.com/uphy/obsidian-reminder/issues/100) before using that functionality in combination with Tasks. Maybe also a warning to check the separate [Reminders issue with recurrence](https://github.com/uphy/obsidian-reminder/issues/93) that was mentioned in the investigation of the Defer issue.
**Or maybe instead** a more general warning that Tasks does not continuously check the status/functionality of other plugins mentioned in the Advanced pages and that users should check those plugins for outstanding issues before using them with Tasks. | 1.0 | Docs bug: Advanced Docs page "Notifications" currently incorrect about how to format tasks to work with Reminders - Thanks to @aubreyz for reporting this (initially in a comment in 910 ) - aubreyz, please add or correct anything I missed!
## Expected Behavior
obsidian-tasks documentation page "Notifications" (in Advanced section) gives specific instructions for how to make Tasks work with the community plugin Reminders: https://obsidian-tasks-group.github.io/obsidian-tasks/advanced/notifications/#where-to-add-the-reminder-date
Expected: users should be able to use Reminders and Tasks without any additional major caveats and warnings than those listed on the page.
## Current Behavior
According to https://github.com/uphy/obsidian-reminder/issues/100#issuecomment-1192998712 using the "Defer" command from Reminders wipes all content in the line between the ⏰ emoji and the 📅 emoji, including priority, start date, etc. from Tasks. This is a bug in Reminders, clearly, but it means that the recommended configuration in Tasks documentation is insufficient in terms of informing users of a data-loss danger when using Reminders.
## Context (Environment)
* Obsidian version: current (0.15.x)
* Tasks version: current (1.10.0)
* Reminders version: current ?? [sorry, I have not repro'd this myself, summarizing others' posts]
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, if you have an idea -->
Since Tasks puts some fields between the description and the due date, it sounds like the documentation cannot recommend a configuration to avoid data loss due to the Reminders bug in its "Defer" command.
Therefore, I suggest adding a warning to the https://obsidian-tasks-group.github.io/obsidian-tasks/advanced/notifications/#where-to-add-the-reminder-date documentation page, suggesting that users check the status of the [Reminders issue around Defer](https://github.com/uphy/obsidian-reminder/issues/100) before using that functionality in combination with Tasks. Maybe also a warning to check the separate [Reminders issue with recurrence](https://github.com/uphy/obsidian-reminder/issues/93) that was mentioned in the investigation of the Defer issue.
**Or maybe instead** a more general warning that Tasks does not continuously check the status/functionality of other plugins mentioned in the Advanced pages and that users should check those plugins for outstanding issues before using them with Tasks. | non_test | docs bug advanced docs page notifications currently incorrect about how to format tasks to work with reminders thanks to aubreyz for reporting this initially in a comment in aubreyz please add or correct anything i missed expected behavior obsidian tasks documentation page notifications in advanced section gives specific instructions for how to make tasks work with the community plugin reminders expected users should be able to use reminders and tasks without any additional major caveats and warnings than those listed on the page current behavior according to using the defer command from reminders wipes all content in the line between the ⏰ emoji and the 📅 emoji including priority start date etc from tasks this is a bug in reminders clearly but it means that the recommended configuration in tasks documentation is insufficient in terms of informing users of a data loss danger when using reminders context environment obsidian version current x tasks version current reminders version current possible solution since tasks puts some fields between the description and the due date it sounds like the documentation cannot recommend a configuration to avoid data loss due to the reminders bug in its defer command therefore i suggest adding a warning to the documentation page suggesting that users check the status of the before using that functionality in combination with tasks maybe also a warning to check the separate that was mentioned in the investigation of the defer issue or maybe instead a more general warning that tasks does not continuously check the status functionality of other plugins mentioned in the advanced pages and that users should check those plugins for outstanding issues before using them with tasks | 0 |
233,799 | 19,061,860,429 | IssuesEvent | 2021-11-26 08:54:36 | nromanen/Schedule | https://api.github.com/repos/nromanen/Schedule | closed | Tests: search, renderTeacher files in helper folder | Pri: low test | White unit tests for schedule, search, renderTeacher files in helper folder.
Acceptance Criteria:
Covering of tests are more than 50%;
Sub Tasks \ Bugs:
- [x] -write configurations for tests;
- [x] write tests for schedule;
- [x] write tests for renderTeacher;
- [x] write tests for search;
Estimate: 3 days | 1.0 | Tests: search, renderTeacher files in helper folder - White unit tests for schedule, search, renderTeacher files in helper folder.
Acceptance Criteria:
Covering of tests are more than 50%;
Sub Tasks \ Bugs:
- [x] -write configurations for tests;
- [x] write tests for schedule;
- [x] write tests for renderTeacher;
- [x] write tests for search;
Estimate: 3 days | test | tests search renderteacher files in helper folder white unit tests for schedule search renderteacher files in helper folder acceptance criteria covering of tests are more than sub tasks bugs write configurations for tests write tests for schedule write tests for renderteacher write tests for search estimate days | 1 |
100,555 | 8,749,890,328 | IssuesEvent | 2018-12-13 17:34:15 | Microsoft/AzureStorageExplorer | https://api.github.com/repos/Microsoft/AzureStorageExplorer | closed | [Unstable]The text box of the 'Add user or group' becomes narrow and can't be edited when switching the 'Users and groups' frequently on 'Manage Access' dialog | :gear: adls gen2 testing | **Storage Explorer Version**: 1.6.1
**Platform/OS Version**: Windows 10
**Architecture**: ia32
**Build Number**: 20181213.1
**Commit**: 4ca2a5cb
**Regression From**: Not a regression
#### Steps to Reproduce: ####
1. Expand one ADLS Gen2 account -> Right click one blob container then select 'Manage Access...'.
2. Change the users and groups frequently(Notes: don't save it).
#### Expected Experience: ####
The text box of the 'Add user or group' shows well.
#### Actual Experience: ####
The text box of the 'Add user or group' becomes narrow and can't be edited.

#### More info: ####
This issue is unstable. | 1.0 | [Unstable]The text box of the 'Add user or group' becomes narrow and can't be edited when switching the 'Users and groups' frequently on 'Manage Access' dialog - **Storage Explorer Version**: 1.6.1
**Platform/OS Version**: Windows 10
**Architecture**: ia32
**Build Number**: 20181213.1
**Commit**: 4ca2a5cb
**Regression From**: Not a regression
#### Steps to Reproduce: ####
1. Expand one ADLS Gen2 account -> Right click one blob container then select 'Manage Access...'.
2. Change the users and groups frequently(Notes: don't save it).
#### Expected Experience: ####
The text box of the 'Add user or group' shows well.
#### Actual Experience: ####
The text box of the 'Add user or group' becomes narrow and can't be edited.

#### More info: ####
This issue is unstable. | test | the text box of the add user or group becomes narrow and can t be edited when switching the users and groups frequently on manage access dialog storage explorer version platform os version windows architecture build number commit regression from not a regression steps to reproduce expand one adls account right click one blob container then select manage access change the users and groups frequently notes don t save it expected experience the text box of the add user or group shows well actual experience the text box of the add user or group becomes narrow and can t be edited more info this issue is unstable | 1 |
2,429 | 2,600,302,970 | IssuesEvent | 2015-02-23 15:34:07 | PressForward/pressforward | https://api.github.com/repos/PressForward/pressforward | opened | Screen Options selections are not retained | bug Test | Setting things like which columns "show on screen" isn't working in the Subscribed Feeds section.
Is this the case for anyone but me? @regan008 @lmrhody ? | 1.0 | Screen Options selections are not retained - Setting things like which columns "show on screen" isn't working in the Subscribed Feeds section.
Is this the case for anyone but me? @regan008 @lmrhody ? | test | screen options selections are not retained setting things like which columns show on screen isn t working in the subscribed feeds section is this the case for anyone but me lmrhody | 1 |
205,715 | 15,683,036,187 | IssuesEvent | 2021-03-25 08:13:59 | DadosAbertosDeFeira/maria-quiteria | https://api.github.com/repos/DadosAbertosDeFeira/maria-quiteria | opened | Checar arquivos se arquivos criados estão sendo deletados | para testar | Para extrair o conteúdo dos arquivos nós baixamos o arquivo do nosso S3 e passamos para o Tika. Não precisávamos nos preocupar com o espaço porque no Heroku não havia persistência de dados mas agora que estamos na Absam precisamos verificar se esses arquivos estão ficando lá. Se sim, precisamos deletar os arquivos assim que tivermos certeza que o conteúdo foi extraído. Seria bom verificar também se o arquivo já existe em disco antes de tentar baixar do S3. | 1.0 | Checar arquivos se arquivos criados estão sendo deletados - Para extrair o conteúdo dos arquivos nós baixamos o arquivo do nosso S3 e passamos para o Tika. Não precisávamos nos preocupar com o espaço porque no Heroku não havia persistência de dados mas agora que estamos na Absam precisamos verificar se esses arquivos estão ficando lá. Se sim, precisamos deletar os arquivos assim que tivermos certeza que o conteúdo foi extraído. Seria bom verificar também se o arquivo já existe em disco antes de tentar baixar do S3. | test | checar arquivos se arquivos criados estão sendo deletados para extrair o conteúdo dos arquivos nós baixamos o arquivo do nosso e passamos para o tika não precisávamos nos preocupar com o espaço porque no heroku não havia persistência de dados mas agora que estamos na absam precisamos verificar se esses arquivos estão ficando lá se sim precisamos deletar os arquivos assim que tivermos certeza que o conteúdo foi extraído seria bom verificar também se o arquivo já existe em disco antes de tentar baixar do | 1 |
12,991 | 8,758,279,399 | IssuesEvent | 2018-12-15 01:59:21 | GillesCrebassa/FinancialManager | https://api.github.com/repos/GillesCrebassa/FinancialManager | opened | CVE-2018-11694 High Severity Vulnerability detected by WhiteSource | security vulnerability | ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libsass3.4.1</b></p></summary>
<p>
<p>A C/C++ implementation of a Sass compiler</p>
<p>Library home page: <a href=https://github.com/sass/libsass.git>https://github.com/sass/libsass.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (72)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_value.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/source_map.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/constants.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_c.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/memory_manager.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/node.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/expand.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/listize.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/output.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/parser.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/values.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/emitter.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/debugger.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/units.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/util.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/cssize.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/emitter.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/eval.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/functions.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/functions.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/listize.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/ast.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/units.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/ast_factory.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/ast.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/memory_manager.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/lexer.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/constants.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_c.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_value.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/cssize.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/environment.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/util.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/eval.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/include/sass/base.h
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/output.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/operation.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/inspect.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/file.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/include/sass/values.h
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/source_map.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/include/sass2scss.h
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/extend.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/file.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/node.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/expand.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/context.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/environment.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/include/sass/context.h
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/inspect.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/json.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/context.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/parser.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/extend.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/bind.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-11694 High Severity Vulnerability detected by WhiteSource - ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libsass3.4.1</b></p></summary>
<p>
<p>A C/C++ implementation of a Sass compiler</p>
<p>Library home page: <a href=https://github.com/sass/libsass.git>https://github.com/sass/libsass.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (72)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_value.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/source_map.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/constants.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_c.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/memory_manager.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/node.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/expand.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/listize.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/output.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/parser.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/values.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/emitter.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/debugger.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/units.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/util.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/cssize.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/emitter.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/eval.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/functions.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/functions.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/listize.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/ast.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/units.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/ast_factory.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/ast.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/memory_manager.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/lexer.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/constants.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_c.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/to_value.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/cssize.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/environment.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/util.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/eval.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/include/sass/base.h
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/output.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/operation.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/inspect.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/file.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/include/sass/values.h
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/source_map.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/include/sass2scss.h
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/sass.hpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/extend.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/file.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/node.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/expand.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/context.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/environment.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/include/sass/context.h
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/inspect.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/json.cpp
- /FinancialManager/web/assets/vendors/select2/node_modules/node-sass/src/libsass/src/context.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/parser.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/extend.cpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /FinancialManager/web/assets/vendors/cropper/node_modules/node-sass/src/libsass/src/bind.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high severity vulnerability detected by whitesource cve high severity vulnerability vulnerable library a c c implementation of a sass compiler library home page a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries financialmanager web assets vendors node modules node sass src libsass src to value hpp financialmanager web assets vendors node modules node sass src libsass src source map cpp financialmanager web assets vendors node modules node sass src libsass src constants hpp financialmanager web assets vendors node modules node sass src libsass src to c hpp financialmanager web assets vendors cropper node modules node sass src libsass src memory manager hpp financialmanager web assets vendors node modules node sass src libsass src node hpp financialmanager web assets vendors node modules node sass src libsass src sass context cpp financialmanager web assets vendors node modules node sass src libsass src expand cpp financialmanager web assets vendors node modules node sass src libsass src listize cpp financialmanager web assets vendors node modules node sass src libsass src output cpp financialmanager web assets vendors cropper node modules node sass src libsass src parser cpp financialmanager web assets vendors node modules node sass src libsass src values cpp financialmanager web assets vendors node modules node sass src libsass src emitter cpp financialmanager web assets vendors node modules node sass src libsass src debugger hpp financialmanager web assets vendors cropper node modules node sass src libsass src ast fwd decl hpp financialmanager web assets vendors cropper node modules node sass src libsass src units hpp financialmanager web assets vendors node modules node sass src libsass src util hpp financialmanager web assets vendors cropper node modules node sass src libsass src cssize cpp financialmanager web assets vendors node modules node sass src libsass src sass util hpp financialmanager web assets vendors cropper node modules node sass src libsass src error handling cpp financialmanager web assets vendors cropper node modules node sass src libsass src ast def macros hpp financialmanager web assets vendors node modules node sass src libsass src emitter hpp financialmanager web assets vendors cropper node modules node sass src libsass src eval hpp financialmanager web assets vendors cropper node modules node sass src libsass src cpp financialmanager web assets vendors node modules node sass src libsass src functions hpp financialmanager web assets vendors node modules node sass src libsass src functions cpp financialmanager web assets vendors node modules node sass src libsass src listize hpp financialmanager web assets vendors node modules node sass src libsass src ast cpp financialmanager web assets vendors cropper node modules node sass src libsass src units cpp financialmanager web assets vendors node modules node sass src libsass src ast factory hpp financialmanager web assets vendors node modules node sass src libsass src ast hpp financialmanager web assets vendors cropper node modules node sass src libsass src remove placeholders hpp financialmanager web assets vendors cropper node modules node sass src libsass src memory manager cpp financialmanager web assets vendors cropper node modules node sass src libsass src lexer cpp financialmanager web assets vendors cropper node modules node sass src libsass src sass values cpp financialmanager web assets vendors node modules node sass src libsass src constants cpp financialmanager web assets vendors node modules node sass src libsass src to c cpp financialmanager web assets vendors node modules node sass src libsass src to value cpp financialmanager web assets vendors node modules node sass src libsass src cssize hpp financialmanager web assets vendors node modules node sass src libsass src environment cpp financialmanager web assets vendors cropper node modules node sass src libsass src remove placeholders cpp financialmanager web assets vendors cropper node modules node sass src libsass src util cpp financialmanager web assets vendors node modules node sass src libsass src eval cpp financialmanager web assets vendors node modules node sass src libsass src sass context hpp financialmanager web assets vendors node modules node sass src libsass src subset map hpp financialmanager web assets vendors cropper node modules node sass src libsass include sass base h financialmanager web assets vendors node modules node sass src libsass src output hpp financialmanager web assets vendors node modules node sass src libsass src operation hpp financialmanager web assets vendors cropper node modules node sass src libsass src inspect hpp financialmanager web assets vendors cropper node modules node sass src libsass src sass cpp financialmanager web assets vendors node modules node sass src libsass src file hpp financialmanager web assets vendors cropper node modules node sass src libsass include sass values h financialmanager web assets vendors node modules node sass src libsass src error handling hpp financialmanager web assets vendors cropper node modules node sass src libsass src source map hpp financialmanager web assets vendors node modules node sass src libsass include h financialmanager web assets vendors cropper node modules node sass src libsass src sass hpp financialmanager web assets vendors node modules node sass src libsass src extend hpp financialmanager web assets vendors cropper node modules node sass src libsass src file cpp financialmanager web assets vendors cropper node modules node sass src libsass src node cpp financialmanager web assets vendors node modules node sass src libsass src expand hpp financialmanager web assets vendors cropper node modules node sass src libsass src context cpp financialmanager web assets vendors node modules node sass src libsass src environment hpp financialmanager web assets vendors cropper node modules node sass src libsass include sass context h financialmanager web assets vendors node modules node sass src libsass src prelexer cpp financialmanager web assets vendors node modules node sass src libsass src inspect cpp financialmanager web assets vendors cropper node modules node sass src libsass src color maps cpp financialmanager web assets vendors cropper node modules node sass src libsass src json cpp financialmanager web assets vendors node modules node sass src libsass src context hpp financialmanager web assets vendors cropper node modules node sass src libsass src parser hpp financialmanager web assets vendors cropper node modules node sass src libsass src extend cpp financialmanager web assets vendors cropper node modules node sass src libsass src color maps hpp financialmanager web assets vendors cropper node modules node sass src libsass src bind cpp vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
148,329 | 11,847,796,993 | IssuesEvent | 2020-03-24 12:42:46 | eclipse/codewind | https://api.github.com/repos/eclipse/codewind | closed | [Eclipse IDE] Problems with automated tests | area/eclipse-ide kind/test | The automated tests are failing at the Codewind install step with the following message:
```
The process failed with exit code: 1, and error: 2020/03/23 20:06:42 WRITE_FILE_ERROR[204]: {"error":"sec_keyring","error_description":"exec: \"dbus-launch\": executable file not found in $PATH"}.
```
Also, when they do fail at this step and no connection is created, getting NPE in CodewindUtil.cleanup. | 1.0 | [Eclipse IDE] Problems with automated tests - The automated tests are failing at the Codewind install step with the following message:
```
The process failed with exit code: 1, and error: 2020/03/23 20:06:42 WRITE_FILE_ERROR[204]: {"error":"sec_keyring","error_description":"exec: \"dbus-launch\": executable file not found in $PATH"}.
```
Also, when they do fail at this step and no connection is created, getting NPE in CodewindUtil.cleanup. | test | problems with automated tests the automated tests are failing at the codewind install step with the following message the process failed with exit code and error write file error error sec keyring error description exec dbus launch executable file not found in path also when they do fail at this step and no connection is created getting npe in codewindutil cleanup | 1 |
332,065 | 29,182,206,498 | IssuesEvent | 2023-05-19 12:50:19 | dotnet/source-build | https://api.github.com/repos/dotnet/source-build | closed | Scan for binaries is failing from wasm file in runtime repo | area-ci-testing | The CI build shows a warning in the `Scan for binaries` task:
```
./.dotnet/dotnet darc vmr scan-binary-files --vmr "/mnt/vss/_work/1/s" --tmp "/mnt/vss/_work/_temp" --baseline-file "src/VirtualMonoRepo/allowed-binaries.txt" || (echo '##[error]Found binaries in the VMR' && exit 1)
========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /mnt/vss/_work/_temp/e45a5dcc-b89a-4f39-8e02-67abfa651e62.sh
src/runtime/src/mono/wasm/runtime/wasm-simd-feature-detect.wasm
##[error]Found binaries in the VMR
##[error]Bash exited with code '1'.
Finishing: Scan for binaries
``` | 1.0 | Scan for binaries is failing from wasm file in runtime repo - The CI build shows a warning in the `Scan for binaries` task:
```
./.dotnet/dotnet darc vmr scan-binary-files --vmr "/mnt/vss/_work/1/s" --tmp "/mnt/vss/_work/_temp" --baseline-file "src/VirtualMonoRepo/allowed-binaries.txt" || (echo '##[error]Found binaries in the VMR' && exit 1)
========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /mnt/vss/_work/_temp/e45a5dcc-b89a-4f39-8e02-67abfa651e62.sh
src/runtime/src/mono/wasm/runtime/wasm-simd-feature-detect.wasm
##[error]Found binaries in the VMR
##[error]Bash exited with code '1'.
Finishing: Scan for binaries
``` | test | scan for binaries is failing from wasm file in runtime repo the ci build shows a warning in the scan for binaries task dotnet dotnet darc vmr scan binary files vmr mnt vss work s tmp mnt vss work temp baseline file src virtualmonorepo allowed binaries txt echo found binaries in the vmr exit starting command output usr bin bash noprofile norc mnt vss work temp sh src runtime src mono wasm runtime wasm simd feature detect wasm found binaries in the vmr bash exited with code finishing scan for binaries | 1 |
55,074 | 6,425,490,699 | IssuesEvent | 2017-08-09 15:30:14 | Syl2010/CoreBot | https://api.github.com/repos/Syl2010/CoreBot | closed | déblocage des commande lors d'un clear chat | Fix Test | Permettre l'utilisation des commandes pendant le traitement de la commande clear chat (supression de messages). | 1.0 | déblocage des commande lors d'un clear chat - Permettre l'utilisation des commandes pendant le traitement de la commande clear chat (supression de messages). | test | déblocage des commande lors d un clear chat permettre l utilisation des commandes pendant le traitement de la commande clear chat supression de messages | 1 |
172,688 | 13,327,469,316 | IssuesEvent | 2020-08-27 13:13:40 | repobee/repobee | https://api.github.com/repos/repobee/repobee | opened | Test `issues list` with --show-body flag | testing | It's currently untested. A single integration test here would suffice. | 1.0 | Test `issues list` with --show-body flag - It's currently untested. A single integration test here would suffice. | test | test issues list with show body flag it s currently untested a single integration test here would suffice | 1 |
102,829 | 12,825,826,513 | IssuesEvent | 2020-07-06 15:32:56 | hpe-design/design-system | https://api.github.com/repos/hpe-design/design-system | closed | Nav - Refactor to new designs, leverage new grommet component and decouple from aries-site Header | design system core design system site | The decoupling of Nav from Header was addressed on recent PR, but there is still more work refactoring to address the new designs. | 2.0 | Nav - Refactor to new designs, leverage new grommet component and decouple from aries-site Header - The decoupling of Nav from Header was addressed on recent PR, but there is still more work refactoring to address the new designs. | non_test | nav refactor to new designs leverage new grommet component and decouple from aries site header the decoupling of nav from header was addressed on recent pr but there is still more work refactoring to address the new designs | 0 |
37,803 | 8,364,530,316 | IssuesEvent | 2018-10-03 23:31:15 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | opened | JIT: Support object stack allocation | area-CodeGen | This issue will track work on supporting object stack allocation in the jit. See #20251 for the document describing the work.
The initial goal is to be able to remove heap allocation in a simple example like this:
```
class Foo
{
public int f1;
public int f2;
public Foo(int f1, int f2)
{
this.f1 = f1;
this.f2 = f2;
}
}
class Test
{
static int f1;
static int f2;
public static int Main()
{
Foo foo = new Foo(f1, f2);
return foo.f1 + foo.f2;
}
}
```
and then in a similar example where the class has gc fields.
Proposed initial steps are:
- [ ] add getHeapClassSize jit interface method and its implementations
- [ ] add classHasFinalizer jit interface method and its implementations
- [ ] add getObjHeaderSize jit interface method and its implementations
- [ ] modify getClassGCLayout jit interface method to work on reference types
- [ ] add COMPlus_JitObjectStackAllocation environment variable to control this optimization (off by default)
- [ ] move ObjectAllocator phase to be closer to inlining
- [ ] modify lvaSetStruct to allow creating locals corresponding to stack-allocated classes
- [ ] modify gc reporting to properly report gc fields of stack-allocated objects
- [ ] modify gc writebarrier logic to apply appropriate barriers when assigning to fields of (possibly) stack-allocated objects
- [ ] modify gc code that asserts that all objects live on the heap
- [ ] add simple conservative escape analysis sufficient for the example above
- [ ] make the analysis more sophisticated to handle increasingly more complex examples
I will be modifying and extending this list as the work progresses.
cc @dotnet/jit-contrib | 1.0 | JIT: Support object stack allocation - This issue will track work on supporting object stack allocation in the jit. See #20251 for the document describing the work.
The initial goal is to be able to remove heap allocation in a simple example like this:
```
class Foo
{
public int f1;
public int f2;
public Foo(int f1, int f2)
{
this.f1 = f1;
this.f2 = f2;
}
}
class Test
{
static int f1;
static int f2;
public static int Main()
{
Foo foo = new Foo(f1, f2);
return foo.f1 + foo.f2;
}
}
```
and then in a similar example where the class has gc fields.
Proposed initial steps are:
- [ ] add getHeapClassSize jit interface method and its implementations
- [ ] add classHasFinalizer jit interface method and its implementations
- [ ] add getObjHeaderSize jit interface method and its implementations
- [ ] modify getClassGCLayout jit interface method to work on reference types
- [ ] add COMPlus_JitObjectStackAllocation environment variable to control this optimization (off by default)
- [ ] move ObjectAllocator phase to be closer to inlining
- [ ] modify lvaSetStruct to allow creating locals corresponding to stack-allocated classes
- [ ] modify gc reporting to properly report gc fields of stack-allocated objects
- [ ] modify gc writebarrier logic to apply appropriate barriers when assigning to fields of (possibly) stack-allocated objects
- [ ] modify gc code that asserts that all objects live on the heap
- [ ] add simple conservative escape analysis sufficient for the example above
- [ ] make the analysis more sophisticated to handle increasingly more complex examples
I will be modifying and extending this list as the work progresses.
cc @dotnet/jit-contrib | non_test | jit support object stack allocation this issue will track work on supporting object stack allocation in the jit see for the document describing the work the initial goal is to be able to remove heap allocation in a simple example like this class foo public int public int public foo int int this this class test static int static int public static int main foo foo new foo return foo foo and then in a similar example where the class has gc fields proposed initial steps are add getheapclasssize jit interface method and its implementations add classhasfinalizer jit interface method and its implementations add getobjheadersize jit interface method and its implementations modify getclassgclayout jit interface method to work on reference types add complus jitobjectstackallocation environment variable to control this optimization off by default move objectallocator phase to be closer to inlining modify lvasetstruct to allow creating locals corresponding to stack allocated classes modify gc reporting to properly report gc fields of stack allocated objects modify gc writebarrier logic to apply appropriate barriers when assigning to fields of possibly stack allocated objects modify gc code that asserts that all objects live on the heap add simple conservative escape analysis sufficient for the example above make the analysis more sophisticated to handle increasingly more complex examples i will be modifying and extending this list as the work progresses cc dotnet jit contrib | 0 |
256,133 | 22,040,610,764 | IssuesEvent | 2022-05-29 09:42:45 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | Could not connect to Steam Network... error - Manjaro | Steam client Need Retest Distro Family: Arch | #### Your system information
* Steam client: Steam Manjaro 1.0.0.74-1 and Steam-Native 1.0.0.70-2
* Distribution: Manjaro 21.2.2 KDE version (Plasma 5.24.2) last update March 14 2022.
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
* Nvidia proprietery driver 510.54
#### Please describe your issue in as much detail as possible:
On and off since January I have had problems running Steam (up til then running flawlessly) increasingly often getting the "Could not connect to Steam Network..." error. Since Feb 24 I have not been able to start Steam at all. I have tried steam --reset, I have tried running with steam -tcp, both have worked earlier but now only present me with the "Could not connect to Steam Network..." error.
After starting I will get to login-window, and on to "Updating Steam information" and then immediately get the "Could not connect to Steam Network..." error.
I have tried every possible solution I could find, up to and including a fresh OS install. Same result. In addition to current kernel (5.16) I have also tried the LTS kernels 5.15 and 5.10 with same results.
I have also found several posts for Manjaro/Arch suggesting networking changes, none of these have had any effect.
Included files are from a fresh OS install (also fresh Steam install, purged version installed with OS, deleted folders, then reinstalled).
#### Steps for reproducing this issue:
1. Install Manjaro, install Steam
2. run Steam
3. alternatively run steam --reset then steam or steam -tcp
[bootstrap_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232383/bootstrap_log.txt)
[cef_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232384/cef_log.txt)
[configstore_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232385/configstore_log.txt)
[connection_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232386/connection_log.txt)
FOLLOWUP #1: Installing flathub version on same system gives same error.
FOLLOWUP #2: Tried installing Fedora 35, latest Pop OS, latest OpenSuse Tumbleweed, same errors there.
FOLLOWUP #3 Installing the windows version using wine (Wine-valve 3.7.0.1b-1 from AUR) installs and runs just fine. Which is strange? Should at least rule out any IP related problems? Still same error with (non-wine) steam-runtime
FOLLOWUP #4 Downgraded Nvidia proprietary driver to 470 series on fresh Manjaro install. Still same error (see #3). | 1.0 | Could not connect to Steam Network... error - Manjaro - #### Your system information
* Steam client: Steam Manjaro 1.0.0.74-1 and Steam-Native 1.0.0.70-2
* Distribution: Manjaro 21.2.2 KDE version (Plasma 5.24.2) last update March 14 2022.
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
* Nvidia proprietery driver 510.54
#### Please describe your issue in as much detail as possible:
On and off since January I have had problems running Steam (up til then running flawlessly) increasingly often getting the "Could not connect to Steam Network..." error. Since Feb 24 I have not been able to start Steam at all. I have tried steam --reset, I have tried running with steam -tcp, both have worked earlier but now only present me with the "Could not connect to Steam Network..." error.
After starting I will get to login-window, and on to "Updating Steam information" and then immediately get the "Could not connect to Steam Network..." error.
I have tried every possible solution I could find, up to and including a fresh OS install. Same result. In addition to current kernel (5.16) I have also tried the LTS kernels 5.15 and 5.10 with same results.
I have also found several posts for Manjaro/Arch suggesting networking changes, none of these have had any effect.
Included files are from a fresh OS install (also fresh Steam install, purged version installed with OS, deleted folders, then reinstalled).
#### Steps for reproducing this issue:
1. Install Manjaro, install Steam
2. run Steam
3. alternatively run steam --reset then steam or steam -tcp
[bootstrap_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232383/bootstrap_log.txt)
[cef_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232384/cef_log.txt)
[configstore_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232385/configstore_log.txt)
[connection_log.txt](https://github.com/ValveSoftware/steam-for-linux/files/8232386/connection_log.txt)
FOLLOWUP #1: Installing flathub version on same system gives same error.
FOLLOWUP #2: Tried installing Fedora 35, latest Pop OS, latest OpenSuse Tumbleweed, same errors there.
FOLLOWUP #3 Installing the windows version using wine (Wine-valve 3.7.0.1b-1 from AUR) installs and runs just fine. Which is strange? Should at least rule out any IP related problems? Still same error with (non-wine) steam-runtime
FOLLOWUP #4 Downgraded Nvidia proprietary driver to 470 series on fresh Manjaro install. Still same error (see #3). | test | could not connect to steam network error manjaro your system information steam client steam manjaro and steam native distribution manjaro kde version plasma last update march opted into steam client beta no have you checked for system updates yes nvidia proprietery driver please describe your issue in as much detail as possible on and off since january i have had problems running steam up til then running flawlessly increasingly often getting the could not connect to steam network error since feb i have not been able to start steam at all i have tried steam reset i have tried running with steam tcp both have worked earlier but now only present me with the could not connect to steam network error after starting i will get to login window and on to updating steam information and then immediately get the could not connect to steam network error i have tried every possible solution i could find up to and including a fresh os install same result in addition to current kernel i have also tried the lts kernels and with same results i have also found several posts for manjaro arch suggesting networking changes none of these have had any effect included files are from a fresh os install also fresh steam install purged version installed with os deleted folders then reinstalled steps for reproducing this issue install manjaro install steam run steam alternatively run steam reset then steam or steam tcp followup installing flathub version on same system gives same error followup tried installing fedora latest pop os latest opensuse tumbleweed same errors there followup installing the windows version using wine wine valve from aur installs and runs just fine which is strange should at least rule out any ip related problems still same error with non wine steam runtime followup downgraded nvidia proprietary driver to series on fresh manjaro install still same error see | 1 |
28,991 | 4,461,297,176 | IssuesEvent | 2016-08-24 04:35:57 | NishantUpadhyay-BTC/BLISS-Issue-Tracking | https://api.github.com/repos/NishantUpadhyay-BTC/BLISS-Issue-Tracking | closed | #1430 - Guest UI: Feedback Link | Change Request Deployed to Test | We'd like to create a method for users of the Guest UI to provide feedback or report problems to the Office Team. | 1.0 | #1430 - Guest UI: Feedback Link - We'd like to create a method for users of the Guest UI to provide feedback or report problems to the Office Team. | test | guest ui feedback link we d like to create a method for users of the guest ui to provide feedback or report problems to the office team | 1 |
258,159 | 27,563,861,313 | IssuesEvent | 2023-03-08 01:11:43 | LynRodWS/alcor | https://api.github.com/repos/LynRodWS/alcor | opened | CVE-2020-15522 (Medium) detected in bcprov-jdk15on-1.60.jar | security vulnerability | ## CVE-2020-15522 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.60.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /services/api_gateway/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.60/bcprov-jdk15on-1.60.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-gateway-2.1.2.RELEASE.jar (Root Library)
- spring-cloud-starter-2.1.2.RELEASE.jar
- spring-security-rsa-1.0.7.RELEASE.jar
- bcpkix-jdk15on-1.60.jar
- :x: **bcprov-jdk15on-1.60.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bouncy Castle BC Java before 1.66, BC C# .NET before 1.8.7, BC-FJA before 1.0.1.2, 1.0.2.1, and BC-FNA before 1.0.1.1 have a timing issue within the EC math library that can expose information about the private key when an attacker is able to observe timing information for the generation of multiple deterministic ECDSA signatures.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-15522>CVE-2020-15522</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15522">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15522</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution (org.bouncycastle:bcprov-jdk15on): 1.66</p>
<p>Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-gateway): 3.0.3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2020-15522 (Medium) detected in bcprov-jdk15on-1.60.jar - ## CVE-2020-15522 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.60.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /services/api_gateway/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.60/bcprov-jdk15on-1.60.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-gateway-2.1.2.RELEASE.jar (Root Library)
- spring-cloud-starter-2.1.2.RELEASE.jar
- spring-security-rsa-1.0.7.RELEASE.jar
- bcpkix-jdk15on-1.60.jar
- :x: **bcprov-jdk15on-1.60.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Bouncy Castle BC Java before 1.66, BC C# .NET before 1.8.7, BC-FJA before 1.0.1.2, 1.0.2.1, and BC-FNA before 1.0.1.1 have a timing issue within the EC math library that can expose information about the private key when an attacker is able to observe timing information for the generation of multiple deterministic ECDSA signatures.
<p>Publish Date: 2021-05-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-15522>CVE-2020-15522</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15522">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15522</a></p>
<p>Release Date: 2021-05-20</p>
<p>Fix Resolution (org.bouncycastle:bcprov-jdk15on): 1.66</p>
<p>Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-gateway): 3.0.3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_test | cve medium detected in bcprov jar cve medium severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk library home page a href path to dependency file services api gateway pom xml path to vulnerable library home wss scanner repository org bouncycastle bcprov bcprov jar dependency hierarchy spring cloud starter gateway release jar root library spring cloud starter release jar spring security rsa release jar bcpkix jar x bcprov jar vulnerable library found in base branch master vulnerability details bouncy castle bc java before bc c net before bc fja before and bc fna before have a timing issue within the ec math library that can expose information about the private key when an attacker is able to observe timing information for the generation of multiple deterministic ecdsa signatures publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org bouncycastle bcprov direct dependency fix resolution org springframework cloud spring cloud starter gateway check this box to open an automated fix pr | 0 |
207,904 | 15,858,396,009 | IssuesEvent | 2021-04-08 06:38:50 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] AutoFollowIT midnight timing related failures | :Core/Features/Data streams >test-failure Team:Core/Features | **Build scan**: https://gradle-enterprise.elastic.co/s/btcm27tw2kzqk
**Repro line**: `./gradlew ':x-pack:plugin:ccr:qa:multi-cluster:follow-cluster' --tests "org.elasticsearch.xpack.ccr.AutoFollowIT.testDataStreams_autoFollowAfterDataStreamCreated" -Dtests.seed=AFE0A472E2813558 -Dtests.security.manager=true -Dtests.locale=be-BY -Dtests.timezone=Africa/Maseru -Druntime.java=11`
and: `./gradlew ':x-pack:plugin:ccr:qa:multi-cluster:follow-cluster' --tests "org.elasticsearch.xpack.ccr.AutoFollowIT.testDataStreamsBiDirectionalReplication" -Dtests.seed=AFE0A472E2813558 -Dtests.security.manager=true -Dtests.locale=be-BY -Dtests.timezone=Africa/Maseru -Druntime.java=11`
**Reproduces locally?**: Not tried yet.
**Applicable branches**: Currently master, but this test should fail around midnight as well.
**Failure history**: [at least 4 failures in the last 7 days](https://gradle-enterprise.elastic.co/scans/tests?search.relativeStartTime=P7D&search.timeZoneId=Europe/Amsterdam&tests.container=org.elasticsearch.xpack.ccr.AutoFollowIT&tests.sortField=FAILED&tests.test=testDataStreamsBiDirectionalReplication&tests.unstableOnly=true).
**Failure excerpt**:
```
java.lang.AssertionError: |
-- | --
| Expected: ".ds-logs-http-eu-2021.04.02-000001" |
| but: was ".ds-logs-http-eu-2021.04.01-000001" |
at __randomizedtesting.SeedInfo.seed([AFE0A472E2813558:5D547772E9490053]:0) |
-- | --
| | •••
| | at org.elasticsearch.xpack.ccr.ESCCRRestTestCase.verifyDataStream(ESCCRRestTestCase.java:319) |
| | at org.elasticsearch.xpack.ccr.AutoFollowIT.testDataStreamsBiDirectionalReplication(AutoFollowIT.java:581)
```
The tests shouldn't do an equal check on the name of the backing index. Instead it should check that the backing index name has the expected structure, data stream name and generation. | 1.0 | [CI] AutoFollowIT midnight timing related failures - **Build scan**: https://gradle-enterprise.elastic.co/s/btcm27tw2kzqk
**Repro line**: `./gradlew ':x-pack:plugin:ccr:qa:multi-cluster:follow-cluster' --tests "org.elasticsearch.xpack.ccr.AutoFollowIT.testDataStreams_autoFollowAfterDataStreamCreated" -Dtests.seed=AFE0A472E2813558 -Dtests.security.manager=true -Dtests.locale=be-BY -Dtests.timezone=Africa/Maseru -Druntime.java=11`
and: `./gradlew ':x-pack:plugin:ccr:qa:multi-cluster:follow-cluster' --tests "org.elasticsearch.xpack.ccr.AutoFollowIT.testDataStreamsBiDirectionalReplication" -Dtests.seed=AFE0A472E2813558 -Dtests.security.manager=true -Dtests.locale=be-BY -Dtests.timezone=Africa/Maseru -Druntime.java=11`
**Reproduces locally?**: Not tried yet.
**Applicable branches**: Currently master, but this test should fail around midnight as well.
**Failure history**: [at least 4 failures in the last 7 days](https://gradle-enterprise.elastic.co/scans/tests?search.relativeStartTime=P7D&search.timeZoneId=Europe/Amsterdam&tests.container=org.elasticsearch.xpack.ccr.AutoFollowIT&tests.sortField=FAILED&tests.test=testDataStreamsBiDirectionalReplication&tests.unstableOnly=true).
**Failure excerpt**:
```
java.lang.AssertionError: |
-- | --
| Expected: ".ds-logs-http-eu-2021.04.02-000001" |
| but: was ".ds-logs-http-eu-2021.04.01-000001" |
at __randomizedtesting.SeedInfo.seed([AFE0A472E2813558:5D547772E9490053]:0) |
-- | --
| | •••
| | at org.elasticsearch.xpack.ccr.ESCCRRestTestCase.verifyDataStream(ESCCRRestTestCase.java:319) |
| | at org.elasticsearch.xpack.ccr.AutoFollowIT.testDataStreamsBiDirectionalReplication(AutoFollowIT.java:581)
```
The tests shouldn't do an equal check on the name of the backing index. Instead it should check that the backing index name has the expected structure, data stream name and generation. | test | autofollowit midnight timing related failures build scan repro line gradlew x pack plugin ccr qa multi cluster follow cluster tests org elasticsearch xpack ccr autofollowit testdatastreams autofollowafterdatastreamcreated dtests seed dtests security manager true dtests locale be by dtests timezone africa maseru druntime java and gradlew x pack plugin ccr qa multi cluster follow cluster tests org elasticsearch xpack ccr autofollowit testdatastreamsbidirectionalreplication dtests seed dtests security manager true dtests locale be by dtests timezone africa maseru druntime java reproduces locally not tried yet applicable branches currently master but this test should fail around midnight as well failure history failure excerpt java lang assertionerror expected ds logs http eu but was ds logs http eu at randomizedtesting seedinfo seed ••• at org elasticsearch xpack ccr esccrresttestcase verifydatastream esccrresttestcase java at org elasticsearch xpack ccr autofollowit testdatastreamsbidirectionalreplication autofollowit java the tests shouldn t do an equal check on the name of the backing index instead it should check that the backing index name has the expected structure data stream name and generation | 1 |
796,142 | 28,100,025,452 | IssuesEvent | 2023-03-30 18:42:32 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | Master crashed while restoring backup on same Universe (Invalid value of enum SysSnapshotEntryPB::State) | kind/bug area/docdb priority/high jira-originated | Jira Link: [DB-5683](https://yugabyte.atlassian.net/browse/DB-5683)
| 1.0 | Master crashed while restoring backup on same Universe (Invalid value of enum SysSnapshotEntryPB::State) - Jira Link: [DB-5683](https://yugabyte.atlassian.net/browse/DB-5683)
| non_test | master crashed while restoring backup on same universe invalid value of enum syssnapshotentrypb state jira link | 0 |
103,281 | 8,894,028,555 | IssuesEvent | 2019-01-16 02:04:54 | warlocks-of-the-midwest/mystic-get-together | https://api.github.com/repos/warlocks-of-the-midwest/mystic-get-together | opened | Add tests for Firebase Security Rules | firebase tests | We should have tests to ensure that the security rules:
- only allow authenticated users to access any data
- only allow an authenticated user to write to their profile
- only allow authenticated users in a game to write any data (outside users can read, e.g. spectate)
- only allow a card's owner to write to it (e.g. only I can tap my lands, no one else)
| 1.0 | Add tests for Firebase Security Rules - We should have tests to ensure that the security rules:
- only allow authenticated users to access any data
- only allow an authenticated user to write to their profile
- only allow authenticated users in a game to write any data (outside users can read, e.g. spectate)
- only allow a card's owner to write to it (e.g. only I can tap my lands, no one else)
| test | add tests for firebase security rules we should have tests to ensure that the security rules only allow authenticated users to access any data only allow an authenticated user to write to their profile only allow authenticated users in a game to write any data outside users can read e g spectate only allow a card s owner to write to it e g only i can tap my lands no one else | 1 |
38,341 | 19,103,222,891 | IssuesEvent | 2021-11-30 02:17:52 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | "AND 1" significantly affects the time of query | performance v20.1 | ClickHouse: `20.1.4.14`
Base query:
```
SELECT count()
FROM numbers(10000000000)
WHERE number % 4
┌────count()─┐
│ 7500000000 │
└────────────┘
1 rows in set. Elapsed: 4.483 sec. Processed 10.00 billion rows, 80.00 GB (2.23 billion rows/s., 17.85 GB/s.)
```
Just add `AND 1` to WHERE
```
SELECT count()
FROM numbers(10000000000)
WHERE (number % 4) AND 1
┌────count()─┐
│ 7500000000 │
└────────────┘
1 rows in set. Elapsed: 12.272 sec. Processed 10.00 billion rows, 80.00 GB (814.85 million rows/s., 6.52 GB/s.)
``` | True | "AND 1" significantly affects the time of query - ClickHouse: `20.1.4.14`
Base query:
```
SELECT count()
FROM numbers(10000000000)
WHERE number % 4
┌────count()─┐
│ 7500000000 │
└────────────┘
1 rows in set. Elapsed: 4.483 sec. Processed 10.00 billion rows, 80.00 GB (2.23 billion rows/s., 17.85 GB/s.)
```
Just add `AND 1` to WHERE
```
SELECT count()
FROM numbers(10000000000)
WHERE (number % 4) AND 1
┌────count()─┐
│ 7500000000 │
└────────────┘
1 rows in set. Elapsed: 12.272 sec. Processed 10.00 billion rows, 80.00 GB (814.85 million rows/s., 6.52 GB/s.)
``` | non_test | and significantly affects the time of query clickhouse base query select count from numbers where number ┌────count ─┐ │ │ └────────────┘ rows in set elapsed sec processed billion rows gb billion rows s gb s just add and to where select count from numbers where number and ┌────count ─┐ │ │ └────────────┘ rows in set elapsed sec processed billion rows gb million rows s gb s | 0 |
226,551 | 7,521,039,557 | IssuesEvent | 2018-04-12 15:58:50 | MarkH817/network-comm-project | https://api.github.com/repos/MarkH817/network-comm-project | opened | Show current roster of logged in users | priority: high | Before you submit an issue:
* Check for similar issues
* Provide error messages, if applicable
* Feature requests/suggestions are always welcome for discussion
* Title the issue appropriately
## Description
On the public chat portion, show who is currently on with their respective nickname.
| 1.0 | Show current roster of logged in users - Before you submit an issue:
* Check for similar issues
* Provide error messages, if applicable
* Feature requests/suggestions are always welcome for discussion
* Title the issue appropriately
## Description
On the public chat portion, show who is currently on with their respective nickname.
| non_test | show current roster of logged in users before you submit an issue check for similar issues provide error messages if applicable feature requests suggestions are always welcome for discussion title the issue appropriately description on the public chat portion show who is currently on with their respective nickname | 0 |
241,209 | 20,108,435,353 | IssuesEvent | 2022-02-07 12:55:44 | IntellectualSites/FastAsyncWorldEdit | https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit | opened | Does it support 1.15? | Requires Testing | ### Server Implementation
Paper
### Server Version
1.15.2
### Describe the bug
> 1.15.2, 1.16.5, 1.17.1, 1.18, 1.18.1 - FastAsyncWorldEdit actively develops against and supports these versions.
I launch my server with Zulu openjdk 17.0.2 but it shows
> Unsupported Java detected (61.0). Only up to Java 14 is supported.
It's it not support? or I have to use some Server Implementation I do not tried?
### To Reproduce
1. Start the server with Java 17
2. Server shut down.
### Expected behaviour
FAWE needs Java 17 but Spigot seems support up to Java 14
### Screenshots / Videos
_No response_
### Error log (if applicable)
_No response_
### Fawe Debugpaste
none
### Fawe Version
latest
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit/ and the issue still persists.
### Anything else?
_No response_ | 1.0 | Does it support 1.15? - ### Server Implementation
Paper
### Server Version
1.15.2
### Describe the bug
> 1.15.2, 1.16.5, 1.17.1, 1.18, 1.18.1 - FastAsyncWorldEdit actively develops against and supports these versions.
I launch my server with Zulu openjdk 17.0.2 but it shows
> Unsupported Java detected (61.0). Only up to Java 14 is supported.
It's it not support? or I have to use some Server Implementation I do not tried?
### To Reproduce
1. Start the server with Java 17
2. Server shut down.
### Expected behaviour
FAWE needs Java 17 but Spigot seems support up to Java 14
### Screenshots / Videos
_No response_
### Error log (if applicable)
_No response_
### Fawe Debugpaste
none
### Fawe Version
latest
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit/ and the issue still persists.
### Anything else?
_No response_ | test | does it support server implementation paper server version describe the bug fastasyncworldedit actively develops against and supports these versions i launch my server with zulu openjdk but it shows unsupported java detected only up to java is supported it s it not support or i have to use some server implementation i do not tried to reproduce start the server with java server shut down expected behaviour fawe needs java but spigot seems support up to java screenshots videos no response error log if applicable no response fawe debugpaste none fawe version latest checklist i have included a fawe debugpaste i am using the newest build from and the issue still persists anything else no response | 1 |
256,151 | 22,041,119,401 | IssuesEvent | 2022-05-29 11:21:25 | SAA-SDT/eac-cpf-schema | https://api.github.com/repos/SAA-SDT/eac-cpf-schema | closed | <eventDateTime> | Element Tested by Schema Team | ## Maintenance Event Type
- add attributes
`@audience`
`@scriptOfElement`
- keep element name and definition
## Creator of issue
1. Silke Jagodzinski
2. TS-EAS: EAC-CPF subgroup
3. silkejagodzinski@gmail.com
## Related issues / documents
## EAD3 Reconciliation
**Additional EAD 3 attributes**
`@altrender` - Optional
`@audience` - Optional (values limited to: external, internal)
`@encodinganalog` - Optional
`@script` - Optional
## Context
The date and time of a maintenance event for the EAC-CPF instance.
**May contain**: [text]
**May occur within**: <maintenanceEvent>
**Attributes:** `@standardDateTime`, `@xml:id`, `@xml:lang` - all optional
**Availability:** Mandatory, Non-repeatable
## Solution documentation
Rephrasing _Summary_, _Description and Usage_ and _Attribute usage_ needed?
**May contain**: [text]
**May occur within**: <maintenanceEvent>
**Attributes:**
`@audience` - optional (values limited to: external, internal)
`@id` - optional
`@languageOfElement` - optional
`@scriptOfElement` - optional
`@standardDateTime` - optional
**Availability:** Required, not repeatable
- New or other example needed?
## Example encoding
```
<control>
<recordId>records identifier</recordId>
<maintenanceAgency> [...] </maintenanceAgency>
<maintenanceHistory>
<maintenanceEvent>
<eventDateTime audience="external" id="eventdatetime1" languageOfElement="de" scriptOfElement="lat" standardDateTime="2020-07-01T13:01:48">01.07.2020</eventDateTime>
<agent>agent name</agent>
</maintenanceEvent>
</maintenanceHistory>
</control>
``` | 1.0 | <eventDateTime> - ## Maintenance Event Type
- add attributes
`@audience`
`@scriptOfElement`
- keep element name and definition
## Creator of issue
1. Silke Jagodzinski
2. TS-EAS: EAC-CPF subgroup
3. silkejagodzinski@gmail.com
## Related issues / documents
## EAD3 Reconciliation
**Additional EAD 3 attributes**
`@altrender` - Optional
`@audience` - Optional (values limited to: external, internal)
`@encodinganalog` - Optional
`@script` - Optional
## Context
The date and time of a maintenance event for the EAC-CPF instance.
**May contain**: [text]
**May occur within**: <maintenanceEvent>
**Attributes:** `@standardDateTime`, `@xml:id`, `@xml:lang` - all optional
**Availability:** Mandatory, Non-repeatable
## Solution documentation
Rephrasing _Summary_, _Description and Usage_ and _Attribute usage_ needed?
**May contain**: [text]
**May occur within**: <maintenanceEvent>
**Attributes:**
`@audience` - optional (values limited to: external, internal)
`@id` - optional
`@languageOfElement` - optional
`@scriptOfElement` - optional
`@standardDateTime` - optional
**Availability:** Required, not repeatable
- New or other example needed?
## Example encoding
```
<control>
<recordId>records identifier</recordId>
<maintenanceAgency> [...] </maintenanceAgency>
<maintenanceHistory>
<maintenanceEvent>
<eventDateTime audience="external" id="eventdatetime1" languageOfElement="de" scriptOfElement="lat" standardDateTime="2020-07-01T13:01:48">01.07.2020</eventDateTime>
<agent>agent name</agent>
</maintenanceEvent>
</maintenanceHistory>
</control>
``` | test | maintenance event type add attributes audience scriptofelement keep element name and definition creator of issue silke jagodzinski ts eas eac cpf subgroup silkejagodzinski gmail com related issues documents reconciliation additional ead attributes altrender optional audience optional values limited to external internal encodinganalog optional script optional context the date and time of a maintenance event for the eac cpf instance may contain may occur within attributes standarddatetime xml id xml lang all optional availability mandatory non repeatable solution documentation rephrasing summary description and usage and attribute usage needed may contain may occur within attributes audience optional values limited to external internal id optional languageofelement optional scriptofelement optional standarddatetime optional availability required not repeatable new or other example needed example encoding records identifier agent name | 1 |
32,451 | 4,772,084,217 | IssuesEvent | 2016-10-26 19:49:49 | efronbs/ProfileSharing | https://api.github.com/repos/efronbs/ProfileSharing | closed | (TEST THE FIX) Handle leading period in cookie domain name. | bug needs tests P1 | Currently I am organizing everything by domain name. This gets populated by gathering the browser cookies. Some cookies will have a leading period to signify that those cookies should be readable by subdomains. I the past I didn't handle this because requiring a leading period is deprecated behavior, so it didn't matter whether I tracked that or not.
I want to keep local storage under the same domain section in the profile JSON (whether I am storing actual data or just using it as a lookup for a database). However when parsing a url to do a lookup for local storage, it is impossible to know whether the key for that domain will have a leading period or not. I need to find a way to handle this.
Current options are:
1. Remove leading period when fetching all browser cookies. This is probably the best option. I don't think the website needs to know whether the domain has a leading period or not when getting or setting cookies, so I think that I can safely remove a leading zero if it exists. This would also simplify logic elsewhere
2. If option 1 doesn't work I can always send a domain without a leading zero, and the try fetching both. This is messy and less preferable, but it should work. | 1.0 | (TEST THE FIX) Handle leading period in cookie domain name. - Currently I am organizing everything by domain name. This gets populated by gathering the browser cookies. Some cookies will have a leading period to signify that those cookies should be readable by subdomains. I the past I didn't handle this because requiring a leading period is deprecated behavior, so it didn't matter whether I tracked that or not.
I want to keep local storage under the same domain section in the profile JSON (whether I am storing actual data or just using it as a lookup for a database). However when parsing a url to do a lookup for local storage, it is impossible to know whether the key for that domain will have a leading period or not. I need to find a way to handle this.
Current options are:
1. Remove leading period when fetching all browser cookies. This is probably the best option. I don't think the website needs to know whether the domain has a leading period or not when getting or setting cookies, so I think that I can safely remove a leading zero if it exists. This would also simplify logic elsewhere
2. If option 1 doesn't work I can always send a domain without a leading zero, and the try fetching both. This is messy and less preferable, but it should work. | test | test the fix handle leading period in cookie domain name currently i am organizing everything by domain name this gets populated by gathering the browser cookies some cookies will have a leading period to signify that those cookies should be readable by subdomains i the past i didn t handle this because requiring a leading period is deprecated behavior so it didn t matter whether i tracked that or not i want to keep local storage under the same domain section in the profile json whether i am storing actual data or just using it as a lookup for a database however when parsing a url to do a lookup for local storage it is impossible to know whether the key for that domain will have a leading period or not i need to find a way to handle this current options are remove leading period when fetching all browser cookies this is probably the best option i don t think the website needs to know whether the domain has a leading period or not when getting or setting cookies so i think that i can safely remove a leading zero if it exists this would also simplify logic elsewhere if option doesn t work i can always send a domain without a leading zero and the try fetching both this is messy and less preferable but it should work | 1 |
170,332 | 13,184,023,383 | IssuesEvent | 2020-08-12 18:36:31 | microsoft/PowerToys | https://api.github.com/repos/microsoft/PowerToys | opened | Keyboard Manager UI needs Win App Driver tests | Area-Tests Product-Keyboard Shortcut Manager | Unit tests for the UI logic for validating shortcut and key selections was added in #5718, however these tests only cover the scenarios of a single drop down changing. In order to test more complex interactions where an error while changing one drop down results in multiple warnings and multiple drop downs getting modified requires the use of Win App driver tests.
Win app driver tests will also allow us to test adding/removing rows in the UI. | 1.0 | Keyboard Manager UI needs Win App Driver tests - Unit tests for the UI logic for validating shortcut and key selections was added in #5718, however these tests only cover the scenarios of a single drop down changing. In order to test more complex interactions where an error while changing one drop down results in multiple warnings and multiple drop downs getting modified requires the use of Win App driver tests.
Win app driver tests will also allow us to test adding/removing rows in the UI. | test | keyboard manager ui needs win app driver tests unit tests for the ui logic for validating shortcut and key selections was added in however these tests only cover the scenarios of a single drop down changing in order to test more complex interactions where an error while changing one drop down results in multiple warnings and multiple drop downs getting modified requires the use of win app driver tests win app driver tests will also allow us to test adding removing rows in the ui | 1 |
129,472 | 17,787,390,860 | IssuesEvent | 2021-08-31 12:44:41 | carbon-design-system/carbon-addons-iot-react | https://api.github.com/repos/carbon-design-system/carbon-addons-iot-react | opened | [DateTimePicker] add onValidation prop to increase flexibility | type: enhancement :bulb: status: needs triage :mag: status: needs priority :inbox_tray: status: needs design :art: | <!--
Use this template if you want to request a new feature, or a change to an
existing feature.
If you'd like to request an entirely new component, please use the component request template instead.
If you are reporting a bug or problem, please use the bug template instead.
-->
### What package is this for?
- [X] React
- [ ] Angular
### Summary
We can improve the usability of this component by adding an `onValidation` prop that moves validation out of the internals of the component and into user-land. This should allow us to remove restrictions like 24-hr time inputs in absolute mode. It's possible there are other design ui/ux changes that can make this a better experience, too, but I don't know what work has been done in that department.
**Additional context** Add any other context or screenshots about the feature
request here.
https://github.com/carbon-design-system/carbon-addons-iot-react/pull/2778#pullrequestreview-741899895 | 1.0 | [DateTimePicker] add onValidation prop to increase flexibility - <!--
Use this template if you want to request a new feature, or a change to an
existing feature.
If you'd like to request an entirely new component, please use the component request template instead.
If you are reporting a bug or problem, please use the bug template instead.
-->
### What package is this for?
- [X] React
- [ ] Angular
### Summary
We can improve the usability of this component by adding an `onValidation` prop that moves validation out of the internals of the component and into user-land. This should allow us to remove restrictions like 24-hr time inputs in absolute mode. It's possible there are other design ui/ux changes that can make this a better experience, too, but I don't know what work has been done in that department.
**Additional context** Add any other context or screenshots about the feature
request here.
https://github.com/carbon-design-system/carbon-addons-iot-react/pull/2778#pullrequestreview-741899895 | non_test | add onvalidation prop to increase flexibility use this template if you want to request a new feature or a change to an existing feature if you d like to request an entirely new component please use the component request template instead if you are reporting a bug or problem please use the bug template instead what package is this for react angular summary we can improve the usability of this component by adding an onvalidation prop that moves validation out of the internals of the component and into user land this should allow us to remove restrictions like hr time inputs in absolute mode it s possible there are other design ui ux changes that can make this a better experience too but i don t know what work has been done in that department additional context add any other context or screenshots about the feature request here | 0 |
48,184 | 7,389,519,661 | IssuesEvent | 2018-03-16 09:01:33 | wso2/product-sp | https://api.github.com/repos/wso2/product-sp | closed | Improvement on information tool tip location | Compliance/GDPR Priority/Normal Type/Documentation Type/Improvement | **Description:**
Tooltip as in the below screen, should be moved to T row in the table since, whenever user provides T option the TID is mandatory.

[1] https://docs.wso2.com/display/SP4xx/Removing+Personally+Identifiable+Information+via+the+Forget-me+Tool
| 1.0 | Improvement on information tool tip location - **Description:**
Tooltip as in the below screen, should be moved to T row in the table since, whenever user provides T option the TID is mandatory.

[1] https://docs.wso2.com/display/SP4xx/Removing+Personally+Identifiable+Information+via+the+Forget-me+Tool
| non_test | improvement on information tool tip location description tooltip as in the below screen should be moved to t row in the table since whenever user provides t option the tid is mandatory | 0 |
602,177 | 18,454,014,271 | IssuesEvent | 2021-10-15 14:16:58 | sButtons/sbuttons | https://api.github.com/repos/sButtons/sbuttons | closed | bundle the npm run clean-css command along with npm start | enhancement buttons Priority: Medium stale-issue | **Is your feature request related to a problem? Please describe.**
no
**Describe the solution you'd like**
run both the commands npm run compile and npm run clean-css (which would look for changes in sbuttons.css) simultaneously with the command npm start which would indeed facilitate faster development.
**Additional notes**
I would like to work on this.
| 1.0 | bundle the npm run clean-css command along with npm start - **Is your feature request related to a problem? Please describe.**
no
**Describe the solution you'd like**
run both the commands npm run compile and npm run clean-css (which would look for changes in sbuttons.css) simultaneously with the command npm start which would indeed facilitate faster development.
**Additional notes**
I would like to work on this.
| non_test | bundle the npm run clean css command along with npm start is your feature request related to a problem please describe no describe the solution you d like run both the commands npm run compile and npm run clean css which would look for changes in sbuttons css simultaneously with the command npm start which would indeed facilitate faster development additional notes i would like to work on this | 0 |
3,125 | 6,156,498,228 | IssuesEvent | 2017-06-28 16:50:25 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Requests are reducing over time | log-processing | Question:
I am using goAccess for a production site I have live for some months now.
Furthermore, I have a script that parses the log files and uploads the html report.
Initially everything was working great. On my last report update, I noticed that the requests are actually less for a longer amount of time.
Given that:
1. logs are generated by the apache in an excellent production server from a big enterprise
2. I actually parsed them one by one and the sum is the same small number
3. nothing changed in my script
Could you think of a reason why this happens?
My script for parsing logs:
```bash
ssh <username>@server zcat $APP_PATH'access.http.log.*.gz' >> file.log
ssh <username>@server zcat $APP_PATH'access.https.log.*.gz' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.http.log' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.http.log.1' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.https.log' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.https.log.1' >> file.log
goaccess file.log -o /var/www/ngcc_report.html
```
Logs:
```bash
$ls -1
access.http.log
access.http.log.1
access.http.log.2.gz
access.http.log.3.gz
access.http.log.4.gz
access.http.log.5.gz
access.https.log
access.https.log.1
access.https.log.2.gz
access.https.log.3.gz
access.https.log.4.gz
access.https.log.5.gz
access.https.log.6.gz
# + error logs
```
Stats:
First 3 months: ~45000 total requests
Last update (first 3 months + 2 more): ~23000 total requests | 1.0 | Requests are reducing over time - Question:
I am using goAccess for a production site I have live for some months now.
Furthermore, I have a script that parses the log files and uploads the html report.
Initially everything was working great. On my last report update, I noticed that the requests are actually less for a longer amount of time.
Given that:
1. logs are generated by the apache in an excellent production server from a big enterprise
2. I actually parsed them one by one and the sum is the same small number
3. nothing changed in my script
Could you think of a reason why this happens?
My script for parsing logs:
```bash
ssh <username>@server zcat $APP_PATH'access.http.log.*.gz' >> file.log
ssh <username>@server zcat $APP_PATH'access.https.log.*.gz' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.http.log' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.http.log.1' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.https.log' >> file.log
ssh <username>@server 'cat '$APP_PATH'access.https.log.1' >> file.log
goaccess file.log -o /var/www/ngcc_report.html
```
Logs:
```bash
$ls -1
access.http.log
access.http.log.1
access.http.log.2.gz
access.http.log.3.gz
access.http.log.4.gz
access.http.log.5.gz
access.https.log
access.https.log.1
access.https.log.2.gz
access.https.log.3.gz
access.https.log.4.gz
access.https.log.5.gz
access.https.log.6.gz
# + error logs
```
Stats:
First 3 months: ~45000 total requests
Last update (first 3 months + 2 more): ~23000 total requests | non_test | requests are reducing over time question i am using goaccess for a production site i have live for some months now furthermore i have a script that parses the log files and uploads the html report initially everything was working great on my last report update i noticed that the requests are actually less for a longer amount of time given that logs are generated by the apache in an excellent production server from a big enterprise i actually parsed them one by one and the sum is the same small number nothing changed in my script could you think of a reason why this happens my script for parsing logs bash ssh server zcat app path access http log gz file log ssh server zcat app path access https log gz file log ssh server cat app path access http log file log ssh server cat app path access http log file log ssh server cat app path access https log file log ssh server cat app path access https log file log goaccess file log o var www ngcc report html logs bash ls access http log access http log access http log gz access http log gz access http log gz access http log gz access https log access https log access https log gz access https log gz access https log gz access https log gz access https log gz error logs stats first months total requests last update first months more total requests | 0 |
312,449 | 26,865,682,535 | IssuesEvent | 2023-02-03 23:20:23 | void-linux/void-packages | https://api.github.com/repos/void-linux/void-packages | closed | org.gnome.DiskUtility.desktop not executing "gnome-disks" executable through an App Launcher, yet using a custom gendesk .desktop file works fine, CLI as well works | bug needs-testing | ### System
* xuname:
Void 5.15.45_1 x86_64 GenuineIntel uptodate rFFFF
* package:
gnome-disk-utility-41.0_1
### Expected behavior
"org.gnome.DiskUtility.desktop" file located into "/usr/share/applications" should execute the "gnome-disks" executable when selected through a given App Launcher Application compatible with freedesktop.org specifications, thus launching the Gnome-Disks App.
### Actual behavior
"org.gnome.DiskUtility.desktop" file located into "/usr/share/applications" fails completely with no error messages when selected into a given App Launcher Application ("wofi" in my current case, with "sway" WM).
Yet, executing the "gnome-disks" executable through CLI works perfectly, and the App itself works perfectly too.
Also, I went ahead, and utilized "gendesk" to create a custom .desktop file which I put into "~/local/share/applications". I can confirm that such a custom .desktop file WORKS perfectly, and have in fact been using it for a few months already (Hence why I forgot to report before).
Code of MY custom .desktop file, created through "gendesk", and located into "~/local/share/applications", is as follows:
```
[Desktop Entry]
Version=1.0
Type=Application
Name=Disks-Gnome
Comment=Disks-Gnome
Exec=gnome-disks
Icon=/home/username/.local/share/applications/FlatDisks.png
Terminal=false
StartupNotify=false
Categories=Application;
```
My custom .desktop file, aside from missing localization tags, also misses the "DBusActivatable=true" parameter which I found into "org.gnome.DiskUtility.desktop" file located into "/usr/share/applications", yet I presume this is not very relevant to the issue at hand.
This has been happening for a few months already (My custom .desktop works fine, thus I forgot to report in the meantime), and has been happening on three separate machines to boot, all with manual install only. App Launcher is currently "wofi" with "sway" WM, but months ago I also saw this happening with "rofi", on "openbox" WM.
### Steps to reproduce the behavior
Simply select the "org.gnome.DiskUtility.desktop" (Tag-named to "Disks" if App Launcher can read such tags, and is configured to do so) into a given freedesktop.org-compatible App Launcher (This has been happening both with "wofi" and "rofi" on me), and it will NOT execute the "gnome-disks" executable, thus NOT launching the Gnome-Disks App.
However, executing "gnome-disks" through the CLI, or otherwise utilizing a custom .desktop file created through "gendesk" as mentioned above, both launch the App just fine.
Maybe some post-install MIME update is missing? This has been happening also on a very recent (Less than 4 weeks) manual Void Install. | 1.0 | org.gnome.DiskUtility.desktop not executing "gnome-disks" executable through an App Launcher, yet using a custom gendesk .desktop file works fine, CLI as well works - ### System
* xuname:
Void 5.15.45_1 x86_64 GenuineIntel uptodate rFFFF
* package:
gnome-disk-utility-41.0_1
### Expected behavior
"org.gnome.DiskUtility.desktop" file located into "/usr/share/applications" should execute the "gnome-disks" executable when selected through a given App Launcher Application compatible with freedesktop.org specifications, thus launching the Gnome-Disks App.
### Actual behavior
"org.gnome.DiskUtility.desktop" file located into "/usr/share/applications" fails completely with no error messages when selected into a given App Launcher Application ("wofi" in my current case, with "sway" WM).
Yet, executing the "gnome-disks" executable through CLI works perfectly, and the App itself works perfectly too.
Also, I went ahead, and utilized "gendesk" to create a custom .desktop file which I put into "~/local/share/applications". I can confirm that such a custom .desktop file WORKS perfectly, and have in fact been using it for a few months already (Hence why I forgot to report before).
Code of MY custom .desktop file, created through "gendesk", and located into "~/local/share/applications", is as follows:
```
[Desktop Entry]
Version=1.0
Type=Application
Name=Disks-Gnome
Comment=Disks-Gnome
Exec=gnome-disks
Icon=/home/username/.local/share/applications/FlatDisks.png
Terminal=false
StartupNotify=false
Categories=Application;
```
My custom .desktop file, aside from missing localization tags, also misses the "DBusActivatable=true" parameter which I found into "org.gnome.DiskUtility.desktop" file located into "/usr/share/applications", yet I presume this is not very relevant to the issue at hand.
This has been happening for a few months already (My custom .desktop works fine, thus I forgot to report in the meantime), and has been happening on three separate machines to boot, all with manual install only. App Launcher is currently "wofi" with "sway" WM, but months ago I also saw this happening with "rofi", on "openbox" WM.
### Steps to reproduce the behavior
Simply select the "org.gnome.DiskUtility.desktop" (Tag-named to "Disks" if App Launcher can read such tags, and is configured to do so) into a given freedesktop.org-compatible App Launcher (This has been happening both with "wofi" and "rofi" on me), and it will NOT execute the "gnome-disks" executable, thus NOT launching the Gnome-Disks App.
However, executing "gnome-disks" through the CLI, or otherwise utilizing a custom .desktop file created through "gendesk" as mentioned above, both launch the App just fine.
Maybe some post-install MIME update is missing? This has been happening also on a very recent (Less than 4 weeks) manual Void Install. | test | org gnome diskutility desktop not executing gnome disks executable through an app launcher yet using a custom gendesk desktop file works fine cli as well works system xuname void genuineintel uptodate rffff package gnome disk utility expected behavior org gnome diskutility desktop file located into usr share applications should execute the gnome disks executable when selected through a given app launcher application compatible with freedesktop org specifications thus launching the gnome disks app actual behavior org gnome diskutility desktop file located into usr share applications fails completely with no error messages when selected into a given app launcher application wofi in my current case with sway wm yet executing the gnome disks executable through cli works perfectly and the app itself works perfectly too also i went ahead and utilized gendesk to create a custom desktop file which i put into local share applications i can confirm that such a custom desktop file works perfectly and have in fact been using it for a few months already hence why i forgot to report before code of my custom desktop file created through gendesk and located into local share applications is as follows version type application name disks gnome comment disks gnome exec gnome disks icon home username local share applications flatdisks png terminal false startupnotify false categories application my custom desktop file aside from missing localization tags also misses the dbusactivatable true parameter which i found into org gnome diskutility desktop file located into usr share applications yet i presume this is not very relevant to the issue at hand this has been happening for a few months already my custom desktop works fine thus i forgot to report in the meantime and has been happening on three separate machines to boot all with manual install only app launcher is currently wofi with sway wm but months ago i also saw this happening with rofi on openbox wm steps to reproduce the behavior simply select the org gnome diskutility desktop tag named to disks if app launcher can read such tags and is configured to do so into a given freedesktop org compatible app launcher this has been happening both with wofi and rofi on me and it will not execute the gnome disks executable thus not launching the gnome disks app however executing gnome disks through the cli or otherwise utilizing a custom desktop file created through gendesk as mentioned above both launch the app just fine maybe some post install mime update is missing this has been happening also on a very recent less than weeks manual void install | 1 |
151,269 | 5,809,004,423 | IssuesEvent | 2017-05-04 12:21:17 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Not translatable values in Activities/Popup_picker.php (develop branch) | bug Fix Proposed Low Priority | https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L143
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L193
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L247
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L288
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L312
.....

| 1.0 | Not translatable values in Activities/Popup_picker.php (develop branch) - https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L143
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L193
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L247
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L288
https://github.com/salesagility/SuiteCRM/blob/develop/modules/Activities/Popup_picker.php#L312
.....

| non_test | not translatable values in activities popup picker php develop branch | 0 |
69,725 | 3,313,940,420 | IssuesEvent | 2015-11-06 01:04:40 | infoScoop/infoscoop | https://api.github.com/repos/infoScoop/infoscoop | opened | pom.xmlのリファクタリング | Priority-Medium Type-Refactoring | 以下の問題のためプロジェクトのビルドに失敗する
・ jta-1.0.1B.jar(Hibernateの依存ライブラリ)がリポジトリから削除されている
・ log4j-datedFileAppender-1.0.2.jarの提供元リポジトリcloudhopper.comがサービス終了 | 1.0 | pom.xmlのリファクタリング - 以下の問題のためプロジェクトのビルドに失敗する
・ jta-1.0.1B.jar(Hibernateの依存ライブラリ)がリポジトリから削除されている
・ log4j-datedFileAppender-1.0.2.jarの提供元リポジトリcloudhopper.comがサービス終了 | non_test | pom xmlのリファクタリング 以下の問題のためプロジェクトのビルドに失敗する ・ jta jar hibernateの依存ライブラリ がリポジトリから削除されている ・ datedfileappender jarの提供元リポジトリcloudhopper comがサービス終了 | 0 |
144,242 | 13,099,906,287 | IssuesEvent | 2020-08-03 22:50:06 | fbdevelopercircles/open-source-edu-bot | https://api.github.com/repos/fbdevelopercircles/open-source-edu-bot | closed | Add getting started guide, to quickly help new comers to play with the bot | documentation enhancement good first issue | The readme describe very well how to get the app running, however it doesn't describe how to integrate it with messenger nor how to test the bot. We need to add documentation to cover this part
| 1.0 | Add getting started guide, to quickly help new comers to play with the bot - The readme describe very well how to get the app running, however it doesn't describe how to integrate it with messenger nor how to test the bot. We need to add documentation to cover this part
| non_test | add getting started guide to quickly help new comers to play with the bot the readme describe very well how to get the app running however it doesn t describe how to integrate it with messenger nor how to test the bot we need to add documentation to cover this part | 0 |
322,131 | 27,584,514,600 | IssuesEvent | 2023-03-08 18:38:06 | cisagov/ScubaGear | https://api.github.com/repos/cisagov/ScubaGear | opened | Dolphin release candidate testing - EXO Product Assessment Sanity Testing | Testing | # 💡 Summary #
In preparation of releasing the Dolphin or v0.3.0 of ScubaGear code, conduct sanity testing of the EXO product. Objective and Scope of the task are provided below.
Objectives:
1. There are no regression issues in the EXO product assessments with Dolphin Release
2. Additional sanity testing to ensure that: each policy assessment result is shown in the report, assessment works against all available tenants (G5/E5, G3/E3)
3. EXO product assessment works both in interactive and non-interactive (service principal) modes.
Scope:
1. Detailed functional testing of each policy statement result is out of scope
2. Consistent results between interactive/non-interactive modes and no operational issues in running the test against all tenant types are within the scope.
## Motivation and context ##
This would be useful to ensure that Dolphin release is stable
## Implementation notes ##
Before the test, ensure that test user has minimum user role on a given tenant to assess EXO (look into README). Then, execute the EXO product assessment on all available tenants – first in interactive mode and then in non-interactive mode. After the test verify the following:
1. Verify that all tests run without errors and results reports are generated. Each policy has a result (no empty results)
2. Ensure that there are no regressions from Coral release – for the tested tenant compare the result report from current assessment against saved Coral release results - ensure that any different result is consistent with code change (provide a detailed explanation on any observed diff in results)
## Acceptance criteria ##
1. EXO product assessment works in both interactive and non-interactive mode against G5, E5, G3 and E3 tenants.
2. There are no crashes and/or empty results
3. Results are consistent with Coral release assessment results - any diff is consistent with code changes (viz. support for conditional access policies)
| 1.0 | Dolphin release candidate testing - EXO Product Assessment Sanity Testing - # 💡 Summary #
In preparation of releasing the Dolphin or v0.3.0 of ScubaGear code, conduct sanity testing of the EXO product. Objective and Scope of the task are provided below.
Objectives:
1. There are no regression issues in the EXO product assessments with Dolphin Release
2. Additional sanity testing to ensure that: each policy assessment result is shown in the report, assessment works against all available tenants (G5/E5, G3/E3)
3. EXO product assessment works both in interactive and non-interactive (service principal) modes.
Scope:
1. Detailed functional testing of each policy statement result is out of scope
2. Consistent results between interactive/non-interactive modes and no operational issues in running the test against all tenant types are within the scope.
## Motivation and context ##
This would be useful to ensure that Dolphin release is stable
## Implementation notes ##
Before the test, ensure that test user has minimum user role on a given tenant to assess EXO (look into README). Then, execute the EXO product assessment on all available tenants – first in interactive mode and then in non-interactive mode. After the test verify the following:
1. Verify that all tests run without errors and results reports are generated. Each policy has a result (no empty results)
2. Ensure that there are no regressions from Coral release – for the tested tenant compare the result report from current assessment against saved Coral release results - ensure that any different result is consistent with code change (provide a detailed explanation on any observed diff in results)
## Acceptance criteria ##
1. EXO product assessment works in both interactive and non-interactive mode against G5, E5, G3 and E3 tenants.
2. There are no crashes and/or empty results
3. Results are consistent with Coral release assessment results - any diff is consistent with code changes (viz. support for conditional access policies)
| test | dolphin release candidate testing exo product assessment sanity testing 💡 summary in preparation of releasing the dolphin or of scubagear code conduct sanity testing of the exo product objective and scope of the task are provided below objectives there are no regression issues in the exo product assessments with dolphin release additional sanity testing to ensure that each policy assessment result is shown in the report assessment works against all available tenants exo product assessment works both in interactive and non interactive service principal modes scope detailed functional testing of each policy statement result is out of scope consistent results between interactive non interactive modes and no operational issues in running the test against all tenant types are within the scope motivation and context this would be useful to ensure that dolphin release is stable implementation notes before the test ensure that test user has minimum user role on a given tenant to assess exo look into readme then execute the exo product assessment on all available tenants – first in interactive mode and then in non interactive mode after the test verify the following verify that all tests run without errors and results reports are generated each policy has a result no empty results ensure that there are no regressions from coral release – for the tested tenant compare the result report from current assessment against saved coral release results ensure that any different result is consistent with code change provide a detailed explanation on any observed diff in results acceptance criteria exo product assessment works in both interactive and non interactive mode against and tenants there are no crashes and or empty results results are consistent with coral release assessment results any diff is consistent with code changes viz support for conditional access policies | 1 |
129,204 | 10,566,890,534 | IssuesEvent | 2019-10-05 22:18:42 | golang/go | https://api.github.com/repos/golang/go | closed | cmd/go: test prompts to authenticate vcs-test.golang.org | NeedsInvestigation Testing | When running `go test cmd/go/...` for the first time on a new machine, I saw an `ssh` prompt on the terminal:
```
The authenticity of host 'vcs-test.golang.org (35.184.38.56)' can't be established.
ECDSA key fingerprint is SHA256:[…].
Are you sure you want to continue connecting (yes/no)? yes
```
Perhaps we can add a `known_hosts` file somewhere in `cmd/go/testdata` and set some environment variable to suppress the prompt?
(See also #27494.)
CC @hanwen @FiloSottile @jayconrod | 1.0 | cmd/go: test prompts to authenticate vcs-test.golang.org - When running `go test cmd/go/...` for the first time on a new machine, I saw an `ssh` prompt on the terminal:
```
The authenticity of host 'vcs-test.golang.org (35.184.38.56)' can't be established.
ECDSA key fingerprint is SHA256:[…].
Are you sure you want to continue connecting (yes/no)? yes
```
Perhaps we can add a `known_hosts` file somewhere in `cmd/go/testdata` and set some environment variable to suppress the prompt?
(See also #27494.)
CC @hanwen @FiloSottile @jayconrod | test | cmd go test prompts to authenticate vcs test golang org when running go test cmd go for the first time on a new machine i saw an ssh prompt on the terminal the authenticity of host vcs test golang org can t be established ecdsa key fingerprint is are you sure you want to continue connecting yes no yes perhaps we can add a known hosts file somewhere in cmd go testdata and set some environment variable to suppress the prompt see also cc hanwen filosottile jayconrod | 1 |
81,742 | 7,801,736,834 | IssuesEvent | 2018-06-10 01:59:02 | MarlinFirmware/Marlin | https://api.github.com/repos/MarlinFirmware/Marlin | closed | 1.1.x BugFix - Step loss | Bug: Potential ? Needs: Testing | I have been testing the latest bigfixes on 2 machines here. One is a Tornado running the stock board and the other is a custom machine with a RAMPS board in it. Since switching the custom one over to bugfix I am getting step loss about 75% of the time in a print. Tried lowering accel, jerk, and print speeds. This printer works perfectly on 1.1.8.
Firmware: https://content.timothyhoogland.com/files/ShareXUpload/Marlin-bugfix-1.1.x_SMARTT.zip
Running DRV8825 Drivers on the RAMPS board in 1/32 step mode. | 1.0 | 1.1.x BugFix - Step loss - I have been testing the latest bigfixes on 2 machines here. One is a Tornado running the stock board and the other is a custom machine with a RAMPS board in it. Since switching the custom one over to bugfix I am getting step loss about 75% of the time in a print. Tried lowering accel, jerk, and print speeds. This printer works perfectly on 1.1.8.
Firmware: https://content.timothyhoogland.com/files/ShareXUpload/Marlin-bugfix-1.1.x_SMARTT.zip
Running DRV8825 Drivers on the RAMPS board in 1/32 step mode. | test | x bugfix step loss i have been testing the latest bigfixes on machines here one is a tornado running the stock board and the other is a custom machine with a ramps board in it since switching the custom one over to bugfix i am getting step loss about of the time in a print tried lowering accel jerk and print speeds this printer works perfectly on firmware running drivers on the ramps board in step mode | 1 |
81,934 | 7,807,729,446 | IssuesEvent | 2018-06-11 17:50:22 | Students-of-the-city-of-Kostroma/Student-timetable | https://api.github.com/repos/Students-of-the-city-of-Kostroma/Student-timetable | closed | Ручное тестирование по сценариям функционального тестирования для Story 4 | Functional test Manual testing | Затрагивает Story #4, Scenario #17
Выполнить тесты, заполнить [таблицу](https://docs.google.com/spreadsheets/d/114F1wKsHoGB75gmF2p_XUR5zgbUb6IeQNX1ziO_BSIw/edit#gid=0&range=F55), завести баги на меня. | 2.0 | Ручное тестирование по сценариям функционального тестирования для Story 4 - Затрагивает Story #4, Scenario #17
Выполнить тесты, заполнить [таблицу](https://docs.google.com/spreadsheets/d/114F1wKsHoGB75gmF2p_XUR5zgbUb6IeQNX1ziO_BSIw/edit#gid=0&range=F55), завести баги на меня. | test | ручное тестирование по сценариям функционального тестирования для story затрагивает story scenario выполнить тесты заполнить завести баги на меня | 1 |
92,386 | 8,362,489,143 | IssuesEvent | 2018-10-03 16:58:12 | chartjs/Chart.js | https://api.github.com/repos/chartjs/Chart.js | closed | line toggle is overwritten on new data [FEATURE/BUG?] | status: needs test case type: bug | ## Expected Behavior
when the user toggles the data shown in the line chart by clicking the legend items, the toggle setting is kept even as new data flows into the chart.
## Current Behavior
I poll for data every 15 seconds, the user toggles the legend, new data comes in and the legend toggles are all visible again (very annoying)
## Possible Solution
I tried the click event on the toggle button, changing a variable from true to false for the line hidden object key, but no luck.
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. have at least one data source
2. change the data or reload it by interval or polling
3. click toggle the legend
4. wait for new data and see the legend and line show again
## Context
The issue is I write software for real time analysts and they need to monitor specific streams of data, being able to keep a preference is helpful
## Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Chart.js version: 2.x
* Browser name and version: Firefox/Chrome/IE/Edge latest
* Angular 6
| 1.0 | line toggle is overwritten on new data [FEATURE/BUG?] - ## Expected Behavior
when the user toggles the data shown in the line chart by clicking the legend items, the toggle setting is kept even as new data flows into the chart.
## Current Behavior
I poll for data every 15 seconds, the user toggles the legend, new data comes in and the legend toggles are all visible again (very annoying)
## Possible Solution
I tried the click event on the toggle button, changing a variable from true to false for the line hidden object key, but no luck.
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. have at least one data source
2. change the data or reload it by interval or polling
3. click toggle the legend
4. wait for new data and see the legend and line show again
## Context
The issue is I write software for real time analysts and they need to monitor specific streams of data, being able to keep a preference is helpful
## Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Chart.js version: 2.x
* Browser name and version: Firefox/Chrome/IE/Edge latest
* Angular 6
| test | line toggle is overwritten on new data expected behavior when the user toggles the data shown in the line chart by clicking the legend items the toggle setting is kept even as new data flows into the chart current behavior i poll for data every seconds the user toggles the legend new data comes in and the legend toggles are all visible again very annoying possible solution i tried the click event on the toggle button changing a variable from true to false for the line hidden object key but no luck steps to reproduce for bugs have at least one data source change the data or reload it by interval or polling click toggle the legend wait for new data and see the legend and line show again context the issue is i write software for real time analysts and they need to monitor specific streams of data being able to keep a preference is helpful environment chart js version x browser name and version firefox chrome ie edge latest angular | 1 |
142,529 | 11,484,261,386 | IssuesEvent | 2020-02-11 02:58:04 | microsoft/vscode-python | https://api.github.com/repos/microsoft/vscode-python | closed | Test view feedback | feature-testing needs spec type-enhancement | Hi VS Code dev here :wave:
Some general feedback regarding the Python test viewlet:
1) Idealy VS Code will provide the Test viewlet functionality so the Python can just plugin a view. This depends on https://github.com/microsoft/vscode/issues/9505
2) Icons are out of style: they look like Visual Studio icons and not Visual Studio code icons. Show test output is using a wrong icon which looks like an termian input. Output is readonly thus I think this icon does not fit
3) Name of the view is PYTHON, it should be Python
4) Consider to provide some useful actions and not just an empty view when there are no tests


| 1.0 | Test view feedback - Hi VS Code dev here :wave:
Some general feedback regarding the Python test viewlet:
1) Idealy VS Code will provide the Test viewlet functionality so the Python can just plugin a view. This depends on https://github.com/microsoft/vscode/issues/9505
2) Icons are out of style: they look like Visual Studio icons and not Visual Studio code icons. Show test output is using a wrong icon which looks like an termian input. Output is readonly thus I think this icon does not fit
3) Name of the view is PYTHON, it should be Python
4) Consider to provide some useful actions and not just an empty view when there are no tests


| test | test view feedback hi vs code dev here wave some general feedback regarding the python test viewlet idealy vs code will provide the test viewlet functionality so the python can just plugin a view this depends on icons are out of style they look like visual studio icons and not visual studio code icons show test output is using a wrong icon which looks like an termian input output is readonly thus i think this icon does not fit name of the view is python it should be python consider to provide some useful actions and not just an empty view when there are no tests | 1 |
89,276 | 25,733,085,906 | IssuesEvent | 2022-12-07 21:58:02 | tarantool/tarantool | https://api.github.com/repos/tarantool/tarantool | opened | `tarantool.build.flags` doesn't include `*_DEBUG` and `*_RELWITHDEBINFO` flags | build bug verbosity | **Steps to reproduce**
```lua
tarantool> require('tarantool').build.flags
---
- ' -fexceptions -funwind-tables -fno-common -fopenmp -msse2 -Wformat -Wformat-security
-Werror=format-security -fstack-protector-strong -fPIC -fmacro-prefix-map=/git/tarantool=.
-std=c11 -Wall -Wextra -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type
-Werror'
...
```
or
```sh
$ tarantool -v
Tarantool 2.11.0-entrypoint-758-g85ef11180
Target: Linux-x86_64-Debug
Build options: cmake . -DCMAKE_INSTALL_PREFIX=/usr/local -DENABLE_BACKTRACE=FALSE
Compiler: GNU-8.3.0
C_FLAGS: -fexceptions -funwind-tables -fno-common -fopenmp -msse2 -Wformat -Wformat-security -Werror=format-security -fstack-protector-strong -fPIC -fmacro-prefix-map=/git/tarantool=. -std=c11 -Wall -Wextra -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type -Werror
CXX_FLAGS: -fexceptions -funwind-tables -fno-common -fopenmp -msse2 -Wformat -Wformat-security -Werror=format-security -fstack-protector-strong -fPIC -fmacro-prefix-map=/git/tarantool=. -std=c++11 -Wall -Wextra -Wno-invalid-offsetof -Wno-gnu-alignof-expression -Wno-cast-function-type -Werror
```
**Actual behavior**
Note that build type is `Debug`, but there are neither `-g` nor `-O0` options.
**Expected behavior**
`TARANTOOL_C_FLAGS` should include `CMAKE_C_FLAGS_DEBUG` or `CMAKE_C_FLAGS_RELWITHDEBINFO`, depending on the build type. The same applies to `TARANTOOL_CXX_FLAGS`. | 1.0 | `tarantool.build.flags` doesn't include `*_DEBUG` and `*_RELWITHDEBINFO` flags - **Steps to reproduce**
```lua
tarantool> require('tarantool').build.flags
---
- ' -fexceptions -funwind-tables -fno-common -fopenmp -msse2 -Wformat -Wformat-security
-Werror=format-security -fstack-protector-strong -fPIC -fmacro-prefix-map=/git/tarantool=.
-std=c11 -Wall -Wextra -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type
-Werror'
...
```
or
```sh
$ tarantool -v
Tarantool 2.11.0-entrypoint-758-g85ef11180
Target: Linux-x86_64-Debug
Build options: cmake . -DCMAKE_INSTALL_PREFIX=/usr/local -DENABLE_BACKTRACE=FALSE
Compiler: GNU-8.3.0
C_FLAGS: -fexceptions -funwind-tables -fno-common -fopenmp -msse2 -Wformat -Wformat-security -Werror=format-security -fstack-protector-strong -fPIC -fmacro-prefix-map=/git/tarantool=. -std=c11 -Wall -Wextra -Wno-gnu-alignof-expression -fno-gnu89-inline -Wno-cast-function-type -Werror
CXX_FLAGS: -fexceptions -funwind-tables -fno-common -fopenmp -msse2 -Wformat -Wformat-security -Werror=format-security -fstack-protector-strong -fPIC -fmacro-prefix-map=/git/tarantool=. -std=c++11 -Wall -Wextra -Wno-invalid-offsetof -Wno-gnu-alignof-expression -Wno-cast-function-type -Werror
```
**Actual behavior**
Note that build type is `Debug`, but there are neither `-g` nor `-O0` options.
**Expected behavior**
`TARANTOOL_C_FLAGS` should include `CMAKE_C_FLAGS_DEBUG` or `CMAKE_C_FLAGS_RELWITHDEBINFO`, depending on the build type. The same applies to `TARANTOOL_CXX_FLAGS`. | non_test | tarantool build flags doesn t include debug and relwithdebinfo flags steps to reproduce lua tarantool require tarantool build flags fexceptions funwind tables fno common fopenmp wformat wformat security werror format security fstack protector strong fpic fmacro prefix map git tarantool std wall wextra wno gnu alignof expression fno inline wno cast function type werror or sh tarantool v tarantool entrypoint target linux debug build options cmake dcmake install prefix usr local denable backtrace false compiler gnu c flags fexceptions funwind tables fno common fopenmp wformat wformat security werror format security fstack protector strong fpic fmacro prefix map git tarantool std wall wextra wno gnu alignof expression fno inline wno cast function type werror cxx flags fexceptions funwind tables fno common fopenmp wformat wformat security werror format security fstack protector strong fpic fmacro prefix map git tarantool std c wall wextra wno invalid offsetof wno gnu alignof expression wno cast function type werror actual behavior note that build type is debug but there are neither g nor options expected behavior tarantool c flags should include cmake c flags debug or cmake c flags relwithdebinfo depending on the build type the same applies to tarantool cxx flags | 0 |
55,485 | 11,434,632,011 | IssuesEvent | 2020-02-04 17:45:41 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] [RFC] front end editing using modal instead of refresh | J4 Issue No Code Attached Yet RFC | Editing content on the front end was fine and the editing options opened in the main content area/component.
But this does not seem intuitive to me when it comes to modules. You could edit a module in the header and the editor could appear in the footer. It can be far removed from the place you initially click edit.
For me this is confusing for users. Although I can tell admins to look around for the editor box, I think having to ask people to do this instead of it being immediately obvious that something that is damaging for the perception of Joomla!
Do you think a pop-up window would be more effective now-a-days given we can do front end menu and module editing? Even if it s rolled to the editor it would be more palettable. | 1.0 | [4.0] [RFC] front end editing using modal instead of refresh - Editing content on the front end was fine and the editing options opened in the main content area/component.
But this does not seem intuitive to me when it comes to modules. You could edit a module in the header and the editor could appear in the footer. It can be far removed from the place you initially click edit.
For me this is confusing for users. Although I can tell admins to look around for the editor box, I think having to ask people to do this instead of it being immediately obvious that something that is damaging for the perception of Joomla!
Do you think a pop-up window would be more effective now-a-days given we can do front end menu and module editing? Even if it s rolled to the editor it would be more palettable. | non_test | front end editing using modal instead of refresh editing content on the front end was fine and the editing options opened in the main content area component but this does not seem intuitive to me when it comes to modules you could edit a module in the header and the editor could appear in the footer it can be far removed from the place you initially click edit for me this is confusing for users although i can tell admins to look around for the editor box i think having to ask people to do this instead of it being immediately obvious that something that is damaging for the perception of joomla do you think a pop up window would be more effective now a days given we can do front end menu and module editing even if it s rolled to the editor it would be more palettable | 0 |
26,317 | 4,215,356,426 | IssuesEvent | 2016-06-30 03:29:52 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | code coverage is failed at some cases | area/test help-wanted team/CSI-API Machinery SIG | ```console
KUBE_COVER="y" KUBE_COVERPROCS=8 ./hack/test-go.sh -- -p=2
--- FAIL: TestSnippetWriter (0.00s)
snippet_writer_test.go:85: Expected "snippet_writer_test.go:78" but didn't find it in "template: /home/stack/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/libs/go2idl/generator /snippet_writer_test.go:77:1: unclosed action"
FAIL
coverage: 10.2% of statements
FAIL k8s.io/kubernetes/cmd/libs/go2idl/generator 0.053s
``` | 1.0 | code coverage is failed at some cases - ```console
KUBE_COVER="y" KUBE_COVERPROCS=8 ./hack/test-go.sh -- -p=2
--- FAIL: TestSnippetWriter (0.00s)
snippet_writer_test.go:85: Expected "snippet_writer_test.go:78" but didn't find it in "template: /home/stack/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/libs/go2idl/generator /snippet_writer_test.go:77:1: unclosed action"
FAIL
coverage: 10.2% of statements
FAIL k8s.io/kubernetes/cmd/libs/go2idl/generator 0.053s
``` | test | code coverage is failed at some cases console kube cover y kube coverprocs hack test go sh p fail testsnippetwriter snippet writer test go expected snippet writer test go but didn t find it in template home stack kubernetes output local go src io kubernetes cmd libs generator snippet writer test go unclosed action fail coverage of statements fail io kubernetes cmd libs generator | 1 |
18,119 | 6,550,686,178 | IssuesEvent | 2017-09-05 12:09:56 | travis-ci/travis-ci | https://api.github.com/repos/travis-ci/travis-ci | closed | `sudo`-less Trusty images try to use Java 8 for Java 7 builds | bug build environment java trusty-container | It looks like the new `sudo`-less Trusty images try to use Java 8 to run Java 7 builds, which [causes the build to fail](https://travis-ci.org/relayrides/pushy/builds/230779135):
```
$ sudo rm -f /usr/lib/jvm/java-8-oracle-amd64
Reload jdk_switcher
$ source $HOME/.jdk_switcher_rc
$ jdk_switcher use openjdk7
Switching to OpenJDK7 (java-1.7.0-openjdk), JAVA_HOME will be set to /usr/lib/jvm/java-7-openjdk
update-java-alternatives: directory does not exist: /usr/lib/jvm/java-1.7.0-openjdk
...
[extra log output removed]
...
$ java -Xmx32m -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
$ javac -J-Xmx32m -version
javac 1.8.0_111
travis_fold:start:install
travis_time:start:07ebd460
$ mvn install -DskipTests=true -Dmaven.javadoc.skip=true -B -V
Error: JAVA_HOME is not defined correctly.
We cannot execute /usr/lib/jvm/java-7-openjdk/bin/java
```
This affects at least `oraclejdk7` and `openjdk7`. | 1.0 | `sudo`-less Trusty images try to use Java 8 for Java 7 builds - It looks like the new `sudo`-less Trusty images try to use Java 8 to run Java 7 builds, which [causes the build to fail](https://travis-ci.org/relayrides/pushy/builds/230779135):
```
$ sudo rm -f /usr/lib/jvm/java-8-oracle-amd64
Reload jdk_switcher
$ source $HOME/.jdk_switcher_rc
$ jdk_switcher use openjdk7
Switching to OpenJDK7 (java-1.7.0-openjdk), JAVA_HOME will be set to /usr/lib/jvm/java-7-openjdk
update-java-alternatives: directory does not exist: /usr/lib/jvm/java-1.7.0-openjdk
...
[extra log output removed]
...
$ java -Xmx32m -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
$ javac -J-Xmx32m -version
javac 1.8.0_111
travis_fold:start:install
travis_time:start:07ebd460
$ mvn install -DskipTests=true -Dmaven.javadoc.skip=true -B -V
Error: JAVA_HOME is not defined correctly.
We cannot execute /usr/lib/jvm/java-7-openjdk/bin/java
```
This affects at least `oraclejdk7` and `openjdk7`. | non_test | sudo less trusty images try to use java for java builds it looks like the new sudo less trusty images try to use java to run java builds which sudo rm f usr lib jvm java oracle reload jdk switcher source home jdk switcher rc jdk switcher use switching to java openjdk java home will be set to usr lib jvm java openjdk update java alternatives directory does not exist usr lib jvm java openjdk java version java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode javac j version javac travis fold start install travis time start mvn install dskiptests true dmaven javadoc skip true b v error java home is not defined correctly we cannot execute usr lib jvm java openjdk bin java this affects at least and | 0 |
124,613 | 4,927,647,435 | IssuesEvent | 2016-11-26 21:29:19 | Pterodactyl/Panel | https://api.github.com/repos/Pterodactyl/Panel | closed | Deleting a node with existing allocations fails | bug priority: medium | <!-- The checkboxes below can be clicked once you submit this report if you'd like -->
<!-- You can also use "- [x]" to mark it as checked. -->
## Product
Please check the corresponding boxes below for which products this is about.
- [X] Panel
- [ ] Daemon
- [ ] Dockerfile(s) [Please list if so: __ ]
## Type
- [x] Bug or Issue
- [ ] Feature Request
- [ ] Enhancement
- [ ] Other
<!-- You only need to fill out the information below if this is a bug report. -->
<!-- Please delete this line and everything below if this is NOT a bug report. -->
## What Happens
Node deletion fails if there are any ports/IPs still allocated
## How to Reproduce
Attempt to delete a node that still has an IP or Port allocated.
Step 1: Create node
Step 2: ?
Step 3: Allocate IP/Port
Step 4: Attempt to delete
## Error Logs
http://pastebin.com/ekG9yVTk
## System Information
#### Output of `uname -a`:
Linux [redacted] 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
#### Output of `php -v` (if Panel):
PHP 7.0.8-0ubuntu0.16.04.3 (cli) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.8-0ubuntu0.16.04.3, Copyright (c) 1999-2016, by Zend Technologies
#### Output of `node -v` (if Daemon):
v6.9.1
| 1.0 | Deleting a node with existing allocations fails - <!-- The checkboxes below can be clicked once you submit this report if you'd like -->
<!-- You can also use "- [x]" to mark it as checked. -->
## Product
Please check the corresponding boxes below for which products this is about.
- [X] Panel
- [ ] Daemon
- [ ] Dockerfile(s) [Please list if so: __ ]
## Type
- [x] Bug or Issue
- [ ] Feature Request
- [ ] Enhancement
- [ ] Other
<!-- You only need to fill out the information below if this is a bug report. -->
<!-- Please delete this line and everything below if this is NOT a bug report. -->
## What Happens
Node deletion fails if there are any ports/IPs still allocated
## How to Reproduce
Attempt to delete a node that still has an IP or Port allocated.
Step 1: Create node
Step 2: ?
Step 3: Allocate IP/Port
Step 4: Attempt to delete
## Error Logs
http://pastebin.com/ekG9yVTk
## System Information
#### Output of `uname -a`:
Linux [redacted] 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
#### Output of `php -v` (if Panel):
PHP 7.0.8-0ubuntu0.16.04.3 (cli) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.8-0ubuntu0.16.04.3, Copyright (c) 1999-2016, by Zend Technologies
#### Output of `node -v` (if Daemon):
v6.9.1
| non_test | deleting a node with existing allocations fails product please check the corresponding boxes below for which products this is about panel daemon dockerfile s type bug or issue feature request enhancement other what happens node deletion fails if there are any ports ips still allocated how to reproduce attempt to delete a node that still has an ip or port allocated step create node step step allocate ip port step attempt to delete error logs system information output of uname a linux generic ubuntu smp thu may utc gnu linux output of php v if panel php cli nts copyright c the php group zend engine copyright c zend technologies with zend opcache copyright c by zend technologies output of node v if daemon | 0 |
191,958 | 14,597,801,382 | IssuesEvent | 2020-12-20 21:47:50 | BarskiTeam/BostarDecksWeb | https://api.github.com/repos/BarskiTeam/BostarDecksWeb | closed | test .env and .env.example _ I check syntax and correspondence name of variable between .env and .env.example | test | Check syntax and correspondence name of variable between .env and .env.example | 1.0 | test .env and .env.example _ I check syntax and correspondence name of variable between .env and .env.example - Check syntax and correspondence name of variable between .env and .env.example | test | test env and env example i check syntax and correspondence name of variable between env and env example check syntax and correspondence name of variable between env and env example | 1 |
148,487 | 11,854,098,702 | IssuesEvent | 2020-03-24 23:45:07 | MonoGame/MonoGame | https://api.github.com/repos/MonoGame/MonoGame | closed | iPad iOS7 missing 40 pixels space at the top. | Needs Testing iOS | 

Please take a look for the screen shots.
| 1.0 | iPad iOS7 missing 40 pixels space at the top. - 

Please take a look for the screen shots.
| test | ipad missing pixels space at the top please take a look for the screen shots | 1 |
331,316 | 28,886,773,230 | IssuesEvent | 2023-05-06 00:32:40 | rancher/dashboard | https://api.github.com/repos/rancher/dashboard | closed | Update static loading indicator to respect dark mode | kind/bug [zube]: To Test QA/None | **Setup**
- Rancher version: 2.6.5
**Describe the bug**
When dark mode is enabled / picked up by OS theme the user goes from a dark mode log in page, bright white loading page and then a dark home page. It can be quite a quick transition so the page can flash.
~~The file is `/shell/components/Loading.vue`~~ see comment
| 1.0 | Update static loading indicator to respect dark mode - **Setup**
- Rancher version: 2.6.5
**Describe the bug**
When dark mode is enabled / picked up by OS theme the user goes from a dark mode log in page, bright white loading page and then a dark home page. It can be quite a quick transition so the page can flash.
~~The file is `/shell/components/Loading.vue`~~ see comment
| test | update static loading indicator to respect dark mode setup rancher version describe the bug when dark mode is enabled picked up by os theme the user goes from a dark mode log in page bright white loading page and then a dark home page it can be quite a quick transition so the page can flash the file is shell components loading vue see comment | 1 |
450,844 | 31,995,321,104 | IssuesEvent | 2023-09-21 08:50:54 | privy-open-source/design-system | https://api.github.com/repos/privy-open-source/design-system | closed | Select: Menu-container with divider variant | documentation enhancement | - Enable option `divider` in select-menu-container
- Enable option `menuClass`
- Enable option `menuSize` | 1.0 | Select: Menu-container with divider variant - - Enable option `divider` in select-menu-container
- Enable option `menuClass`
- Enable option `menuSize` | non_test | select menu container with divider variant enable option divider in select menu container enable option menuclass enable option menusize | 0 |
185,854 | 14,383,372,920 | IssuesEvent | 2020-12-02 09:02:05 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | TestSceneCatchModRelax behaves erratically | ruleset:osu!catch type:testing | ERROR: type should be string, got "https://drive.google.com/file/d/1xlhx2XevqLfx4mMGmJCuM7aSFsRvvvbw/view?usp=sharing\r\n\r\n* Jittering back and forth.\r\n* Moving to the hyperdash for one frame before any hitobjects become visible.\r\n* Prior to https://github.com/ppy/osu/pull/10966, it would miss the hyperdash at 200% speed.\r\n\r\nPerhaps an issue with framed replay handler? I've tested one commit prior to https://github.com/ppy/osu/pull/10605 and it still occurs, so it's not a recent regression." | 1.0 | TestSceneCatchModRelax behaves erratically - https://drive.google.com/file/d/1xlhx2XevqLfx4mMGmJCuM7aSFsRvvvbw/view?usp=sharing
* Jittering back and forth.
* Moving to the hyperdash for one frame before any hitobjects become visible.
* Prior to https://github.com/ppy/osu/pull/10966, it would miss the hyperdash at 200% speed.
Perhaps an issue with framed replay handler? I've tested one commit prior to https://github.com/ppy/osu/pull/10605 and it still occurs, so it's not a recent regression. | test | testscenecatchmodrelax behaves erratically jittering back and forth moving to the hyperdash for one frame before any hitobjects become visible prior to it would miss the hyperdash at speed perhaps an issue with framed replay handler i ve tested one commit prior to and it still occurs so it s not a recent regression | 1 |
252,041 | 21,554,164,972 | IssuesEvent | 2022-04-30 05:38:28 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: sqlsmith/setup=rand-tables/setting=no-mutations failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.sqlsmith/setup=rand-tables/setting=no-mutations [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5060835&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5060835&tab=artifacts#/sqlsmith/setup=rand-tables/setting=no-mutations) on master @ [a2e1910f51593bd2ef72e1d7c615e08f95791186](https://github.com/cockroachdb/cockroach/commits/a2e1910f51593bd2ef72e1d7c615e08f95791186):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /artifacts/sqlsmith/setup=rand-tables/setting=no-mutations/run_1
sqlsmith.go:265,sqlsmith.go:305,test_runner.go:876: error: pq: internal error: in-between filters didn't yield a constraint
stmt:
WITH
with_60848 (col_348252)
AS (
SELECT
*
FROM
(VALUES ('hviat':::rand_typ_0), ('ymlez':::rand_typ_0), ('pzlue':::rand_typ_0), ('ivkys':::rand_typ_0))
AS tab_152645 (col_348252)
)
SELECT
NULL AS col_348253,
tab_152646.col1_3 AS col_348254,
tab_152646.col1_6 AS col_348255,
tab_152646.col1_8 AS col_348256,
23871:::INT8 AS col_348257,
'\x5c91d7a0d2edd6fc0f':::BYTES AS col_348258,
tab_152646.col1_5 AS col_348259,
e'F\x01W[\x12':::STRING AS col_348260,
(-6.335565852276255591E+29):::DECIMAL AS col_348261,
tab_152646.col1_4 AS col_348262,
NULL AS col_348263
FROM
defaultdb.public.table1@[0] AS tab_152646
WHERE
(7679919245303374124:::INT8 < tab_152646.col1_2)
ORDER BY
tab_152646.tableoid DESC;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=rand-tables/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: sqlsmith/setup=rand-tables/setting=no-mutations failed - roachtest.sqlsmith/setup=rand-tables/setting=no-mutations [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5060835&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5060835&tab=artifacts#/sqlsmith/setup=rand-tables/setting=no-mutations) on master @ [a2e1910f51593bd2ef72e1d7c615e08f95791186](https://github.com/cockroachdb/cockroach/commits/a2e1910f51593bd2ef72e1d7c615e08f95791186):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /artifacts/sqlsmith/setup=rand-tables/setting=no-mutations/run_1
sqlsmith.go:265,sqlsmith.go:305,test_runner.go:876: error: pq: internal error: in-between filters didn't yield a constraint
stmt:
WITH
with_60848 (col_348252)
AS (
SELECT
*
FROM
(VALUES ('hviat':::rand_typ_0), ('ymlez':::rand_typ_0), ('pzlue':::rand_typ_0), ('ivkys':::rand_typ_0))
AS tab_152645 (col_348252)
)
SELECT
NULL AS col_348253,
tab_152646.col1_3 AS col_348254,
tab_152646.col1_6 AS col_348255,
tab_152646.col1_8 AS col_348256,
23871:::INT8 AS col_348257,
'\x5c91d7a0d2edd6fc0f':::BYTES AS col_348258,
tab_152646.col1_5 AS col_348259,
e'F\x01W[\x12':::STRING AS col_348260,
(-6.335565852276255591E+29):::DECIMAL AS col_348261,
tab_152646.col1_4 AS col_348262,
NULL AS col_348263
FROM
defaultdb.public.table1@[0] AS tab_152646
WHERE
(7679919245303374124:::INT8 < tab_152646.col1_2)
ORDER BY
tab_152646.tableoid DESC;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=rand-tables/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest sqlsmith setup rand tables setting no mutations failed roachtest sqlsmith setup rand tables setting no mutations with on master the test failed on branch master cloud gce test artifacts and logs in artifacts sqlsmith setup rand tables setting no mutations run sqlsmith go sqlsmith go test runner go error pq internal error in between filters didn t yield a constraint stmt with with col as select from values hviat rand typ ymlez rand typ pzlue rand typ ivkys rand typ as tab col select null as col tab as col tab as col tab as col as col bytes as col tab as col e f string as col decimal as col tab as col null as col from defaultdb public as tab where tab order by tab tableoid desc help see see cc cockroachdb sql queries | 1 |
15,943 | 5,195,704,375 | IssuesEvent | 2017-01-23 10:17:53 | SemsTestOrg/combinearchive-web | https://api.github.com/repos/SemsTestOrg/combinearchive-web | closed | [ArchiveContent] Relayout the Archive Content section | code fixed migrated minor task | ## Trac Ticket #14
**component:** code
**owner:** somebody
**reporter:** martinP
**created:** 2014-07-31 08:22:22
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2014-07-31 09:57:01
**author:** martin
idea was:
* file content almost 100%
* tree hidden behind a smart-phone-like-button
* one mouseover or touch/click show tree above content
## comment 2
**time:** 2014-08-07 19:57:13
**author:** martin
Updated **cc** to **martin, martinP**
## comment 3
**time:** 2014-08-07 19:57:13
**author:** martin
the sequence of entries should be only lexicographically. moving bbbb before aaaa should not affect anything (not even the filetree...)
## comment 4
**time:** 2014-08-07 19:58:25
**author:** martin
add icons for certain file types. atm everything looks like a folder...
## comment 5
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
## comment 6
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **resolution** to **fixed**
## comment 7
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **status** to **closed**
## comment 8
**time:** 2014-09-25 17:03:19
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
| 1.0 | [ArchiveContent] Relayout the Archive Content section - ## Trac Ticket #14
**component:** code
**owner:** somebody
**reporter:** martinP
**created:** 2014-07-31 08:22:22
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2014-07-31 09:57:01
**author:** martin
idea was:
* file content almost 100%
* tree hidden behind a smart-phone-like-button
* one mouseover or touch/click show tree above content
## comment 2
**time:** 2014-08-07 19:57:13
**author:** martin
Updated **cc** to **martin, martinP**
## comment 3
**time:** 2014-08-07 19:57:13
**author:** martin
the sequence of entries should be only lexicographically. moving bbbb before aaaa should not affect anything (not even the filetree...)
## comment 4
**time:** 2014-08-07 19:58:25
**author:** martin
add icons for certain file types. atm everything looks like a folder...
## comment 5
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
## comment 6
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **resolution** to **fixed**
## comment 7
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **status** to **closed**
## comment 8
**time:** 2014-09-25 17:03:19
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
| non_test | relayout the archive content section trac ticket component code owner somebody reporter martinp created milestone type task version keywords comment time author martin idea was file content almost tree hidden behind a smart phone like button one mouseover or touch click show tree above content comment time author martin updated cc to martin martinp comment time author martin the sequence of entries should be only lexicographically moving bbbb before aaaa should not affect anything not even the filetree comment time author martin add icons for certain file types atm everything looks like a folder comment time author in changeset committicketreference repository revision merge branch feature newdesgin comment time author updated resolution to fixed comment time author updated status to closed comment time author in changeset committicketreference repository revision merge branch feature newdesgin | 0 |
70,884 | 7,202,138,131 | IssuesEvent | 2018-02-06 02:09:38 | kcigeospatial/SWMFAC-Enhancements | https://api.github.com/repos/kcigeospatial/SWMFAC-Enhancements | opened | Data lag loading Error | SHA Dev - Post UAT Testing | The system does not refresh and load data in the tab section after user selects a record from the location manager. In some cases the system would load the last record selected after user selects a new record. To replicate, select different records in the location manager to confirm. This issue is not occurring in KCI Dev

| 1.0 | Data lag loading Error - The system does not refresh and load data in the tab section after user selects a record from the location manager. In some cases the system would load the last record selected after user selects a new record. To replicate, select different records in the location manager to confirm. This issue is not occurring in KCI Dev

| test | data lag loading error the system does not refresh and load data in the tab section after user selects a record from the location manager in some cases the system would load the last record selected after user selects a new record to replicate select different records in the location manager to confirm this issue is not occurring in kci dev | 1 |
60,285 | 6,678,283,217 | IssuesEvent | 2017-10-05 13:46:40 | Transkribus/TWI-edit | https://api.github.com/repos/Transkribus/TWI-edit | closed | 500 Internal server error when accessing some documents with edit app | bug high priority ready to test | OS and browser(s) [Windows 10] Chrome
OS and browser(s) [Linux] Firefox
URL: https://transkribus.eu/readTest/view/2305/4949/1
Screen shots : https://drive.google.com/open?id=0B7dmP0OCT3cnRmNFNkpmYV9xUzA
Steps to reproduce: Follow Url provided
Expected results and actual results :
Expected: edit/view for a document. Actual : "Server Error (500)" message
Any other information: The Document IMAGES (4949) loads OK using the expert client and has 4 pages. Error on dev shows edit/templates/edit/correct.html, error at line 458 | 1.0 | 500 Internal server error when accessing some documents with edit app - OS and browser(s) [Windows 10] Chrome
OS and browser(s) [Linux] Firefox
URL: https://transkribus.eu/readTest/view/2305/4949/1
Screen shots : https://drive.google.com/open?id=0B7dmP0OCT3cnRmNFNkpmYV9xUzA
Steps to reproduce: Follow Url provided
Expected results and actual results :
Expected: edit/view for a document. Actual : "Server Error (500)" message
Any other information: The Document IMAGES (4949) loads OK using the expert client and has 4 pages. Error on dev shows edit/templates/edit/correct.html, error at line 458 | test | internal server error when accessing some documents with edit app os and browser s chrome os and browser s firefox url screen shots steps to reproduce follow url provided expected results and actual results expected edit view for a document actual server error message any other information the document images loads ok using the expert client and has pages error on dev shows edit templates edit correct html error at line | 1 |
292,813 | 25,241,055,371 | IssuesEvent | 2022-11-15 07:28:09 | risingwavelabs/risingwave | https://api.github.com/repos/risingwavelabs/risingwave | opened | Increase the intensity of `main-cron` daily tests | type/enhancement component/test | By looking at https://github.com/risingwavelabs/risingwave/blob/main/ci/workflows/main-cron.yml,
the timeout is usually set to 40 minutes at most, with some others being set to 5, 15, or 30 minutes.
We can run many more and/or longer in the `main-cron` daily test:
- [ ] use more random seeds for deterministic Madsim tests
- [ ] use different random seeds every day
- [ ] more node killing/recovery
- [ ] ......
With the intention to bring the effective time for each test, if it can, to 3 hours.
If we can afford the cost, increase it to an even larger number. | 1.0 | Increase the intensity of `main-cron` daily tests - By looking at https://github.com/risingwavelabs/risingwave/blob/main/ci/workflows/main-cron.yml,
the timeout is usually set to 40 minutes at most, with some others being set to 5, 15, or 30 minutes.
We can run many more and/or longer in the `main-cron` daily test:
- [ ] use more random seeds for deterministic Madsim tests
- [ ] use different random seeds every day
- [ ] more node killing/recovery
- [ ] ......
With the intention to bring the effective time for each test, if it can, to 3 hours.
If we can afford the cost, increase it to an even larger number. | test | increase the intensity of main cron daily tests by looking at the timeout is usually set to minutes at most with some others being set to or minutes we can run many more and or longer in the main cron daily test use more random seeds for deterministic madsim tests use different random seeds every day more node killing recovery with the intention to bring the effective time for each test if it can to hours if we can afford the cost increase it to an even larger number | 1 |
43,952 | 5,578,522,765 | IssuesEvent | 2017-03-28 12:38:39 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | deploymentconfigs when run iteratively [Conformance] [It] should only deploy the last deployment | area/tests component/deployments kind/test-flake priority/P2 | Finally a new one:
```
• Failure [142.736 seconds]
deploymentconfigs
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:840
when run iteratively [Conformance]
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:196
should only deploy the last deployment [It]
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:135
Expected error:
<*errors.errorString | 0xc820e5f3c0>: {
s: "found multiple running deployments: [deployment-simple-3 deployment-simple-4]",
}
found multiple running deployments: [deployment-simple-3 deployment-simple-4]
not to have occurred
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:124
------------------------------
```
Third deployment thinks its deployer pod is gone, 4th deployer gets created.
```
apiVersion: v1
kind: ReplicationController
metadata:
annotations:
kubectl.kubernetes.io/desired-replicas: "2"
openshift.io/deployer-pod.name: deployment-simple-3-deploy
openshift.io/deployment-config.latest-version: "3"
openshift.io/deployment-config.name: deployment-simple
openshift.io/deployment.phase: Failed
openshift.io/deployment.replicas: "0"
openshift.io/deployment.status-reason: deployer pod no longer exists
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/8960/consoleFull#-169259343956c60d7be4b02b88ae8c268b | 2.0 | deploymentconfigs when run iteratively [Conformance] [It] should only deploy the last deployment - Finally a new one:
```
• Failure [142.736 seconds]
deploymentconfigs
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:840
when run iteratively [Conformance]
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:196
should only deploy the last deployment [It]
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:135
Expected error:
<*errors.errorString | 0xc820e5f3c0>: {
s: "found multiple running deployments: [deployment-simple-3 deployment-simple-4]",
}
found multiple running deployments: [deployment-simple-3 deployment-simple-4]
not to have occurred
/data/src/github.com/openshift/origin/test/extended/deployments/deployments.go:124
------------------------------
```
Third deployment thinks its deployer pod is gone, 4th deployer gets created.
```
apiVersion: v1
kind: ReplicationController
metadata:
annotations:
kubectl.kubernetes.io/desired-replicas: "2"
openshift.io/deployer-pod.name: deployment-simple-3-deploy
openshift.io/deployment-config.latest-version: "3"
openshift.io/deployment-config.name: deployment-simple
openshift.io/deployment.phase: Failed
openshift.io/deployment.replicas: "0"
openshift.io/deployment.status-reason: deployer pod no longer exists
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/8960/consoleFull#-169259343956c60d7be4b02b88ae8c268b | test | deploymentconfigs when run iteratively should only deploy the last deployment finally a new one • failure deploymentconfigs data src github com openshift origin test extended deployments deployments go when run iteratively data src github com openshift origin test extended deployments deployments go should only deploy the last deployment data src github com openshift origin test extended deployments deployments go expected error s found multiple running deployments found multiple running deployments not to have occurred data src github com openshift origin test extended deployments deployments go third deployment thinks its deployer pod is gone deployer gets created apiversion kind replicationcontroller metadata annotations kubectl kubernetes io desired replicas openshift io deployer pod name deployment simple deploy openshift io deployment config latest version openshift io deployment config name deployment simple openshift io deployment phase failed openshift io deployment replicas openshift io deployment status reason deployer pod no longer exists | 1 |
104,754 | 16,621,104,117 | IssuesEvent | 2021-06-03 01:13:27 | nihalmurmu/2048 | https://api.github.com/repos/nihalmurmu/2048 | opened | CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz | security vulnerability | ## CVE-2019-20149 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: 2048/package.json</p>
<p>Path to vulnerable library: 2048/node_modules/jest-runner/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.59.9.tgz (Root Library)
- cli-1.12.0.tgz
- metro-0.51.1.tgz
- jest-haste-map-24.0.0-alpha.6.tgz
- micromatch-2.3.11.tgz
- braces-1.8.5.tgz
- expand-range-1.8.2.tgz
- fill-range-2.2.4.tgz
- randomatic-3.1.1.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.
<p>Publish Date: 2019-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p>
<p>Release Date: 2019-12-30</p>
<p>Fix Resolution: 6.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz - ## CVE-2019-20149 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary>
<p>Get the native type of a value.</p>
<p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p>
<p>Path to dependency file: 2048/package.json</p>
<p>Path to vulnerable library: 2048/node_modules/jest-runner/node_modules/kind-of/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.59.9.tgz (Root Library)
- cli-1.12.0.tgz
- metro-0.51.1.tgz
- jest-haste-map-24.0.0-alpha.6.tgz
- micromatch-2.3.11.tgz
- braces-1.8.5.tgz
- expand-range-1.8.2.tgz
- fill-range-2.2.4.tgz
- randomatic-3.1.1.tgz
- :x: **kind-of-6.0.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.
<p>Publish Date: 2019-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p>
<p>Release Date: 2019-12-30</p>
<p>Fix Resolution: 6.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in kind of tgz cve high severity vulnerability vulnerable library kind of tgz get the native type of a value library home page a href path to dependency file package json path to vulnerable library node modules jest runner node modules kind of package json dependency hierarchy react native tgz root library cli tgz metro tgz jest haste map alpha tgz micromatch tgz braces tgz expand range tgz fill range tgz randomatic tgz x kind of tgz vulnerable library vulnerability details ctorname in index js in kind of allows external user input to overwrite certain internal attributes via a conflicting name as demonstrated by constructor name symbol hence a crafted payload can overwrite this builtin attribute to manipulate the type detection result publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
329,698 | 28,301,972,288 | IssuesEvent | 2023-04-10 07:08:18 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix raw_ops.test_tensorflow_Cumsum | TensorFlow Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py::test_tensorflow_Cumsum[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-04-09T05:22:21.8456870Z E ivy.utils.exceptions.IvyException: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8460458Z E ivy.utils.exceptions.IvyBackendException: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8464658Z E ivy.utils.exceptions.IvyBackendException: torch: default_uint_dtype: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8472271Z E ivy.utils.exceptions.IvyBackendException: torch: infer_default_dtype: torch: default_uint_dtype: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8480923Z E ivy.utils.exceptions.IvyBackendException: torch: cumsum: torch: infer_default_dtype: torch: default_uint_dtype: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8481532Z E Falsifying example: test_tensorflow_Cumsum(
2023-04-09T05:22:21.8482221Z E dtype_x_axis=(['uint8'], [array([0], dtype=uint8)], 0),
2023-04-09T05:22:21.8482545Z E exclusive=False,
2023-04-09T05:22:21.8482808Z E reverse=False,
2023-04-09T05:22:21.8483123Z E test_flags=FrontendFunctionTestFlags(
2023-04-09T05:22:21.8483461Z E num_positional_args=0,
2023-04-09T05:22:21.8483735Z E with_out=False,
2023-04-09T05:22:21.8483997Z E inplace=False,
2023-04-09T05:22:21.8484268Z E as_variable=[False],
2023-04-09T05:22:21.8484550Z E native_arrays=[False],
2023-04-09T05:22:21.8484842Z E generate_frontend_arrays=False,
2023-04-09T05:22:21.8485114Z E ),
2023-04-09T05:22:21.8485561Z E fn_tree='ivy.functional.frontends.tensorflow.raw_ops.Cumsum',
2023-04-09T05:22:21.8485999Z E frontend='tensorflow',
2023-04-09T05:22:21.8486312Z E on_device='cpu',
2023-04-09T05:22:21.8486553Z E )
2023-04-09T05:22:21.8486766Z E
2023-04-09T05:22:21.8487387Z E You can reproduce this example by temporarily adding @reproduce_failure('6.71.0', b'AXicY2BkQAIAABsAAg==') as a decorator on your test case
</details>
| 1.0 | Fix raw_ops.test_tensorflow_Cumsum - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4648778808/jobs/8226693483" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py::test_tensorflow_Cumsum[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-04-09T05:22:21.8456870Z E ivy.utils.exceptions.IvyException: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8460458Z E ivy.utils.exceptions.IvyBackendException: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8464658Z E ivy.utils.exceptions.IvyBackendException: torch: default_uint_dtype: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8472271Z E ivy.utils.exceptions.IvyBackendException: torch: infer_default_dtype: torch: default_uint_dtype: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8480923Z E ivy.utils.exceptions.IvyBackendException: torch: cumsum: torch: infer_default_dtype: torch: default_uint_dtype: torch: as_ivy_dtype: Cannot convert to ivy dtype. uint32 is not supported by PyTorch backend.
2023-04-09T05:22:21.8481532Z E Falsifying example: test_tensorflow_Cumsum(
2023-04-09T05:22:21.8482221Z E dtype_x_axis=(['uint8'], [array([0], dtype=uint8)], 0),
2023-04-09T05:22:21.8482545Z E exclusive=False,
2023-04-09T05:22:21.8482808Z E reverse=False,
2023-04-09T05:22:21.8483123Z E test_flags=FrontendFunctionTestFlags(
2023-04-09T05:22:21.8483461Z E num_positional_args=0,
2023-04-09T05:22:21.8483735Z E with_out=False,
2023-04-09T05:22:21.8483997Z E inplace=False,
2023-04-09T05:22:21.8484268Z E as_variable=[False],
2023-04-09T05:22:21.8484550Z E native_arrays=[False],
2023-04-09T05:22:21.8484842Z E generate_frontend_arrays=False,
2023-04-09T05:22:21.8485114Z E ),
2023-04-09T05:22:21.8485561Z E fn_tree='ivy.functional.frontends.tensorflow.raw_ops.Cumsum',
2023-04-09T05:22:21.8485999Z E frontend='tensorflow',
2023-04-09T05:22:21.8486312Z E on_device='cpu',
2023-04-09T05:22:21.8486553Z E )
2023-04-09T05:22:21.8486766Z E
2023-04-09T05:22:21.8487387Z E You can reproduce this example by temporarily adding @reproduce_failure('6.71.0', b'AXicY2BkQAIAABsAAg==') as a decorator on your test case
</details>
| test | fix raw ops test tensorflow cumsum tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test tensorflow test raw ops py test tensorflow cumsum e ivy utils exceptions ivyexception cannot convert to ivy dtype is not supported by pytorch backend e ivy utils exceptions ivybackendexception torch as ivy dtype cannot convert to ivy dtype is not supported by pytorch backend e ivy utils exceptions ivybackendexception torch default uint dtype torch as ivy dtype cannot convert to ivy dtype is not supported by pytorch backend e ivy utils exceptions ivybackendexception torch infer default dtype torch default uint dtype torch as ivy dtype cannot convert to ivy dtype is not supported by pytorch backend e ivy utils exceptions ivybackendexception torch cumsum torch infer default dtype torch default uint dtype torch as ivy dtype cannot convert to ivy dtype is not supported by pytorch backend e falsifying example test tensorflow cumsum e dtype x axis dtype e exclusive false e reverse false e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e generate frontend arrays false e e fn tree ivy functional frontends tensorflow raw ops cumsum e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case | 1 |
378,164 | 26,285,647,608 | IssuesEvent | 2023-01-07 20:05:35 | tijlleenders/ZinZen | https://api.github.com/repos/tijlleenders/ZinZen | closed | Develop Share feature v2 - Allow & Implement Collaboration changes requests for subgoals | documentation enhancement UI personalisation ease of use | Currently, the share 1:1 has a collaboration feature that only sends changes on the root goal. The target of the second version is to allow to make changes in subgoals as well and should be sent to the partner so that they can make choices ( accept/ignore ) for themselves by looking at the changes like we have for the root goal. | 1.0 | Develop Share feature v2 - Allow & Implement Collaboration changes requests for subgoals - Currently, the share 1:1 has a collaboration feature that only sends changes on the root goal. The target of the second version is to allow to make changes in subgoals as well and should be sent to the partner so that they can make choices ( accept/ignore ) for themselves by looking at the changes like we have for the root goal. | non_test | develop share feature allow implement collaboration changes requests for subgoals currently the share has a collaboration feature that only sends changes on the root goal the target of the second version is to allow to make changes in subgoals as well and should be sent to the partner so that they can make choices accept ignore for themselves by looking at the changes like we have for the root goal | 0 |
170,304 | 13,183,230,686 | IssuesEvent | 2020-08-12 17:08:12 | GoogleCloudPlatform/cloud-spanner-r2dbc | https://api.github.com/repos/GoogleCloudPlatform/cloud-spanner-r2dbc | opened | Autodiscover project ID in integration tests | P3 V2 testing | Autodiscover project ID in integration tests. Since we are bringing in client library and its dependencies, should be easy. | 1.0 | Autodiscover project ID in integration tests - Autodiscover project ID in integration tests. Since we are bringing in client library and its dependencies, should be easy. | test | autodiscover project id in integration tests autodiscover project id in integration tests since we are bringing in client library and its dependencies should be easy | 1 |
63,319 | 6,842,488,806 | IssuesEvent | 2017-11-12 02:22:51 | MajkiIT/polish-ads-filter | https://api.github.com/repos/MajkiIT/polish-ads-filter | closed | pay-card.pl | reguły gotowe/testowanie social filters | http://pay-card.pl/
Panel social:
Jeśli cały panel zdjąć to tak:
```
||pay-card.pl/wp-content/uploads/2017/06/social-bg.jpg
pay-card.pl##.et_pb_section_6
```
A jeśli tylko przyciski to tak:
```
||pay-card.pl/wp-content/uploads/2017/06/fb-1.svg
||pay-card.pl/wp-content/uploads/2017/06/yb-1.svg
```
 | 1.0 | pay-card.pl - http://pay-card.pl/
Panel social:
Jeśli cały panel zdjąć to tak:
```
||pay-card.pl/wp-content/uploads/2017/06/social-bg.jpg
pay-card.pl##.et_pb_section_6
```
A jeśli tylko przyciski to tak:
```
||pay-card.pl/wp-content/uploads/2017/06/fb-1.svg
||pay-card.pl/wp-content/uploads/2017/06/yb-1.svg
```
 | test | pay card pl panel social jeśli cały panel zdjąć to tak pay card pl wp content uploads social bg jpg pay card pl et pb section a jeśli tylko przyciski to tak pay card pl wp content uploads fb svg pay card pl wp content uploads yb svg | 1 |
288,115 | 24,882,768,525 | IssuesEvent | 2022-10-28 03:47:09 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Teste de generalizacao para a tag Orçamento - Execução - Córrego Danta | generalization test development template - Memory (66) tag - Orçamento subtag - Execução | DoD: Realizar o teste de Generalização do validador da tag Orçamento - Execução para o Município de Córrego Danta. | 1.0 | Teste de generalizacao para a tag Orçamento - Execução - Córrego Danta - DoD: Realizar o teste de Generalização do validador da tag Orçamento - Execução para o Município de Córrego Danta. | test | teste de generalizacao para a tag orçamento execução córrego danta dod realizar o teste de generalização do validador da tag orçamento execução para o município de córrego danta | 1 |
1,326 | 3,601,078,675 | IssuesEvent | 2016-02-03 09:46:07 | CartoDB/cartodb | https://api.github.com/repos/CartoDB/cartodb | closed | Check Excel files before passing them to in2csv | Data-services | https://github.com/CartoDB/cartodb-management/issues/4383
It seems that a text file (CSV) with XLS extension might leave in2csv stuck. | 1.0 | Check Excel files before passing them to in2csv - https://github.com/CartoDB/cartodb-management/issues/4383
It seems that a text file (CSV) with XLS extension might leave in2csv stuck. | non_test | check excel files before passing them to it seems that a text file csv with xls extension might leave stuck | 0 |
66,293 | 14,768,158,884 | IssuesEvent | 2021-01-10 10:42:14 | liorzilberg/swagger-core | https://api.github.com/repos/liorzilberg/swagger-core | opened | CVE-2019-16942 (High) detected in jackson-databind-2.9.5.jar | security vulnerability | ## CVE-2019-16942 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: swagger-core/modules/swagger-integration/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,swagger-core/modules/swagger-models/target/lib/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/liorzilberg/swagger-core/commits/bf7d49a31f9fb41a5a4907a3f3445fb00493a0f5">bf7d49a31f9fb41a5a4907a3f3445fb00493a0f5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-16942 (High) detected in jackson-databind-2.9.5.jar - ## CVE-2019-16942 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: swagger-core/modules/swagger-integration/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,swagger-core/modules/swagger-models/target/lib/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/liorzilberg/swagger-core/commits/bf7d49a31f9fb41a5a4907a3f3445fb00493a0f5">bf7d49a31f9fb41a5a4907a3f3445fb00493a0f5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file swagger core modules swagger integration pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar swagger core modules swagger models target lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the commons dbcp jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of org apache commons dbcp datasources sharedpooldatasource and org apache commons dbcp datasources peruserpooldatasource mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
701,972 | 24,117,957,820 | IssuesEvent | 2022-09-20 16:06:14 | DanielCoelho112/synfeal | https://api.github.com/repos/DanielCoelho112/synfeal | closed | Combining the global features from PointNet with the local features from CNN | enhancement low priority | Hi @miguelriemoliveira and @pmdjdias,
today I had an idea that we could explore: build a neural network that receives both the point cloud and gray image. A sketch of the architecture is the following:

Using the point cloud we can extract the **global features** and using the gray image we can extract **local features**. By merging both information, maybe we can achieve something...
Of course, before entering this we first need to explore both modalities individually, but this could be a good place to innovate. | 1.0 | Combining the global features from PointNet with the local features from CNN - Hi @miguelriemoliveira and @pmdjdias,
today I had an idea that we could explore: build a neural network that receives both the point cloud and gray image. A sketch of the architecture is the following:

Using the point cloud we can extract the **global features** and using the gray image we can extract **local features**. By merging both information, maybe we can achieve something...
Of course, before entering this we first need to explore both modalities individually, but this could be a good place to innovate. | non_test | combining the global features from pointnet with the local features from cnn hi miguelriemoliveira and pmdjdias today i had an idea that we could explore build a neural network that receives both the point cloud and gray image a sketch of the architecture is the following using the point cloud we can extract the global features and using the gray image we can extract local features by merging both information maybe we can achieve something of course before entering this we first need to explore both modalities individually but this could be a good place to innovate | 0 |
277,069 | 24,046,260,349 | IssuesEvent | 2022-09-16 08:36:51 | IDgis/geoportaal-test | https://api.github.com/repos/IDgis/geoportaal-test | closed | Tekst scherm onderzoeksbank zonder copyright of jaartal | gebruikerstest wens impact laag onderzoeksbank | Links onderin het scherm van de onderzoeksbank daar staat "(c) Overijssel 2020". Copyright is onjuist en jaartal ook. Verzoek beide weg te laten en alleen te laten staan: Onderzoekbank provincie Overijssel.
| 1.0 | Tekst scherm onderzoeksbank zonder copyright of jaartal - Links onderin het scherm van de onderzoeksbank daar staat "(c) Overijssel 2020". Copyright is onjuist en jaartal ook. Verzoek beide weg te laten en alleen te laten staan: Onderzoekbank provincie Overijssel.
| test | tekst scherm onderzoeksbank zonder copyright of jaartal links onderin het scherm van de onderzoeksbank daar staat c overijssel copyright is onjuist en jaartal ook verzoek beide weg te laten en alleen te laten staan onderzoekbank provincie overijssel | 1 |
24,213 | 7,467,771,867 | IssuesEvent | 2018-04-02 16:33:07 | gfngfn/SATySFi | https://api.github.com/repos/gfngfn/SATySFi | closed | Use opam repo instead of submodules | build | See: https://github.com/ocamllabs/advanced-fp-repo
A customized repo can be set up to have your customized camlpdf, otfm, and ucorelib there.
I can send some pull requests when I'm free. | 1.0 | Use opam repo instead of submodules - See: https://github.com/ocamllabs/advanced-fp-repo
A customized repo can be set up to have your customized camlpdf, otfm, and ucorelib there.
I can send some pull requests when I'm free. | non_test | use opam repo instead of submodules see a customized repo can be set up to have your customized camlpdf otfm and ucorelib there i can send some pull requests when i m free | 0 |
365,167 | 10,778,779,609 | IssuesEvent | 2019-11-04 09:04:05 | ROCm-Developer-Tools/HIP | https://api.github.com/repos/ROCm-Developer-Tools/HIP | closed | [HIPIFY] hipify-clang and hipify-perl are not changing CUB lines to hipCUB. | difficulty:B_Medium hipify priority: P4 (low) type:feature | hipify-clang and hipify-perl are not changing CUB lines.
Hipified code:
```
#include <hip/hip_runtime.h>
#include <iostream>
#include <hiprand.h>
#include <cub/cub.cuh>
template <typename T>
__global__ void sort(const T* data_in, T* data_out){
typedef cub::BlockRadixSort<T, 1024, 4> BlockRadixSortT;
__shared__ typename BlockRadixSortT::TempStorage tmp_sort;
double items[4];
int i0 = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
for (int i = 0; i < 4; ++i){
items[i] = data_in[i0 + i];
}
BlockRadixSortT(tmp_sort).Sort(items);
for (int i = 0; i < 4; ++i){
data_out[i0 + i] = items[i];
}
}
int main(){
double* d_gpu = NULL;
double* result_gpu = NULL;
double* data_sorted = new double[4096];
// Allocate memory on the GPU
hipMalloc(&d_gpu, 4096 * sizeof(double));
hipMalloc(&result_gpu, 4096 * sizeof(double));
hiprandGenerator_t gen;
// Create generator
hiprandCreateGenerator(&gen, HIPRAND_RNG_PSEUDO_DEFAULT);
// Fill array with random numbers
hiprandGenerateNormalDouble(gen, d_gpu, 4096, 0.0, 1.0);
// Destroy generator
hiprandDestroyGenerator(gen);
// Sort data
hipLaunchKernelGGL((sort), dim3(1), dim3(1024), 0, 0, d_gpu, result_gpu);
hipMemcpy(data_sorted, result_gpu, 4096 * sizeof(double), hipMemcpyDeviceToHost);
// Write the sorted data to standard out
for (int i = 0; i < 4096; ++i){
std::cout << data_sorted[i] << ", ";
}
std::cout << std::endl;
}
```
Working code:
```
#include <hip/hip_runtime.h>
#include <iostream>
#include <hiprand.h>
#include <hipcub/hipcub.hpp> // THIS LINE
template <typename T>
__global__ void sort(const T* data_in, T* data_out){
typedef hipcub::BlockRadixSort<T, 1024, 4> BlockRadixSortT; // THIS LINE
__shared__ typename BlockRadixSortT::TempStorage tmp_sort;
double items[4];
int i0 = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
for (int i = 0; i < 4; ++i){
items[i] = data_in[i0 + i];
}
BlockRadixSortT(tmp_sort).Sort(items);
for (int i = 0; i < 4; ++i){
data_out[i0 + i] = items[i];
}
}
int main(){
double* d_gpu = NULL;
double* result_gpu = NULL;
double* data_sorted = new double[4096];
// Allocate memory on the GPU
hipMalloc(&d_gpu, 4096 * sizeof(double));
hipMalloc(&result_gpu, 4096 * sizeof(double));
hiprandGenerator_t gen;
// Create generator
hiprandCreateGenerator(&gen, HIPRAND_RNG_PSEUDO_DEFAULT);
// Fill array with random numbers
hiprandGenerateNormalDouble(gen, d_gpu, 4096, 0.0, 1.0);
// Destroy generator
hiprandDestroyGenerator(gen);
// Sort data
hipLaunchKernelGGL(sort, dim3(1), dim3(1024), 0, 0, d_gpu, result_gpu);
hipMemcpy(data_sorted, result_gpu, 4096 * sizeof(double), hipMemcpyDeviceToHost);
// Write the sorted data to standard out
for (int i = 0; i < 4096; ++i){
std::cout << data_sorted[i] << ", ";
}
std::cout << std::endl;
}
```
Original CUDA code:
```
#include <iostream>
#include <curand.h>
#include <cub/cub.cuh>
template <typename T>
__global__ void sort(const T* data_in, T* data_out){
typedef cub::BlockRadixSort<T, 1024, 4> BlockRadixSortT;
__shared__ typename BlockRadixSortT::TempStorage tmp_sort;
double items[4];
int i0 = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
for (int i = 0; i < 4; ++i){
items[i] = data_in[i0 + i];
}
BlockRadixSortT(tmp_sort).Sort(items);
for (int i = 0; i < 4; ++i){
data_out[i0 + i] = items[i];
}
}
int main(){
double* d_gpu = NULL;
double* result_gpu = NULL;
double* data_sorted = new double[4096];
// Allocate memory on the GPU
cudaMalloc(&d_gpu, 4096 * sizeof(double));
cudaMalloc(&result_gpu, 4096 * sizeof(double));
curandGenerator_t gen;
// Create generator
curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
// Fill array with random numbers
curandGenerateNormalDouble(gen, d_gpu, 4096, 0.0, 1.0);
// Destroy generator
curandDestroyGenerator(gen);
// Sort data
sort<<<1, 1024>>>(d_gpu, result_gpu);
cudaMemcpy(data_sorted, result_gpu, 4096 * sizeof(double), cudaMemcpyDeviceToHost);
// Write the sorted data to standard out
for (int i = 0; i < 4096; ++i){
std::cout << data_sorted[i] << ", ";
}
std::cout << std::endl;
}
``` | 1.0 | [HIPIFY] hipify-clang and hipify-perl are not changing CUB lines to hipCUB. - hipify-clang and hipify-perl are not changing CUB lines.
Hipified code:
```
#include <hip/hip_runtime.h>
#include <iostream>
#include <hiprand.h>
#include <cub/cub.cuh>
template <typename T>
__global__ void sort(const T* data_in, T* data_out){
typedef cub::BlockRadixSort<T, 1024, 4> BlockRadixSortT;
__shared__ typename BlockRadixSortT::TempStorage tmp_sort;
double items[4];
int i0 = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
for (int i = 0; i < 4; ++i){
items[i] = data_in[i0 + i];
}
BlockRadixSortT(tmp_sort).Sort(items);
for (int i = 0; i < 4; ++i){
data_out[i0 + i] = items[i];
}
}
int main(){
double* d_gpu = NULL;
double* result_gpu = NULL;
double* data_sorted = new double[4096];
// Allocate memory on the GPU
hipMalloc(&d_gpu, 4096 * sizeof(double));
hipMalloc(&result_gpu, 4096 * sizeof(double));
hiprandGenerator_t gen;
// Create generator
hiprandCreateGenerator(&gen, HIPRAND_RNG_PSEUDO_DEFAULT);
// Fill array with random numbers
hiprandGenerateNormalDouble(gen, d_gpu, 4096, 0.0, 1.0);
// Destroy generator
hiprandDestroyGenerator(gen);
// Sort data
hipLaunchKernelGGL((sort), dim3(1), dim3(1024), 0, 0, d_gpu, result_gpu);
hipMemcpy(data_sorted, result_gpu, 4096 * sizeof(double), hipMemcpyDeviceToHost);
// Write the sorted data to standard out
for (int i = 0; i < 4096; ++i){
std::cout << data_sorted[i] << ", ";
}
std::cout << std::endl;
}
```
Working code:
```
#include <hip/hip_runtime.h>
#include <iostream>
#include <hiprand.h>
#include <hipcub/hipcub.hpp> // THIS LINE
template <typename T>
__global__ void sort(const T* data_in, T* data_out){
typedef hipcub::BlockRadixSort<T, 1024, 4> BlockRadixSortT; // THIS LINE
__shared__ typename BlockRadixSortT::TempStorage tmp_sort;
double items[4];
int i0 = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
for (int i = 0; i < 4; ++i){
items[i] = data_in[i0 + i];
}
BlockRadixSortT(tmp_sort).Sort(items);
for (int i = 0; i < 4; ++i){
data_out[i0 + i] = items[i];
}
}
int main(){
double* d_gpu = NULL;
double* result_gpu = NULL;
double* data_sorted = new double[4096];
// Allocate memory on the GPU
hipMalloc(&d_gpu, 4096 * sizeof(double));
hipMalloc(&result_gpu, 4096 * sizeof(double));
hiprandGenerator_t gen;
// Create generator
hiprandCreateGenerator(&gen, HIPRAND_RNG_PSEUDO_DEFAULT);
// Fill array with random numbers
hiprandGenerateNormalDouble(gen, d_gpu, 4096, 0.0, 1.0);
// Destroy generator
hiprandDestroyGenerator(gen);
// Sort data
hipLaunchKernelGGL(sort, dim3(1), dim3(1024), 0, 0, d_gpu, result_gpu);
hipMemcpy(data_sorted, result_gpu, 4096 * sizeof(double), hipMemcpyDeviceToHost);
// Write the sorted data to standard out
for (int i = 0; i < 4096; ++i){
std::cout << data_sorted[i] << ", ";
}
std::cout << std::endl;
}
```
Original CUDA code:
```
#include <iostream>
#include <curand.h>
#include <cub/cub.cuh>
template <typename T>
__global__ void sort(const T* data_in, T* data_out){
typedef cub::BlockRadixSort<T, 1024, 4> BlockRadixSortT;
__shared__ typename BlockRadixSortT::TempStorage tmp_sort;
double items[4];
int i0 = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
for (int i = 0; i < 4; ++i){
items[i] = data_in[i0 + i];
}
BlockRadixSortT(tmp_sort).Sort(items);
for (int i = 0; i < 4; ++i){
data_out[i0 + i] = items[i];
}
}
int main(){
double* d_gpu = NULL;
double* result_gpu = NULL;
double* data_sorted = new double[4096];
// Allocate memory on the GPU
cudaMalloc(&d_gpu, 4096 * sizeof(double));
cudaMalloc(&result_gpu, 4096 * sizeof(double));
curandGenerator_t gen;
// Create generator
curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
// Fill array with random numbers
curandGenerateNormalDouble(gen, d_gpu, 4096, 0.0, 1.0);
// Destroy generator
curandDestroyGenerator(gen);
// Sort data
sort<<<1, 1024>>>(d_gpu, result_gpu);
cudaMemcpy(data_sorted, result_gpu, 4096 * sizeof(double), cudaMemcpyDeviceToHost);
// Write the sorted data to standard out
for (int i = 0; i < 4096; ++i){
std::cout << data_sorted[i] << ", ";
}
std::cout << std::endl;
}
``` | non_test | hipify clang and hipify perl are not changing cub lines to hipcub hipify clang and hipify perl are not changing cub lines hipified code include include include include template global void sort const t data in t data out typedef cub blockradixsort blockradixsortt shared typename blockradixsortt tempstorage tmp sort double items int blockidx x blockdim x threadidx x for int i i i items data in blockradixsortt tmp sort sort items for int i i i data out items int main double d gpu null double result gpu null double data sorted new double allocate memory on the gpu hipmalloc d gpu sizeof double hipmalloc result gpu sizeof double hiprandgenerator t gen create generator hiprandcreategenerator gen hiprand rng pseudo default fill array with random numbers hiprandgeneratenormaldouble gen d gpu destroy generator hipranddestroygenerator gen sort data hiplaunchkernelggl sort d gpu result gpu hipmemcpy data sorted result gpu sizeof double hipmemcpydevicetohost write the sorted data to standard out for int i i i std cout data sorted std cout std endl working code include include include include this line template global void sort const t data in t data out typedef hipcub blockradixsort blockradixsortt this line shared typename blockradixsortt tempstorage tmp sort double items int blockidx x blockdim x threadidx x for int i i i items data in blockradixsortt tmp sort sort items for int i i i data out items int main double d gpu null double result gpu null double data sorted new double allocate memory on the gpu hipmalloc d gpu sizeof double hipmalloc result gpu sizeof double hiprandgenerator t gen create generator hiprandcreategenerator gen hiprand rng pseudo default fill array with random numbers hiprandgeneratenormaldouble gen d gpu destroy generator hipranddestroygenerator gen sort data hiplaunchkernelggl sort d gpu result gpu hipmemcpy data sorted result gpu sizeof double hipmemcpydevicetohost write the sorted data to standard out for int i i i std cout data sorted std cout std endl original cuda code include include include template global void sort const t data in t data out typedef cub blockradixsort blockradixsortt shared typename blockradixsortt tempstorage tmp sort double items int blockidx x blockdim x threadidx x for int i i i items data in blockradixsortt tmp sort sort items for int i i i data out items int main double d gpu null double result gpu null double data sorted new double allocate memory on the gpu cudamalloc d gpu sizeof double cudamalloc result gpu sizeof double curandgenerator t gen create generator curandcreategenerator gen curand rng pseudo default fill array with random numbers curandgeneratenormaldouble gen d gpu destroy generator curanddestroygenerator gen sort data sort d gpu result gpu cudamemcpy data sorted result gpu sizeof double cudamemcpydevicetohost write the sorted data to standard out for int i i i std cout data sorted std cout std endl | 0 |
93,631 | 8,439,850,915 | IssuesEvent | 2018-10-18 04:11:43 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Intermittent parallel/test-http-regr-gh-2821 failure on V6.9.2 (Windows) | CI / flaky test http test windows | Version: 6.9.2
Platform: 32-bit (Win2012)
```
=== release test-http-regr-gh-2821 ===
Path: parallel/test-http-regr-gh-2821
events.js:160
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at exports._errnoException (util.js:1022:11)
at TCP.onread (net.js:569:26)
Command: C:\node\node\out\Release\node.exe C:\node\node\test\parallel\test-http-regr-gh-2821.js
```
We're seeing this failure intermittently on Windows. | 2.0 | Intermittent parallel/test-http-regr-gh-2821 failure on V6.9.2 (Windows) - Version: 6.9.2
Platform: 32-bit (Win2012)
```
=== release test-http-regr-gh-2821 ===
Path: parallel/test-http-regr-gh-2821
events.js:160
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at exports._errnoException (util.js:1022:11)
at TCP.onread (net.js:569:26)
Command: C:\node\node\out\Release\node.exe C:\node\node\test\parallel\test-http-regr-gh-2821.js
```
We're seeing this failure intermittently on Windows. | test | intermittent parallel test http regr gh failure on windows version platform bit release test http regr gh path parallel test http regr gh events js throw er unhandled error event error read econnreset at exports errnoexception util js at tcp onread net js command c node node out release node exe c node node test parallel test http regr gh js we re seeing this failure intermittently on windows | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.