Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
438,983 | 12,675,192,814 | IssuesEvent | 2020-06-19 00:50:52 | Poobslag/turbofat | https://api.github.com/repos/Poobslag/turbofat | closed | Loading screen stays locked at 99% forever | bug priority-3 | When starting the game, there's a chance for the game to go up to 99% loaded but never fully load.
This happened frequently when I started testing on android, I never saw it happen on my desktop. | 1.0 | Loading screen stays locked at 99% forever - When starting the game, there's a chance for the game to go up to 99% loaded but never fully load.
This happened frequently when I started testing on android, I never saw it happen on my desktop. | non_test | loading screen stays locked at forever when starting the game there s a chance for the game to go up to loaded but never fully load this happened frequently when i started testing on android i never saw it happen on my desktop | 0 |
76,035 | 26,207,028,777 | IssuesEvent | 2023-01-04 00:11:22 | scoutplan/scoutplan | https://api.github.com/repos/scoutplan/scoutplan | closed | [Scoutplan Production/production] NoMethodError: undefined method `participants' for nil:NilClass | defect | ## Backtrace
line 5 of [PROJECT_ROOT]/app/views/events/partials/show/_chat.slim: _app_views_events_partials_show__chat_slim__4314489820536178484_1604680
line 10 of [PROJECT_ROOT]/app/views/events/partials/show/_sidecar.slim: _app_views_events_partials_show__sidecar_slim___4225736444204542362_1603680
line 6 of [PROJECT_ROOT]/app/views/events/show.html.slim: block in _app_views_events_show_html_slim___3708478878251258444_1603660
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/92163317) | 1.0 | [Scoutplan Production/production] NoMethodError: undefined method `participants' for nil:NilClass - ## Backtrace
line 5 of [PROJECT_ROOT]/app/views/events/partials/show/_chat.slim: _app_views_events_partials_show__chat_slim__4314489820536178484_1604680
line 10 of [PROJECT_ROOT]/app/views/events/partials/show/_sidecar.slim: _app_views_events_partials_show__sidecar_slim___4225736444204542362_1603680
line 6 of [PROJECT_ROOT]/app/views/events/show.html.slim: block in _app_views_events_show_html_slim___3708478878251258444_1603660
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/92163317) | non_test | nomethoderror undefined method participants for nil nilclass backtrace line of app views events partials show chat slim app views events partials show chat slim line of app views events partials show sidecar slim app views events partials show sidecar slim line of app views events show html slim block in app views events show html slim | 0 |
52,937 | 6,286,994,775 | IssuesEvent | 2017-07-19 14:09:07 | apache/couchdb | https://api.github.com/repos/apache/couchdb | closed | badmatch on #rep record in couch_replicator_compact_tests | testsuite | ```
Compaction during replication tests
local -> local
couch_replicator_compact_tests:95: should_run_replication...*failed*
in function gen_server:call/2 (gen_server.erl, line 204)
in call from couch_replicator_compact_tests:'-wait_for_replicator/1-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:wait_for_replicator/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:check_active_tasks/4 (test/couch_replicator_compact_tests.erl, line 115)
**exit:{{{badmatch,
{rep,
{"924741351f14d1d3b6f1c4b9ae2f6de4","+continuous"},
<<"eunit-test-db-1500325249968559">>,
<<"eunit-test-db-1500325249972195">>,
[{checkpoint_interval,30000},
{connection_timeout,30000},
{continuous,true},
{http_connections,20},
{retries,10},
{socket_options,[{keepalive,...},{...}]},
{use_checkpoints,true},
{worker_batch_size,500},
{worker_processes,...}],
{user_ctx,null,[<<"_admin">>],undefined},
db,nil,null,null,
{0,0,0}}},
[{couch_replicator_scheduler_job,terminate,2,
[{file,"src/couch_replicator_scheduler_job.erl"},{line,423}]},
{gen_server,try_terminate,3,[{file,"gen_server.erl"},{line,643}]},
{gen_server,terminate,7,[{file,"gen_server.erl"},{line,809}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]},
{gen_server,call,[<0.31427.1>,get_details]}}
output:<<"">>
couch_replicator_compact_tests:86: should_all_processes_be_alive...*failed*
in function couch_replicator_compact_tests:'-should_all_processes_be_alive/3-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 89)
in call from couch_replicator_compact_tests:'-should_all_processes_be_alive/3-fun-3-'/3 (test/couch_replicator_compact_tests.erl, line 89)
**error:{assert,[{module,couch_replicator_compact_tests},
{line,89},
{expression,"is_process_alive ( RepPid )"},
{expected,true},
{value,false}]}
output:<<"">>
couch_replicator_compact_tests:166: should_populate_and_compact...*failed*
in function couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 176)
in call from couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-8-'/6 (test/couch_replicator_compact_tests.erl, line 176)
in call from lists:foreach/2 (lists.erl, line 1337)
in call from couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-9-'/5 (test/couch_replicator_compact_tests.erl, line 169)
**error:{assert,[{module,couch_replicator_compact_tests},
{line,176},
{expression,"is_process_alive ( RepPid )"},
{expected,true},
{value,false}]}
output:<<"">>
couch_replicator_compact_tests:213: should_wait_target_in_sync...ok
couch_replicator_compact_tests:98: should_ensure_replication_still_running...*failed*
in function couch_replicator_test_helper:'-get_pid/1-fun-0-'/1 (test/couch_replicator_test_helper.erl, line 116)
in call from couch_replicator_test_helper:get_pid/1 (test/couch_replicator_test_helper.erl, line 116)
in call from couch_replicator_compact_tests:rep_details/1 (test/couch_replicator_compact_tests.erl, line 135)
in call from couch_replicator_compact_tests:'-wait_for_replicator/1-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:wait_for_replicator/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:check_active_tasks/4 (test/couch_replicator_compact_tests.erl, line 115)
**error:{assert,[{module,couch_replicator_test_helper},
{line,116},
{expression,"is_pid ( Pid )"},
{expected,true},
{value,false}]}
output:<<"">>
couch_replicator_compact_tests:160: should_cancel_replication...ok
couch_replicator_compact_tests:246: should_compare_databases...ok
[done in 0.032 s]
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
local -> remote
```
https://couchdb-vm2.apache.org/ci_errorlogs/travis-couchdb-254606046-2017-07-17T21%3A02%3A41.583379/couchlog.tar.gz? | 1.0 | badmatch on #rep record in couch_replicator_compact_tests - ```
Compaction during replication tests
local -> local
couch_replicator_compact_tests:95: should_run_replication...*failed*
in function gen_server:call/2 (gen_server.erl, line 204)
in call from couch_replicator_compact_tests:'-wait_for_replicator/1-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:wait_for_replicator/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:check_active_tasks/4 (test/couch_replicator_compact_tests.erl, line 115)
**exit:{{{badmatch,
{rep,
{"924741351f14d1d3b6f1c4b9ae2f6de4","+continuous"},
<<"eunit-test-db-1500325249968559">>,
<<"eunit-test-db-1500325249972195">>,
[{checkpoint_interval,30000},
{connection_timeout,30000},
{continuous,true},
{http_connections,20},
{retries,10},
{socket_options,[{keepalive,...},{...}]},
{use_checkpoints,true},
{worker_batch_size,500},
{worker_processes,...}],
{user_ctx,null,[<<"_admin">>],undefined},
db,nil,null,null,
{0,0,0}}},
[{couch_replicator_scheduler_job,terminate,2,
[{file,"src/couch_replicator_scheduler_job.erl"},{line,423}]},
{gen_server,try_terminate,3,[{file,"gen_server.erl"},{line,643}]},
{gen_server,terminate,7,[{file,"gen_server.erl"},{line,809}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]},
{gen_server,call,[<0.31427.1>,get_details]}}
output:<<"">>
couch_replicator_compact_tests:86: should_all_processes_be_alive...*failed*
in function couch_replicator_compact_tests:'-should_all_processes_be_alive/3-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 89)
in call from couch_replicator_compact_tests:'-should_all_processes_be_alive/3-fun-3-'/3 (test/couch_replicator_compact_tests.erl, line 89)
**error:{assert,[{module,couch_replicator_compact_tests},
{line,89},
{expression,"is_process_alive ( RepPid )"},
{expected,true},
{value,false}]}
output:<<"">>
couch_replicator_compact_tests:166: should_populate_and_compact...*failed*
in function couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 176)
in call from couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-8-'/6 (test/couch_replicator_compact_tests.erl, line 176)
in call from lists:foreach/2 (lists.erl, line 1337)
in call from couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-9-'/5 (test/couch_replicator_compact_tests.erl, line 169)
**error:{assert,[{module,couch_replicator_compact_tests},
{line,176},
{expression,"is_process_alive ( RepPid )"},
{expected,true},
{value,false}]}
output:<<"">>
couch_replicator_compact_tests:213: should_wait_target_in_sync...ok
couch_replicator_compact_tests:98: should_ensure_replication_still_running...*failed*
in function couch_replicator_test_helper:'-get_pid/1-fun-0-'/1 (test/couch_replicator_test_helper.erl, line 116)
in call from couch_replicator_test_helper:get_pid/1 (test/couch_replicator_test_helper.erl, line 116)
in call from couch_replicator_compact_tests:rep_details/1 (test/couch_replicator_compact_tests.erl, line 135)
in call from couch_replicator_compact_tests:'-wait_for_replicator/1-fun-0-'/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:wait_for_replicator/1 (test/couch_replicator_compact_tests.erl, line 146)
in call from couch_replicator_compact_tests:check_active_tasks/4 (test/couch_replicator_compact_tests.erl, line 115)
**error:{assert,[{module,couch_replicator_test_helper},
{line,116},
{expression,"is_pid ( Pid )"},
{expected,true},
{value,false}]}
output:<<"">>
couch_replicator_compact_tests:160: should_cancel_replication...ok
couch_replicator_compact_tests:246: should_compare_databases...ok
[done in 0.032 s]
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
local -> remote
```
https://couchdb-vm2.apache.org/ci_errorlogs/travis-couchdb-254606046-2017-07-17T21%3A02%3A41.583379/couchlog.tar.gz? | test | badmatch on rep record in couch replicator compact tests compaction during replication tests local local couch replicator compact tests should run replication failed in function gen server call gen server erl line in call from couch replicator compact tests wait for replicator fun test couch replicator compact tests erl line in call from couch replicator compact tests wait for replicator test couch replicator compact tests erl line in call from couch replicator compact tests check active tasks test couch replicator compact tests erl line exit badmatch rep continuous checkpoint interval connection timeout continuous true http connections retries socket options use checkpoints true worker batch size worker processes user ctx null undefined db nil null null couch replicator scheduler job terminate gen server try terminate gen server terminate proc lib init p do apply gen server call output couch replicator compact tests should all processes be alive failed in function couch replicator compact tests should all processes be alive fun test couch replicator compact tests erl line in call from couch replicator compact tests should all processes be alive fun test couch replicator compact tests erl line error assert module couch replicator compact tests line expression is process alive reppid expected true value false output couch replicator compact tests should populate and compact failed in function couch replicator compact tests should populate and compact fun test couch replicator compact tests erl line in call from couch replicator compact tests should populate and compact fun test couch replicator compact tests erl line in call from lists foreach lists erl line in call from couch replicator compact tests should populate and compact fun test couch replicator compact tests erl line error assert module couch replicator compact tests line expression is process alive reppid expected true value false output couch replicator compact tests should wait target in sync ok couch replicator compact tests should ensure replication still running failed in function couch replicator test helper get pid fun test couch replicator test helper erl line in call from couch replicator test helper get pid test couch replicator test helper erl line in call from couch replicator compact tests rep details test couch replicator compact tests erl line in call from couch replicator compact tests wait for replicator fun test couch replicator compact tests erl line in call from couch replicator compact tests wait for replicator test couch replicator compact tests erl line in call from couch replicator compact tests check active tasks test couch replicator compact tests erl line error assert module couch replicator test helper line expression is pid pid expected true value false output couch replicator compact tests should cancel replication ok couch replicator compact tests should compare databases ok memory supervisor port memsup erlang has closed cpu supervisor port cpu sup erlang has closed local remote | 1 |
76,936 | 3,506,199,996 | IssuesEvent | 2016-01-08 04:33:53 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Crash when importing due to ending an unopen transaction | accepted Priority-High | It looks like we're closing the collection at one point inside the importing routine which is interfering with the rollback mechanism we have in DeckTask. Maybe we need to move the rollback / transaction code into the importing routine itself.
android.database.sqlite.SQLiteException: cannot rollback - no transaction is active (code 1)
at android.database.sqlite.SQLiteConnection.nativeExecute(Native Method)
at android.database.sqlite.SQLiteConnection.execute(SQLiteConnection.java:552)
at android.database.sqlite.SQLiteSession.endTransactionUnchecked(SQLiteSession.java:439)
at android.database.sqlite.SQLiteSession.endTransaction(SQLiteSession.java:401)
at android.database.sqlite.SQLiteDatabase.endTransaction(SQLiteDatabase.java:522)
at com.ichi2.async.DeckTask.doInBackgroundImportAdd(DeckTask.java:801)
at com.ichi2.async.DeckTask.doInBackground(DeckTask.java:280)
at com.ichi2.async.DeckTask.doInBackground(DeckTask.java:66) | 1.0 | Crash when importing due to ending an unopen transaction - It looks like we're closing the collection at one point inside the importing routine which is interfering with the rollback mechanism we have in DeckTask. Maybe we need to move the rollback / transaction code into the importing routine itself.
android.database.sqlite.SQLiteException: cannot rollback - no transaction is active (code 1)
at android.database.sqlite.SQLiteConnection.nativeExecute(Native Method)
at android.database.sqlite.SQLiteConnection.execute(SQLiteConnection.java:552)
at android.database.sqlite.SQLiteSession.endTransactionUnchecked(SQLiteSession.java:439)
at android.database.sqlite.SQLiteSession.endTransaction(SQLiteSession.java:401)
at android.database.sqlite.SQLiteDatabase.endTransaction(SQLiteDatabase.java:522)
at com.ichi2.async.DeckTask.doInBackgroundImportAdd(DeckTask.java:801)
at com.ichi2.async.DeckTask.doInBackground(DeckTask.java:280)
at com.ichi2.async.DeckTask.doInBackground(DeckTask.java:66) | non_test | crash when importing due to ending an unopen transaction it looks like we re closing the collection at one point inside the importing routine which is interfering with the rollback mechanism we have in decktask maybe we need to move the rollback transaction code into the importing routine itself android database sqlite sqliteexception cannot rollback no transaction is active code at android database sqlite sqliteconnection nativeexecute native method at android database sqlite sqliteconnection execute sqliteconnection java at android database sqlite sqlitesession endtransactionunchecked sqlitesession java at android database sqlite sqlitesession endtransaction sqlitesession java at android database sqlite sqlitedatabase endtransaction sqlitedatabase java at com async decktask doinbackgroundimportadd decktask java at com async decktask doinbackground decktask java at com async decktask doinbackground decktask java | 0 |
126,627 | 26,886,312,448 | IssuesEvent | 2023-02-06 03:45:02 | bevyengine/bevy | https://api.github.com/repos/bevyengine/bevy | closed | Remove the ability to directly use strings as labels | A-ECS C-Code-Quality | ## What problem does this solve or what need does it fill?
Following #4219, system function types can be used directly as labels.
This removes the last serious use of strings as labels: for "quick and dirty" implementations.
"Stringly typed" labels are inferior because:
1. They are not IDE or compiler aware, and so typos are very challenging to detect.
2. They can clash in very surprising ways between crates.
3. They cannot be kept private.
## What solution would you like?
1. Remove the ability to use string types as labels.
2. Update codebase, including examples and tests, to reflect this change.
## Additional context
Raised in #4340 by @DJMcNab.
Note that labels that store a string may be useful in some applications for e.g. scripting integration. This is still supported: you just have to newtype your string. | 1.0 | Remove the ability to directly use strings as labels - ## What problem does this solve or what need does it fill?
Following #4219, system function types can be used directly as labels.
This removes the last serious use of strings as labels: for "quick and dirty" implementations.
"Stringly typed" labels are inferior because:
1. They are not IDE or compiler aware, and so typos are very challenging to detect.
2. They can clash in very surprising ways between crates.
3. They cannot be kept private.
## What solution would you like?
1. Remove the ability to use string types as labels.
2. Update codebase, including examples and tests, to reflect this change.
## Additional context
Raised in #4340 by @DJMcNab.
Note that labels that store a string may be useful in some applications for e.g. scripting integration. This is still supported: you just have to newtype your string. | non_test | remove the ability to directly use strings as labels what problem does this solve or what need does it fill following system function types can be used directly as labels this removes the last serious use of strings as labels for quick and dirty implementations stringly typed labels are inferior because they are not ide or compiler aware and so typos are very challenging to detect they can clash in very surprising ways between crates they cannot be kept private what solution would you like remove the ability to use string types as labels update codebase including examples and tests to reflect this change additional context raised in by djmcnab note that labels that store a string may be useful in some applications for e g scripting integration this is still supported you just have to newtype your string | 0 |
179,557 | 13,887,529,478 | IssuesEvent | 2020-10-19 04:05:54 | apache/shardingsphere | https://api.github.com/repos/apache/shardingsphere | opened | Add H2DatabaseMetaDataDialectHandlerTest | in: test status: volunteer wanted | Hi, alll the friends in the community.
Could you do me a little favor? We need to add test different database dialect handler , ex (H2, MariaDB, PostgreSQL, SQLServer).
How to contributer?
1. First,you can follow this : https://shardingsphere.apache.org/community/en/contribute/contributor/
2. Then, you can new H2DatabaseMetaDataDialectHandlerTest in shardingsphere-infra-common
3. Please extends AbstractDatabaseMetaDataDialectHandlerTest
4. complete to add assertGetSchema(), assertFormatTableNamePattern() ,assertGetQuoteCharacter()
You don't have to worry at all , You can refer to MySQLDatabaseMetaDataDialectHandlerTest
If you have any other questions, please feel free to reply here. We will help you with your questions
If you complete this task, you will become an official contributor, Thank you again for your contribute。 | 1.0 | Add H2DatabaseMetaDataDialectHandlerTest - Hi, alll the friends in the community.
Could you do me a little favor? We need to add test different database dialect handler , ex (H2, MariaDB, PostgreSQL, SQLServer).
How to contributer?
1. First,you can follow this : https://shardingsphere.apache.org/community/en/contribute/contributor/
2. Then, you can new H2DatabaseMetaDataDialectHandlerTest in shardingsphere-infra-common
3. Please extends AbstractDatabaseMetaDataDialectHandlerTest
4. complete to add assertGetSchema(), assertFormatTableNamePattern() ,assertGetQuoteCharacter()
You don't have to worry at all , You can refer to MySQLDatabaseMetaDataDialectHandlerTest
If you have any other questions, please feel free to reply here. We will help you with your questions
If you complete this task, you will become an official contributor, Thank you again for your contribute。 | test | add hi alll the friends in the community could you do me a little favor we need to add test different database dialect handler ex mariadb postgresql sqlserver how to contributer first you can follow this then you can new in shardingsphere infra common please extends abstractdatabasemetadatadialecthandlertest complete to add assertgetschema assertformattablenamepattern assertgetquotecharacter you don t have to worry at all you can refer to mysqldatabasemetadatadialecthandlertest if you have any other questions please feel free to reply here we will help you with your questions if you complete this task you will become an official contributor thank you again for your contribute | 1 |
728,222 | 25,071,946,315 | IssuesEvent | 2022-11-07 12:52:14 | Uuvana-Studios/longvinter-windows-client | https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client | closed | Autumn Isle Vendor Issues in build 1.0.3 (misspelled vending unit name & low price on item for sale) | Low Priority | Image: https://steamuserimages-a.akamaihd.net/ugc/1861684358566157797/1AADAD038DA908BB55FEB1E19A6720CF2AA0C701/
Build: 1.0.3
Description: At the large northern base POI on the autumn islands, there was a new vending machine put up called Julia's Mysterious Shop at 17°N, 35°W, with a rotating stock of sold items. There is currently 2 issues present with this vending machine:
1) The name has a typo, it is listed as "Julia's Mysterios Shop", with the 'u' in 'Mysterious' missing.
2) In any rotation involving 'Mysterious Bag' for sale, it is listed for a sale price of just 15mk. This amount is absurdly low, as Mysterious Bags are easily tradable for items on Pix's Island that would easily go for 400-500mk elsewhere. The price of the bags from this vendor should be drastically increased, to at least 250-300mk apiece, or else somehow limit the amount of bags that can be purchased by one person. | 1.0 | Autumn Isle Vendor Issues in build 1.0.3 (misspelled vending unit name & low price on item for sale) - Image: https://steamuserimages-a.akamaihd.net/ugc/1861684358566157797/1AADAD038DA908BB55FEB1E19A6720CF2AA0C701/
Build: 1.0.3
Description: At the large northern base POI on the autumn islands, there was a new vending machine put up called Julia's Mysterious Shop at 17°N, 35°W, with a rotating stock of sold items. There is currently 2 issues present with this vending machine:
1) The name has a typo, it is listed as "Julia's Mysterios Shop", with the 'u' in 'Mysterious' missing.
2) In any rotation involving 'Mysterious Bag' for sale, it is listed for a sale price of just 15mk. This amount is absurdly low, as Mysterious Bags are easily tradable for items on Pix's Island that would easily go for 400-500mk elsewhere. The price of the bags from this vendor should be drastically increased, to at least 250-300mk apiece, or else somehow limit the amount of bags that can be purchased by one person. | non_test | autumn isle vendor issues in build misspelled vending unit name low price on item for sale image build description at the large northern base poi on the autumn islands there was a new vending machine put up called julia s mysterious shop at °n °w with a rotating stock of sold items there is currently issues present with this vending machine the name has a typo it is listed as julia s mysterios shop with the u in mysterious missing in any rotation involving mysterious bag for sale it is listed for a sale price of just this amount is absurdly low as mysterious bags are easily tradable for items on pix s island that would easily go for elsewhere the price of the bags from this vendor should be drastically increased to at least apiece or else somehow limit the amount of bags that can be purchased by one person | 0 |
200,077 | 15,089,456,578 | IssuesEvent | 2021-02-06 05:50:28 | kotest/kotest | https://api.github.com/repos/kotest/kotest | closed | Generation of larger sets via Arb.set now throws an exception | bug property-testing | Starting with Kotest 4.4.0, generation of larger sets now fails. This appears related to https://github.com/kotest/kotest/issues/1931
With the change to termination, `Arb.set` now gives up after 1,000 loops. This means that, even when given a generator with sufficient cardinality, passing in a range that generates desired sizes of greater than 1,000 elements will fail.
| 1.0 | Generation of larger sets via Arb.set now throws an exception - Starting with Kotest 4.4.0, generation of larger sets now fails. This appears related to https://github.com/kotest/kotest/issues/1931
With the change to termination, `Arb.set` now gives up after 1,000 loops. This means that, even when given a generator with sufficient cardinality, passing in a range that generates desired sizes of greater than 1,000 elements will fail.
| test | generation of larger sets via arb set now throws an exception starting with kotest generation of larger sets now fails this appears related to with the change to termination arb set now gives up after loops this means that even when given a generator with sufficient cardinality passing in a range that generates desired sizes of greater than elements will fail | 1 |
247,237 | 20,965,998,071 | IssuesEvent | 2022-03-28 06:45:52 | gravitee-io/issues | https://api.github.com/repos/gravitee-io/issues | closed | [cypress] Complete test set regarding Swagger import | project: APIM Test Automation | - [ ] complete Swagger import tests via file (rest to SOAP transformer policy still missing)
- [ ] implement Cypress tests for Swagger import via URL
| 1.0 | [cypress] Complete test set regarding Swagger import - - [ ] complete Swagger import tests via file (rest to SOAP transformer policy still missing)
- [ ] implement Cypress tests for Swagger import via URL
| test | complete test set regarding swagger import complete swagger import tests via file rest to soap transformer policy still missing implement cypress tests for swagger import via url | 1 |
305,268 | 26,374,480,287 | IssuesEvent | 2023-01-12 00:24:19 | phetsims/projectile-motion | https://api.github.com/repos/phetsims/projectile-motion | closed | CT required tandems must be supplied | type:automated-testing | ```
projectile-motion : phet-io-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1673352571583/projectile-motion/projectile-motion_en.html?continuousTest=%7B%22test%22%3A%5B%22projectile-motion%22%2C%22phet-io-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1673352571583%22%2C%22timestamp%22%3A1673354716974%7D&ea&brand=phet-io&phetioStandalone&fuzz&memoryLimit=1000
Query: ea&brand=phet-io&phetioStandalone&fuzz&memoryLimit=1000
Uncaught Error: Assertion failed: required tandems must be supplied
Error: Assertion failed: required tandems must be supplied
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1673352571583/assert/js/assert.js:28:13)
at assert (PhetioObject.ts:200:16)
at initializePhetioObject (Node.ts:6320:10)
at initializePhetioObject (Node.ts:6310:9)
at mutate (Node.ts:830:11)
at (ScreenView.ts:97:4)
at (ProjectileMotionScreenView.ts:114:4)
at (IntroScreenView.js:49:4)
at (IntroScreen.ts:38:15)
at createView (Screen.ts:304:22)
id: Bayes Puppeteer
Snapshot from 1/10/2023, 5:09:31 AM
``` | 1.0 | CT required tandems must be supplied - ```
projectile-motion : phet-io-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1673352571583/projectile-motion/projectile-motion_en.html?continuousTest=%7B%22test%22%3A%5B%22projectile-motion%22%2C%22phet-io-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1673352571583%22%2C%22timestamp%22%3A1673354716974%7D&ea&brand=phet-io&phetioStandalone&fuzz&memoryLimit=1000
Query: ea&brand=phet-io&phetioStandalone&fuzz&memoryLimit=1000
Uncaught Error: Assertion failed: required tandems must be supplied
Error: Assertion failed: required tandems must be supplied
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1673352571583/assert/js/assert.js:28:13)
at assert (PhetioObject.ts:200:16)
at initializePhetioObject (Node.ts:6320:10)
at initializePhetioObject (Node.ts:6310:9)
at mutate (Node.ts:830:11)
at (ScreenView.ts:97:4)
at (ProjectileMotionScreenView.ts:114:4)
at (IntroScreenView.js:49:4)
at (IntroScreen.ts:38:15)
at createView (Screen.ts:304:22)
id: Bayes Puppeteer
Snapshot from 1/10/2023, 5:09:31 AM
``` | test | ct required tandems must be supplied projectile motion phet io fuzz unbuilt query ea brand phet io phetiostandalone fuzz memorylimit uncaught error assertion failed required tandems must be supplied error assertion failed required tandems must be supplied at window assertions assertfunction at assert phetioobject ts at initializephetioobject node ts at initializephetioobject node ts at mutate node ts at screenview ts at projectilemotionscreenview ts at introscreenview js at introscreen ts at createview screen ts id bayes puppeteer snapshot from am | 1 |
211,168 | 16,431,611,872 | IssuesEvent | 2021-05-20 02:55:40 | cancerDHC/tools | https://api.github.com/repos/cancerDHC/tools | closed | Choose a format for publishing mappings | documentation | We currently have an initial, incomplete set of mappings from the SNOMED terms in DICOM to the NCI Thesaurus (https://github.com/cancerDHC/operations/issues/48), and need to chose a format to publish this to the CCDH website that would maintain provenance information. During our last internals call (June 3, 2020), we came up with a list of several possible formats:
- [SSSOM](https://github.com/OBOFoundry/SSSOM) format for sharing mappings with provenance information
- kBOOM/Boomer output to provenance in SSSOM?
- [CTS-2](https://www.omg.org/spec/CTS2/) as an alternate format
- [FHIR concept maps](https://www.hl7.org/fhir/conceptmap.html) borrow and simplify from CTS-2
- [OMOP provenance model](https://www.ohdsi.org/data-standardization/the-common-data-model/)
We also determined that we needed OWL expressions to be allowable, as well as value set mappings.
This issue covers comparing these formats for our purposes and translating the incomplete mappings into that format as a potential exemplar. If one of these formats works for our needs, that would close cancerDHC/operations#22. | 1.0 | Choose a format for publishing mappings - We currently have an initial, incomplete set of mappings from the SNOMED terms in DICOM to the NCI Thesaurus (https://github.com/cancerDHC/operations/issues/48), and need to choose a format to publish this to the CCDH website that would maintain provenance information. During our last internals call (June 3, 2020), we came up with a list of several possible formats:
- [SSSOM](https://github.com/OBOFoundry/SSSOM) format for sharing mappings with provenance information
- kBOOM/Boomer output to provenance in SSSOM?
- [CTS-2](https://www.omg.org/spec/CTS2/) as an alternate format
- [FHIR concept maps](https://www.hl7.org/fhir/conceptmap.html) borrow and simplify from CTS-2
- [OMOP provenance model](https://www.ohdsi.org/data-standardization/the-common-data-model/)
We also determined that we needed OWL expressions to be allowable, as well as value set mappings.
This issue covers comparing these formats for our purposes and translating the incomplete mappings into that format as a potential exemplar. If one of these formats works for our needs, that would close cancerDHC/operations#22. | non_test | choose a format for publishing mappings we currently have an initial incomplete set of mappings from the snomed terms in dicom to the nci thesaurus and need to chose a format to publish this to the ccdh website that would maintain provenance information during our last internals call june we came up with a list of several possible formats format for sharing mappings with provenance information kboom boomer output to provenance in sssom as an alternate format borrow and simplify from cts we also determined that we needed owl expressions to be allowable as well as value set mappings this issue covers comparing these formats for our purposes and to translate the incomplete mappings into that format as a potential exemplar if one of these formats works for our needs that would close cancerdhc operations | 0
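To make the SSSOM option listed above concrete, here is a minimal sketch of a single mapping record written as SSSOM-style TSV. The column names follow the SSSOM spec, but the SNOMED and NCI Thesaurus identifiers and labels below are hypothetical placeholders, not curated mappings from the project.

```python
import csv
import io

# One SSSOM-style mapping row; header names per the SSSOM spec,
# identifiers below are made-up placeholders for illustration only.
columns = [
    "subject_id", "subject_label",
    "predicate_id",
    "object_id", "object_label",
    "mapping_justification",
]
row = {
    "subject_id": "SNOMED:12345",      # hypothetical SNOMED code used in DICOM
    "subject_label": "Example finding",
    "predicate_id": "skos:exactMatch",
    "object_id": "NCIT:C00000",        # hypothetical NCI Thesaurus code
    "object_label": "Example finding",
    "mapping_justification": "semapv:ManualMappingCuration",
}

# Write a header line plus one mapping line as tab-separated values.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=columns, delimiter="\t")
writer.writeheader()
writer.writerow(row)
tsv = buf.getvalue()
print(tsv)
```

Rendered this way, each mapping is one TSV row, which keeps provenance columns such as the mapping justification alongside the mapping itself.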
133,712 | 18,299,056,988 | IssuesEvent | 2021-10-05 23:56:49 | bsbtd/Teste | https://api.github.com/repos/bsbtd/Teste | opened | CVE-2020-14001 (High) detected in kramdown-1.15.0.gem, kramdown-1.17.0.gem | security vulnerability | ## CVE-2020-14001 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>kramdown-1.15.0.gem</b>, <b>kramdown-1.17.0.gem</b></p></summary>
<p>
<details><summary><b>kramdown-1.15.0.gem</b></p></summary>
<p>kramdown is yet-another-markdown-parser but fast, pure Ruby,
using a strict syntax definition and supporting several common extensions.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/kramdown-1.15.0.gem">https://rubygems.org/gems/kramdown-1.15.0.gem</a></p>
<p>
Dependency Hierarchy:
- jekyll-3.6.2.gem (Root Library)
- :x: **kramdown-1.15.0.gem** (Vulnerable Library)
</details>
<details><summary><b>kramdown-1.17.0.gem</b></p></summary>
<p>kramdown is yet-another-markdown-parser but fast, pure Ruby,
using a strict syntax definition and supporting several common extensions.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/kramdown-1.17.0.gem">https://rubygems.org/gems/kramdown-1.17.0.gem</a></p>
<p>
Dependency Hierarchy:
- github-pages-201.gem (Root Library)
- jekyll-theme-midnight-0.1.1.gem
- jekyll-seo-tag-2.5.0.gem
- jekyll-3.8.5.gem
- :x: **kramdown-1.17.0.gem** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The kramdown gem before 2.3.0 for Ruby processes the template option inside Kramdown documents by default, which allows unintended read access (such as template="/etc/passwd") or unintended embedded Ruby code execution (such as a string that begins with template="string://<%= `). NOTE: kramdown is used in Jekyll, GitLab Pages, GitHub Pages, and Thredded Forum.
<p>Publish Date: 2020-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14001>CVE-2020-14001</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001</a></p>
<p>Release Date: 2020-07-17</p>
<p>Fix Resolution: kramdown - 2.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-14001 (High) detected in kramdown-1.15.0.gem, kramdown-1.17.0.gem - ## CVE-2020-14001 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>kramdown-1.15.0.gem</b>, <b>kramdown-1.17.0.gem</b></p></summary>
<p>
<details><summary><b>kramdown-1.15.0.gem</b></p></summary>
<p>kramdown is yet-another-markdown-parser but fast, pure Ruby,
using a strict syntax definition and supporting several common extensions.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/kramdown-1.15.0.gem">https://rubygems.org/gems/kramdown-1.15.0.gem</a></p>
<p>
Dependency Hierarchy:
- jekyll-3.6.2.gem (Root Library)
- :x: **kramdown-1.15.0.gem** (Vulnerable Library)
</details>
<details><summary><b>kramdown-1.17.0.gem</b></p></summary>
<p>kramdown is yet-another-markdown-parser but fast, pure Ruby,
using a strict syntax definition and supporting several common extensions.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/kramdown-1.17.0.gem">https://rubygems.org/gems/kramdown-1.17.0.gem</a></p>
<p>
Dependency Hierarchy:
- github-pages-201.gem (Root Library)
- jekyll-theme-midnight-0.1.1.gem
- jekyll-seo-tag-2.5.0.gem
- jekyll-3.8.5.gem
- :x: **kramdown-1.17.0.gem** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The kramdown gem before 2.3.0 for Ruby processes the template option inside Kramdown documents by default, which allows unintended read access (such as template="/etc/passwd") or unintended embedded Ruby code execution (such as a string that begins with template="string://<%= `). NOTE: kramdown is used in Jekyll, GitLab Pages, GitHub Pages, and Thredded Forum.
<p>Publish Date: 2020-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14001>CVE-2020-14001</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001</a></p>
<p>Release Date: 2020-07-17</p>
<p>Fix Resolution: kramdown - 2.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in kramdown gem kramdown gem cve high severity vulnerability vulnerable libraries kramdown gem kramdown gem kramdown gem kramdown is yet another markdown parser but fast pure ruby using a strict syntax definition and supporting several common extensions library home page a href dependency hierarchy jekyll gem root library x kramdown gem vulnerable library kramdown gem kramdown is yet another markdown parser but fast pure ruby using a strict syntax definition and supporting several common extensions library home page a href dependency hierarchy github pages gem root library jekyll theme midnight gem jekyll seo tag gem jekyll gem x kramdown gem vulnerable library found in head commit a href vulnerability details the kramdown gem before for ruby processes the template option inside kramdown documents by default which allows unintended read access such as template etc passwd or unintended embedded ruby code execution such as a string that begins with template string note kramdown is used in jekyll gitlab pages github pages and thredded forum publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kramdown step up your open source security game with whitesource | 0 |
251,320 | 8,013,697,365 | IssuesEvent | 2018-07-25 01:54:19 | KnowledgeCaptureAndDiscovery/ASSET | https://api.github.com/repos/KnowledgeCaptureAndDiscovery/ASSET | opened | Adding arrows to the same component | enhancement high priority | Right now it's not possible to add an arrow from a component to itself.
We should enable that | 1.0 | Adding arrows to the same component - Right now it's not possible to add an arrow from a component to itself.
We should enable that | non_test | adding arrows to the same component right now it s not possible to add an arrow from a component to itself we should enable that | 0 |
157,579 | 19,959,071,991 | IssuesEvent | 2022-01-28 05:24:06 | JeffResc/IP-API-Node.js | https://api.github.com/repos/JeffResc/IP-API-Node.js | closed | CVE-2019-1010266 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.0.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.8.11.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/gulp-jshint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jshint-1.10.0.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/rcloader/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jshint-1.10.0.tgz (Root Library)
- rcloader-0.1.2.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/JeffResc/IP-API-Node.js/commit/99b7653bfce099be086c1b68c2b7b8499c3d63af">99b7653bfce099be086c1b68c2b7b8499c3d63af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-1010266 (Medium) detected in multiple libraries - ## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.0.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.8.11.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/gulp-jshint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jshint-1.10.0.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/rcloader/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jshint-1.10.0.tgz (Root Library)
- rcloader-0.1.2.tgz
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/JeffResc/IP-API-Node.js/commit/99b7653bfce099be086c1b68c2b7b8499c3d63af">99b7653bfce099be086c1b68c2b7b8499c3d63af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file ip api node js package json path to vulnerable library ip api node js node modules lodash package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file ip api node js package json path to vulnerable library ip api node js node modules gulp jshint node modules lodash package json dependency hierarchy gulp jshint tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file ip api node js package json path to vulnerable library ip api node js node modules rcloader node modules lodash package json dependency hierarchy gulp jshint tgz root library rcloader tgz x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash prior to is affected by cwe uncontrolled resource consumption the impact is denial of service the component is date handler the attack vector is attacker provides very long strings which the library attempts to match using a regular expression the fixed version is publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0
57,683 | 14,175,359,328 | IssuesEvent | 2020-11-12 21:27:32 | uiowa/uiowa | https://api.github.com/repos/uiowa/uiowa | closed | WSOD when editing layout on page after attempting to add Main content block in LB | bug layout builder | @pyrello @richardbporter Any thoughts on how to fix this?
Apparently I'm not the only one having this issue: [https://www.drupal.org/project/layout_library/issues/3099195](https://www.drupal.org/project/layout_library/issues/3099195).
What I tried to do was create a new section (2 column, 75/25) in which I could put the Main page content block in the left (75%) column, and then stack two Statistic blocks in the right (25%) column. I didn't want to lock down the left-hand main content block to admin-only editing, as I wanted to allow editors to edit the content in that part of the section. Basically I want to create a section like the opening section on [https://uiowa.edu/academics/leading-programs](https://uiowa.edu/academics/leading-programs), but allow content editors to edit the page rather than get into LB and change a text area in that left-hand column.
To reproduce the error: Add a new section, set to any # of columns (% widths don't matter). In the new column, click on the dropdown arrow and select More > System > Main page content. It does NOT add the new column (it appears as if it can't select the option). After this point, if you exit out without Discarding Changes, it will lock the page in a WSOD with the error: Temporarily Unavailable. The website that you're trying to reach is having technical difficulties and is currently unavailable...
I tried a cache rebuild on the prod site, but it did not fix the problem. Is there a remote command I can use to discard the changes or to avoid this WSOD? | 1.0 | WSOD when editing layout on page after attempting to add Main content block in LB - @pyrello @richardbporter Any thoughts on how to fix this?
Apparently I'm not the only one having this issue: [https://www.drupal.org/project/layout_library/issues/3099195](https://www.drupal.org/project/layout_library/issues/3099195).
What I tried to do was create a new section (2 column, 75/25) in which I could put the Main page content block in the left (75%) column, and then stack two Statistic blocks in the right (25%) column. I didn't want to lock down the left-hand main content block to admin-only editing, as I wanted to allow editors to edit the content in that part of the section. Basically I want to create a section like the opening section on [https://uiowa.edu/academics/leading-programs](https://uiowa.edu/academics/leading-programs), but allow content editors to edit the page rather than get into LB and change a text area in that left-hand column.
To reproduce the error: Add a new section, set to any # of columns (% widths don't matter). In the new column, click on the dropdown arrow and select More > System > Main page content. It does NOT add the new column (it appears as if it can't select the option). After this point, if you exit out without Discarding Changes, it will lock the page in a WSOD with the error: Temporarily Unavailable. The website that you're trying to reach is having technical difficulties and is currently unavailable...
I tried a cache rebuild on the prod site, but it did not fix the problem. Is there a remote command I can use to discard the changes or to avoid this WSOD? | non_test | wsod when editing layout on page after attempting to add main content block in lb pyrello richardbporter any thoughts on how to fix this apparently im not the only one having this issue what i tried to do was create a new section column in which i could put the main page content block in the left column and then stack two statistic blocks in the right column i didnt want to lock down the left hand main content block to admin only editing as i wanted to allow editors to edit the content in that part of the section basically i want to create a section like the opening section on but allow content editors to edit the page rather than get into lb and change a text area in that left hand column to reproduce error add a new section set to any of columns doesnt matter widths in the new column click on dropdown arrow and select more system main page content it does not add the new column it appears as if it cant select the option after this point if you exit out without discarding changes it will lock the page in a wsod with the error temporarily unavailable the website that you re trying to reach is having technical difficulties and is currently unavailable i tried a cache rebuild on the prod site but it did not fix the problem is there a remote command i can use to discard the changes or to avoid this wsod | 0 |
142,441 | 11,472,752,629 | IssuesEvent | 2020-02-09 19:07:09 | CookieComputing/muscala | https://api.github.com/repos/CookieComputing/muscala | closed | Set up CI/CD pipeline for master merges + PRs | testing | A simple CI/CD pipeline using travis that runs all unit tests in `/tests` would be nice to have set up for potential PRs and to have another machine running checks as opposed to just a developer machine. | 1.0 | Set up CI/CD pipeline for master merges + PRs - A simple CI/CD pipeline using travis that runs all unit tests in `/tests` would be nice to have set up for potential PRs and to have another machine running checks as opposed to just a developer machine. | test | set up ci cd pipeline for master merges prs a simple ci cd pipeline using travis that runs all unit tests in tests would be nice to have set up for potential prs and to have another machine running checks as opposed to just a developer machine | 1 |
27,037 | 27,580,537,459 | IssuesEvent | 2023-03-08 15:55:52 | internetarchive/wari | https://api.github.com/repos/internetarchive/wari | closed | as a data consumer I want the first domain counts to be a dictionary with key=value like so: "archive.org"=5 instead of an array of small dictionaries | enhancement backend usability get statistics endpoint data structure | feedback from sawood | True | as a data consumer I want the first domain counts to be a dictionary with key=value like so: "archive.org"=5 instead of an array of small dictionaries - feedback from sawood | non_test | as a data consumer i want the first domain counts to be a dictionary with key value like so archive org instead of an array of small dictionaries feedback from sawood | 0 |
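The reshaping requested in the row above — an array of small dictionaries replaced by one key=value dictionary — can be sketched in a few lines of Python. The field names `domain` and `count` are assumptions for illustration; the actual API payload may use different keys.

```python
def flatten_domain_counts(items):
    """Collapse [{'domain': ..., 'count': ...}, ...] into {domain: count}."""
    return {item["domain"]: item["count"] for item in items}

# Hypothetical payload shaped like the "array of small dictionaries"
# the issue complains about.
before = [{"domain": "archive.org", "count": 5},
          {"domain": "example.com", "count": 2}]
after = flatten_domain_counts(before)
print(after)  # {'archive.org': 5, 'example.com': 2}
```

The flat form lets a consumer look up a count by domain in one step (`after["archive.org"]`) instead of scanning a list.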
578,974 | 17,169,436,298 | IssuesEvent | 2021-07-15 00:36:19 | googleapis/nodejs-spanner | https://api.github.com/repos/googleapis/nodejs-spanner | closed | Spanner: should create an encrypted backup of the database failed | api: spanner flakybot: flaky flakybot: issue priority: p1 type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 44895d2114ac4faa1b6304a5117d5a394de2bc48
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/820dd03e-c33c-4ece-be69-48237a6f2788), [Sponge](http://sponge2/820dd03e-c33c-4ece-be69-48237a6f2788)
status: failed
<details><summary>Test output</summary><br><pre>expected 'Creating backup of database projects/long-door-651/instances/test-instance-1625788339/databases/test-database-1625788339.\n' to match /Backup (.+)test-backup-1625788339-enc of size/
AssertionError: expected 'Creating backup of database projects/long-door-651/instances/test-instance-1625788339/databases/test-database-1625788339.\n' to match /Backup (.+)test-backup-1625788339-enc of size/
at Context.<anonymous> (system-test/spanner.test.js:964:12)</pre></details> | 1.0 | Spanner: should create an encrypted backup of the database failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 44895d2114ac4faa1b6304a5117d5a394de2bc48
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/820dd03e-c33c-4ece-be69-48237a6f2788), [Sponge](http://sponge2/820dd03e-c33c-4ece-be69-48237a6f2788)
status: failed
<details><summary>Test output</summary><br><pre>expected 'Creating backup of database projects/long-door-651/instances/test-instance-1625788339/databases/test-database-1625788339.\n' to match /Backup (.+)test-backup-1625788339-enc of size/
AssertionError: expected 'Creating backup of database projects/long-door-651/instances/test-instance-1625788339/databases/test-database-1625788339.\n' to match /Backup (.+)test-backup-1625788339-enc of size/
at Context.<anonymous> (system-test/spanner.test.js:964:12)</pre></details> | non_test | spanner should create an encrypted backup of the database failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output expected creating backup of database projects long door instances test instance databases test database n to match backup test backup enc of size assertionerror expected creating backup of database projects long door instances test instance databases test database n to match backup test backup enc of size at context system test spanner test js | 0 |
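For illustration, the failing assertion in the row above can be reproduced outside the test suite: the captured output contains only the "Creating backup …" line, so a "Backup … of size" pattern never matches. This sketch uses the strings from the failure message; the "completed" line is a made-up example of output that would satisfy the pattern, not real Spanner output.

```python
import re

# Output actually captured by the failing test (from the report above).
captured = ("Creating backup of database projects/long-door-651/instances/"
            "test-instance-1625788339/databases/test-database-1625788339.\n")

# Pattern of the shape the assertion expects, per the failure message.
pattern = re.compile(r"Backup (.+)test-backup-1625788339-enc of size")

# The backup-completed line never appeared, so the search fails.
print(pattern.search(captured))  # None -> the assertion fails

# Hypothetical output that would have matched, had the backup finished
# before the assertion ran.
completed = captured + ("Backup x-test-backup-1625788339-enc of size "
                        "123 bytes was created.\n")
print(bool(pattern.search(completed)))  # True
```

This points at a timing issue: the test asserts on the "Backup … of size" message before the backup operation has reported completion.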
220,407 | 17,193,435,950 | IssuesEvent | 2021-07-16 14:09:07 | vmware-tanzu/velero | https://api.github.com/repos/vmware-tanzu/velero | opened | The "velero-plugin-for-vsphere" used in E2E test should be upgraded to "v1.1.1" | E2E Tests | Otherwise we will get the error described in this issue: https://github.com/vmware-tanzu/velero-plugin-for-vsphere/issues/290
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues, you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
| 1.0 | The "velero-plugin-for-vsphere" used in E2E test should be upgrade to "v1.1.1" - Otherwise will get the error described in this issue: https://github.com/vmware-tanzu/velero-plugin-for-vsphere/issues/290
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues, you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
| test | the velero plugin for vsphere used in test should be upgrade to otherwise will get the error described in this issue vote on this issue this is an invitation to the velero community to vote on issues you can see the project s use the reaction smiley face up to the right of this comment to vote for i would like to see this bug fixed as soon as possible for there are more important bugs to focus on right now | 1 |
281,614 | 24,407,777,475 | IssuesEvent | 2022-10-05 09:31:24 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | opened | [CI] RestartIndexFollowingIT testFollowIndex failing | >test-failure :Distributed/CCR | **Build scan:**
https://gradle-enterprise.elastic.co/s/3ia43d647stqy/tests/:x-pack:plugin:ccr:internalClusterTest/org.elasticsearch.xpack.ccr.RestartIndexFollowingIT/testFollowIndex
**Reproduction line:**
`gradlew ':x-pack:plugin:ccr:internalClusterTest' --tests "org.elasticsearch.xpack.ccr.RestartIndexFollowingIT.testFollowIndex" -Dtests.seed=88838658B243966D -Dtests.locale=en-AU -Dtests.timezone=Israel -Druntime.java=18`
**Applicable branches:**
main
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.ccr.RestartIndexFollowingIT&tests.test=testFollowIndex
**Failure excerpt:**
```
java.lang.AssertionError:
Expected: <411L>
but: was <370L>
at __randomizedtesting.SeedInfo.seed([88838658B243966D:6BEF496A8BFBAFE7]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.xpack.ccr.RestartIndexFollowingIT.lambda$testFollowIndex$1(RestartIndexFollowingIT.java:96)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:1105)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:1078)
at org.elasticsearch.xpack.ccr.RestartIndexFollowingIT.testFollowIndex(RestartIndexFollowingIT.java:95)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:577)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
``` | 1.0 | [CI] RestartIndexFollowingIT testFollowIndex failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/3ia43d647stqy/tests/:x-pack:plugin:ccr:internalClusterTest/org.elasticsearch.xpack.ccr.RestartIndexFollowingIT/testFollowIndex
**Reproduction line:**
`gradlew ':x-pack:plugin:ccr:internalClusterTest' --tests "org.elasticsearch.xpack.ccr.RestartIndexFollowingIT.testFollowIndex" -Dtests.seed=88838658B243966D -Dtests.locale=en-AU -Dtests.timezone=Israel -Druntime.java=18`
**Applicable branches:**
main
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.ccr.RestartIndexFollowingIT&tests.test=testFollowIndex
**Failure excerpt:**
```
java.lang.AssertionError:
Expected: <411L>
but: was <370L>
at __randomizedtesting.SeedInfo.seed([88838658B243966D:6BEF496A8BFBAFE7]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.xpack.ccr.RestartIndexFollowingIT.lambda$testFollowIndex$1(RestartIndexFollowingIT.java:96)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:1105)
at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:1078)
at org.elasticsearch.xpack.ccr.RestartIndexFollowingIT.testFollowIndex(RestartIndexFollowingIT.java:95)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:577)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
``` | test | restartindexfollowingit testfollowindex failing build scan reproduction line gradlew x pack plugin ccr internalclustertest tests org elasticsearch xpack ccr restartindexfollowingit testfollowindex dtests seed dtests locale en au dtests timezone israel druntime java applicable branches main reproduces locally no failure history failure excerpt java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch xpack ccr restartindexfollowingit lambda testfollowindex restartindexfollowingit java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch xpack ccr restartindexfollowingit testfollowindex restartindexfollowingit java at jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch 
randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter 
java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 1 |
531,081 | 15,440,230,361 | IssuesEvent | 2021-03-08 02:44:37 | open-wa/wa-automate-nodejs | https://api.github.com/repos/open-wa/wa-automate-nodejs | closed | Feature: webhook config obj for CLI | PRIORITY | There should be a better way to register webhooks and change webhooks without restarting sessions
- [x] Right now, you can only set one single webhook for all events. This differs from modern web standards for webhooks. You should be able to set as many webhooks as you'd like each with a selectable range of possible events.
- [x] Allow webhook registrations to consume basic authentication (maybe there is a username and password for post requests at the webhook address)
- [x] CLI: consume a file (for example `webhooks.json`) that sets up webhooks
- [x] CLI: If only `-w` is provided, then register all events there.
- [x] `removeWebhook` by webhook ID
- [x] Implement `updateWebhook` | 1.0 | Feature: webhook config obj for CLI - There should be a better way to register webhooks and change webhooks without restarting sessions
- [x] Right now, you can only set one single webhook for all events. This differs from modern web standards for webhooks. You should be able to set as many webhooks as you'd like each with a selectable range of possible events.
- [x] Allow webhook registrations to consume basic authentication (maybe there is a username and password for post requests at the webhook address)
- [x] CLI: consume a file (for example `webhooks.json`) that sets up webhooks
- [x] CLI: If only `-w` is provided, then register all events there.
- [x] `removeWebhook` by webhook ID
- [x] Implement `updateWebhook` | non_test | feature webhook config obj for cli there should be a better way to register webhooks and change webhooks without restarting sessions right now you can only set one single webhook for all events this differs from modern web standards for webhooks you should be able to set as many webhooks as you d like each with a selectable range of possible events allow webhook registrations to consume basic authentication maybe there is a username and password for post requests at the webhook address cli consume a file for example webhooks json that sets up webhooks cli if only w is provided then register all events there removewebhook by webhook id implement updatewebhook | 0 |
172,082 | 14,350,150,503 | IssuesEvent | 2020-11-29 19:37:14 | Shiroraven/NayuBot | https://api.github.com/repos/Shiroraven/NayuBot | opened | Come up with a core set of commands for the core module. | documentation enhancement good first issue question | Nayubot is modular by default, but every installation contains at least the core module. Please discuss possible features to be included in Nayubot's standard core library, such that development of those features can start asap. | 1.0 | Come up with a core set of commands for the core module. - Nayubot is modular by default, but every installation contains at least the core module. Please discuss possible features to be included in Nayubot's standard core library, such that development of those features can start asap. | non_test | come up with a core set of commands for the core module nayubot is modular by default but every installation contains at least the core module please discuss possible features to be included in nayubot s standard core library such that development of those features can start asap | 0 |
121,275 | 10,163,651,269 | IssuesEvent | 2019-08-07 09:45:17 | Azure/Azurite | https://api.github.com/repos/Azure/Azurite | closed | Test Failing : Azure-Storage-Node - BlobContainer - listBlobs - should work with blob with space only | azure-storage-node-testcase blob-storage | Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:**897:20**
```javascript
it('should work with blob with space only', function(done) {
var blobName1 = ' ';
var blobText1 = 'hello1';
blobs.length = 0;
listBlobs(null, null, null, function() {
assert.equal(blobs.length, 0);
blobService.createBlockBlobFromText(containerName, blobName1, blobText1, function (blobErr1) {
assert.equal(blobErr1, null);
// Test listing 1 blob
listBlobs(null, null, null, function() {
assert.equal(blobs.length, 1);
assert.equal(blobs[0].name, blobName1);
done();
});
});
});
});
```
azure-storage-node tests
base.js:266
BlobContainer
listBlobs
should work with blob with space only:
Uncaught AssertionError [ERR_ASSERTION]: '' == ' '
+ expected - actual
+
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:897:20
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:1024:7
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:5824:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:5827:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
| 1.0 | Test Failing : Azure-Storage-Node - BlobContainer - listBlobs - should work with blob with space only - Failing test case:
Under : ./externaltests/azure-storage-node/test/services/blob/blobservice-container-tests.js:**897:20**
```javascript
it('should work with blob with space only', function(done) {
var blobName1 = ' ';
var blobText1 = 'hello1';
blobs.length = 0;
listBlobs(null, null, null, function() {
assert.equal(blobs.length, 0);
blobService.createBlockBlobFromText(containerName, blobName1, blobText1, function (blobErr1) {
assert.equal(blobErr1, null);
// Test listing 1 blob
listBlobs(null, null, null, function() {
assert.equal(blobs.length, 1);
assert.equal(blobs[0].name, blobName1);
done();
});
});
});
});
```
azure-storage-node tests
base.js:266
BlobContainer
listBlobs
should work with blob with space only:
Uncaught AssertionError [ERR_ASSERTION]: '' == ' '
+ expected - actual
+
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:897:20
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\test\services\blob\blobservice-container-tests.js:1024:7
at finalCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:5824:7)
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\filters\retrypolicyfilter.js:189:13
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:801:17
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:1014:11
at E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:800:15
at processResponseCallback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\services\blob\blobservice.core.js:5827:5)
at Request.processResponseCallback [as _callback] (E:\repo\azurite\Azurite\externaltests\azure-storage-node\lib\common\services\storageserviceclient.js:329:13)
at Request.self.callback (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:185:22)
at Request.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1161:10)
at IncomingMessage.<anonymous> (E:\repo\azurite\Azurite\externaltests\azure-storage-node\node_modules\request\request.js:1083:12)
at endReadableNT (_stream_readable.js:1045:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
| test | test failing azure storage node blobcontainer listblobs should work with blob with space only failing test case under externaltests azure storage node test services blob blobservice container tests js javascript it should work with blob with space only function done var var blobs length listblobs null null null function assert equal blobs length blobservice createblockblobfromtext containername function assert equal null test listing blob listblobs null null null function assert equal blobs length assert equal blobs name done azure storage node tests base js blobcontainer listblobs should work with blob with space only uncaught assertionerror expected actual at e repo azurite azurite externaltests azure storage node test services blob blobservice container tests js at e repo azurite azurite externaltests azure storage node test services blob blobservice container tests js at finalcallback e repo azurite azurite externaltests azure storage node lib services blob blobservice core js at e repo azurite azurite externaltests azure storage node lib common filters retrypolicyfilter js at e repo azurite azurite externaltests azure storage node lib common services storageserviceclient js at e repo azurite azurite externaltests azure storage node lib common services storageserviceclient js at e repo azurite azurite externaltests azure storage node lib common services storageserviceclient js at processresponsecallback e repo azurite azurite externaltests azure storage node lib services blob blobservice core js at request processresponsecallback e repo azurite azurite externaltests azure storage node lib common services storageserviceclient js at request self callback e repo azurite azurite externaltests azure storage node node modules request request js at request e repo azurite azurite externaltests azure storage node node modules request request js at incomingmessage e repo azurite azurite externaltests azure storage node node modules request request js at 
endreadablent stream readable js at combinedtickcallback internal process next tick js at process tickcallback internal process next tick js | 1 |
151,410 | 12,035,346,494 | IssuesEvent | 2020-04-13 17:43:15 | rThamb/soen390-schoolmap | https://api.github.com/repos/rThamb/soen390-schoolmap | opened | Acceptance Test: UC-65 | To be tested by PO sprint4 test | **UC-65, As a user, I want to see the nearest outdoor points of interest.**
**Acceptance Criteria**
The user should be able to view a list of all points of interest that are close to both campuses.
**Procedure**
1. Launch the app
2. Click on the hamburger button situated at the top left hand corner of the screen.
3. Select “Nearby Point Of Interest”
4. Select one of the four types of POI (restaurants, banks, shopping centers, or hospitals)
**Expected output**
It display the 20 closest POI from the type selected (ex: 20 closest restaurants from your current location)
| 2.0 | Acceptance Test: UC-65 - **UC-65, As a user, I want to see the nearest outdoor points of interest.**
**Acceptance Criteria**
The user should be able to view a list of all points of interest that are close to both campuses.
**Procedure**
1. Launch the app
2. Click on the hamburger button situated at the top left hand corner of the screen.
3. Select “Nearby Point Of Interest”
4. Select one of the four types of POI (restaurants, banks, shopping centers, or hospitals)
**Expected output**
It display the 20 closest POI from the type selected (ex: 20 closest restaurants from your current location)
| test | acceptance test uc uc as a user i want to see the nearest outdoor points of interest acceptance criteria the user should be able to view a list of all points of interest that are close to both campuses procedure launch the app click on the hamburger button situated at the top left hand corner of the screen select “nearby point of interest” select one of the four types of poi restaurants banks shopping centers or hospitals expected output it display the closest poi from the type selected ex closest restaurants from your current location | 1 |
21,254 | 6,132,543,974 | IssuesEvent | 2017-06-25 03:35:29 | ganeti/ganeti | https://api.github.com/repos/ganeti/ganeti | closed | Speed up build time | imported_from_google_code Status:Obsolete Type-Refactoring | Originally reported of Google Code with ID 360.
```
It would be nice if we could build each haskell file just once, if possible.
Thanks,
Guido
```
Originally added on 2013-02-06 15:03:47 +0000 UTC. | 1.0 | Speed up build time - Originally reported of Google Code with ID 360.
```
It would be nice if we could build each haskell file just once, if possible.
Thanks,
Guido
```
Originally added on 2013-02-06 15:03:47 +0000 UTC. | non_test | speed up build time originally reported of google code with id it would be nice if we could build each haskell file just once if possible thanks guido originally added on utc | 0 |
310,549 | 26,722,933,586 | IssuesEvent | 2023-01-29 11:10:45 | PalisadoesFoundation/talawa-api | https://api.github.com/repos/PalisadoesFoundation/talawa-api | closed | Resolvers: Create tests for src/lib/directives/authDirective.ts | good first issue points 01 test | - Please coordinate **issue assignment** and **PR reviews** with the contributors listed in this issue https://github.com/PalisadoesFoundation/talawa/issues/359
The Talawa-API code base needs to be 100% reliable. This means we need to have 100% test code coverage.
Tests need to be written for file `src/lib/directives/authDirective.ts`
- We will need the API to be refactored for all methods, classes and/or functions found in this file for testing to be correctly executed.
- When complete, all methods, classes and/or functions in the refactored file will need to be tested. These tests must be placed in a
single file with the name `talawa-api/__tests__/directives/authDirective.spec.ts`. You may need to create the appropriate directory structure to do this.
### IMPORTANT:
Please refer to the parent issue on how to implement these tests correctly:
- https://github.com/PalisadoesFoundation/talawa-api/issues/490
### PR Acceptance Criteria
- When complete this file must show **100%** coverage when merged into the code base. This will be clearly visible when you submit your PR.
- [The current code coverage for the file can be found here](https://app.codecov.io/gh/PalisadoesFoundation/talawa-api/blob/c691d5377f98da582d7b0c1f930b9ec657f5274e/src/lib/directives/authDirective.ts). If the file isn't found in this directory, or there is a 404 error, then tests have not been created.
- The PR will show a report for the code coverage for the file you have added. You can use that as a guide. | 1.0 | Resolvers: Create tests for src/lib/directives/authDirective.ts - - Please coordinate **issue assignment** and **PR reviews** with the contributors listed in this issue https://github.com/PalisadoesFoundation/talawa/issues/359
The Talawa-API code base needs to be 100% reliable. This means we need to have 100% test code coverage.
Tests need to be written for file `src/lib/directives/authDirective.ts`
- We will need the API to be refactored for all methods, classes and/or functions found in this file for testing to be correctly executed.
- When complete, all methods, classes and/or functions in the refactored file will need to be tested. These tests must be placed in a
single file with the name `talawa-api/__tests__/directives/authDirective.spec.ts`. You may need to create the appropriate directory structure to do this.
### IMPORTANT:
Please refer to the parent issue on how to implement these tests correctly:
- https://github.com/PalisadoesFoundation/talawa-api/issues/490
### PR Acceptance Criteria
- When complete this file must show **100%** coverage when merged into the code base. This will be clearly visible when you submit your PR.
- [The current code coverage for the file can be found here](https://app.codecov.io/gh/PalisadoesFoundation/talawa-api/blob/c691d5377f98da582d7b0c1f930b9ec657f5274e/src/lib/directives/authDirective.ts). If the file isn't found in this directory, or there is a 404 error, then tests have not been created.
- The PR will show a report for the code coverage for the file you have added. You can use that as a guide. | test | resolvers create tests for src lib directives authdirective ts please coordinate issue assignment and pr reviews with the contributors listed in this issue the talawa api code base needs to be reliable this means we need to have test code coverage tests need to be written for file src lib directives authdirective ts we will need the api to be refactored for all methods classes and or functions found in this file for testing to be correctly executed when complete all methods classes and or functions in the refactored file will need to be tested these tests must be placed in a single file with the name talawa api tests directives authdirective spec ts you may need to create the appropriate directory structure to do this important please refer to the parent issue on how to implement these tests correctly pr acceptance criteria when complete this file must show coverage when merged into the code base this will be clearly visible when you submit your pr if the file isn t found in this directory or there is a error then tests have not been created the pr will show a report for the code coverage for the file you have added you can use that as a guide | 1 |
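The coverage requirement in the talawa-api row above comes down to one thing: every branch of the directive under test has to execute at least once. The project's real tests are Jest/TypeScript specs; the sketch below uses Python and entirely hypothetical names purely to illustrate what "100% coverage of an auth directive" means in practice — both the authorized and the unauthorized path must run.

```python
# Hypothetical sketch (not talawa-api code): an auth "directive" is a
# wrapper that rejects unauthenticated callers. 100% coverage means every
# branch executes: the rejected path and the allowed path.

class Unauthenticated(Exception):
    pass

def auth_directive(resolver):
    """Wrap a resolver so it only runs for authenticated contexts."""
    def wrapped(context, *args, **kwargs):
        if not context.get("authenticated"):       # branch 1: rejected
            raise Unauthenticated("user is not authenticated")
        return resolver(context, *args, **kwargs)  # branch 2: allowed
    return wrapped

@auth_directive
def me(context):
    return context["user"]

def run_branch_tests():
    """Exercise both branches; a coverage tool would now report 100%."""
    assert me({"authenticated": True, "user": "alice"}) == "alice"
    try:
        me({"authenticated": False})
    except Unauthenticated:
        return True
    return False
```

A test file that only calls the happy path would leave the `raise` line uncovered, which is exactly what the PR coverage report in the acceptance criteria would flag.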
140,148 | 18,895,238,737 | IssuesEvent | 2021-11-15 17:08:40 | bgoonz/searchAwesome | https://api.github.com/repos/bgoonz/searchAwesome | closed | CVE-2019-6286 (Medium) detected in lportalliferay-ce-portal-src-7.3.5-ga6-20200930172312275, node-sass-4.11.0.tgz | security vulnerability | ## CVE-2019-6286 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lportalliferay-ce-portal-src-7.3.5-ga6-20200930172312275</b>, <b>node-sass-4.11.0.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.11.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz</a></p>
<p>Path to dependency file: searchAwesome/clones/awesome-stacks/package.json</p>
<p>Path to vulnerable library: /clones/awesome-stacks/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.11.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/bgoonz/searchAwesome/commit/8c366c860f88ff2849d4a7b7832c781154d89ece">8c366c860f88ff2849d4a7b7832c781154d89ece</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::skip_over_scopes in prelexer.hpp when called from Sass::Parser::parse_import(), a similar issue to CVE-2018-11693.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6286>CVE-2019-6286</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2019-07-23</p>
<p>Fix Resolution: libsass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-6286 (Medium) detected in lportalliferay-ce-portal-src-7.3.5-ga6-20200930172312275, node-sass-4.11.0.tgz - ## CVE-2019-6286 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lportalliferay-ce-portal-src-7.3.5-ga6-20200930172312275</b>, <b>node-sass-4.11.0.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.11.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz</a></p>
<p>Path to dependency file: searchAwesome/clones/awesome-stacks/package.json</p>
<p>Path to vulnerable library: /clones/awesome-stacks/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.11.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/bgoonz/searchAwesome/commit/8c366c860f88ff2849d4a7b7832c781154d89ece">8c366c860f88ff2849d4a7b7832c781154d89ece</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::skip_over_scopes in prelexer.hpp when called from Sass::Parser::parse_import(), a similar issue to CVE-2018-11693.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6286>CVE-2019-6286</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2019-07-23</p>
<p>Fix Resolution: libsass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in lportalliferay ce portal src node sass tgz cve medium severity vulnerability vulnerable libraries lportalliferay ce portal src node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file searchawesome clones awesome stacks package json path to vulnerable library clones awesome stacks node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details in libsass a heap based buffer over read exists in sass prelexer skip over scopes in prelexer hpp when called from sass parser parse import a similar issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource | 0 |
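The LibSass flaw in the row above is a heap buffer over-read inside a scanner (`Sass::Prelexer::skip_over_scopes` called from `parse_import`). The sketch below is not the LibSass code — it is a minimal Python illustration of the bug class and its fix: a scope-skipping scanner that checks the end bound on every step, so a truncated input can never push the cursor past the buffer.

```python
# Conceptual sketch of the bug class behind CVE-2019-6286 (not the actual
# LibSass code): skipping over a nested {...} scope. The safe version
# bounds the cursor on every iteration and reports an unterminated scope
# instead of reading past the end of the buffer.

def skip_over_scopes(src: str, start: int) -> int:
    """Return the index just past the scope opened at src[start],
    or -1 if the scope never closes before the buffer ends."""
    assert src[start] == "{"
    depth = 0
    i = start
    while i < len(src):          # explicit bound: never read past the end
        ch = src[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return i + 1
        i += 1
    return -1                    # unterminated scope: reject, don't over-read
```

The vulnerable pattern is the same loop with the `i < len(src)` guard missing or applied after the read, which is why malformed `@import` input could walk off the end of the allocation.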
27,361 | 4,307,643,346 | IssuesEvent | 2016-07-21 09:46:47 | actimeo/var | https://api.github.com/repos/actimeo/var | opened | Test token_assert is present in all functions | backend tests | Test that all functions from backend raises an error with a wrong token | 1.0 | Test token_assert is present in all functions - Test that all functions from backend raises an error with a wrong token | test | test token assert is present in all functions test that all functions from backend raises an error with a wrong token | 1 |
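The actimeo/var row above asks for one sweep proving that *every* backend function raises on a wrong token. A hedged Python sketch (all function names hypothetical — the real project's backend functions differ) of how such a sweep can be structured so that no function is accidentally skipped:

```python
# Hypothetical sketch of the "every function must check its token" test:
# register every exposed backend function once, then assert in a loop
# that each one raises when called with an invalid token.

class InvalidToken(Exception):
    pass

def _check(token):
    if token != "valid-token":
        raise InvalidToken(token)

def list_users(token):
    _check(token)
    return ["alice"]

def delete_user(token, name):
    _check(token)
    return name

# The registry of (function, extra_args) the sweep must cover.
API_SURFACE = [(list_users, ()), (delete_user, ("alice",))]

def all_functions_reject_bad_token():
    for fn, extra in API_SURFACE:
        try:
            fn("wrong-token", *extra)
            return False          # this function skipped its token check
        except InvalidToken:
            continue
    return True
```

Keeping the registry in one place means adding a new backend function without adding it to the sweep is a one-line review catch rather than a silent gap.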
133,533 | 10,833,377,547 | IssuesEvent | 2019-11-11 12:47:39 | pingcap/pd | https://api.github.com/repos/pingcap/pd | opened | testCoordinatorSuite.TestDispatch is not stable | area/testing kind/bug | Occured in https://github.com/pingcap/pd/pull/1921
<details>
```
[2019-11-11T11:22:41.668Z] FAIL: coordinator_test.go:205: testCoordinatorSuite.TestDispatch
[2019-11-11T11:22:41.668Z]
[2019-11-11T11:22:41.668Z] wait start
[2019-11-11T11:22:41.668Z] wait start
[2019-11-11T11:22:41.668Z] coordinator_test.go:228:
[2019-11-11T11:22:41.668Z] testutil.CheckTransferLeader(c, co.opController.GetOperator(2), operator.OpBalance, 4, 2)
[2019-11-11T11:22:41.668Z] /home/jenkins/agent/workspace/pd_test/go/src/github.com/pingcap/pd/pkg/testutil/operator_check.go:46:
[2019-11-11T11:22:41.668Z] c.Assert(op.Step(0), check.Equals, operator.TransferLeader{FromStore: sourceID, ToStore: targetID})
[2019-11-11T11:22:41.668Z] ... obtained operator.TransferLeader = operator.TransferLeader{FromStore:0x4, ToStore:0x3} ("transfer leader from store 4 to store 3")
[2019-11-11T11:22:41.668Z] ... expected operator.TransferLeader = operator.TransferLeader{FromStore:0x4, ToStore:0x2} ("transfer leader from store 4 to store 2")
```
</details>
| 1.0 | testCoordinatorSuite.TestDispatch is not stable - Occured in https://github.com/pingcap/pd/pull/1921
<details>
```
[2019-11-11T11:22:41.668Z] FAIL: coordinator_test.go:205: testCoordinatorSuite.TestDispatch
[2019-11-11T11:22:41.668Z]
[2019-11-11T11:22:41.668Z] wait start
[2019-11-11T11:22:41.668Z] wait start
[2019-11-11T11:22:41.668Z] coordinator_test.go:228:
[2019-11-11T11:22:41.668Z] testutil.CheckTransferLeader(c, co.opController.GetOperator(2), operator.OpBalance, 4, 2)
[2019-11-11T11:22:41.668Z] /home/jenkins/agent/workspace/pd_test/go/src/github.com/pingcap/pd/pkg/testutil/operator_check.go:46:
[2019-11-11T11:22:41.668Z] c.Assert(op.Step(0), check.Equals, operator.TransferLeader{FromStore: sourceID, ToStore: targetID})
[2019-11-11T11:22:41.668Z] ... obtained operator.TransferLeader = operator.TransferLeader{FromStore:0x4, ToStore:0x3} ("transfer leader from store 4 to store 3")
[2019-11-11T11:22:41.668Z] ... expected operator.TransferLeader = operator.TransferLeader{FromStore:0x4, ToStore:0x2} ("transfer leader from store 4 to store 2")
```
</details>
| test | testcoordinatorsuite testdispatch is not stable occured in fail coordinator test go testcoordinatorsuite testdispatch wait start wait start coordinator test go testutil checktransferleader c co opcontroller getoperator operator opbalance home jenkins agent workspace pd test go src github com pingcap pd pkg testutil operator check go c assert op step check equals operator transferleader fromstore sourceid tostore targetid obtained operator transferleader operator transferleader fromstore tostore transfer leader from store to store expected operator transferleader operator transferleader fromstore tostore transfer leader from store to store | 1 |
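The pingcap/pd failure above is a classic flaky-scheduler assertion: the test hard-codes store 2 as the transfer-leader target, but the scheduler picked store 3, which was apparently an equally valid choice. A Python toy model (not PD's code) of why exact-target assertions flake and set-membership assertions don't:

```python
# Toy model (hypothetical, not PD code): a balance scheduler may pick any
# of several equally valid target stores, so asserting one hard-coded ID
# fails nondeterministically, while asserting membership in the valid set
# is stable across runs.
import random

def pick_transfer_leader_target(source, candidates):
    """Toy scheduler: move leadership to any store except the source."""
    return random.choice([s for s in candidates if s != source])

def flaky_check(target):
    return target == 2            # fails whenever store 3 (or 1) is chosen

def stable_check(target, source, candidates):
    return target != source and target in candidates
```

The stable form mirrors what a deflaked version of `CheckTransferLeader` would assert: the operator moves the leader off the source to *some* legal store, not to one specific store.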
274,271 | 23,827,702,531 | IssuesEvent | 2022-09-05 16:20:20 | elastic/kibana | https://api.github.com/repos/elastic/kibana | reopened | Failing test: Chrome UI Functional Tests.test/functional/apps/console/_autocomplete·ts - console app console autocomplete feature with conditional templates should insert different templates depending on the value of type | failed-test needs-team | A test failed on a tracked branch
```
Error: retry.try timeout: Error: expected '\n POST _snapshot/test_repo{\n "type": "fs"\n POST\n }' to contain '"location": "path"'
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.contain (node_modules/@kbn/expect/expect.js:442:10)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-e2796ae8388045d8/elastic/kibana-on-merge/kibana/test/functional/apps/console/_autocomplete.ts:106:32
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at Context.<anonymous> (test/functional/apps/console/_autocomplete.ts:103:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at Context.<anonymous> (test/functional/apps/console/_autocomplete.ts:103:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/15267#885eb642-78c8-416c-8655-287569982f22)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/console/_autocomplete·ts","test.name":"console app console autocomplete feature with conditional templates should insert different templates depending on the value of type","test.failCount":6}} --> | 1.0 | Failing test: Chrome UI Functional Tests.test/functional/apps/console/_autocomplete·ts - console app console autocomplete feature with conditional templates should insert different templates depending on the value of type - A test failed on a tracked branch
```
Error: retry.try timeout: Error: expected '\n POST _snapshot/test_repo{\n "type": "fs"\n POST\n }' to contain '"location": "path"'
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.contain (node_modules/@kbn/expect/expect.js:442:10)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-e2796ae8388045d8/elastic/kibana-on-merge/kibana/test/functional/apps/console/_autocomplete.ts:106:32
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at Context.<anonymous> (test/functional/apps/console/_autocomplete.ts:103:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at Context.<anonymous> (test/functional/apps/console/_autocomplete.ts:103:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/15267#885eb642-78c8-416c-8655-287569982f22)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/console/_autocomplete·ts","test.name":"console app console autocomplete feature with conditional templates should insert different templates depending on the value of type","test.failCount":6}} --> | test | failing test chrome ui functional tests test functional apps console autocomplete·ts console app console autocomplete feature with conditional templates should insert different templates depending on the value of type a test failed on a tracked branch error retry try timeout error expected n post snapshot test repo n type fs n post n to contain location path at assertion assert node modules kbn expect expect js at assertion contain node modules kbn expect expect js at var lib buildkite agent builds kb spot elastic kibana on merge kibana test functional apps console autocomplete ts at runmicrotasks at processticksandrejections node internal process task queues at runattempt test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at context test functional apps console autocomplete ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at context test functional apps console autocomplete ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js first failure | 1 |
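The stack trace above runs through Kibana's `retryForSuccess` helper, which re-runs an assertion until it passes or an overall deadline expires — the reported error is the deadline firing with the last assertion failure attached. A simplified Python sketch of that pattern (the actual RetryService is TypeScript and differs in detail):

```python
# Simplified sketch of the retry-for-success pattern in the stack trace
# (not the actual Kibana RetryService): re-run a check until it passes
# or the overall deadline expires, then surface the last failure.
import time

def retry_for_success(attempt, timeout_s, poll_s=0.01):
    deadline = time.monotonic() + timeout_s
    last_error = None
    while time.monotonic() < deadline:
        try:
            return attempt()
        except Exception as err:       # keep retrying until the deadline
            last_error = err
            time.sleep(poll_s)
    raise TimeoutError(f"retry timed out: {last_error}")

def make_flaky(succeed_on):
    """Return a check that fails its first succeed_on - 1 calls."""
    calls = {"n": 0}
    def attempt():
        calls["n"] += 1
        if calls["n"] < succeed_on:
            raise AssertionError("not ready yet")
        return calls["n"]
    return attempt
```

This is why the failure reads `retry.try timeout: Error: expected … to contain …`: the inner assertion never passed within the window, so the wrapper rethrows it wrapped in a timeout.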
14,626 | 3,870,068,550 | IssuesEvent | 2016-04-10 23:39:33 | sensu/sensu | https://api.github.com/repos/sensu/sensu | closed | Unable to connect to RabbitMQ w/ self-signed cert | Configuration Documentation | Howdy all,
I have been thrashing against this problem for awhile and am at a loss on what the issue is. We have an internal CA which we use to deploy certificates; our own services use this to connect to RabbitMQ successfully. However, I have had no such luck with Sensu. An important note is RabbitMQ using both `verify_peer` instead of `verify_none`, and also `fail_if_no_peer_cert` is true, not false (as in the builtin cookbook template), although it fails in both cases.
Some steps I've gone through to validate the config for RMQ and certificates:
* Connect to an `openssl s_server` using `s_client`
* Connect to RabbitMQ using `s_client` and gotten a correct response
* Inspected text output of certs using `openssl x509 -text`
I've read over the RabbitMQ [SSL](http://rabbitmq.com/ssl.html) and [SSL troubleshooting](http://rabbitmq.com/troubleshooting-ssl.html) guides and nothing jumps out. I haven't done the stunnel test they have there but I should get the chance to do that at some point.
The next thing I did was hop into Sensu's embedded irb to see if I could connect. I was unable to. [Here](https://gist.github.com/jakedavis/8422928) is the irb output and relevant RabbitMQ logs from that, as well as openssl output and the sensu client logs.
The credentials are correct as I can connect successfully to the non-SSL port. It is only when I add the SSL options hash to the AMQP connection call that things go sideways. It points to the same certs I successfully use to connect to RMQ using openssl directly. The virtualhost, host, and port are also correct.
The only difference between the certs with which Sensu ships and our own is sha1 vs. sha256 hashing, respectively (and AFAICT). I'm not really sure how to proceed but appreciate any advice.
Let me know what other information I can provide. | 1.0 | Unable to connect to RabbitMQ w/ self-signed cert - Howdy all,
I have been thrashing against this problem for awhile and am at a loss on what the issue is. We have an internal CA which we use to deploy certificates; our own services use this to connect to RabbitMQ successfully. However, I have had no such luck with Sensu. An important note is RabbitMQ using both `verify_peer` instead of `verify_none`, and also `fail_if_no_peer_cert` is true, not false (as in the builtin cookbook template), although it fails in both cases.
Some steps I've gone through to validate the config for RMQ and certificates:
* Connect to an `openssl s_server` using `s_client`
* Connect to RabbitMQ using `s_client` and gotten a correct response
* Inspected text output of certs using `openssl x509 -text`
I've read over the RabbitMQ [SSL](http://rabbitmq.com/ssl.html) and [SSL troubleshooting](http://rabbitmq.com/troubleshooting-ssl.html) guides and nothing jumps out. I haven't done the stunnel test they have there but I should get the chance to do that at some point.
The next thing I did was hop into Sensu's embedded irb to see if I could connect. I was unable to. [Here](https://gist.github.com/jakedavis/8422928) is the irb output and relevant RabbitMQ logs from that, as well as openssl output and the sensu client logs.
The credentials are correct as I can connect successfully to the non-SSL port. It is only when I add the SSL options hash to the AMQP connection call that things go sideways. It points to the same certs I successfully use to connect to RMQ using openssl directly. The virtualhost, host, and port are also correct.
The only difference between the certs with which Sensu ships and our own is sha1 vs. sha256 hashing, respectively (and AFAICT). I'm not really sure how to proceed but appreciate any advice.
Let me know what other information I can provide. | non_test | unable to connect to rabbitmq w self signed cert howdy all i have been thrashing against this problem for awhile and am at a loss on what the issue is we have an internal ca which we use to deploy certificates our own services use this to connect to rabbitmq successfully however i have had no such luck with sensu an important note is rabbitmq using both verify peer instead of verify none and also fail if no peer cert is true not false as in the builtin cookbook template although it fails in both cases some steps i ve gone through to validate the config for rmq and certificates connect to an openssl s server using s client connect to rabbitmq using s client and gotten a correct response inspected text output of certs using openssl text i ve read over the rabbitmq and guides and nothing jumps out i haven t done the stunnel test they have there but i should get the chance to do that at some point the next thing i did was hop into sensu s embedded irb to see if i could connect i was unable to is the irb output and relevant rabbitmq logs from that as well as openssl output and the sensu client logs the credentials are correct as i can connect successfully to the non ssl port it is only when i add the ssl options hash to the amqp connection call that things go sideways it points to the same certs i successfully use to connect to rmq using openssl directly the virtualhost host and port are also correct the only difference between the certs with which sensu ships and our own is vs hashing respectively and afaict i m not really sure how to proceed but appreciate any advice let me know what other information i can provide | 0 |
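For the sensu/RabbitMQ report above: with `verify_peer` plus `fail_if_no_peer_cert = true`, the broker verifies the *client's* certificate too, so the AMQP client must both trust the CA and present its own cert/key during the handshake — a successful `openssl s_client` session mainly proves the server side. A hedged Python `ssl` sketch of the client-side options those broker settings imply (the original setup is Ruby/AMQP; all file names here are placeholders):

```python
# Sketch of the client-side TLS setup that RabbitMQ's verify_peer +
# fail_if_no_peer_cert = true requires (hypothetical paths; the original
# report uses Ruby's amqp gem, but the TLS requirements are the same):
# trust the broker's CA *and* load a client cert/key to present.
import ssl

def make_client_context(ca_file=None, cert_file=None, key_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # verify the broker's certificate
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    if cert_file and key_file:
        # Without a client cert, a broker configured with
        # fail_if_no_peer_cert = true aborts the handshake.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

The same division of labor applies in any client library: one option set establishes trust in the broker, a separate one supplies the credential the broker demands back.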
174,101 | 21,216,879,038 | IssuesEvent | 2022-04-11 08:14:22 | hisptz/scorecard-app | https://api.github.com/repos/hisptz/scorecard-app | reopened | CVE-2022-21681 (High) detected in marked-1.2.9.tgz | security vulnerability | ## CVE-2022-21681 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-1.2.9.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-1.2.9.tgz">https://registry.npmjs.org/marked/-/marked-1.2.9.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- app-2.5.1-beta.1.tgz (Root Library)
- cucumber-7.0.0-rc.0.tgz
- :x: **marked-1.2.9.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hisptz/scorecard-app/commit/e4127c5243859ff9db5def3d533ef17a70551b55">e4127c5243859ff9db5def3d533ef17a70551b55</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `inline.reflinkSearch` may cause catastrophic backtracking against some strings and lead to a denial of service (DoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21681>CVE-2022-21681</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5v2h-r2cx-5xgj">https://github.com/advisories/GHSA-5v2h-r2cx-5xgj</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: marked - 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-21681 (High) detected in marked-1.2.9.tgz - ## CVE-2022-21681 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-1.2.9.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-1.2.9.tgz">https://registry.npmjs.org/marked/-/marked-1.2.9.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- app-2.5.1-beta.1.tgz (Root Library)
- cucumber-7.0.0-rc.0.tgz
- :x: **marked-1.2.9.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hisptz/scorecard-app/commit/e4127c5243859ff9db5def3d533ef17a70551b55">e4127c5243859ff9db5def3d533ef17a70551b55</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `inline.reflinkSearch` may cause catastrophic backtracking against some strings and lead to a denial of service (DoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21681>CVE-2022-21681</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5v2h-r2cx-5xgj">https://github.com/advisories/GHSA-5v2h-r2cx-5xgj</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: marked - 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in marked tgz cve high severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file package json path to vulnerable library node modules marked package json dependency hierarchy app beta tgz root library cucumber rc tgz x marked tgz vulnerable library found in head commit a href found in base branch develop vulnerability details marked is a markdown parser and compiler prior to version the regular expression inline reflinksearch may cause catastrophic backtracking against some strings and lead to a denial of service dos anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected this issue is patched in version as a workaround avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked step up your open source security game with whitesource | 0 |
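The advisory's workaround in the row above is to time-limit untrusted markdown. The underlying bug class is catastrophic regex backtracking; marked's affected pattern is `inline.reflinkSearch`, but the textbook Python example below shows the same failure mode and the cheapest mitigation, bounding untrusted input before matching:

```python
# Illustration of the ReDoS bug class behind CVE-2022-21681 (marked's
# actual regex is inline.reflinkSearch; this is the classic textbook
# pattern with the same failure mode): nested quantifiers backtrack
# exponentially on a near-miss input, so untrusted input must be bounded
# by length here, or run in a time-limited worker as the advisory says.
import re

VULNERABLE = re.compile(r"^(a+)+$")   # nested quantifiers: exponential on "aa...a!"
MAX_UNTRUSTED_LEN = 16                # cheap mitigation: cap input length

def safe_match(pattern, text):
    """Refuse oversized untrusted input instead of risking a DoS."""
    if len(text) > MAX_UNTRUSTED_LEN:
        raise ValueError("input too long for untrusted matching")
    return pattern.match(text) is not None
```

With the cap removed, matching `"a" * 40 + "!"` against the same pattern would take on the order of 2^40 backtracking steps — which is exactly the "catastrophic backtracking … denial of service" described in the advisory.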
15,051 | 3,439,808,752 | IssuesEvent | 2015-12-14 11:28:40 | Geodan/rws-imagine | https://api.github.com/repos/Geodan/rws-imagine | closed | The element in the middle is hard to click. | 3D Renewabad test please | They used to be several objects but now appear to have become 1 object. | 1.0 | The element in the middle is hard to click. - They used to be several objects but now appear to have become 1 object. | test | the element in the middle is hard to click they used to be several objects but now appear to have become object | 1
305,006 | 26,355,566,676 | IssuesEvent | 2023-01-11 09:26:58 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | Read buffer overflow in wazuh-authd when parsing requests | team/qa type/dev-testing status/not-tracked | | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.4.0|https://github.com/wazuh/wazuh/issues/15861| |
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
This issue will fix a buffer overflow hazard in wazuh-authd.
## Proposed checks
- [ ] Run auto-enrollment in the agent.
- [ ] Run agent-auth in the agent.
- [ ] Run a custom SSL client and pass the critical request (will be provided later).
## Steps to reproduce
- Compile the manager with AddressSanitizer.
- Set a 4095-byte password in authd.pass.
- Run any SSL client (or a modified agent-auth) and pass the critical string.
## Expected results
The agent shall enroll normally.
| 1.0 | Read buffer overflow in wazuh-authd when parsing requests - | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.4.0|https://github.com/wazuh/wazuh/issues/15861| |
## Description
This issue will fix a buffer overflow hazard in wazuh-authd.
## Proposed checks
- [ ] Run auto-enrollment in the agent.
- [ ] Run agent-auth in the agent.
- [ ] Run a custom SSL client and pass the critical request (will be provided later).
## Steps to reproduce
- Compile the manager with AddressSanitizer.
- Set a 4095-byte password in authd.pass.
- Run any SSL client (or a modified agent-auth) and pass the critical string.
## Expected results
The agent shall enroll normally.
| test | read buffer overflow in wazuh authd when parsing requests target version related issue related pr description this issue will fix a buffer overflow hazard in wazuh authd proposed checks run auto enrollment in the agent run agent auth in the agent run a custom ssl client and pass the critical request will be provided later steps to reproduce compile the manager with addresssanitizer set a byte password in authd pass run any ssl client or a modified agent auth and pass the critical string expected results the agent shall enroll normally | 1 |
199,649 | 15,052,628,908 | IssuesEvent | 2021-02-03 15:23:39 | tracim/tracim | https://api.github.com/repos/tracim/tracim | closed | Bug: a translation is broken in a notification (added {{user}} to [...]) | frontend manually tested not in changelog | ## Description and expectations
A notification is in English in the wall. More on that after the fix.
### Version information
- Tracim version: soon-to-be-released 3.5 | 1.0 | Bug: a translation is broken in a notification (added {{user}} to [...]) - ## Description and expectations
A notification is in English in the wall. More on that after the fix.
### Version information
- Tracim version: soon-to-be-released 3.5 | test | bug a translation is broken in a notification added user to description and expectations a notification is in english in the wall more on that after the fix version information tracim version soon to be released | 1 |
142,751 | 5,477,009,086 | IssuesEvent | 2017-03-12 02:48:42 | NCEAS/eml | https://api.github.com/repos/NCEAS/eml | closed | Data Manager Library: API to enumerate table and field names | Category: datamanager Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Bug | ---
Author Name: **Duane Costa** (Duane Costa)
Original Redmine Issue: 2577, https://projects.ecoinformatics.org/ecoinfo/issues/2577
Original Date: 2006-10-27
Original Assignee: Duane Costa
---
Some applications may want to do direct queries on the data tables in the database. The application will need to map entity names to table names, and attribute names to field names. Extend the Data Manager Library API to provide a method to enumerate the table and field names for a given entity.
| 1.0 | Data Manager Library: API to enumerate table and field names - ---
Author Name: **Duane Costa** (Duane Costa)
Original Redmine Issue: 2577, https://projects.ecoinformatics.org/ecoinfo/issues/2577
Original Date: 2006-10-27
Original Assignee: Duane Costa
---
Some applications may want to do direct queries on the data tables in the database. The application will need to map entity names to table names, and attribute names to field names. Extend the Data Manager Library API to provide a method to enumerate the table and field names for a given entity.
| non_test | data manager library api to enumerate table and field names author name duane costa duane costa original redmine issue original date original assignee duane costa some applications may want to do direct queries on the data tables in the database the application will need to map entity names to table names and attribute names to field names extend the data manager library api to provide a method to enumerate the table and field names for a given entity | 0 |
9,497 | 3,047,229,218 | IssuesEvent | 2015-08-11 02:23:23 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | Test failure in CI build 5712 | test-failure | The following test appears to have failed:
[#5712](https://circleci.com/gh/cockroachdb/cockroach/5712):
```
I0811 02:21:45.223529 285 kv/range_cache.go:148 adding descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I0811 02:21:45.223916 285 kv/range_cache.go:269 clearing overlapping descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I0811 02:21:45.227422 285 kv/range_cache.go:124 lookup range descriptor: key="\x00\x00\x00kg\x00\x01rtn-"
I0811 02:21:45.229519 285 kv/range_cache.go:148 adding descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I0811 02:21:45.229950 285 kv/range_cache.go:269 clearing overlapping descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
--- FAIL: TestReverseScanWithSplitAndMerge (2.31s)
dist_sender_server_test.go:309: unexpected error on ReverseScan: storage/store.go:141: end key "b" must be greater than start "c"
=== RUN TestStartEqualsEndKeyScan
I0811 02:21:45.260736 285 base/context.go:141 setting up TLS from certificates directory: test_certs
I0811 02:21:45.323132 285 base/context.go:103 setting up TLS from certificates directory: test_certs
I0811 02:21:45.384506 285 rpc/clock_offset.go:155 monitoring cluster offset
I0811 02:21:45.384967 285 multiraft/multiraft.go:446 node 100000001 starting
I0811 02:21:45.385772 285 raft/raft.go:394 100000001 became follower at term 5
I0811 02:21:45.386116 285 raft/raft.go:207 newRaft 100000001 [peers: [100000001], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0811 02:21:45.386327 285 raft/raft.go:473 100000001 is starting a new election at term 5
I0811 02:21:45.386471 285 raft/raft.go:407 100000001 became candidate at term 6
--
c259706863690660.98 127.0.0.1:0 02:21:47.074311 0 ··sending to 127.0.0.1:55063 rpc/send.go:171
c259706863690660.98 127.0.0.1:0 02:21:47.077566 0 ·reply error: *proto.Error kv/txn_coord_sender.go:304
I0811 02:21:47.081510 285 client/db.go:451 failed AdminSplit: storage/replica_command.go:1041: cannot split range at key "\x00\x00meta2\xff\xff"
--- PASS: TestSplitByMeta2KeyMax (0.89s)
FAIL
FAIL github.com/cockroachdb/cockroach/kv 42.762s
=== RUN TestHeartbeatSingleGroup
I0811 02:21:06.810625 298 multiraft/multiraft.go:446 node 1 starting
I0811 02:21:06.811193 298 multiraft/multiraft.go:446 node 2 starting
I0811 02:21:06.812103 298 raft/raft.go:394 1 became follower at term 5
I0811 02:21:06.812430 298 raft/raft.go:207 newRaft 1 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0811 02:21:06.812751 298 raft/raft.go:394 2 became follower at term 5
I0811 02:21:06.813830 298 raft/raft.go:207 newRaft 2 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0811 02:21:06.814214 298 raft/raft.go:473 1 is starting a new election at term 5
I0811 02:21:06.816157 298 raft/raft.go:407 1 became candidate at term 6
I0811 02:21:06.816540 298 raft/raft.go:456 1 received vote from 1 at term 6
```
Please assign, take a look and update the issue accordingly. | 1.0 | Test failure in CI build 5712 - The following test appears to have failed:
[#5712](https://circleci.com/gh/cockroachdb/cockroach/5712):
```
I0811 02:21:45.223529 285 kv/range_cache.go:148 adding descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I0811 02:21:45.223916 285 kv/range_cache.go:269 clearing overlapping descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I0811 02:21:45.227422 285 kv/range_cache.go:124 lookup range descriptor: key="\x00\x00\x00kg\x00\x01rtn-"
I0811 02:21:45.229519 285 kv/range_cache.go:148 adding descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I0811 02:21:45.229950 285 kv/range_cache.go:269 clearing overlapping descriptor: key="\x00\x00meta2\xff\xff" desc=range_id:4 start_key:"g" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
--- FAIL: TestReverseScanWithSplitAndMerge (2.31s)
dist_sender_server_test.go:309: unexpected error on ReverseScan: storage/store.go:141: end key "b" must be greater than start "c"
=== RUN TestStartEqualsEndKeyScan
I0811 02:21:45.260736 285 base/context.go:141 setting up TLS from certificates directory: test_certs
I0811 02:21:45.323132 285 base/context.go:103 setting up TLS from certificates directory: test_certs
I0811 02:21:45.384506 285 rpc/clock_offset.go:155 monitoring cluster offset
I0811 02:21:45.384967 285 multiraft/multiraft.go:446 node 100000001 starting
I0811 02:21:45.385772 285 raft/raft.go:394 100000001 became follower at term 5
I0811 02:21:45.386116 285 raft/raft.go:207 newRaft 100000001 [peers: [100000001], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0811 02:21:45.386327 285 raft/raft.go:473 100000001 is starting a new election at term 5
I0811 02:21:45.386471 285 raft/raft.go:407 100000001 became candidate at term 6
--
c259706863690660.98 127.0.0.1:0 02:21:47.074311 0 ··sending to 127.0.0.1:55063 rpc/send.go:171
c259706863690660.98 127.0.0.1:0 02:21:47.077566 0 ·reply error: *proto.Error kv/txn_coord_sender.go:304
I0811 02:21:47.081510 285 client/db.go:451 failed AdminSplit: storage/replica_command.go:1041: cannot split range at key "\x00\x00meta2\xff\xff"
--- PASS: TestSplitByMeta2KeyMax (0.89s)
FAIL
FAIL github.com/cockroachdb/cockroach/kv 42.762s
=== RUN TestHeartbeatSingleGroup
I0811 02:21:06.810625 298 multiraft/multiraft.go:446 node 1 starting
I0811 02:21:06.811193 298 multiraft/multiraft.go:446 node 2 starting
I0811 02:21:06.812103 298 raft/raft.go:394 1 became follower at term 5
I0811 02:21:06.812430 298 raft/raft.go:207 newRaft 1 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0811 02:21:06.812751 298 raft/raft.go:394 2 became follower at term 5
I0811 02:21:06.813830 298 raft/raft.go:207 newRaft 2 [peers: [1,2], term: 5, commit: 10, applied: 10, lastindex: 10, lastterm: 5]
I0811 02:21:06.814214 298 raft/raft.go:473 1 is starting a new election at term 5
I0811 02:21:06.816157 298 raft/raft.go:407 1 became candidate at term 6
I0811 02:21:06.816540 298 raft/raft.go:456 1 received vote from 1 at term 6
```
Please assign, take a look and update the issue accordingly. | test | test failure in ci build the following test appears to have failed kv range cache go adding descriptor key xff xff desc range id start key g end key replicas next replica id kv range cache go clearing overlapping descriptor key xff xff desc range id start key g end key replicas next replica id kv range cache go lookup range descriptor key kv range cache go adding descriptor key xff xff desc range id start key g end key replicas next replica id kv range cache go clearing overlapping descriptor key xff xff desc range id start key g end key replicas next replica id fail testreversescanwithsplitandmerge dist sender server test go unexpected error on reversescan storage store go end key b must be greater than start c run teststartequalsendkeyscan base context go setting up tls from certificates directory test certs base context go setting up tls from certificates directory test certs rpc clock offset go monitoring cluster offset multiraft multiraft go node starting raft raft go became follower at term raft raft go newraft term commit applied lastindex lastterm raft raft go is starting a new election at term raft raft go became candidate at term ··sending to rpc send go ·reply error proto error kv txn coord sender go client db go failed adminsplit storage replica command go cannot split range at key xff xff pass fail fail github com cockroachdb cockroach kv run testheartbeatsinglegroup multiraft multiraft go node starting multiraft multiraft go node starting raft raft go became follower at term raft raft go newraft term commit applied lastindex lastterm raft raft go became follower at term raft raft go newraft term commit applied lastindex lastterm raft raft go is starting a new election at term raft raft go became candidate at term raft raft go received vote from at term please assign take a look and update the issue accordingly | 1 |
240,641 | 20,053,543,059 | IssuesEvent | 2022-02-03 09:37:09 | ably/ably-dotnet | https://api.github.com/repos/ably/ably-dotnet | closed | Skipped Test: PresenceSandboxSpecs.WhenChannelBecomesAttached_ShouldSendQueuedMessagesAndInitiateSYNC | failing-test | Re-enable the `PresenceSandboxSpecs.WhenChannelBecomesAttached_ShouldSendQueuedMessagesAndInitiateSYNC` test.
┆Issue is synchronized with this [Jira Uncategorised](https://ably.atlassian.net/browse/SDK-1200) by [Unito](https://www.unito.io)
| 1.0 | Skipped Test: PresenceSandboxSpecs.WhenChannelBecomesAttached_ShouldSendQueuedMessagesAndInitiateSYNC - Re-enable the `PresenceSandboxSpecs.WhenChannelBecomesAttached_ShouldSendQueuedMessagesAndInitiateSYNC` test.
┆Issue is synchronized with this [Jira Uncategorised](https://ably.atlassian.net/browse/SDK-1200) by [Unito](https://www.unito.io)
| test | skipped test presencesandboxspecs whenchannelbecomesattached shouldsendqueuedmessagesandinitiatesync re enable the presencesandboxspecs whenchannelbecomesattached shouldsendqueuedmessagesandinitiatesync test ┆issue is synchronized with this by | 1 |
269,781 | 23,465,179,784 | IssuesEvent | 2022-08-16 16:06:55 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Font color issue in pagination of Query loop block in editor side | [Type] Bug Needs Testing [Block] Query Pagination | ### Description
When I create a page and use the Query Loop block in it, there is an issue with the font color of the pagination of that block on the **backend** side.
Whenever I set a font color on the pagination, the font color is applied only to the **current** page number and the **dots**. But when I then set a background color on the pagination, the font color is applied to all of the pagination numbers. So the font color of the pagination is applied inconsistently, which is the issue with the pagination font color on the backend side.
### Step-by-step reproduction instructions
1. Go to the backend of any page and post
2. Now select the query loop block of the Gutenberg blocks
3. and first of all change the font color of the pagination
4. If you change the font color of the pagination, only the dots and the current page number get that font color.
5. Now select a background color for the pagination. After the background color changes, the font color is applied to all of the pagination numbers.
### Screenshots, screen recording, code snippet
I have added a video link of the pagination font color issue in the query loop block.
[https://www.loom.com/share/d869aae81ab944ca93c5779ecf103f73](https://www.loom.com/share/d869aae81ab944ca93c5779ecf103f73)
### Environment info
- WordPress v5.9.1
- Twenty Twenty-One Theme
- Chrome Browser ( Version 99.0.4844.51 )
- Mac OS
### Please confirm that you have searched existing issues in the repo.
No
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes | 1.0 | Font color issue in pagination of Query loop block in editor side - ### Description
When I create a page and use the Query Loop block in it, there is an issue with the font color of the pagination of that block on the **backend** side.
Whenever I set a font color on the pagination, the font color is applied only to the **current** page number and the **dots**. But when I then set a background color on the pagination, the font color is applied to all of the pagination numbers. So the font color of the pagination is applied inconsistently, which is the issue with the pagination font color on the backend side.
### Step-by-step reproduction instructions
1. Go to the backend of any page and post
2. Now select the query loop block of the Gutenberg blocks
3. and first of all change the font color of the pagination
4. If you change the font color of the pagination, only the dots and the current page number get that font color.
5. Now select a background color for the pagination. After the background color changes, the font color is applied to all of the pagination numbers.
### Screenshots, screen recording, code snippet
I have added a video link of the pagination font color issue in the query loop block.
[https://www.loom.com/share/d869aae81ab944ca93c5779ecf103f73](https://www.loom.com/share/d869aae81ab944ca93c5779ecf103f73)
### Environment info
- WordPress v5.9.1
- Twenty Twenty-One Theme
- Chrome Browser ( Version 99.0.4844.51 )
- Mac OS
### Please confirm that you have searched existing issues in the repo.
No
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes | test | font color issue in pagination of query loop block in editor side description when i create a page and i use query loop block in it then the issue is coming in font color of pagination of that block on the backend side whenever i am giving font color in pagination then font color is being applied only on the current page number and dots but when i set the background color in that pagination then all the numbers of the pagination that font color gets applied so there is no consistency in the font color of pagination that s why there is an issue in the font color of pagination on the backend side step by step reproduction instructions go to the backend of any page and post now select the query loop block of the gutenberg blocks and first of all change the font color of the pagination if you change the font color of pagination then only dots and the current pagination number will be applied to that font color now in pagination you select the background color so after changing the background color that font color will be applied to all the numbers of the pagination screenshots screen recording code snippet i have added a video link of the pagination font color issue in the query loop block environment info wordpress twenty twenty one theme chrome browser version mac os please confirm that you have searched existing issues in the repo no please confirm that you have tested with all plugins deactivated except gutenberg yes | 1 |
304,712 | 26,327,004,191 | IssuesEvent | 2023-01-10 07:44:20 | saleor/saleor-dashboard | https://api.github.com/repos/saleor/saleor-dashboard | closed | Standardize test locators to data-test-id | maintenance automation tests | ### What I'm trying to achieve
I want the frontend team to set all the test attributes to data-test-id
At the moment it is very inconsistent: some of them are named 'data-test' and some 'data-test-id'
### Describe a proposed solution
Run all the Cypress tests
ctrl-h replace and set every test locator to data-test-id
Run all the cypress tests again to make sure nothing stopped working
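The replace step above can be sketched as a small script; this is an assumed approach (the issue only says to use find-and-replace), and the negative lookahead keeps attributes that are already `data-test-id` from being renamed twice:

```python
# Sketch: rewrite 'data-test' locators to 'data-test-id' in source text.
# Assumes, as the issue states, that only the 'data-test' and
# 'data-test-id' spellings occur in the codebase.
import re

# Match 'data-test' only when it is not already followed by '-id',
# so an existing 'data-test-id' is left untouched.
PATTERN = re.compile(r"data-test(?!-id)")


def standardize(source: str) -> str:
    return PATTERN.sub("data-test-id", source)


sample = '<button data-test="submit" data-test-id="ok">'
print(standardize(sample))
```

After running it over the source files, rerunning the Cypress suite (as the steps say) confirms nothing stopped working.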
### Acceptance Criteria
- [ ] All test locators are set to data-test-id
Feel free to reach out to me if you have any questions
| 1.0 | Standardize test locators to data-test-id - ### What I'm trying to achieve
I want the frontend team to set all the test attributes to data-test-id
At the moment it is very inconsistent: some of them are named 'data-test' and some 'data-test-id'
### Describe a proposed solution
Run all the Cypress tests
ctrl-h replace and set every test locator to data-test-id
Run all the cypress tests again to make sure nothing stopped working
### Acceptance Criteria
- [ ] All test locators are set to data-test-id
Feel free to reach out to me if you have any questions
| test | standardize test locators to data test id what i m trying to achieve i want the frontend team to set all the test attributes to data test id at the moment it is very consistent some of those are named data test and some data test id describe a proposed solution run all the cypress test ctrl h replace and set every test locator to data test id run all the cypress tests again to make sure nothing stopped working acceptance criteria all test locators are set to data test id feel free to reach to me if you have any questions | 1 |
70,244 | 7,180,963,766 | IssuesEvent | 2018-02-01 02:01:53 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | closed | Test: multi root compound debugging | debug testplan-item | Refs: https://github.com/Microsoft/vscode/issues/38134
- [x] win @rebornix
- [x] mac **@weinand**
- [x] linux @RMacfarlane
Complexity: 4
This milestone we have added support for launch configurations in a multi root workspace. Verify:
* You can add a new launch configuration in your workspace settings file and this configuration shows nicely in the debug dropdown and can be properly executed
* You can add a compound configuration in your workspace settings which can reference launch configurations across different folders. Launching this compound configuration starts the listed configurations as expected, and each separate configuration has its variables substituted in the context of its own root (for example `${workspaceFolder}`)
* You can scope the compound configuration list per folder by specifying an object which more precisely scopes the configuration.
* You can resolve variables per folder. For example `${workspaceFolder:isidor}` will be resolved against your workspace folder named `isidor` thus giving the path of my folder named isidor. This should nicely work for all variables | 1.0 | Test: multi root compound debugging - Refs: https://github.com/Microsoft/vscode/issues/38134
- [x] win @rebornix
- [x] mac **@weinand**
- [x] linux @RMacfarlane
Complexity: 4
This milestone we have added support for launch configurations in a multi root workspace. Verify:
* You can add a new launch configuration in your workspace settings file and this configuration shows nicely in the debug dropdown and can be properly executed
* You can add a compound configuration in your workspace settings which can reference launch configurations across different folders. Launching this compound configuration starts the listed configurations as expected, and each separate configuration has its variables substituted in the context of its own root (for example `${workspaceFolder}`)
* You can scope the compound configuration list per folder by specifying an object which more precisely scopes the configuration.
* You can resolve variables per folder. For example `${workspaceFolder:isidor}` will be resolved against your workspace folder named `isidor` thus giving the path of my folder named isidor. This should nicely work for all variables | test | test multi root compound debugging refs win rebornix mac weinand linux rmacfarlane complexity this milestone we have added support for launch configurations in a multi root worskpace verify you can add a new launch configuration in your workspace settings file and this configuration shows nicely in the debug dropdown and can be properly executed you can add a compound configuration in your workspace settings which can reference launch configurations across different folders launching this compound configuration starts the listed configurations as expected and each seperate configuration has it s variable substitued in the context of its root example workspacefolder you can scope compound configuration list per folder by specifiying an object which more precisely scopes the configuration you can resolve variables per folder for example workspacefolder isidor will be resolved against your workspace folder named isidor thus giving the path of my folder named isidor this should nicely work for all variables | 1 |
92,977 | 8,390,240,956 | IssuesEvent | 2018-10-09 12:03:56 | kartoza/geonode | https://api.github.com/repos/kartoza/geonode | closed | fix testing urls | testing | After changing staging.geonode.kartoza.com to testing.geonode.kartoza.com, thumbnails and possibly other content is not working.
Update urls to make everything work again. | 1.0 | fix testing urls - After changing staging.geonode.kartoza.com to testing.geonode.kartoza.com, thumbnails and possibly other content is not working.
Update urls to make everything work again. | test | fix testing urls after changing staging geonode kartoza com to testing geonode kartoza com thumbnails and possibly other content is not working update urls to make everything work again | 1 |
177,825 | 13,749,529,982 | IssuesEvent | 2020-10-06 10:38:06 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Multiple CacheStatisticsMXBeanImpl as Leftover MBeans failures | Priority: High Source: Internal Team: Core Type: Test-Failure | - Fails on `Hazelcast-4.master-CorrettoJDK8`
- Fails on [Build #88 (Sep 2, 2020 6:15:00 AM)](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.master-CorrettoJDK8/88/testReport/junit/com.hazelcast.cache/HazelcastServerCachingProviderTest/com_hazelcast_cache_HazelcastServerCachingProviderTest/)
- Error
```
Leftover MBeans are still registered with the platform MBeanServer: [com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=,Cache=cache1]]
```
- Stacktrace
```
java.lang.AssertionError: Leftover MBeans are still registered with the platform MBeanServer: [com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=,Cache=cache1]]
at org.junit.Assert.fail(Assert.java:88)
at com.hazelcast.cache.jsr.JsrTestUtil.assertNoMBeanLeftovers(JsrTestUtil.java:209)
at com.hazelcast.cache.jsr.JsrTestUtil.setup(JsrTestUtil.java:57)
at com.hazelcast.cache.HazelcastServerCachingProviderTest.init(HazelcastServerCachingProviderTest.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at com.hazelcast.test.AbstractHazelcastClassRunner$1.evaluate(AbstractHazelcastClassRunner.java:301)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
....
```
- Similar failure for `HazelcastClientCachingProviderTest `
```
Leftover MBeans are still registered with the platform MBeanServer: [com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=hazelcast,Cache=cache-B], com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=hazelcast,Cache=cache-A], com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=,Cache=cache1]]
```
- Causing [CacheCreateUseDestroyTest](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.master-Solaris-OracleJDK8/217/com.hazelcast$hazelcast/testReport/com.hazelcast.cache.impl/CacheCreateUseDestroyTest/testCache_whenDestroyedByCacheManager_OBJECT_/) to fail on `Hazelcast-4.master-Solaris-OracleJDK8`
- Causing [MergePolicyValidatorCachingProviderIntegrationTest](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.master-Solaris-OracleJDK8/217/com.hazelcast$hazelcast/testReport/com.hazelcast.internal.config/MergePolicyValidatorCachingProviderIntegrationTest/com_hazelcast_internal_config_MergePolicyValidatorCachingProviderIntegrationTest/) to fail on `Hazelcast-4.master-Solaris-OracleJDK8`
- It is also observed in many other builds, e.g. `Hazelcast-4.master-sonar`
- It is also causing failures on `EE` side. [Sample on EE](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-EE-4.master-sonar/587/testReport/com.hazelcast.cache/EnterpriseCacheCreationTest/createSingleCache/)
- kindly check | 1.0 | Multiple CacheStatisticsMXBeanImpl as Leftover MBeans failures - - Fails on `Hazelcast-4.master-CorrettoJDK8`
- Fails on [Build #88 (Sep 2, 2020 6:15:00 AM)](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.master-CorrettoJDK8/88/testReport/junit/com.hazelcast.cache/HazelcastServerCachingProviderTest/com_hazelcast_cache_HazelcastServerCachingProviderTest/)
- Error
```
Leftover MBeans are still registered with the platform MBeanServer: [com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=,Cache=cache1]]
```
- Stacktrace
```
java.lang.AssertionError: Leftover MBeans are still registered with the platform MBeanServer: [com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=,Cache=cache1]]
at org.junit.Assert.fail(Assert.java:88)
at com.hazelcast.cache.jsr.JsrTestUtil.assertNoMBeanLeftovers(JsrTestUtil.java:209)
at com.hazelcast.cache.jsr.JsrTestUtil.setup(JsrTestUtil.java:57)
at com.hazelcast.cache.HazelcastServerCachingProviderTest.init(HazelcastServerCachingProviderTest.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at com.hazelcast.test.AbstractHazelcastClassRunner$1.evaluate(AbstractHazelcastClassRunner.java:301)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
....
```
- Similar failure for `HazelcastClientCachingProviderTest `
```
Leftover MBeans are still registered with the platform MBeanServer: [com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=hazelcast,Cache=cache-B], com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=hazelcast,Cache=cache-A], com.hazelcast.cache.impl.CacheStatisticsMXBeanImpl[javax.cache:type=CacheStatistics,CacheManager=,Cache=cache1]]
```
- Causing [CacheCreateUseDestroyTest](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.master-Solaris-OracleJDK8/217/com.hazelcast$hazelcast/testReport/com.hazelcast.cache.impl/CacheCreateUseDestroyTest/testCache_whenDestroyedByCacheManager_OBJECT_/) to fail on `Hazelcast-4.master-Solaris-OracleJDK8`
- Causing [MergePolicyValidatorCachingProviderIntegrationTest](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.master-Solaris-OracleJDK8/217/com.hazelcast$hazelcast/testReport/com.hazelcast.internal.config/MergePolicyValidatorCachingProviderIntegrationTest/com_hazelcast_internal_config_MergePolicyValidatorCachingProviderIntegrationTest/) to fail on `Hazelcast-4.master-Solaris-OracleJDK8`
- It is also observed many other builds. i.e. `Hazelcast-4.master-sonar`
- It is also causing failures on the `EE` side. [Sample on EE](http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-EE-4.master-sonar/587/testReport/com.hazelcast.cache/EnterpriseCacheCreationTest/createSingleCache/)
- kindly check
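The failing setup check (`JsrTestUtil.assertNoMBeanLeftovers` in the stack trace above) boils down to querying the platform MBeanServer for JSR-107 `CacheStatistics` beans that a previous test forgot to unregister. Below is a minimal, hypothetical sketch of such a check — the class and method names are illustrative, not Hazelcast's actual helper, and the `ObjectName` pattern is inferred from the `javax.cache:type=CacheStatistics,...` names in the error message:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class MBeanLeftoverCheck {

    /** Returns all JSR-107 CacheStatistics MBeans still registered with the platform MBeanServer. */
    static Set<ObjectName> findLeftovers() {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            // CacheStatisticsMXBeanImpl instances show up under the javax.cache domain,
            // e.g. javax.cache:type=CacheStatistics,CacheManager=hazelcast,Cache=cache-B
            return server.queryNames(new ObjectName("javax.cache:type=CacheStatistics,*"), null);
        } catch (MalformedObjectNameException e) {
            throw new IllegalStateException(e); // the pattern above is a constant, so this cannot happen
        }
    }

    public static void main(String[] args) {
        Set<ObjectName> leftovers = findLeftovers();
        if (!leftovers.isEmpty()) {
            throw new AssertionError(
                "Leftover MBeans are still registered with the platform MBeanServer: " + leftovers);
        }
        System.out.println("no leftover CacheStatistics MBeans");
    }
}
```

If a query like this returns a non-empty set at the start of a test, a previous `CacheManager` was likely closed without destroying its caches, leaving their statistics beans registered.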
39,168 | 5,221,063,608 | IssuesEvent | 2017-01-26 23:51:45 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | teamcity: failed tests on master: testrace/TestStoreRangeUpReplicate | Robot test-failure | The following tests appear to have failed:
[#130061](https://teamcity.cockroachdb.com/viewLog.html?buildId=130061):
```
--- FAIL: testrace/TestStoreRangeUpReplicate (0.003s)
client_raft_test.go:1032: expected 1 preemptive snapshots, but found 2
```
Please assign, take a look and update the issue accordingly.
9,563 | 8,034,320,047 | IssuesEvent | 2018-07-29 17:15:26 | Khan/KaTeX | https://api.github.com/repos/Khan/KaTeX | closed | Add codecov to package.json as a devDependency | infrastructure | If we do this, we can cache it on circleci and avoid a separate install.
66,562 | 12,800,956,177 | IssuesEvent | 2020-07-02 18:08:14 | iKostanOrg/codewars | https://api.github.com/repos/iKostanOrg/codewars | closed | Unused import pytest | Codacy Code Style bug code quality codewars issues | ### [Codacy](https://app.codacy.com/manual/ikostan/codewars/commit?cid=455197410) detected an issue:
#### Message: `Unused import pytest`
#### Occurred on:
+ **Commit**: 2438c020afdffc8abc21d6c699b7570b6caf1e6a
+ **File**: [kyu_6/binary_to_text_ascii_conversion/test_binary_to_string.py](https://github.com/ikostan/codewars/blob/2438c020afdffc8abc21d6c699b7570b6caf1e6a/kyu_6/binary_to_text_ascii_conversion/test_binary_to_string.py)
+ **LineNum**: [8](https://github.com/ikostan/codewars/blob/2438c020afdffc8abc21d6c699b7570b6caf1e6a/kyu_6/binary_to_text_ascii_conversion/test_binary_to_string.py#L8)
+ **Code**: `import pytest`
#### Currently on:
+ **Commit**: 7a1f5f875db8008e1c79e077ffd7c51110fa1e6f
+ **File**: [kyu_6/binary_to_text_ascii_conversion/test_binary_to_string.py](https://github.com/ikostan/codewars/blob/7a1f5f875db8008e1c79e077ffd7c51110fa1e6f/kyu_6/binary_to_text_ascii_conversion/test_binary_to_string.py)
+ **LineNum**: [8](https://github.com/ikostan/codewars/blob/7a1f5f875db8008e1c79e077ffd7c51110fa1e6f/kyu_6/binary_to_text_ascii_conversion/test_binary_to_string.py#L8)
47,142 | 24,888,193,658 | IssuesEvent | 2022-10-28 09:35:36 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Commit latency is fluctuating and too high | kind/bug scope/broker area/performance severity/high kind/research | **Describe the bug**
Based on the experiments and benchmarks done in https://github.com/camunda-cloud/zeebe/issues/8425#issuecomment-1007227467, we have observed a high commit latency (100-250 ms), which of course affects all other latencies and ultimately impacts throughput as well.
Normally we can reach a commit latency of 25-50 ms (this is especially likely in 1.2.9), but from time to time (more often in 1.3.0) it jumps to 100-250 ms.

We should investigate further what can cause this and why.
Might be related to #8132
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
See https://github.com/camunda-cloud/zeebe/issues/8425 and run a benchmark with 1.2.x or with 1.3 (where it is more likely to occur). Ideally, to decrease the blast radius/scope, use one partition and a lower load (100 process instances per second and 3 workers are enough).
**Expected behavior**
The commit latency is constant at a level of 25-50 ms.
**Environment:**
- Zeebe Version: 1.2.9, 1.3.0, develop
132,737 | 10,761,822,360 | IssuesEvent | 2019-10-31 21:42:38 | mustachematt/JamNotJelly-Note-Taker | https://api.github.com/repos/mustachematt/JamNotJelly-Note-Taker | opened | Add testing tags to html files | enhancement testing | This will make testing easier as testers can select the elements by id to use for testing.
28,127 | 4,365,300,885 | IssuesEvent | 2016-08-03 10:17:35 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | stress: failed test in cockroach/kv/kv.test: TestKVDBTransaction | Robot test-failure | Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/543b71b3103549cb00427276bb5a35cbce6fba30
Stress build found a failed test:
```
=== RUN TestKVDBTransaction
I160615 04:30:52.719062 storage/engine/rocksdb.go:143 opening in memory rocksdb instance
W160615 04:30:52.722284 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160615 04:30:52.723251 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
I160615 04:30:52.723306 server/node.go:401 store store=0:0 ([]=) not bootstrapped
I160615 04:30:52.725381 storage/replica_command.go:1570 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 407212h30m57.224946901s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0 [physicalTime=2016-06-15 04:30:52.725284559 +0000 UTC]
I160615 04:30:52.726064 server/node.go:327 **** cluster {0a14925e-4d5c-4dc6-a629-a787c3378b70} has been created
I160615 04:30:52.726089 server/node.go:328 **** add additional nodes by specifying --join=127.0.0.1:46038
W160615 04:30:52.726098 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
I160615 04:30:52.728522 server/node.go:414 initialized store store=1:1 ([]=): {Capacity:10365558784 Available:8221777920 RangeCount:0}
I160615 04:30:52.728559 server/node.go:302 node ID 1 initialized
I160615 04:30:52.731203 storage/stores.go:287 read 0 node addresses from persistent storage
I160615 04:30:52.731363 server/node.go:535 connecting to gossip network to verify cluster ID...
I160615 04:30:52.731386 server/node.go:556 node connected via gossip and verified as part of cluster {"0a14925e-4d5c-4dc6-a629-a787c3378b70"}
I160615 04:30:52.731431 server/node.go:355 [node=1] Started node with [[]=] engine(s) and attributes []
I160615 04:30:52.731454 server/server.go:417 starting https server at 127.0.0.1:39479
I160615 04:30:52.731464 server/server.go:418 starting grpc/postgres server at 127.0.0.1:46038
I160615 04:30:52.731573 storage/split_queue.go:97 splitting range=1 [/Min-/Max) at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I160615 04:30:52.760433 server/updates.go:156 No previous updates check time.
I160615 04:30:52.764853 storage/replica_command.go:2118 initiating a split of range=1 [/Min-/Max) at key /Table/11
I160615 04:30:52.779985 storage/replica_command.go:1570 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 407212h30m57.329635497s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0 [physicalTime=2016-06-15 04:30:52.779941077 +0000 UTC]
I160615 04:30:52.780153 storage/replica_command.go:2118 initiating a split of range=2 [/Table/11-/Max) at key /Table/12
I160615 04:30:52.786522 storage/replica_command.go:1570 range 3: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 407212h30m57.33615875s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0 [physicalTime=2016-06-15 04:30:52.786475889 +0000 UTC]
I160615 04:30:52.786724 storage/replica_command.go:2118 initiating a split of range=3 [/Table/12-/Max) at key /Table/13
I160615 04:30:52.794078 storage/replica_command.go:1570 range 4: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 407212h30m57.343708031s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0 [physicalTime=2016-06-15 04:30:52.794030724 +0000 UTC]
I160615 04:30:52.794618 storage/replica_command.go:2118 initiating a split of range=4 [/Table/13-/Max) at key /Table/14
--- FAIL: TestKVDBTransaction (5.16s)
leaktest.go:86: Leaked goroutine: goroutine 6776 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/go/src/github.com/cockroachdb/cockroach/storage/store.go:2114 +0xbf1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc820aa63f0, 0xc8209a4d00)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
```
Run Details:
```
0 runs so far, 0 failures, over 5s
0 runs so far, 0 failures, over 10s
0 runs so far, 0 failures, over 15s
5 runs so far, 0 failures, over 20s
13 runs so far, 0 failures, over 25s
15 runs so far, 0 failures, over 30s
16 runs so far, 0 failures, over 35s
20 runs so far, 0 failures, over 40s
28 runs so far, 0 failures, over 45s
31 runs so far, 0 failures, over 50s
32 runs so far, 0 failures, over 55s
34 runs so far, 0 failures, over 1m0s
39 runs so far, 0 failures, over 1m5s
46 runs so far, 0 failures, over 1m10s
48 runs so far, 0 failures, over 1m15s
50 runs so far, 0 failures, over 1m20s
52 runs so far, 0 failures, over 1m25s
57 runs so far, 0 failures, over 1m30s
63 runs so far, 0 failures, over 1m35s
66 runs so far, 0 failures, over 1m40s
68 runs so far, 0 failures, over 1m45s
71 runs so far, 0 failures, over 1m50s
76 runs so far, 0 failures, over 1m55s
79 runs so far, 0 failures, over 2m0s
81 runs completed, 1 failures, over 2m4s
FAIL
```
Please assign, take a look and update the issue accordingly.
311,997 | 26,830,461,634 | IssuesEvent | 2023-02-02 15:40:55 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | opened | [CI] DelayedAllocationIT testDelayedAllocationChangeWithSettingTo100ms failing | >test-failure :Distributed/Allocation | **Build scan:**
https://gradle-enterprise.elastic.co/s/mwmnrfwrtfv3o/tests/:server:internalClusterTest/org.elasticsearch.cluster.routing.DelayedAllocationIT/testDelayedAllocationChangeWithSettingTo100ms
**Reproduction line:**
```
./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.cluster.routing.DelayedAllocationIT.testDelayedAllocationChangeWithSettingTo100ms" -Dtests.seed=EA193B9FA95E8FD2 -Dtests.locale=ru -Dtests.timezone=CET -Druntime.java=17
```
**Applicable branches:**
main
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.cluster.routing.DelayedAllocationIT&tests.test=testDelayedAllocationChangeWithSettingTo100ms
**Failure excerpt:**
```
java.lang.AssertionError: timed out waiting for green state
at __randomizedtesting.SeedInfo.seed([EA193B9FA95E8FD2:E7FEE4C31B8466FA]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.elasticsearch.test.ESIntegTestCase.ensureColor(ESIntegTestCase.java:971)
at org.elasticsearch.test.ESIntegTestCase.ensureGreen(ESIntegTestCase.java:910)
at org.elasticsearch.test.ESIntegTestCase.ensureGreen(ESIntegTestCase.java:899)
at org.elasticsearch.cluster.routing.DelayedAllocationIT.testDelayedAllocationChangeWithSettingTo100ms(DelayedAllocationIT.java:138)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
18,843 | 13,133,944,224 | IssuesEvent | 2020-08-06 22:03:06 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Need better validation for pasture database connection | CLEM interface/infrastructure | We need better validation and error checking in the pasture management and pasture database reader to assist users in getting simulations set up and provide transparency in model outputs.
This is needed to help identify what is wrong with the Richmond grass production database.
59,449 | 6,651,900,083 | IssuesEvent | 2017-09-28 21:56:53 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Investigate flaky parallel/test-net-better-error-messages-port-hostname | CI / flaky test net test | * **Version**: v8.0.0-pre
* **Platform**: centos5-32
* **Subsystem**: test, net
https://ci.nodejs.org/job/node-test-commit-linux/8716/nodes=centos5-32/console
```console
not ok 740 parallel/test-net-better-error-messages-port-hostname
---
duration_ms: 40.342
severity: fail
stack: |-
assert.js:81
throw new assert.AssertionError({
^
AssertionError: 'EAI_AGAIN' === 'ENOTFOUND'
at Socket.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/parallel/test-net-better-error-messages-port-hostname.js:11:10)
at Socket.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/common.js:461:15)
at emitOne (events.js:115:13)
at Socket.emit (events.js:210:7)
at connectErrorNT (net.js:1052:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
...
```
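One common way to de-flake an assertion like the one above — where the resolver on a busy CI host can legitimately return `EAI_AGAIN` (temporary name-resolution failure) instead of `ENOTFOUND` — is to accept either error code. A minimal sketch of that tolerant check, written here in Python with a hypothetical helper name (the actual Node.js test is JavaScript, and this is one possible remedy, not the fix that was ultimately applied):

```python
# Hypothetical helper: accept any expected name-resolution failure code,
# so a transient EAI_AGAIN on a CI host does not fail the test spuriously.
RESOLUTION_FAILURE_CODES = {"ENOTFOUND", "EAI_AGAIN"}

def assert_resolution_failure(code: str) -> None:
    """Raise AssertionError unless `code` is an accepted resolution error."""
    if code not in RESOLUTION_FAILURE_CODES:
        raise AssertionError(f"expected a name-resolution failure, got {code!r}")

assert_resolution_failure("ENOTFOUND")  # the strict expectation
assert_resolution_failure("EAI_AGAIN")  # the flaky-CI case seen in the log above
```

The trade-off is that the test no longer distinguishes a permanent lookup failure from a transient one; whether that looseness is acceptable depends on what the test is actually guarding.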
318,129 | 27,288,832,821 | IssuesEvent | 2023-02-23 15:16:35 | hashicorp/terraform-provider-google | https://api.github.com/repos/hashicorp/terraform-provider-google | closed | Failing test(s): Organization has reached maximum number of scoped policies. | size/s priority/2 test failure crosslinked | ### Affected Resource(s)
* google_access_context_manager_access_policy_iam_policy
Failure rate: 100% since 2022-10-21
Impacted tests:
- TestAccAccessContextManagerAccessPolicyIamPolicy
Nightly builds:
- https://ci-oss.hashicorp.engineering/buildConfiguration/GoogleCloud_ProviderGoogleCloudGoogleProject/348525?buildTab=tests&expandedTest=-6957108147711246159
Message:
```
Error: Error creating AccessPolicy: googleapi: Error 400: Organization has reached maximum number of scoped policies.
```
255,857 | 8,126,559,783 | IssuesEvent | 2018-08-17 03:00:51 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | There are a few unnecessary scripts in svn_bin, they should be removed | Bug Likelihood: 2 - Rare Priority: Normal Severity: 2 - Minor Irritation | I think the following scripts are obsolete. Should we remove them?
visit-bin-dist
visit-windows-dist
visit-windows-dist.old.ccase
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 848
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: There are a few unnecessary scripts in svn_bin, they should be removed
Assigned to: Brad Whitlock
Category:
Target version: 2.4
Author: Eric Brugger
Start: 09/19/2011
Due date:
% Done: 100
Estimated time:
Created: 09/19/2011 06:24 pm
Updated: 10/12/2011 12:20 pm
Likelihood: 2 - Rare
Severity: 2 - Minor Irritation
Found in version: 2.3.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
I think the following scripts are obsolete. Should we remove them?
visit-bin-dist
visit-windows-dist
visit-windows-dist.old.ccase
Comments:
I removed some scripts that were no longer used.
214,945 | 16,619,068,875 | IssuesEvent | 2021-06-02 20:59:16 | blynkkk/blynk_Issues | https://api.github.com/repos/blynkkk/blynk_Issues | closed | Can't upload file in Blynk.Air | bug hardware ready to test web | I cannot upload file.bin in Blynk.Air
I click on Upload firmware file (.bin, .tar, .zip)
Then I select the .bin
I have the message "Uploading file..."
But no firmware appears.
According to bug #79
"This is intended. If you select no devices, a "Live" upgrade is created.
Which means, all of the devices (i.e. even newly added) of selected product will be upgraded"
I have no device selected.
211,691 | 16,454,660,505 | IssuesEvent | 2021-05-21 10:48:19 | gii-is-psg2/PSG2-2021-G3-33 | https://api.github.com/repos/gii-is-psg2/PSG2-2021-G3-33 | closed | 24.A Technical report "SLA del servicio de mantenimiento para la clínica de mascotas PSG2-2021-G3.33" | documentation | The SLA should:
- describe the Petclinic maintenance service offered to potential clients. It must consider both corrective (incidents) and perfective (requests) with at least three different levels of urgency and impact,
- define the service level target using TTO and TTR metrics (as defined in the official iTop documentation),
- provide a guarantee on it taking into account the average values of the lead time and cycle time achieved in previous sprints and its standard deviation,
- You should take into account that, according to the definition provided for TTO in the documentation, it should be computed as the difference between Lead time and Cycle time. Additionally, in order to state a proper guarantee for those metrics, you can apply the empirical rule using the data obtained in previous sprints for the lead time and cycle time.
- and specify the availability periods. For example, from Monday to Saturday from 9:00 to 22:00. Unavailable periods are mandatory in order to create incidents.
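The TTO computation and guarantee derivation described above can be sketched as follows. This is a minimal Python illustration; the per-ticket sprint figures are hypothetical, not taken from the project:

```python
# Sketch: deriving a guaranteed TTO for the SLA from historical sprint data.
# Per the definition cited above, TTO = lead time - cycle time per ticket;
# the guarantee applies the empirical rule (mean + 2*sd covers ~95% of
# tickets under a roughly normal distribution).
from statistics import mean, stdev

# Hypothetical per-ticket metrics from previous sprints, in hours.
lead_times = [30.0, 42.0, 36.0, 48.0, 40.0, 34.0]
cycle_times = [20.0, 30.0, 24.0, 33.0, 28.0, 23.0]

# TTO (time to own) per ticket.
ttos = [lead - cycle for lead, cycle in zip(lead_times, cycle_times)]

tto_mean = mean(ttos)
tto_sd = stdev(ttos)  # sample standard deviation

# Empirical rule: mean + 2 standard deviations, a defensible SLA target.
guaranteed_tto = tto_mean + 2 * tto_sd
print(f"mean TTO = {tto_mean:.1f} h, sd = {tto_sd:.1f} h, "
      f"guarantee (mean + 2*sd) = {guaranteed_tto:.1f} h")
# → mean TTO = 12.0 h, sd = 1.7 h, guarantee (mean + 2*sd) = 15.3 h
```

The same calculation applied directly to the cycle-time series would yield the TTR guarantee.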
This document must contain at least the following items:
a. A screenshot of the SLA generated in iTop, and its association to the maintenance service as created in the iTop tool.
b. The definition of the SLA document, including the terms of service and the service level targets.
c. A justification and rationale of the value provided for the guarantee in the SLA.
This document must be attached to the corresponding Service in iTop.
119,118 | 10,025,447,390 | IssuesEvent | 2019-07-17 02:18:04 | microsoft/azure-tools-for-java | https://api.github.com/repos/microsoft/azure-tools-for-java | closed | The password cannot be filled in automatically if pushing image from Azure Explorer | IntelliJ Internal Test | The issue is not a 2019.2 regression, it also repro in V3.23.0
### Environment:
OS: Windows10/Mac/Ubuntu
IntelliJ version: IU19.2/19.1.3 , IC19.2/19.1.3
Toolkits version: Azure java toolkit for IntelliJ: 3.24.0
### Repro steps:
1. Right-click a container registry (make sure the ACR admin user is enabled)
2. Select Push Image
### Result:
The password cannot be filled in automatically

If you manually select the ACR from the Container Registry list.

Even if you cancel the process, repeat steps 1, 2 and the password can be filled in automatically now.
### Expect:
The password can be filled in automatically if pushing the image from Azure Explorer

227,983 | 18,145,028,060 | IssuesEvent | 2021-09-25 09:11:08 | trinodb/trino | https://api.github.com/repos/trinodb/trino | closed | MongoServer is flaky | bug test | ```
2021-06-29T10:05:09.6150786Z [ERROR] init(io.trino.plugin.mongodb.TestMongo3LatestConnectorSmokeTest) Time elapsed: 17.784 s <<< FAILURE!
2021-06-29T10:05:09.6159298Z org.testcontainers.containers.ContainerLaunchException: Container startup failed
2021-06-29T10:05:09.6169119Z at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:330)
2021-06-29T10:05:09.6177498Z at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
2021-06-29T10:05:09.6184357Z at io.trino.plugin.mongodb.MongoServer.<init>(MongoServer.java:38)
2021-06-29T10:05:09.6195752Z at io.trino.plugin.mongodb.TestMongo3LatestConnectorSmokeTest.createQueryRunner(TestMongo3LatestConnectorSmokeTest.java:28)
2021-06-29T10:05:09.6205801Z at io.trino.testing.AbstractTestQueryFramework.init(AbstractTestQueryFramework.java:91)
2021-06-29T10:05:09.6214060Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-06-29T10:05:09.6223513Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-06-29T10:05:09.6233801Z at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-06-29T10:05:09.6240872Z at java.base/java.lang.reflect.Method.invoke(Method.java:566)
2021-06-29T10:05:09.6249380Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
2021-06-29T10:05:09.6257207Z at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:515)
2021-06-29T10:05:09.6264343Z at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:217)
2021-06-29T10:05:09.6271306Z at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:144)
2021-06-29T10:05:09.6276656Z at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:169)
2021-06-29T10:05:09.6281235Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
2021-06-29T10:05:09.6285650Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
2021-06-29T10:05:09.6290226Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
2021-06-29T10:05:09.6293379Z at java.base/java.lang.Thread.run(Thread.java:829)
2021-06-29T10:05:09.6297251Z Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
2021-06-29T10:05:09.6301983Z at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
2021-06-29T10:05:09.6307001Z at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:323)
2021-06-29T10:05:09.6310290Z ... 17 more
2021-06-29T10:05:09.6314684Z Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
2021-06-29T10:05:09.6320618Z at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:497)
2021-06-29T10:05:09.6326347Z at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:325)
2021-06-29T10:05:09.6331863Z at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
2021-06-29T10:05:09.6335215Z ... 18 more
2021-06-29T10:05:09.6339207Z Caused by: java.lang.ArrayIndexOutOfBoundsException: size=101 offset=0 byteCount=228
2021-06-29T10:05:09.6343893Z at org.testcontainers.shaded.okio.Util.checkOffsetAndCount(Util.java:30)
2021-06-29T10:05:09.6348866Z at org.testcontainers.shaded.okio.AsyncTimeout$1.write(AsyncTimeout.java:162)
2021-06-29T10:05:09.6354672Z at org.testcontainers.shaded.okio.RealBufferedSink.emitCompleteSegments(RealBufferedSink.java:179)
2021-06-29T10:05:09.6360826Z at org.testcontainers.shaded.okio.RealBufferedSink.writeUtf8(RealBufferedSink.java:54)
2021-06-29T10:05:09.6368954Z at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec.writeRequest(Http1ExchangeCodec.java:196)
2021-06-29T10:05:09.6437108Z at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec.writeRequestHeaders(Http1ExchangeCodec.java:141)
2021-06-29T10:05:09.6441776Z at org.testcontainers.shaded.okhttp3.internal.connection.Exchange.writeRequestHeaders(Exchange.java:72)
2021-06-29T10:05:09.6449398Z at org.testcontainers.shaded.okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:43)
2021-06-29T10:05:09.6453469Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6457377Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6461610Z at org.testcontainers.shaded.com.github.dockerjava.okhttp.HijackingInterceptor.intercept(HijackingInterceptor.java:20)
2021-06-29T10:05:09.6465861Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6469946Z at org.testcontainers.shaded.okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:43)
2021-06-29T10:05:09.6474011Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6478115Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6481913Z at org.testcontainers.shaded.okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
2021-06-29T10:05:09.6485678Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6489575Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6493556Z at org.testcontainers.shaded.okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
2021-06-29T10:05:09.6497458Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6501854Z at org.testcontainers.shaded.okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
2021-06-29T10:05:09.6506287Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6510199Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6513920Z at org.testcontainers.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
2021-06-29T10:05:09.6516634Z at org.testcontainers.shaded.okhttp3.RealCall.execute(RealCall.java:81)
2021-06-29T10:05:09.6521610Z at org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.<init>(OkDockerHttpClient.java:256)
2021-06-29T10:05:09.6525629Z at org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient.execute(OkDockerHttpClient.java:230)
2021-06-29T10:05:09.6530108Z at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:228)
2021-06-29T10:05:09.6534744Z at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.post(DefaultInvocationBuilder.java:124)
2021-06-29T10:05:09.6539180Z at org.testcontainers.shaded.com.github.dockerjava.core.exec.ExecCreateCmdExec.execute(ExecCreateCmdExec.java:30)
2021-06-29T10:05:09.6543561Z at org.testcontainers.shaded.com.github.dockerjava.core.exec.ExecCreateCmdExec.execute(ExecCreateCmdExec.java:13)
2021-06-29T10:05:09.6548099Z at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
2021-06-29T10:05:09.6552464Z at org.testcontainers.shaded.com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
2021-06-29T10:05:09.6556635Z at org.testcontainers.shaded.com.github.dockerjava.core.command.ExecCreateCmdImpl.exec(ExecCreateCmdImpl.java:172)
2021-06-29T10:05:09.6561127Z at org.testcontainers.shaded.com.github.dockerjava.core.command.ExecCreateCmdImpl.exec(ExecCreateCmdImpl.java:12)
2021-06-29T10:05:09.6571419Z at org.testcontainers.containers.ExecInContainerPattern.execInContainer(ExecInContainerPattern.java:71)
2021-06-29T10:05:09.6574520Z at org.testcontainers.containers.ContainerState.execInContainer(ContainerState.java:235)
2021-06-29T10:05:09.6581359Z at org.testcontainers.containers.ContainerState.execInContainer(ContainerState.java:226)
2021-06-29T10:05:09.6584119Z at org.testcontainers.containers.MongoDBContainer.initReplicaSet(MongoDBContainer.java:126)
2021-06-29T10:05:09.6587059Z at org.testcontainers.containers.MongoDBContainer.containerIsStarted(MongoDBContainer.java:80)
2021-06-29T10:05:09.6590145Z at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:659)
2021-06-29T10:05:09.6592906Z at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:476)
2021-06-29T10:05:09.6594319Z ... 20 more
2021-06-29T10:05:09.6594562Z
2021-06-29T10:05:10.0559511Z [INFO]
2021-06-29T10:05:10.0562330Z [INFO] Results:
2021-06-29T10:05:10.0563905Z [INFO]
2021-06-29T10:05:10.0565437Z [ERROR] Failures:
2021-06-29T10:05:10.0571856Z [ERROR] TestMongo3LatestConnectorSmokeTest>AbstractTestQueryFramework.init:91->createQueryRunner:28 » ContainerLaunch
2021-06-29T10:05:10.0575585Z [INFO]
2021-06-29T10:05:10.0578609Z [ERROR] Tests run: 187, Failures: 1, Errors: 0, Skipped: 50
``` | 1.0 | MongoServer is flaky - ```
2021-06-29T10:05:09.6150786Z [ERROR] init(io.trino.plugin.mongodb.TestMongo3LatestConnectorSmokeTest) Time elapsed: 17.784 s <<< FAILURE!
2021-06-29T10:05:09.6159298Z org.testcontainers.containers.ContainerLaunchException: Container startup failed
2021-06-29T10:05:09.6169119Z at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:330)
2021-06-29T10:05:09.6177498Z at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
2021-06-29T10:05:09.6184357Z at io.trino.plugin.mongodb.MongoServer.<init>(MongoServer.java:38)
2021-06-29T10:05:09.6195752Z at io.trino.plugin.mongodb.TestMongo3LatestConnectorSmokeTest.createQueryRunner(TestMongo3LatestConnectorSmokeTest.java:28)
2021-06-29T10:05:09.6205801Z at io.trino.testing.AbstractTestQueryFramework.init(AbstractTestQueryFramework.java:91)
2021-06-29T10:05:09.6214060Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-06-29T10:05:09.6223513Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-06-29T10:05:09.6233801Z at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-06-29T10:05:09.6240872Z at java.base/java.lang.reflect.Method.invoke(Method.java:566)
2021-06-29T10:05:09.6249380Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
2021-06-29T10:05:09.6257207Z at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:515)
2021-06-29T10:05:09.6264343Z at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:217)
2021-06-29T10:05:09.6271306Z at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:144)
2021-06-29T10:05:09.6276656Z at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:169)
2021-06-29T10:05:09.6281235Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:108)
2021-06-29T10:05:09.6285650Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
2021-06-29T10:05:09.6290226Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
2021-06-29T10:05:09.6293379Z at java.base/java.lang.Thread.run(Thread.java:829)
2021-06-29T10:05:09.6297251Z Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
2021-06-29T10:05:09.6301983Z at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
2021-06-29T10:05:09.6307001Z at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:323)
2021-06-29T10:05:09.6310290Z ... 17 more
2021-06-29T10:05:09.6314684Z Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
2021-06-29T10:05:09.6320618Z at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:497)
2021-06-29T10:05:09.6326347Z at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:325)
2021-06-29T10:05:09.6331863Z at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
2021-06-29T10:05:09.6335215Z ... 18 more
2021-06-29T10:05:09.6339207Z Caused by: java.lang.ArrayIndexOutOfBoundsException: size=101 offset=0 byteCount=228
2021-06-29T10:05:09.6343893Z at org.testcontainers.shaded.okio.Util.checkOffsetAndCount(Util.java:30)
2021-06-29T10:05:09.6348866Z at org.testcontainers.shaded.okio.AsyncTimeout$1.write(AsyncTimeout.java:162)
2021-06-29T10:05:09.6354672Z at org.testcontainers.shaded.okio.RealBufferedSink.emitCompleteSegments(RealBufferedSink.java:179)
2021-06-29T10:05:09.6360826Z at org.testcontainers.shaded.okio.RealBufferedSink.writeUtf8(RealBufferedSink.java:54)
2021-06-29T10:05:09.6368954Z at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec.writeRequest(Http1ExchangeCodec.java:196)
2021-06-29T10:05:09.6437108Z at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec.writeRequestHeaders(Http1ExchangeCodec.java:141)
2021-06-29T10:05:09.6441776Z at org.testcontainers.shaded.okhttp3.internal.connection.Exchange.writeRequestHeaders(Exchange.java:72)
2021-06-29T10:05:09.6449398Z at org.testcontainers.shaded.okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:43)
2021-06-29T10:05:09.6453469Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6457377Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6461610Z at org.testcontainers.shaded.com.github.dockerjava.okhttp.HijackingInterceptor.intercept(HijackingInterceptor.java:20)
2021-06-29T10:05:09.6465861Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6469946Z at org.testcontainers.shaded.okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:43)
2021-06-29T10:05:09.6474011Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6478115Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6481913Z at org.testcontainers.shaded.okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
2021-06-29T10:05:09.6485678Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6489575Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6493556Z at org.testcontainers.shaded.okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
2021-06-29T10:05:09.6497458Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6501854Z at org.testcontainers.shaded.okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
2021-06-29T10:05:09.6506287Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
2021-06-29T10:05:09.6510199Z at org.testcontainers.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
2021-06-29T10:05:09.6513920Z at org.testcontainers.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
2021-06-29T10:05:09.6516634Z at org.testcontainers.shaded.okhttp3.RealCall.execute(RealCall.java:81)
2021-06-29T10:05:09.6521610Z at org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient$OkResponse.<init>(OkDockerHttpClient.java:256)
2021-06-29T10:05:09.6525629Z at org.testcontainers.shaded.com.github.dockerjava.okhttp.OkDockerHttpClient.execute(OkDockerHttpClient.java:230)
2021-06-29T10:05:09.6530108Z at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:228)
2021-06-29T10:05:09.6534744Z at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.post(DefaultInvocationBuilder.java:124)
2021-06-29T10:05:09.6539180Z at org.testcontainers.shaded.com.github.dockerjava.core.exec.ExecCreateCmdExec.execute(ExecCreateCmdExec.java:30)
2021-06-29T10:05:09.6543561Z at org.testcontainers.shaded.com.github.dockerjava.core.exec.ExecCreateCmdExec.execute(ExecCreateCmdExec.java:13)
2021-06-29T10:05:09.6548099Z at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
2021-06-29T10:05:09.6552464Z at org.testcontainers.shaded.com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
2021-06-29T10:05:09.6556635Z at org.testcontainers.shaded.com.github.dockerjava.core.command.ExecCreateCmdImpl.exec(ExecCreateCmdImpl.java:172)
2021-06-29T10:05:09.6561127Z at org.testcontainers.shaded.com.github.dockerjava.core.command.ExecCreateCmdImpl.exec(ExecCreateCmdImpl.java:12)
2021-06-29T10:05:09.6571419Z at org.testcontainers.containers.ExecInContainerPattern.execInContainer(ExecInContainerPattern.java:71)
2021-06-29T10:05:09.6574520Z at org.testcontainers.containers.ContainerState.execInContainer(ContainerState.java:235)
2021-06-29T10:05:09.6581359Z at org.testcontainers.containers.ContainerState.execInContainer(ContainerState.java:226)
2021-06-29T10:05:09.6584119Z at org.testcontainers.containers.MongoDBContainer.initReplicaSet(MongoDBContainer.java:126)
2021-06-29T10:05:09.6587059Z at org.testcontainers.containers.MongoDBContainer.containerIsStarted(MongoDBContainer.java:80)
2021-06-29T10:05:09.6590145Z at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:659)
2021-06-29T10:05:09.6592906Z at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:476)
2021-06-29T10:05:09.6594319Z ... 20 more
2021-06-29T10:05:09.6594562Z
2021-06-29T10:05:10.0559511Z [INFO]
2021-06-29T10:05:10.0562330Z [INFO] Results:
2021-06-29T10:05:10.0563905Z [INFO]
2021-06-29T10:05:10.0565437Z [ERROR] Failures:
2021-06-29T10:05:10.0571856Z [ERROR] TestMongo3LatestConnectorSmokeTest>AbstractTestQueryFramework.init:91->createQueryRunner:28 » ContainerLaunch
2021-06-29T10:05:10.0575585Z [INFO]
2021-06-29T10:05:10.0578609Z [ERROR] Tests run: 187, Failures: 1, Errors: 0, Skipped: 50
``` | test | mongoserver is flaky init io trino plugin mongodb time elapsed s failure org testcontainers containers containerlaunchexception container startup failed at org testcontainers containers genericcontainer dostart genericcontainer java at org testcontainers containers genericcontainer start genericcontainer java at io trino plugin mongodb mongoserver mongoserver java at io trino plugin mongodb createqueryrunner java at io trino testing abstracttestqueryframework init abstracttestqueryframework java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokeconfigurationmethod invoker java at org testng internal invoker invokeconfigurations invoker java at org testng internal invoker invokeconfigurations invoker java at org testng internal testmethodworker invokebeforeclassmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by org rnorth ducttape retrycountexceededexception retry limit hit with exception at org rnorth ducttape unreliables unreliables retryuntilsuccess unreliables java at org testcontainers containers genericcontainer dostart genericcontainer java more caused by org testcontainers containers containerlaunchexception could not create start container at org testcontainers containers genericcontainer trystart genericcontainer java at org testcontainers containers genericcontainer lambda 
dostart genericcontainer java at org rnorth ducttape unreliables unreliables retryuntilsuccess unreliables java more caused by java lang arrayindexoutofboundsexception size offset bytecount at org testcontainers shaded okio util checkoffsetandcount util java at org testcontainers shaded okio asynctimeout write asynctimeout java at org testcontainers shaded okio realbufferedsink emitcompletesegments realbufferedsink java at org testcontainers shaded okio realbufferedsink realbufferedsink java at org testcontainers shaded internal writerequest java at org testcontainers shaded internal writerequestheaders java at org testcontainers shaded internal connection exchange writerequestheaders exchange java at org testcontainers shaded internal http callserverinterceptor intercept callserverinterceptor java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded com github dockerjava okhttp hijackinginterceptor intercept hijackinginterceptor java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal connection connectinterceptor intercept connectinterceptor java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal cache cacheinterceptor intercept cacheinterceptor java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal http bridgeinterceptor intercept bridgeinterceptor java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java 
at org testcontainers shaded internal http retryandfollowupinterceptor intercept retryandfollowupinterceptor java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded internal http realinterceptorchain proceed realinterceptorchain java at org testcontainers shaded realcall getresponsewithinterceptorchain realcall java at org testcontainers shaded realcall execute realcall java at org testcontainers shaded com github dockerjava okhttp okdockerhttpclient okresponse okdockerhttpclient java at org testcontainers shaded com github dockerjava okhttp okdockerhttpclient execute okdockerhttpclient java at org testcontainers shaded com github dockerjava core defaultinvocationbuilder execute defaultinvocationbuilder java at org testcontainers shaded com github dockerjava core defaultinvocationbuilder post defaultinvocationbuilder java at org testcontainers shaded com github dockerjava core exec execcreatecmdexec execute execcreatecmdexec java at org testcontainers shaded com github dockerjava core exec execcreatecmdexec execute execcreatecmdexec java at org testcontainers shaded com github dockerjava core exec abstrsyncdockercmdexec exec abstrsyncdockercmdexec java at org testcontainers shaded com github dockerjava core command abstrdockercmd exec abstrdockercmd java at org testcontainers shaded com github dockerjava core command execcreatecmdimpl exec execcreatecmdimpl java at org testcontainers shaded com github dockerjava core command execcreatecmdimpl exec execcreatecmdimpl java at org testcontainers containers execincontainerpattern execincontainer execincontainerpattern java at org testcontainers containers containerstate execincontainer containerstate java at org testcontainers containers containerstate execincontainer containerstate java at org testcontainers containers mongodbcontainer initreplicaset mongodbcontainer java at org testcontainers containers mongodbcontainer containerisstarted 
mongodbcontainer java at org testcontainers containers genericcontainer containerisstarted genericcontainer java at org testcontainers containers genericcontainer trystart genericcontainer java more results failures abstracttestqueryframework init createqueryrunner » containerlaunch tests run failures errors skipped | 1 |
498,979 | 14,436,774,678 | IssuesEvent | 2020-12-07 10:35:16 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.youtube.com - video or audio doesn't play | browser-firefox engine-gecko ml-needsdiagnosis-false priority-critical status-needsinfo-oana | <!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62809 -->
**URL**: https://www.youtube.com/watch?v=9FpCeAPI8ks
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: There is no audio
**Steps to Reproduce**:
I have sound for the vast majority of youtube videos.
This happens like one in a thousand, or even less.
I am on Windows 7, and it doesn't happen on another computer I have on windows 10.
It does happen when I'm logged off, or when I'm in private navigation.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/6ab09a2a-7711-4854-b63b-ca906439bcce.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.youtube.com - video or audio doesn't play - <!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62809 -->
**URL**: https://www.youtube.com/watch?v=9FpCeAPI8ks
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Video or audio doesn't play
**Description**: There is no audio
**Steps to Reproduce**:
I have sound for the vast majority of youtube videos.
This happens like one in a thousand, or even less.
I am on Windows 7, and it doesn't happen on another computer I have on windows 10.
It does happen when I'm logged off, or when I'm in private navigation.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/11/6ab09a2a-7711-4854-b63b-ca906439bcce.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | video or audio doesn t play url browser version firefox operating system windows tested another browser yes chrome problem type video or audio doesn t play description there is no audio steps to reproduce i have sound for the vast majority of youtube videos this happens like one in a thousand or even less i am on windows and it doesn t happen on another computer i have on windows it does happen when i m logged off or when i m in private navigation view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
106,507 | 9,161,227,202 | IssuesEvent | 2019-03-01 09:54:08 | ComputationalRadiationPhysics/libSplash | https://api.github.com/repos/ComputationalRadiationPhysics/libSplash | opened | CMake: ZLIB Deps | affects latest release bug install | We currently (1.7.0) link `PRIVATE` in `CMakeLists.txt` to `ZLIB::ZLIB`, although we do not include direct zlib headers.
This dependency should nowadays (with a recent CMake version) properly be pulled in by HDF5.
Currently, we request zlib again and forget to search it in `SplashConfig.cmake`, which leads to the downstream issue of injecting a `ZLIB::ZLIB` in `Splash::Splash` yet not looking for a target. Therefore, users have to perform
```cmake
find_package(ZLIB REQUIRED)
# just to get the ZLIB::ZLIB target
find_package(Splash REQUIRED)
target_link_libraries(myTarget PRIVATE Splash::Splash)
``` | 1.0 | CMake: ZLIB Deps - We currently (1.7.0) link `PRIVATE` in `CMakeLists.txt` to `ZLIB::ZLIB`, although we do not include direct zlib headers.
This dependency should nowadays (with a recent CMake version) properly be pulled in by HDF5.
Currently, we request zlib again and forget to search it in `SplashConfig.cmake`, which leads to the downstream issue of injecting a `ZLIB::ZLIB` in `Splash::Splash` yet not looking for a target. Therefore, users have to perform
```cmake
find_package(ZLIB REQUIRED)
# just to get the ZLIB::ZLIB target
find_package(Splash REQUIRED)
target_link_libraries(myTarget PRIVATE Splash::Splash)
``` | test | cmake zlib deps we currently link private in cmakelists txt to zlib zlib although we do not include direct zlib headers this dependency should nowadays recent cmake version be properly be pulled by currently we request zlib again and forget to search it in splashconfig cmake which leads to the downstream issue of injecting a zlib zlib in splash splash yet not looking for a target therefore users have to perform cmake find package zlib required just to get the zlib zlib target find package splash required target link libraries mytarget private splash splash | 1 |
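The downstream workaround shown above exists because the installed config file never re-finds its dependencies. A minimal sketch of the usual fix — hypothetical contents, not the actual `SplashConfig.cmake` — using CMake's standard `find_dependency` macro:

```cmake
# Hypothetical SplashConfig.cmake excerpt: re-find dependencies so the
# ZLIB::ZLIB (and HDF5) imported targets exist before Splash::Splash is
# loaded, sparing consumers the extra find_package(ZLIB) shown above.
include(CMakeFindDependencyMacro)
find_dependency(ZLIB)   # defines ZLIB::ZLIB
find_dependency(HDF5)   # assumption: HDF5 is also re-found here

include("${CMAKE_CURRENT_LIST_DIR}/SplashTargets.cmake")
```

With that in place, a consumer's `target_link_libraries(myTarget PRIVATE Splash::Splash)` works without the explicit ZLIB lookup; alternatively, keeping the ZLIB link strictly private (never injected into the exported interface) removes the need to export it at all.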
51,623 | 6,187,429,392 | IssuesEvent | 2017-07-04 07:29:11 | Kademi/kademi-dev | https://api.github.com/repos/Kademi/kademi-dev | closed | Specify checkout to pull ORG address | enhancement Ready to Test - Dev | Update or create new checkout component
- User adds product to checkout
- The checkout needs to pull the address from the user's org
- The address needs to be locked
This is a requirement from an administration workflow where all orders need to be sent to the user's work address.
| 1.0 | Specify checkout to pull ORG address - Update or create new checkout component
- User adds product to checkout
- The checkout needs to pull the address from the user's org
- The address needs to be locked
This is a requirement from an administration workflow where all orders need to be sent to the user's work address.
| test | specify checkout to pull org address update or create new checkout component user adds product to checkout the checkout needs to pull the address from the users org the address needs to be locked this is a requirement from an administration workflow where all orders need to be sent to the users work address | 1 |
3,277 | 4,184,096,214 | IssuesEvent | 2016-06-23 04:46:03 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | [Test Failure] Individual TAO Test Failure (+1 more) in windows_vsi_p2_prtest on PR #1059 | Area-Infrastructure Bug Contributor Pain Resolution-Duplicate | PR: [#1059](https://github.com/dotnet/roslyn-internal/pull/1059) *Remove stale options.* by @CyrusNajmabadi
Failure: http://dotnet-ci.cloudapp.net/job/Private/job/dotnet_roslyn-internal/job/master/job/windows_vsi_p2_prtest/141/
**6 Test Failures:**
CSharpFixAllOccurrences.FixAllOccurrences in Document - Apply Fix
CSharpFixAllOccurrences.FixAllOccurrences in Document - Undo
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Apply Fix
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Verify error list after applying fix
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Verify document contents after applying fix
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Remove this qualification
**Issue 1: Individual TAO Test Failure**
Failing integration tests:
CSharpFixAllOccurrences.xml
**Issue 2: Build Script**
A script in the build failed to execute correctly causing a build failure. | 1.0 | [Test Failure] Individual TAO Test Failure (+1 more) in windows_vsi_p2_prtest on PR #1059 - PR: [#1059](https://github.com/dotnet/roslyn-internal/pull/1059) *Remove stale options.* by @CyrusNajmabadi
Failure: http://dotnet-ci.cloudapp.net/job/Private/job/dotnet_roslyn-internal/job/master/job/windows_vsi_p2_prtest/141/
**6 Test Failures:**
CSharpFixAllOccurrences.FixAllOccurrences in Document - Apply Fix
CSharpFixAllOccurrences.FixAllOccurrences in Document - Undo
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Apply Fix
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Verify error list after applying fix
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Verify document contents after applying fix
CSharpFixAllOccurrences.FixAllOccurrences in Solution - Remove this qualification
**Issue 1: Individual TAO Test Failure**
Failing integration tests:
CSharpFixAllOccurrences.xml
**Issue 2: Build Script**
A script in the build failed to execute correctly causing a build failure. | non_test | individual tao test failure more in windows vsi prtest on pr pr remove stale options by cyrusnajmabadi failure test failures csharpfixalloccurrences fixalloccurrences in document apply fix csharpfixalloccurrences fixalloccurrences in document undo csharpfixalloccurrences fixalloccurrences in solution apply fix csharpfixalloccurrences fixalloccurrences in solution verify error list after applying fix csharpfixalloccurrences fixalloccurrences in solution verify document contents after applying fix csharpfixalloccurrences fixalloccurrences in solution remove this qualification issue individual tao test failure failing integration tests csharpfixalloccurrences xml issue build script a script in the build failed to execute correctly causing a build failure | 0 |
298,546 | 25,836,179,932 | IssuesEvent | 2022-12-12 19:51:07 | momentohq/client-sdk-ruby | https://api.github.com/repos/momentohq/client-sdk-ruby | opened | Add fuzzy dependency testing | tests :white_check_mark: :x: | Currently, we're only testing our gem with the latest versions of dependent gems (or what is in Gemfile.lock). This is unrealistic. Users of the gem can and will use any version of our dependencies which satisfies the version requirement.
Simulate this in testing. See https://stackoverflow.com/questions/74776586/can-i-install-random-but-valid-dependencies-of-a-gem for how we might automate this.
- [ ] Add a test to the github workflow which uses fuzzy dependencies.
Related:
* #112 will find the real minimum versions manually, but we'd like to continue to test this and keep it up to date.
* #111 will remove the Gemfile.lock which is keeping dev and CI testing in dependency lock step. | 1.0 | Add fuzzy dependency testing - Currently, we're only testing our gem with the latest versions of dependent gems (or what is in Gemfile.lock). This is unrealistic. Users of the gem can and will use any version of our dependencies which satisfies the version requirement.
Simulate this in testing. See https://stackoverflow.com/questions/74776586/can-i-install-random-but-valid-dependencies-of-a-gem for how we might automate this.
- [ ] Add a test to the github workflow which uses fuzzy dependencies.
Related:
* #112 will find the real minimum versions manually, but we'd like to continue to test this and keep it up to date.
* #111 will remove the Gemfile.lock which is keeping dev and CI testing in dependency lock step. | test | add fuzzy dependency testing currently we re only testing our gem with the latest versions of dependent gems or what is in gemfile lock this is unrealistic users of the gem can and will use any version of our dependencies which satisfies the version requirement simulate this in testing see for how we might automate this add a test to the github workflow which uses fuzzy dependencies related will find the real minimum versions manually but we d like to continue to test this and keep it up to date will remove the gemfile lock which is keeping dev and ci testing in dependency lock step | 1 |
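The linked Stack Overflow idea — resolve each dependency to a random version that still satisfies its gemspec requirement — can be sketched in plain Ruby. The gem name and version list below are made up for illustration; a real workflow step would pull versions from `gem list --remote --all` and rewrite the Gemfile before running the suite:

```ruby
require "rubygems" # Gem::Requirement / Gem::Version ship with Ruby

# Pretend this came from `gem list --remote --all grpc` (made-up versions).
AVAILABLE = {
  "grpc" => %w[1.40.0 1.45.2 1.50.0 1.51.1],
}

# Pick a random version of `name` that satisfies the gemspec requirement,
# e.g. ">= 1.45, < 2.0" -> one of 1.45.2 / 1.50.0 / 1.51.1 at random.
def random_satisfying_version(name, requirement_string, rng: Random.new)
  req = Gem::Requirement.new(*requirement_string.split(",").map(&:strip))
  candidates = AVAILABLE.fetch(name)
                        .select { |v| req.satisfied_by?(Gem::Version.new(v)) }
  raise "nothing satisfies #{requirement_string}" if candidates.empty?
  candidates.sample(random: rng)
end

puts "testing against grpc #{random_satisfying_version('grpc', '>= 1.45, < 2.0')}"
```

Seeding the RNG from the CI run id would make any failing combination reproducible.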
305,659 | 23,125,084,206 | IssuesEvent | 2022-07-28 04:14:49 | exastro-suite/it-automation-docs | https://api.github.com/repos/exastro-suite/it-automation-docs | closed | [docs] User manual - Management console: revise and expand the description of operation deletion management | documentation | In the operation deletion management menu of the management console,
deletion is not carried out when the logical-deletion day count is set equal to the physical-deletion day count, so a note was added that the logical-deletion day must be earlier than the physical-deletion day.
[Revised section]
● User manual - Management console (P.53-P.54)
| 1.0 | [docs] User manual - Management console: revise and expand the description of operation deletion management - In the operation deletion management menu of the management console,
deletion is not carried out when the logical-deletion day count is set equal to the physical-deletion day count, so a note was added that the logical-deletion day must be earlier than the physical-deletion day.
[Revised section]
● User manual - Management console (P.53-P.54)
| non_test | user manual management console revise and expand the description of operation deletion management in the operation deletion management menu of the management console deletion is not carried out when the logical deletion day count is set equal to the physical deletion day count so a note was added that the logical deletion day must be earlier than the physical deletion day revised section user manual management console p p | 0
38,999 | 5,207,155,668 | IssuesEvent | 2017-01-24 22:37:58 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | closed | Test SCSS maps | testplan-item | Testing #1758
- [x] Any platform @isidorn
- [x] Any platform @mjbvz
Test the SCSS map syntax. E.g. https://webdesign.tutsplus.com/tutorials/an-introduction-to-sass-maps-usage-and-examples--cms-22184
| 1.0 | Test SCSS maps - Testing #1758
- [x] Any platform @isidorn
- [x] Any platform @mjbvz
Test the SCSS map syntax. E.g. https://webdesign.tutsplus.com/tutorials/an-introduction-to-sass-maps-usage-and-examples--cms-22184
| test | test scss maps testing any platform isidorn any platform mjbvz test the scss map syntax e g | 1 |
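For reference, the map syntax under test covers a map literal, `map-get` lookup, and `@each` iteration; the names and values below are invented for the example:

```scss
// Illustrative SCSS map usage (made-up breakpoints).
$breakpoints: (small: 480px, medium: 768px, large: 1024px);

.sidebar {
  width: map-get($breakpoints, medium); // -> 768px
}

@each $name, $width in $breakpoints {
  .container-#{$name} { max-width: $width; }
}
```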
280,447 | 24,306,006,658 | IssuesEvent | 2022-09-29 17:28:50 | bankidz/bankidz-server | https://api.github.com/repos/bankidz/bankidz-server | closed | [REFACTOR] Revise the family lookup API response | For: API Type: Refactor Type: Test | # 🤖 Feature overview
- For rendering-related reasons, and per the front end's request, the response now returns an empty array as before even when no family exists.
<!-- Briefly describe the feature assigned to this issue in one line -->
### ✅ Implement TODO
<!-- Itemize the TODOs assigned to this issue (all must be checked at PR time) -->
- [x] Branch handling in the mapper for when a family exists and when it does not
- [x] Confirmed that the test code works correctly
### 📚 Remarks
<!-- Note any remarks from developing the feature -->
| 1.0 | [REFACTOR] Revise the family lookup API response - # 🤖 Feature overview
- For rendering-related reasons, and per the front end's request, the response now returns an empty array as before even when no family exists.
<!-- Briefly describe the feature assigned to this issue in one line -->
### ✅ Implement TODO
<!-- Itemize the TODOs assigned to this issue (all must be checked at PR time) -->
- [x] Branch handling in the mapper for when a family exists and when it does not
- [x] Confirmed that the test code works correctly
### 📚 Remarks
<!-- Note any remarks from developing the feature -->
| test | revise the family lookup api response 🤖 feature overview for rendering related reasons and per the front end s request the response now returns an empty array as before even when no family exists ✅ implement todo branch handling in the mapper for when a family exists and when it does not confirmed that the test code works correctly 📚 remarks | 1
80,615 | 10,194,605,162 | IssuesEvent | 2019-08-12 16:02:57 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | closed | Univariate feature selection example confusing / wrong | Documentation Sprint good first issue help wanted | I find the [univariate feature selection](http://scikit-learn.org/dev/auto_examples/plot_feature_selection.html) example confusing.
It claims that the SVM assigns small weights to the significant features but to me it looks like it assigns very large weights.
Also: the y-axis is not labeled and it is not immediately clear to me what the meaning is (maybe because I'm not accustomed to seeing p-value plots).
Closer examination shows: `scores_` actually stores the f-scores, not the p-values.
So the legend is wrong and the text misleading.
Also: maybe document `scores_` and `p_values` attributes in the univariate feature selectors.
| 1.0 | Univariate feature selection example confusing / wrong - I find the [univariate feature selection](http://scikit-learn.org/dev/auto_examples/plot_feature_selection.html) example confusing.
It claims that the SVM assigns small weights to the significant features but to me it looks like it assigns very large weights.
Also: the y-axis is not labeled and it is not immediately clear to me what the meaning is (maybe because I'm not accustomed to seeing p-value plots).
Closer examination shows: `scores_` actually stores the f-scores, not the p-values.
So the legend is wrong and the text misleading.
Also: maybe document `scores_` and `p_values` attributes in the univariate feature selectors.
| non_test | univariate feature selection example confusing wrong i find the example confusing it claims that the svm assigns small weights to the significant features but to me it looks like it assigns very large weights also the y axis is not labeled and it is not immediately clear to me what the meaning is maybe because i m not accustomed to seeing p value plots closer examination shows scores actually stores the f scores not the p values so the legend is wrong and the text misleading also maybe document scores and p values attributes in the univariate feature selectors | 0 |
142,275 | 11,462,314,954 | IssuesEvent | 2020-02-07 13:54:36 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | [Failing Test] [sig-node] RuntimeClass | kind/failing-test lifecycle/rotten sig/node | <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
ci-kubernetes-node-kubelet-orphans
**Which test(s) are failing**:
[sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [cos-stable1]
**Since when has it been failing**:
https://github.com/kubernetes/kubernetes/compare/28e800245...bfd8610dd?
**Testgrid link**:
https://testgrid.k8s.io/sig-node-kubelet#node-kubelet-orphans
**Reason for failure**:
```
[91m[1m• Failure [6.088 seconds][0m
I0904 21:46:07.394] [sig-node] RuntimeClass
I0904 21:46:07.394] [90m/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:40[0m
I0904 21:46:07.394] [91m[1mshould reject a Pod requesting a deleted RuntimeClass [It][0m
I0904 21:46:07.394] [90m/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:65[0m
I0904 21:46:07.394]
I0904 21:46:07.394] [91mError creating Pod
I0904 21:46:07.394] Unexpected error:
I0904 21:46:07.394] <*errors.StatusError | 0xc0005b9720>: {
I0904 21:46:07.395] ErrStatus: {
I0904 21:46:07.395] TypeMeta: {Kind: "", APIVersion: ""},
I0904 21:46:07.395] ListMeta: {
I0904 21:46:07.395] SelfLink: "",
I0904 21:46:07.395] ResourceVersion: "",
I0904 21:46:07.395] Continue: "",
I0904 21:46:07.395] RemainingItemCount: nil,
I0904 21:46:07.395] },
I0904 21:46:07.395] Status: "Failure",
I0904 21:46:07.395] Message: "pods \"test-runtimeclass-runtimeclass-4022-delete-me-\" is forbidden: pod rejected: RuntimeClass \"runtimeclass-4022-delete-me\" not found",
I0904 21:46:07.396] Reason: "Forbidden",
I0904 21:46:07.396] Details: {
I0904 21:46:07.396] Name: "test-runtimeclass-runtimeclass-4022-delete-me-",
I0904 21:46:07.396] Group: "",
I0904 21:46:07.396] Kind: "pods",
I0904 21:46:07.396] UID: "",
I0904 21:46:07.396] Causes: nil,
I0904 21:46:07.396] RetryAfterSeconds: 0,
I0904 21:46:07.396] },
I0904 21:46:07.396] Code: 403,
I0904 21:46:07.396] },
I0904 21:46:07.396] }
I0904 21:46:07.397] pods "test-runtimeclass-runtimeclass-4022-delete-me-" is forbidden: pod rejected: RuntimeClass "runtimeclass-4022-delete-me" not found
I0904 21:46:07.397] occurred[0m
```
**Anything else we need to know**:
| 1.0 | [Failing Test] [sig-node] RuntimeClass - <!-- Please only use this template for submitting reports about failing tests in Kubernetes CI jobs -->
**Which jobs are failing**:
ci-kubernetes-node-kubelet-orphans
**Which test(s) are failing**:
[sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [cos-stable1]
**Since when has it been failing**:
https://github.com/kubernetes/kubernetes/compare/28e800245...bfd8610dd?
**Testgrid link**:
https://testgrid.k8s.io/sig-node-kubelet#node-kubelet-orphans
**Reason for failure**:
```
[91m[1m• Failure [6.088 seconds][0m
I0904 21:46:07.394] [sig-node] RuntimeClass
I0904 21:46:07.394] [90m/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:40[0m
I0904 21:46:07.394] [91m[1mshould reject a Pod requesting a deleted RuntimeClass [It][0m
I0904 21:46:07.394] [90m/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:65[0m
I0904 21:46:07.394]
I0904 21:46:07.394] [91mError creating Pod
I0904 21:46:07.394] Unexpected error:
I0904 21:46:07.394] <*errors.StatusError | 0xc0005b9720>: {
I0904 21:46:07.395] ErrStatus: {
I0904 21:46:07.395] TypeMeta: {Kind: "", APIVersion: ""},
I0904 21:46:07.395] ListMeta: {
I0904 21:46:07.395] SelfLink: "",
I0904 21:46:07.395] ResourceVersion: "",
I0904 21:46:07.395] Continue: "",
I0904 21:46:07.395] RemainingItemCount: nil,
I0904 21:46:07.395] },
I0904 21:46:07.395] Status: "Failure",
I0904 21:46:07.395] Message: "pods \"test-runtimeclass-runtimeclass-4022-delete-me-\" is forbidden: pod rejected: RuntimeClass \"runtimeclass-4022-delete-me\" not found",
I0904 21:46:07.396] Reason: "Forbidden",
I0904 21:46:07.396] Details: {
I0904 21:46:07.396] Name: "test-runtimeclass-runtimeclass-4022-delete-me-",
I0904 21:46:07.396] Group: "",
I0904 21:46:07.396] Kind: "pods",
I0904 21:46:07.396] UID: "",
I0904 21:46:07.396] Causes: nil,
I0904 21:46:07.396] RetryAfterSeconds: 0,
I0904 21:46:07.396] },
I0904 21:46:07.396] Code: 403,
I0904 21:46:07.396] },
I0904 21:46:07.396] }
I0904 21:46:07.397] pods "test-runtimeclass-runtimeclass-4022-delete-me-" is forbidden: pod rejected: RuntimeClass "runtimeclass-4022-delete-me" not found
I0904 21:46:07.397] occurred[0m
```
**Anything else we need to know**:
| test | runtimeclass which jobs are failing ci kubernetes node kubelet orphans which test s are failing runtimeclass should reject a pod requesting a deleted runtimeclass since when has it been failing testgrid link reason for failure runtimeclass go src io kubernetes output local go src io kubernetes test common runtimeclass go go src io kubernetes output local go src io kubernetes test common runtimeclass go creating pod unexpected error errstatus typemeta kind apiversion listmeta selflink resourceversion continue remainingitemcount nil status failure message pods test runtimeclass runtimeclass delete me is forbidden pod rejected runtimeclass runtimeclass delete me not found reason forbidden details name test runtimeclass runtimeclass delete me group kind pods uid causes nil retryafterseconds code pods test runtimeclass runtimeclass delete me is forbidden pod rejected runtimeclass runtimeclass delete me not found occurred anything else we need to know | 1 |
312,143 | 26,840,566,887 | IssuesEvent | 2023-02-02 23:55:27 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | `XRInterface.trigger_haptic_pulse()` throws error `Parameter "tracker" is null.` | bug needs testing topic:xr | ### Godot version
4.0 alpha 11
### System information
Windows 10
### Issue description
`XRInterface.trigger_haptic_pulse()` throws error `Parameter "tracker" is null.` any time it is called, no matter what argument is passed.
### Steps to reproduce
1. call `XRInterface.trigger_haptic_pulse()`
2. profit
### Minimal reproduction project
[minimum vr.zip](https://github.com/godotengine/godot/files/9023730/minimum.vr.zip)
| 1.0 | `XRInterface.trigger_haptic_pulse()` throws error `Parameter "tracker" is null.` - ### Godot version
4.0 alpha 11
### System information
Windows 10
### Issue description
`XRInterface.trigger_haptic_pulse()` throws error `Parameter "tracker" is null.` any time it is called, no matter what argument is passed.
### Steps to reproduce
1. call `XRInterface.trigger_haptic_pulse()`
2. profit
### Minimal reproduction project
[minimum vr.zip](https://github.com/godotengine/godot/files/9023730/minimum.vr.zip)
| test | xrinterface trigger haptic pulse throws error parameter tracker is null godot version alpha system information windows issue description xrinterface trigger haptic pulse throws error parameter tracker is null any time it is called no matter what argument is passed steps to reproduce call xrinterface trigger haptic pulse profit minimal reproduction project | 1 |
277,162 | 24,053,830,147 | IssuesEvent | 2022-09-16 14:59:51 | devonfw/ide | https://api.github.com/repos/devonfw/ide | closed | Renaming integration-tests and adding badges | enhancement test Team_IDE | The current nightly tests have the name "CI Build ...". But they are not really CI Builds and should be renamed to e.g. "Integration Test <OS>". Furthermore, the status of the tests should be displayed in the README.asciidoc. As this is the case for License, Maven Central and CI Build.
Acceptance criteria:
- Rename the current `CI Build <OS>` to e.g. `Integration Test <OS>`
- Add the status of these tests to [README.asciidoc](https://github.com/devonfw/ide/blob/master/README.asciidoc)
| 1.0 | Renaming integration-tests and adding badges - The current nightly tests have the name "CI Build ...". But they are not really CI Builds and should be renamed to e.g. "Integration Test <OS>". Furthermore, the status of the tests should be displayed in the README.asciidoc. As this is the case for License, Maven Central and CI Build.
Acceptance criteria:
- Rename the current `CI Build <OS>` to e.g. `Integration Test <OS>`
- Add the status of these tests to [README.asciidoc](https://github.com/devonfw/ide/blob/master/README.asciidoc)
| test | renaming integration tests and adding badges the current nightly tests have the name ci build but they are not really ci builds and should be renamed to e g integration test furthermore the status of the tests should be displayed in the readme asciidoc as this is the case for license maven central and ci build acceptence criteria rename the current ci build in e g integration test add the status of these tests to | 1 |
89,753 | 8,213,313,512 | IssuesEvent | 2018-09-04 19:08:07 | WebliniaERP/webliniaerp-web | https://api.github.com/repos/WebliniaERP/webliniaerp-web | closed | Hide products registered as raw material in the PDV | 1-problema 2-prioridade alta 3- EL SHADDAI GOURMET test | In the PDV (point of sale), products with the Raw Material (Input) attribute must not appear in the PDV for sale | 1.0 | Hide products registered as raw material in the PDV - In the PDV (point of sale), products with the Raw Material (Input) attribute must not appear in the PDV for sale | test | hide products registered as raw material in the pdv in the pdv point of sale products with the raw material input attribute must not appear in the pdv for sale | 1
115,216 | 9,784,306,585 | IssuesEvent | 2019-06-08 18:01:27 | X-Plane/XPlane2Blender | https://api.github.com/repos/X-Plane/XPlane2Blender | opened | Updater doesn't handle multiple scenes well | Bug New Unit Test priority urgent | The updater, like most of XPlane2Blender, was not designed with multiple scenes in mind
For this example we'll use "plane.blend" which has two Scenes "Main" and "Lights - Other".
On load it checks and alters the version history of the current scene context. That means that **only** as long as plane.blend's scene at save time is "Main" will the "Main" content be updated. For years this happened. plane.blend's "Lights - Other" content will never have been updated! The user would have had to have been doing it!
The first time "Lights - Other" is saved as the default scene, the updater will run on it. But "Main" won't automatically receive updates!
Not only is this unfortunate, this is buggy! The updater shifts properties around. If "Lights - Other" isn't updated from 3.3.0 to 3.4.0 the animations could get messed up.
We need the updater to match the version history across every scene every load, and the updater to be written more generically in terms of updating bpy.data, not bpy.context. | 1.0 | Updater doesn't handle multiple scenes well - The updater, like most of XPlane2Blender, was not designed with multiple scenes in mind
For this example we'll use "plane.blend" which has two Scenes "Main" and "Lights - Other".
On load it checks and alters the version history of the current scene context. That means that **only** as long as plane.blend's scene at save time is "Main" will the "Main" content be updated. For years this happened. plane.blend's "Lights - Other" content will never have been updated! The user would have had to have been doing it!
The first time "Lights - Other" is saved as the default scene, the updater will run on it. But "Main" won't automatically receive updates!
Not only is this unfortunate, this is buggy! The updater shifts properties around. If "Lights - Other" isn't updated from 3.3.0 to 3.4.0 the animations could get messed up.
We need the updater to match the version history across every scene every load, and the updater to be written more generically in terms of updating bpy.data, not bpy.context. | test | updater doesn t handle multiple scenes well the updater like most of was not designed with multiple scenes in mind for this example we ll use plane blend which has two scenes main and lights other on load it checks and alters the version history of the current scene context that means that only as long as plane blend s scene at save time is main will the main content be updated for years this happen plane blend s light other content will never have been updated the user would have had to have been doing it the first time lights other is saved as the default scene the updater will run on it but main won t automatically receiving updates not only is this unfortunate this is buggy the updater shifts properties around if lights other isn t updated from to the animations could get messed up we need the updater to match the version history across every scene every load and the updater to be written more generically in terms of updating bpy data not bpy context | 1 |
202,719 | 15,296,947,462 | IssuesEvent | 2021-02-24 07:41:20 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Add Accessibility Test For Creating New Policy in ILM | Feature:ILM Project:Accessibility Team:Elasticsearch UI test_xpack_functional v8.0.0 | # Summary
This issue will track the work to add an accessibility test for ILM to cover the Create New Policy form/wizard. | 1.0 | Add Accessibility Test For Creating New Policy in ILM - # Summary
This issue will track the work to add an accessibility test for ILM to cover the Create New Policy form/wizard. | test | add accessibility test for creating new policy in ilm summary this issue will track the work to add an accessibility test for ilm to cover the create new policy form wizard | 1 |
48,475 | 5,960,304,140 | IssuesEvent | 2017-05-29 13:43:59 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | Trine 2 crashes Steam when in offline mode and game saves not accessible while offline | Need Retest reviewed Steam client | The same issue as https://github.com/ValveSoftware/steam-for-linux/issues/701 , which is now closed.
I'm using Linux Mint 13 64-bit, the latest Steam and Trine 2 updates, etc.
When launching Trine 2 in offline mode, the save games appear to be missing. Starting a new game doesn't work because the game complains about no disk space available, although I have 100s of Gigabytes free.
The game eventually crashes.
Starting the game while online fixes the issue, and the saves appear and everything works.
| 1.0 | Trine 2 crashes Steam when in offline mode and game saves not accessible while offline - The same issue as https://github.com/ValveSoftware/steam-for-linux/issues/701 , which is now closed.
I'm using Linux Mint 13 64-bit, the latest Steam and Trine 2 updates, etc.
When launching Trine 2 in offline mode, the save games appear to be missing. Starting a new game doesn't work because the game complains about no disk space available, although I have 100s of Gigabytes free.
The game eventually crashes.
Starting the game while online fixes the issue, and the saves appear and everything works.
| test | trine crashes steam when in offline mode and game saves not accessible while offline the same issue as which is now closed i m using linux mint bit the latest steam and trine updates etc when launching trine in offline mode the save games appear to be missing starting a new game doesn t work because the game complains about no disk space available although i have of gigabytes free the game eventually crashes starting the game while online fixes the issue and the saves appear and everything works | 1 |
481,734 | 13,890,801,090 | IssuesEvent | 2020-10-19 09:46:03 | WordPress/twentytwentyone | https://api.github.com/repos/WordPress/twentytwentyone | closed | Group block not clearing float, Search block out of center | High priority [Component] Default blocks [Type] Bug | The Group block is not clearing floats. This can be seen by the fact that the floated image is hanging out the bottom of the block.
The Search block is not centered like everything else. Its button styling looks different from the Search widget I have in the footer.
**Screenshots**
This is showing the front end, with a Group block containing an aligned right image, then a search block, and then the Recent Comments block (which is the only one set to alignwide).

| 1.0 | Group block not clearing float, Search block out of center - The Group block is not clearing floats. This can be seen by the fact that the floated image is hanging out the bottom of the block.
The Search block is not centered like everything else. Its button styling looks different from the Search widget I have in the footer.
**Screenshots**
This is showing the front end, with a Group block containing an aligned right image, then a search block, and then the Recent Comments block (which is the only one set to alignwide).

| non_test | group block not clearing float search block out of center the group block is not clearing floats this can be seen by the fact that the floated image is hanging out the bottom of the block the search block is not centered like everything else its button styling looks different from the search widget i have in the footer screenshots this is showing the front end with a group block containing an aligned right image then a search block and then the recent comments block which is the only one set to alignwide | 0 |
4,918 | 2,755,652,713 | IssuesEvent | 2015-04-26 20:49:04 | joshbaird/MoistureSensingSprinkler | https://api.github.com/repos/joshbaird/MoistureSensingSprinkler | closed | Fix and test I2c sensor gathering script | Needs Prototype Built Needs to Be Tested | The script should grab all sensors and search for only those that are marked with the types the script can handle. In this case I2C type. | 1.0 | Fix and test I2c sensor gathering script - The script should grab all sensors and search for only those that are marked with the types the script can handle. In this case I2C type. | test | fix and test sensor gathering script the script should grab all sensors and search for only those that are marked with the types the script can handle in this case type | 1
94,068 | 8,468,447,327 | IssuesEvent | 2018-10-23 19:46:40 | ray-project/ray | https://api.github.com/repos/ray-project/ray | closed | Test failure in test_global_state.py. | test failure | I've been seeing the following error a lot in Travis. The relevant test is
```
python -m pytest -v -s test/test_global_state.py
```
Example error
```
[1m[31m_____________________ TestAvailableResources.test_no_tasks _____________________[0m
self = <ray.test.test_global_state.TestAvailableResources object at 0x1198a7f50>
[1m def test_no_tasks(self):[0m
[1m cluster_resources = ray.global_state.cluster_resources()[0m
[1m available_resources = ray.global_state.available_resources()[0m
[1m> assert cluster_resources == available_resources[0m
[1m[31mE AssertionError: assert {'CPU': 1.0, 'GPU': 0.0} == {'CPU': 0.0, 'GPU': 0.0}[0m
[1m[31mE Omitting 1 identical items, use -vv to show[0m
[1m[31mE Differing items:[0m
[1m[31mE {'CPU': 1.0} != {'CPU': 0.0}[0m
[1m[31mE Full diff:[0m
[1m[31mE - {'CPU': 1.0, 'GPU': 0.0}[0m
[1m[31mE ? ^[0m
[1m[31mE + {'CPU': 0.0, 'GPU': 0.0}[0m
[1m[31mE ? ^[0m
[1m[31mpython/ray/test/test_global_state.py[0m:30: AssertionError
```
cc @pschafhalter | 1.0 | Test failure in test_global_state.py. - I've been seeing the following error a lot in Travis. The relevant test is
```
python -m pytest -v -s test/test_global_state.py
```
Example error
```
[1m[31m_____________________ TestAvailableResources.test_no_tasks _____________________[0m
self = <ray.test.test_global_state.TestAvailableResources object at 0x1198a7f50>
[1m def test_no_tasks(self):[0m
[1m cluster_resources = ray.global_state.cluster_resources()[0m
[1m available_resources = ray.global_state.available_resources()[0m
[1m> assert cluster_resources == available_resources[0m
[1m[31mE AssertionError: assert {'CPU': 1.0, 'GPU': 0.0} == {'CPU': 0.0, 'GPU': 0.0}[0m
[1m[31mE Omitting 1 identical items, use -vv to show[0m
[1m[31mE Differing items:[0m
[1m[31mE {'CPU': 1.0} != {'CPU': 0.0}[0m
[1m[31mE Full diff:[0m
[1m[31mE - {'CPU': 1.0, 'GPU': 0.0}[0m
[1m[31mE ? ^[0m
[1m[31mE + {'CPU': 0.0, 'GPU': 0.0}[0m
[1m[31mE ? ^[0m
[1m[31mpython/ray/test/test_global_state.py[0m:30: AssertionError
```
cc @pschafhalter | test | test failure in test global state py i ve been seeing the following error a lot in travis the relevant test is python m pytest v s test test global state py example error testavailableresources test no tasks self def test no tasks self cluster resources ray global state cluster resources available resources ray global state available resources assert cluster resources available resources assertionerror assert cpu gpu cpu gpu omitting identical items use vv to show differing items cpu cpu full diff cpu gpu cpu gpu ray test test global state py assertionerror cc pschafhalter | 1 |
274,128 | 23,811,601,774 | IssuesEvent | 2022-09-04 20:57:31 | AhmedNSidd/gamers-social-manager-telegram-bot | https://api.github.com/repos/AhmedNSidd/gamers-social-manager-telegram-bot | closed | fix variable name when status user has been added | V1.0 Release add-status-user user-testing | Telegram is saying "None has been added to the /status command by JeSuisAhmedN". Why is it saying none? Just simply added a status user to a group using /add_status_user | 1.0 | fix variable name when status user has been added - Telegram is saying "None has been added to the /status command by JeSuisAhmedN". Why is it saying none? Just simply added a status user to a group using /add_status_user | test | fix variable name when status user has been added telegram is saying none has been added to the status command by jesuisahmedn why is it saying none just simply added a status user to a group using add status user | 1 |
153,276 | 13,502,408,112 | IssuesEvent | 2020-09-13 08:20:11 | ShapeLayer/dalmoori-font | https://api.github.com/repos/ShapeLayer/dalmoori-font | closed | Resolved: demo page assets were not loading properly | bug documentation | It happened because too many GitHub Pages sites were open at once
Temporarily resolved by pointing to dal.ho9.me, but please remove the `CNAME` file when making the PR | 1.0 | Resolved: demo page assets were not loading properly - It happened because too many GitHub Pages sites were open at once
Temporarily resolved by pointing to dal.ho9.me, but please remove the `CNAME` file when making the PR | non_test | resolved demo page assets were not loading properly it happened because too many github pages sites were open at once temporarily resolved by pointing to dal me but please remove the cname file when making the pr | 0
251,488 | 21,484,707,136 | IssuesEvent | 2022-04-26 21:37:32 | hashgraph/guardian | https://api.github.com/repos/hashgraph/guardian | closed | [API - Automation] - GET /accounts/root-authorities | Automation Testing | Add /accounts/get_accounts_root_authorities to the Accounts test-suite | 1.0 | [API - Automation] - GET /accounts/root-authorities - Add /accounts/get_accounts_root_authorities to the Accounts test-suite | test | get accounts root authorities add accounts get accounts root authorities to the accounts test suite | 1 |
41,057 | 10,279,024,021 | IssuesEvent | 2019-08-25 19:18:40 | ase379/gpprofile2017 | https://api.github.com/repos/ase379/gpprofile2017 | closed | Huge prf might crash under 64 Bit | defect | Using a huge prf file (> 2GB size) sometimes crashes. After a restart, the load works. | 1.0 | Huge prf might crash under 64 Bit - Using a huge prf file (> 2GB size) sometimes crashes. After a restart, the load works. | non_test | huge prf might crash under bit using a huge prf file size sometimes crashes after a restart the load works | 0 |
64,320 | 7,785,531,681 | IssuesEvent | 2018-06-06 16:05:58 | nextcloud/server | https://api.github.com/repos/nextcloud/server | closed | Show app details in apps management via right sidebar | 1. to develop design enhancement feature: apps management | Idea to improve the apps handling:
Add a right hand sidebar to the apps menu. This could have a similar structure as the right hand sidebar for the files app. If you click on a app it could show a screenshot at the top. Followed by the description and allow users to rate and comment on the app.
**Edit:** And it could show the changelog #6684/#6984
Cc @jancborchardt @nextcloud/designers | 1.0 | Show app details in apps management via right sidebar - Idea to improve the apps handling:
Add a right hand sidebar to the apps menu. This could have a similar structure as the right hand sidebar for the files app. If you click on a app it could show a screenshot at the top. Followed by the description and allow users to rate and comment on the app.
**Edit:** And it could show the changelog #6684/#6984
Cc @jancborchardt @nextcloud/designers | non_test | show app details in apps management via right sidebar idea to improve the apps handling add a right hand sidebar to the apps menu this could have a similar structure as the right hand sidebar for the files app if you click on a app it could show a screenshot at the top followed by the description and allow users to rate and comment on the app edit and it could show the changelog cc jancborchardt nextcloud designers | 0 |
71,554 | 7,247,043,121 | IssuesEvent | 2018-02-15 00:17:54 | rlguy/Blender-FLIP-Fluids-Beta | https://api.github.com/repos/rlguy/Blender-FLIP-Fluids-Beta | closed | [Test Case Results] EverSimo | Test Case | I run the cascading water test scene here at 100 and 200 resolution and it finished successfully.
I'm running the same scene at 300 resolution now to see how it goes.
My processor is only a few seconds behind the original processor that appears in the tooltip from the resolution button.
[test_case_cascading_water_feature100.zip](https://github.com/rlguy/Blender-FLIP-Fluids-Beta/files/1726052/test_case_cascading_water_feature100.zip)
[test_case_cascading_water_feature200.zip](https://github.com/rlguy/Blender-FLIP-Fluids-Beta/files/1726053/test_case_cascading_water_feature200.zip)
| 1.0 | [Test Case Results] EverSimo - I run the cascading water test scene here at 100 and 200 resolution and it finished successfully.
I'm running the same scene at 300 resolution now to see how it goes.
My processor is only a few seconds behind the original processor that appears in the tooltip from the resolution button.
[test_case_cascading_water_feature100.zip](https://github.com/rlguy/Blender-FLIP-Fluids-Beta/files/1726052/test_case_cascading_water_feature100.zip)
[test_case_cascading_water_feature200.zip](https://github.com/rlguy/Blender-FLIP-Fluids-Beta/files/1726053/test_case_cascading_water_feature200.zip)
| test | eversimo i run the cascading water test scene here at and resolution and it finished successfully i m running the same scene at resolution now to see how it goes my processor is only a few seconds behind the original processor that appears in the tooltip from the resolution button | 1 |
237,688 | 19,666,021,397 | IssuesEvent | 2022-01-10 22:38:31 | PepeCesarG/decide-full-alcazaba | https://api.github.com/repos/PepeCesarG/decide-full-alcazaba | closed | Integration of internationalization in Heroku | visualizer voting authentication base booth census gateway mixnet home feature testing deploy | - Context: user interface and functional increment in all available modules. Priority 2
- Problem to address: internationalization must be integrated in Heroku
- How to address it: gettext will be configured and compilemessages will be run in the Heroku console | 1.0 | Integration of internationalization in Heroku - - Context: user interface and functional increment in all available modules. Priority 2
- Problem to address: internationalization must be integrated in Heroku
- How to address it: gettext will be configured and compilemessages will be run in the Heroku console | test | integration of internationalization in heroku context user interface and functional increment in all available modules priority problem to address internationalization must be integrated in heroku how to address it gettext will be configured and compilemessages will be run in the heroku console | 1
202,492 | 15,286,771,511 | IssuesEvent | 2021-02-23 15:03:34 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: interleavedpartitioned failed | C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker | [(roachtest).interleavedpartitioned failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2661563&tab=buildLog) on [release-20.1@90f78268f3b5b08ba838ac3ad164821d2f5a5362](https://github.com/cockroachdb/cockroach/commits/90f78268f3b5b08ba838ac3ad164821d2f5a5362):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/interleavedpartitioned/run_1
cluster.go:2198,interleavedpartitioned.go:67,interleavedpartitioned.go:125,test_runner.go:749: output in run_074559.735_n5_workload_init_interleavedpartitioned: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2661563-1612941348-48-n12cpu4-geo:5 -- ./workload init interleavedpartitioned --east-zone-name europe-west2-b --west-zone-name us-west1-b --central-zone-name us-east1-b --drop --locality east --init-sessions 1000 returned: exit status 20
(1) attached stack trace
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2276
| main.(*cluster).Run
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2196
| main.registerInterleaved.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/interleavedpartitioned.go:67
| main.registerInterleaved.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/interleavedpartitioned.go:125
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:749
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) 2 safe details enclosed
Wraps: (3) output in run_074559.735_n5_workload_init_interleavedpartitioned
Wraps: (4) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2661563-1612941348-48-n12cpu4-geo:5 -- ./workload init interleavedpartitioned --east-zone-name europe-west2-b --west-zone-name us-west1-b --central-zone-name us-east1-b --drop --locality east --init-sessions 1000 returned
| stderr:
| ./workload: error while loading shared libraries: libncurses.so.6: cannot open shared object file: No such file or directory
| Error: COMMAND_PROBLEM: exit status 127
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 5. Command with error:
| | ```
| | ./workload init interleavedpartitioned --east-zone-name europe-west2-b --west-zone-name us-west1-b --central-zone-name us-east1-b --drop --locality east --init-sessions 1000
| | ```
| Wraps: (3) exit status 127
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/interleavedpartitioned](https://teamcity.cockroachdb.com/viewLog.html?buildId=2661563&tab=artifacts#/interleavedpartitioned)
Related:
- #60197 roachtest: interleavedpartitioned failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-60149](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-60149) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #60086 roachtest: interleavedpartitioned failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #59929 roachtest: interleavedpartitioned failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ainterleavedpartitioned.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: interleavedpartitioned failed - [(roachtest).interleavedpartitioned failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2661563&tab=buildLog) on [release-20.1@90f78268f3b5b08ba838ac3ad164821d2f5a5362](https://github.com/cockroachdb/cockroach/commits/90f78268f3b5b08ba838ac3ad164821d2f5a5362):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/interleavedpartitioned/run_1
cluster.go:2198,interleavedpartitioned.go:67,interleavedpartitioned.go:125,test_runner.go:749: output in run_074559.735_n5_workload_init_interleavedpartitioned: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2661563-1612941348-48-n12cpu4-geo:5 -- ./workload init interleavedpartitioned --east-zone-name europe-west2-b --west-zone-name us-west1-b --central-zone-name us-east1-b --drop --locality east --init-sessions 1000 returned: exit status 20
(1) attached stack trace
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2276
| main.(*cluster).Run
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2196
| main.registerInterleaved.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/interleavedpartitioned.go:67
| main.registerInterleaved.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/interleavedpartitioned.go:125
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:749
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) 2 safe details enclosed
Wraps: (3) output in run_074559.735_n5_workload_init_interleavedpartitioned
Wraps: (4) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2661563-1612941348-48-n12cpu4-geo:5 -- ./workload init interleavedpartitioned --east-zone-name europe-west2-b --west-zone-name us-west1-b --central-zone-name us-east1-b --drop --locality east --init-sessions 1000 returned
| stderr:
| ./workload: error while loading shared libraries: libncurses.so.6: cannot open shared object file: No such file or directory
| Error: COMMAND_PROBLEM: exit status 127
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 5. Command with error:
| | ```
| | ./workload init interleavedpartitioned --east-zone-name europe-west2-b --west-zone-name us-west1-b --central-zone-name us-east1-b --drop --locality east --init-sessions 1000
| | ```
| Wraps: (3) exit status 127
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/interleavedpartitioned](https://teamcity.cockroachdb.com/viewLog.html?buildId=2661563&tab=artifacts#/interleavedpartitioned)
Related:
- #60197 roachtest: interleavedpartitioned failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-60149](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-60149) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #60086 roachtest: interleavedpartitioned failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #59929 roachtest: interleavedpartitioned failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ainterleavedpartitioned.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest interleavedpartitioned failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts interleavedpartitioned run cluster go interleavedpartitioned go interleavedpartitioned go test runner go output in run workload init interleavedpartitioned home agent work go src github com cockroachdb cockroach bin roachprod run teamcity geo workload init interleavedpartitioned east zone name europe b west zone name us b central zone name us b drop locality east init sessions returned exit status attached stack trace main cluster rune home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster run home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerinterleaved home agent work go src github com cockroachdb cockroach pkg cmd roachtest interleavedpartitioned go main registerinterleaved home agent work go src github com cockroachdb cockroach pkg cmd roachtest interleavedpartitioned go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go runtime goexit usr local go src runtime asm s wraps safe details enclosed wraps output in run workload init interleavedpartitioned wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity geo workload init interleavedpartitioned east zone name europe b west zone name us b central zone name us b drop locality east init sessions returned stderr workload error while loading shared libraries libncurses so cannot open shared object file no such file or directory error command problem exit status command problem wraps node command with error workload init interleavedpartitioned east zone name europe b west zone name us b central zone name us b drop locality east init sessions wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error 
types withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails exec exiterror more artifacts related roachtest interleavedpartitioned failed roachtest interleavedpartitioned failed roachtest interleavedpartitioned failed powered by | 1 |
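The root cause in the log above is not a test-logic failure: the `workload` binary aborts at startup with `exit status 127` because the dynamic linker cannot find `libncurses.so.6` on the host. As a hedged illustration (the helper name is invented, and which libraries resolve is host-dependent), a pre-flight check for this failure mode can probe `dlopen()` directly:

```python
import ctypes

def can_load(soname: str) -> bool:
    """Return True if dlopen() can resolve the given shared object."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        # Same failure mode as the log above:
        # "cannot open shared object file: No such file or directory"
        return False

print(can_load("libncurses.so.6"))        # host-dependent
print(can_load("libdoesnotexist.so.99"))  # missing library
```

Running a probe like this on the hosts before `workload init` would turn the late `exit status 127` into an immediate, named diagnosis.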
206,268 | 7,111,049,835 | IssuesEvent | 2018-01-17 12:58:15 | roboticslab-uc3m/openrave-yarp-plugins | https://api.github.com/repos/roboticslab-uc3m/openrave-yarp-plugins | closed | Move YarpOpenraveBase one level (directory) up | priority: low | From https://github.com/roboticslab-uc3m/openrave-yarp-plugins/issues/61#issuecomment-354334328
> Should [YarpOpenraveBase](https://github.com/roboticslab-uc3m/openrave-yarp-plugins/tree/9e29a31d79a808f78a966bdd27cc7a8b326b3bb8/libraries/YarpPlugins/YarpOpenraveBase) live in the `YarpPlugins/` directory in spite of not being a YARP plugin (i.e. inheritance from `DeviceDriver` and such)? | 1.0 | Move YarpOpenraveBase one level (directory) up - From https://github.com/roboticslab-uc3m/openrave-yarp-plugins/issues/61#issuecomment-354334328
> Should [YarpOpenraveBase](https://github.com/roboticslab-uc3m/openrave-yarp-plugins/tree/9e29a31d79a808f78a966bdd27cc7a8b326b3bb8/libraries/YarpPlugins/YarpOpenraveBase) live in the `YarpPlugins/` directory in spite of not being a YARP plugin (i.e. inheritance from `DeviceDriver` and such)? | non_test | move yarpopenravebase one level directory up from should live in the yarpplugins directory in spite of not being a yarp plugin i e inheritance from devicedriver and such | 0 |
9,534 | 8,029,810,169 | IssuesEvent | 2018-07-27 17:19:57 | Microsoft/visualfsharp | https://api.github.com/repos/Microsoft/visualfsharp | closed | Publish new FSharp.Core patch version to include XML doc bug fix | Area-Infrastructure Area-Library | The current FSharp.Core version includes a bug in the XML docs that was fixed here: https://github.com/Microsoft/visualfsharp/commit/ee2edd15a55b3a62866004377b054eff41bb05fa
This prevents us from onboarding onto the docs.microsoft.com/dotnet/api API reference. | 1.0 | Publish new FSharp.Core patch version to include XML doc bug fix - The current FSharp.Core version includes a bug in the XML docs that was fixed here: https://github.com/Microsoft/visualfsharp/commit/ee2edd15a55b3a62866004377b054eff41bb05fa
This prevents us from onboarding onto the docs.microsoft.com/dotnet/api API reference. | non_test | publish new fsharp core patch version to include xml doc bug fix the current fsharp core version includes a bug in the xml docs that was fixed here this prevents us from onboarding onto the docs microsoft com dotnet api api reference | 0 |
724,382 | 24,927,948,703 | IssuesEvent | 2022-10-31 09:11:14 | open-mmlab/mmediting | https://api.github.com/repos/open-mmlab/mmediting | closed | Dataset preprocessing scripts cannot input int type | kind/enhancement community/good first issue priority/P0 | In the super-resolution dataset preprocessing scripts (take `tools\data\super-resolution\div2k\preprocess_div2k_dataset.py` as an example), the cmd parser is defined as follows:
```python
parser.add_argument(
'--n-thread',
nargs='?',
default=20,
help='thread number when using multiprocessing')
```
where the type of the argument is not set. These args will be parsed as str even if int is given. Seems the function misses `type=int`.
PS:
1. The default n-thread is set to 20, which may cause lack of memory on most personal computers. I suggest setting a lower number (maybe 4 or 8 is better).
2. Seems the script cannot create an annotation file(https://mmediting.readthedocs.io/en/latest/_tmp/sr_datasets.html#prepare-annotation-list). Adding an annotation creator will be helpful.
Thanks for the help. | 1.0 | Dataset preprocessing scripts cannot input int type - In the super-resolution dataset preprocessing scripts (take `tools\data\super-resolution\div2k\preprocess_div2k_dataset.py` as an example), the cmd parser is defined as follows:
```python
parser.add_argument(
'--n-thread',
nargs='?',
default=20,
help='thread number when using multiprocessing')
```
where the type of the argument is not set. These args will be parsed as str even if int is given. Seems the function misses `type=int`.
PS:
1. The default n-thread is set to 20, which may cause lack of memory on most personal computers. I suggest setting a lower number (maybe 4 or 8 is better).
2. Seems the script cannot create an annotation file(https://mmediting.readthedocs.io/en/latest/_tmp/sr_datasets.html#prepare-annotation-list). Adding an annotation creator will be helpful.
Thanks for the help. | non_test | dataset preprocessing scripts cannot input int type in the super resolution dataset preprocessing scripts take tools data super resolution preprocess dataset py as an example the cmd parser is defined as follows python parser add argument n thread nargs default help thread number when using multiprocessing where the type of the argument is not set these args will be parsed as str even if int is given seems the function misses type int ps the default n thread is set to which may cause lack of memory on most personal computers i suggest setting a lower number maybe or is better seems the script cannot create an annotation file adding an annotation creator will be helpful thanks for the help | 0 |
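The fix the report asks for is small: pass `type=int` to `add_argument` so command-line values are converted before use. A minimal, self-contained sketch of the difference (using a lower default of 4, as the report also suggests):

```python
import argparse

def build_parser(fixed: bool) -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    if fixed:
        # With an explicit type, "--n-thread 8" arrives as int 8.
        parser.add_argument("--n-thread", nargs="?", type=int, default=4,
                            help="thread number when using multiprocessing")
    else:
        # Without type=, any value given on the command line stays a str,
        # even though the int default is passed through unchanged.
        parser.add_argument("--n-thread", nargs="?", default=4,
                            help="thread number when using multiprocessing")
    return parser

broken = build_parser(fixed=False).parse_args(["--n-thread", "8"])
repaired = build_parser(fixed=True).parse_args(["--n-thread", "8"])
print(type(broken.n_thread).__name__)    # str
print(type(repaired.n_thread).__name__)  # int
```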
313,880 | 9,577,086,123 | IssuesEvent | 2019-05-07 10:40:01 | pravega/pravega | https://api.github.com/repos/pravega/pravega | closed | NPE in ServerConnectionInboundHandler - DecoderException: java.lang.NullPointerException causing IO stalls | kind/bug priority/P1 version/0.5.0 | Observing WARN `Async iteration failed: java.util.concurrent.CompletionException: ` with error message `Caused by: io.pravega.controller.server.WireCommandFailedException: WireCommandFailed with type READ_TABLE_KEYS reason SegmentDoesNotExist` exception. This Error is being observed in one of the controller after running moderate IO workload (~6 Mbps) medium-scale longevity for 1 day in Pravega cluster.
Also it is observed that the longevity `medium-scale` IO's are stalled (Reader & Writer events count are not increasing ) after `13 hrs` ( Client log timestamp: "2019-05-02 23:26:04,878") of run. However new longevity workload and Pravvega-Benchmark workload are running fine. There is no PKS Pod restart observed,
Environment details: PKS / K8 with medium cluster:
```
1 master nodes @ large.cpu (4 CPU, 4 GB Ram, 16 GB Disk)
3 worker nodes @ xlarge.cpu(4 cpu, 16 GB Ram, 32 GB Disk)
Tier-1 storage is from VSAN datastore
Tier-2 storage carved out on NFS Client provisioner using Isilon as backend
```
Pravega details:
```
Pravega version : 0.5.0-2236.5228e2d
Zookeeper Operator : pravega/zookeeper-operator:0.2.1
Pravega Operator: pravega/pravega-operator:0.3.2
```
2019-05-02 23:21:07,522 56021735 [epollEventLoopGroup-11-6] ERROR i.p.s.s.h.h.ServerConnectionInboundHandler - Caught exception on connection:
io.netty.handler.codec.DecoderException: java.lang.NullPointerException
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.pravega.shared.protocol.netty.ExceptionLoggingHandler.channelRead(ExceptionLoggingHandler.java:37)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:421)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:321)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
at io.pravega.shared.protocol.netty.WireCommandType.readFrom(WireCommandType.java:122)
at io.pravega.shared.protocol.netty.CommandDecoder.parseCommand(CommandDecoder.java:55)
at io.pravega.shared.protocol.netty.CommandDecoder.decode(CommandDecoder.java:36)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
... 26 common frames omitted
``` | 1.0 | NPE in ServerConnectionInboundHandler - DecoderException: java.lang.NullPointerException causing IO stalls - Observing WARN `Async iteration failed: java.util.concurrent.CompletionException: ` with error message `Caused by: io.pravega.controller.server.WireCommandFailedException: WireCommandFailed with type READ_TABLE_KEYS reason SegmentDoesNotExist` exception. This Error is being observed in one of the controller after running moderate IO workload (~6 Mbps) medium-scale longevity for 1 day in Pravega cluster.
Also it is observed that the longevity `medium-scale` IOs are stalled (Reader & Writer event counts are not increasing) after `13 hrs` (client log timestamp: "2019-05-02 23:26:04,878") of run. However, the new longevity workload and Pravega-Benchmark workload are running fine. There is no PKS Pod restart observed.
Environment details: PKS / K8 with medium cluster:
```
1 master nodes @ large.cpu (4 CPU, 4 GB Ram, 16 GB Disk)
3 worker nodes @ xlarge.cpu(4 cpu, 16 GB Ram, 32 GB Disk)
Tier-1 storage is from VSAN datastore
Tier-2 storage carved out on NFS Client provisioner using Isilon as backend
```
Pravega details:
```
Pravega version : 0.5.0-2236.5228e2d
Zookeeper Operator : pravega/zookeeper-operator:0.2.1
Pravega Operator: pravega/pravega-operator:0.3.2
```
2019-05-02 23:21:07,522 56021735 [epollEventLoopGroup-11-6] ERROR i.p.s.s.h.h.ServerConnectionInboundHandler - Caught exception on connection:
io.netty.handler.codec.DecoderException: java.lang.NullPointerException
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.pravega.shared.protocol.netty.ExceptionLoggingHandler.channelRead(ExceptionLoggingHandler.java:37)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:421)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:321)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
at io.pravega.shared.protocol.netty.WireCommandType.readFrom(WireCommandType.java:122)
at io.pravega.shared.protocol.netty.CommandDecoder.parseCommand(CommandDecoder.java:55)
at io.pravega.shared.protocol.netty.CommandDecoder.decode(CommandDecoder.java:36)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
... 26 common frames omitted
``` | non_test | npe in serverconnectioninboundhandler decoderexception java lang nullpointerexception causing io stalls observing warn async iteration failed java util concurrent completionexception with error message caused by io pravega controller server wirecommandfailedexception wirecommandfailed with type read table keys reason segmentdoesnotexist exception this error is being observed in one of the controller after running moderate io workload mbps medium scale longevity for day in pravega cluster also it is observed that the longevity medium scale io s are stalled reader writer events count are not increasing after hrs client log timestamp of run however new longevity workload and pravvega benchmark workload are running fine there is no pks pod restart observed environment details pks with medium cluster master nodes large cpu cpu gb ram gb disk worker nodes xlarge cpu cpu gb ram gb disk tier storage is from vsan datastore tier storage curved on nfs client provisioner using isilon as backend pravega details pravega version zookeeper operator pravega zookeeper operator pravega operator pravega pravega operator error i p s s h h serverconnectioninboundhandler caught exception on connection io netty handler codec decoderexception java lang nullpointerexception at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode 
bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel channelinboundhandleradapter channelread channelinboundhandleradapter java at io pravega shared protocol netty exceptionlogginghandler channelread exceptionlogginghandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel epoll abstractepollstreamchannel epollstreamunsafe epollinready abstractepollstreamchannel java at io netty channel epoll epolleventloop processready epolleventloop java at io netty channel epoll epolleventloop run epolleventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java caused by java lang nullpointerexception null at io pravega shared protocol netty wirecommandtype readfrom wirecommandtype java at io pravega shared protocol netty commanddecoder parsecommand commanddecoder java at io pravega 
shared protocol netty commanddecoder decode commanddecoder java at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java common frames omitted | 0 |
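The NPE above surfaces inside `WireCommandType.readFrom` while decoding an incoming frame, which is the classic shape of a lookup that returns null for an unrecognized command code and is then dereferenced. Whether that is the exact cause here would need the frame bytes, but the defensive pattern is easy to sketch (the opcode values and command names below are invented for illustration):

```python
# Hypothetical opcode table standing in for a wire-protocol type registry.
COMMANDS = {
    0x01: "SETUP",
    0x02: "KEEP_ALIVE",
}

class UnknownCommand(Exception):
    pass

def decode(opcode: int) -> str:
    """Resolve an opcode, failing loudly instead of dereferencing None."""
    command = COMMANDS.get(opcode)
    if command is None:
        # Without this check, callers that immediately use the lookup
        # result (like readFrom in the trace above) blow up with a
        # null/None error far from the real cause.
        raise UnknownCommand(f"unrecognized opcode 0x{opcode:02x}")
    return command

print(decode(0x01))  # SETUP
```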
9,890 | 3,078,192,879 | IssuesEvent | 2015-08-21 08:33:19 | devprovers/QA-Teamwork-Provers | https://api.github.com/repos/devprovers/QA-Teamwork-Provers | closed | Display Task | To Be Tested Use Case | Use Case №21: Display Task
Primary Actor: User
Pre Condition: User logged in
Scenario:
1. User selects task from Tasks (left pane, refer user screens in Appendix )
2. System displays the Task information
| 1.0 | Display Task - Use Case №21: Display Task
Primary Actor: User
Pre Condition: User logged in
Scenario:
1. User selects task from Tasks (left pane, refer user screens in Appendix )
2. System displays the Task information
| test | display task use case № display task primary actor user pre condition user logged in scenario user selects task from tasks left pane refer user screens in appendix system displays the task information | 1 |
232,953 | 18,927,175,099 | IssuesEvent | 2021-11-17 10:45:54 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Security Solution] If the event filter has an operator as 'Is One Of' the UI shows it as 'Any' | bug impact:medium Team: SecuritySolution Team:Onboarding and Lifecycle Mgt QA:Needs Validation QA:Ready for Testing QA:Validated v7.16.0 | **Description:**
If the event filter has an operator as 'Is One Of' the UI shows it as 'Any'
**Build Details:**
```
VERSION: 7.16.0-BC1
BUILD: 45504
COMMIT: 9231d806c9384df4026977ba7435a9302dc2d4ab
ARTIFACT: https://staging.elastic.co/7.16.0-255b8273/summary-7.16.0.html
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in.
**Steps to Reproduce:**
1. Navigate to the Endpoint tab under the Manage section under Security from the left side navigation on Kibana
2. Go to the Event Filtering tab
3. Click on 'Add new value'
4. In the flyout, use the operator 'any of'
5. Save the value
**Impacted Test case:**
N/A
**Actual Result:**
If the event filter has an operator as 'Is One Of' the UI shows it as 'Any'
**Expected Result:**
If the event filter has an operator of 'Is One Of', the UI should show it as 'Is One Of' rather than 'Any'
**What's working:**
N/A
**What's not working:**
N/A
**Screenshots:**


**Logs:**
N/A | 1.0 | [Security Solution] If the event filter has an operator as 'Is One Of' the UI shows it as 'Any' - **Description:**
If the event filter has an operator as 'Is One Of' the UI shows it as 'Any'
**Build Details:**
```
VERSION: 7.16.0-BC1
BUILD: 45504
COMMIT: 9231d806c9384df4026977ba7435a9302dc2d4ab
ARTIFACT: https://staging.elastic.co/7.16.0-255b8273/summary-7.16.0.html
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in.
**Steps to Reproduce:**
1. Navigate to the Endpoint tab under the Manage section under Security from the left side navigation on Kibana
2. Go to the Event Filtering tab
3. Click on 'Add new value'
4. In the flyout, use the operator 'any of'
5. Save the value
**Impacted Test case:**
N/A
**Actual Result:**
If the event filter has an operator as 'Is One Of' the UI shows it as 'Any'
**Expected Result:**
If the event filter has an operator as 'Is One Of' the UI shows it as 'Any'
**What's working:**
N/A
**What's not working:**
N/A
**Screenshots:**


**Logs:**
N/A | test | if the event filter has an operator as is one of the ui shows it as any description if the event filter has an operator as is one of the ui shows it as any build details version build commit artifact browser details all preconditions kibana user should be logged in steps to reproduce navigate to the endpoint tab under the manage section under security from the left side navigation on kibana go to the event filtering tab click on add new value in the flyout use the operator any of save the value impacted test case n a actual result if the event filter has an operator as is one of the ui shows it as any expected result if the event filter has an operator as is one of the ui shows it as any what s working n a what s not working n a screenshots logs n a | 1 |
42,222 | 17,085,787,272 | IssuesEvent | 2021-07-08 11:39:31 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Embeddable migration and Persistable State updates | Team:AppServices impact:critical loe:hours | The current implementation relies on the persistor of embeddable state to use `getMigration` (on PersistableStateDefinition) and `getMigrationVersions` (from EmbeddableFactory I believe) together to run migrations correctly (otherwise the plugin author won't know what version number to pass into the `getMigration` function).
The `getMigration` and `getAllMigrationVersions` should be replaced with a single `getMigrations` function on PersistableStateService interfaces that returns an object with the versions mapped to the functions. This will make it easier for consumers.
Additional persistable state items tracked in other issues:
- [ ] Add a tested example plugin that uses these services to showcase to consumers how it is meant to be used with Saved Object migrations. (https://github.com/elastic/kibana/issues/102771)
- [ ] Improve Embeddable migration unit tests (#102772)
- [ ] Add a tutorial in our Developer Guide called "Persistable state" that walks through 1. how to persist persistable state (call the migrations, inject/extract, etc), and 2. How to implement persistable state (write migrations for any state change). (https://github.com/elastic/kibana/issues/103105)
I'm adding this as critical impact because of the impact this will have on users of PersistableStateService. This change should result in improved upgrade stability, by making it less fragile to use in an incorrect manner. | 1.0 | Embeddable migration and Persistable State updates - The current implementation relies on the persistor of embeddable state to use `getMigration` (on PersistableStateDefinition) and `getMigrationVersions` (from EmbeddableFactory I believe) together to run migrations correctly (otherwise the plugin author won't know what version number to pass into the `getMigration` function).
The `getMigration` and `getAllMigrationVersions` should be replaced with a single `getMigrations` function on PersistableStateService interfaces that returns an object with the versions mapped to the functions. This will make it easier for consumers.
Additional persistable state items tracked in other issues:
- [ ] Add a tested example plugin that uses these services to showcase to consumers how it is meant to be used with Saved Object migrations. (https://github.com/elastic/kibana/issues/102771)
- [ ] Improve Embeddable migration unit tests (#102772)
- [ ] Add a tutorial in our Developer Guide called "Persistable state" that walks through 1. how to persist persistable state (call the migrations, inject/extract, etc), and 2. How to implement persistable state (write migrations for any state change). (https://github.com/elastic/kibana/issues/103105)
I'm adding this as critical impact because of the impact this will have on users of PersistableStateService. This change should result in improved upgrade stability, by making it less fragile to use in an incorrect manner. | non_test | embeddable migration and persistable state updates the current implementation relies on the persistor of embeddable state to use getmigration on persistablestatedefinition and getmigrationversions from embeddablefactory i believe together to run migrations correctly otherwise the plugin author won t know what version number to pass into the getmigration function the getmigration and getallmigrationversions should be replaced with a single getmigrations function on persistablestateservice interfaces that returns an object with the versions mapped to the functions this will make it easier for consumers additional persistable state items tracked in other issues add a tested example plugin that uses these services to showcase to consumers how it is meant to be used with saved object mgirations improve embeddable migration unit tests add a tutorial in our developer guide called persistable state that walks through how to persist persistable state call the migrations inject extract etc and how to implement persistable state write migrations for any state change i m adding this as critical impact because of the impact this will have on users of persistablestateservice this change should result in improved upgrade stability by making it less fragile to use in an incorrect manner | 0 |
131,794 | 10,710,843,157 | IssuesEvent | 2019-10-25 03:53:48 | EasyRPG/Player | https://api.github.com/repos/EasyRPG/Player | closed | Text overlaps with face on teleport | Event/Interpreter Patch available Testcase available | Please fill in the following fields before submitting an issue:
#### Name of the game:
Ahriman's Prophecy
#### Player platform:
Windows, Android (probably all)
#### Describe the issue in detail and how to reproduce it:
On displaying a message with a face to the left after teleporting from inside a common event, the face is displayed but the offset for the text gets unset for all except the first line. So the first line shows up with the proper offset but the rest of the text overlaps with the face. | 1.0 | Text overlaps with face on teleport - Please fill in the following fields before submitting an issue:
#### Name of the game:
Ahriman's Prophecy
#### Player platform:
Windows, Android (probably all)
#### Describe the issue in detail and how to reproduce it:
On displaying a message with a face to the left after teleporting from inside a common event, the face is displayed but the offset for the text gets unset for all except the first line. So the first line shows up with the proper offset but the rest of the text overlaps with the face. | test | text overlaps with face on teleport please fill in the following fields before submitting an issue name of the game ahriman s prophecy player platform windows android probably all describe the issue in detail and how to reproduce it on displaying a message with a face to the left after teleporting from inside a common event the face is displayed but the offset for the text gets unset for all except the first line so the first line shows up with the proper offset but the rest of the text overlaps with the face | 1 |
70,208 | 7,179,452,200 | IssuesEvent | 2018-01-31 19:44:26 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | nightly 5-1-Distributed-Switch: Install VCH fails in validation: Post to https://VC_IP/sdk:EOF | component/test kind/nightly-blocker priority/high status/need-info status/needs-estimation team/lifecycle | VCH installation fails in one of the validation steps, with error `Post https://10.162.28.135/sdk: EOF`.
`10.162.28.135` is the VC IP according to the log (`--target=https://10.162.28.135`)
So this looks like it just lost connection to nimbus all of a sudden.
Where exactly in the validation step it fails is hard to know. Sometimes when there's an error during validation, the error is not going to log immediately. They're collected and printed out when validation of everything else finishes. This error falls in this case. All the logs during validation do not show any apparent failure.
From `vic-machine.log`:
```
time="2017-12-16T05:48:54-06:00" level=info msg=" \"/ha-datacenter/host/10.160.227.177/10.160.227.177\""
time="2017-12-16T05:48:54-06:00" level=info msg="DRS check SKIPPED - target is standalone host"
Dec 16 2017 05:48:54.674-06:00 DEBUG URL: https://harbor.ci.drone.local/v2/
Dec 16 2017 05:48:59.678-06:00 DEBUG URL: http://harbor.ci.drone.local/v2/
time="2017-12-16T05:49:04-06:00" level=warning msg="Unable to confirm insecure registry harbor.ci.drone.local is a valid registry at this time."
time="2017-12-16T05:49:04-06:00" level=info msg="Insecure registries = harbor.ci.drone.local"
time="2017-12-16T05:49:05-06:00" level=error msg=--------------------
time="2017-12-16T05:49:05-06:00" level=error msg="Post https://10.162.28.135/sdk: EOF"
time="2017-12-16T05:49:05-06:00" level=error msg="Create cannot continue: configuration validation failed"
time="2017-12-16T05:49:05-06:00" level=error msg=--------------------
time="2017-12-16T05:49:05-06:00" level=error msg="vic-machine-linux create failed: validation of configuration failed\n" ' does not contain 'Installer completed successfully'
```
Log bundle: [5-1-Distributed-Switch.zip](https://github.com/vmware/vic/files/1565340/5-1-Distributed-Switch.zip)
| 1.0 | nightly 5-1-Distributed-Switch: Install VCH fails in validation: Post to https://VC_IP/sdk:EOF - VCH installation fails in one of the validation steps, with error `Post https://10.162.28.135/sdk: EOF`.
`10.162.28.135` is the VC IP according to the log (`--target=https://10.162.28.135`)
So this looks like it just lost connection to nimbus all of a sudden.
Where exactly in the validation step it fails is hard to know. Sometimes when there's an error during validation, the error is not going to log immediately. They're collected and printed out when validation of everything else finishes. This error falls in this case. All the logs during validation do not show any apparent failure.
From `vic-machine.log`:
```
time="2017-12-16T05:48:54-06:00" level=info msg=" \"/ha-datacenter/host/10.160.227.177/10.160.227.177\""
time="2017-12-16T05:48:54-06:00" level=info msg="DRS check SKIPPED - target is standalone host"
Dec 16 2017 05:48:54.674-06:00 DEBUG URL: https://harbor.ci.drone.local/v2/
Dec 16 2017 05:48:59.678-06:00 DEBUG URL: http://harbor.ci.drone.local/v2/
time="2017-12-16T05:49:04-06:00" level=warning msg="Unable to confirm insecure registry harbor.ci.drone.local is a valid registry at this time."
time="2017-12-16T05:49:04-06:00" level=info msg="Insecure registries = harbor.ci.drone.local"
time="2017-12-16T05:49:05-06:00" level=error msg=--------------------
time="2017-12-16T05:49:05-06:00" level=error msg="Post https://10.162.28.135/sdk: EOF"
time="2017-12-16T05:49:05-06:00" level=error msg="Create cannot continue: configuration validation failed"
time="2017-12-16T05:49:05-06:00" level=error msg=--------------------
time="2017-12-16T05:49:05-06:00" level=error msg="vic-machine-linux create failed: validation of configuration failed\n" ' does not contain 'Installer completed successfully'
```
Log bundle: [5-1-Distributed-Switch.zip](https://github.com/vmware/vic/files/1565340/5-1-Distributed-Switch.zip)
| test | nightly distributed switch install vch fails in validation post to vch installation fails in one of the validation steps with error post eof is the vc ip according to the log target so this looks like it just lost connection to nimbus all of a sudden where exactly in the validation step it fails is hard to know sometimes when there s an error during validation the error is not going to log immediately they re collected and printed out when validation of everything else finishes this error falls in this case all the logs during validation do not show any apparent failure from vic machine log time level info msg ha datacenter host time level info msg drs check skipped target is standalone host dec debug url dec debug url time level warning msg unable to confirm insecure registry harbor ci drone local is a valid registry at this time time level info msg insecure registries harbor ci drone local time level error msg time level error msg post eof time level error msg create cannot continue configuration validation failed time level error msg time level error msg vic machine linux create failed validation of configuration failed n does not contain installer completed successfully log bundle | 1 |
232,686 | 18,900,891,181 | IssuesEvent | 2021-11-16 00:41:32 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [Remoto] Python Back-end Developer na Facily o primeiro Social Commerce da América Latina! | CLT Pleno PHP Python Django GoLang Remoto Especialista AWS MySQL PostgreSQL Testes Unitários Stale | <!--
==================================================
Caso a vaga for remoto durante a pandemia informar no texto "Remoto durante o covid"
==================================================
-->
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA BACK-END!
Não faça distinção de gênero no título da vaga.
Use: "Back-End Developer" ao invés de
"Desenvolvedor Back-End" \o/
Exemplo: `[São Paulo] Back-End Developer @ NOME DA EMPRESA`
==================================================
-->
<!--
==================================================
Caso a vaga for remoto durante a pandemia deixar a linha abaixo
==================================================
-->
## Nossa empresa
A Facily é o primeiro Social Commerce da América Latina!
Somos uma plataforma em que os usuários fazem compras em grupo para terem acesso a produtos de qualidade pelo menor preço, por meio de uma experiência de compra interativa.
São dezenas de milhares de produtos, aproximadamente 6 milhões de downloads e preço baixo SEMPRE!
Estamos crescendo e nosso time também!
## Descrição da vaga
Estamos crescendo e nosso time também!
Buscamos Python Software Engineers, que atuarão na construção de aplicativos da Facily, garantindo que nossos usuários tenham uma experiência incrível com os nossos produtos!
Você participará de todo o processo de desenvolvimento de uma aplicação - design das soluções construídas, desenvolvimento até o deploy em produção.
Para se dar bem no nosso time, você precisa gostar de ambientes dinâmicos, ter muita proatividade e autonomia para resolver grandes desafios! Procuramos pessoas movidas por propósito, que queiram nos ajudar a construir o futuro do social commerce e impactar a vida de milhões de brasileiros.
## Local
Remoto
## Requisitos
**Obrigatórios:**
- Django e FastAPI ou frameworks semelhantes;
- Arquitetura de microsserviços;
- GCP e/ou AWS;
- Programação Orientada a Objetos;
- Bancos de dados relacionais, preferencialmente MySQL e PostgreSQL.
- Monitoramento de serviços distribuídos.
**Diferenciais:**
- projetos opensource
- Experiência com PHP, Node.js ou Golang.
## Suas atribuições serão:
- Construir aplicações usando Django e FastAPI;
- Participar de definições e melhorias de arquitetura;
- Garantir um código de qualidade e escalável, seguindo boas práticas de desenvolvimento pensando em performance e segurança;
- Desenvolver testes unitários e de integração;
- Criar documentações e garantir que elas estarão atualizadas com as últimas versões dos sistemas e ferramentas;
- Promover discussões e iniciativas para a evolução técnica do produto e do time.
## Benefícios
- Plano de saúde com cobertura nacional;
- Vale Alimentação ou Refeição;
- Gympass;
- Day off no dia do seu aniversário!;
- Horários flexíveis;
- Trabalho full remote.
## Contratação
- CLT
## Como se candidatar
Por favor envie um email para renan.bordin@faci.ly ou bruno.queiroz@faci.ly com o link do seu Linkedin - enviar no assunto: Python Back-end Developer na Facily
## Tempo médio de feedbacks
Costumamos enviar feedbacks em até 2 dias após cada processo.
E-mail para contato em caso de não haver resposta: renan.bordin@faci.ly
## Labels
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Pleno
- Sênior
- Especialista
| 1.0 | [Remoto] Python Back-end Developer na Facily o primeiro Social Commerce da América Latina! - <!--
==================================================
Caso a vaga for remoto durante a pandemia informar no texto "Remoto durante o covid"
==================================================
-->
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA BACK-END!
Não faça distinção de gênero no título da vaga.
Use: "Back-End Developer" ao invés de
"Desenvolvedor Back-End" \o/
Exemplo: `[São Paulo] Back-End Developer @ NOME DA EMPRESA`
==================================================
-->
<!--
==================================================
Caso a vaga for remoto durante a pandemia deixar a linha abaixo
==================================================
-->
## Nossa empresa
A Facily é o primeiro Social Commerce da América Latina!
Somos uma plataforma em que os usuários fazem compras em grupo para terem acesso a produtos de qualidade pelo menor preço, por meio de uma experiência de compra interativa.
São dezenas de milhares de produtos, aproximadamente 6 milhões de downloads e preço baixo SEMPRE!
Estamos crescendo e nosso time também!
## Descrição da vaga
Estamos crescendo e nosso time também!
Buscamos Python Software Engineers, que atuarão na construção de aplicativos da Facily, garantindo que nossos usuários tenham uma experiência incrível com os nossos produtos!
Você participará de todo o processo de desenvolvimento de uma aplicação - design das soluções construídas, desenvolvimento até o deploy em produção.
Para se dar bem no nosso time, você precisa gostar de ambientes dinâmicos, ter muita proatividade e autonomia para resolver grandes desafios! Procuramos pessoas movidas por propósito, que queiram nos ajudar a construir o futuro do social commerce e impactar a vida de milhões de brasileiros.
## Local
Remoto
## Requisitos
**Obrigatórios:**
- Django e FastAPI ou frameworks semelhantes;
- Arquitetura de microsserviços;
- GCP e/ou AWS;
- Programação Orientada a Objetos;
- Bancos de dados relacionais, preferencialmente MySQL e PostgreSQL.
- Monitoramento de serviços distribuídos.
**Diferenciais:**
- projetos opensource
- Experiência com PHP, Node.js ou Golang.
## Suas atribuições serão:
- Construir aplicações usando Django e FastAPI;
- Participar de definições e melhorias de arquitetura;
- Garantir um código de qualidade e escalável, seguindo boas práticas de desenvolvimento pensando em performance e segurança;
- Desenvolver testes unitários e de integração;
- Criar documentações e garantir que elas estarão atualizadas com as últimas versões dos sistemas e ferramentas;
- Promover discussões e iniciativas para a evolução técnica do produto e do time.
## Benefícios
- Plano de saúde com cobertura nacional;
- Vale Alimentação ou Refeição;
- Gympass;
- Day off no dia do seu aniversário!;
- Horários flexíveis;
- Trabalho full remote.
## Contratação
- CLT
## Como se candidatar
Por favor envie um email para renan.bordin@faci.ly ou bruno.queiroz@faci.ly com o link do seu Linkedin - enviar no assunto: Python Back-end Developer na Facily
## Tempo médio de feedbacks
Costumamos enviar feedbacks em até 2 dias após cada processo.
E-mail para contato em caso de não haver resposta: renan.bordin@faci.ly
## Labels
#### Alocação
- Remoto
#### Regime
- CLT
#### Nível
- Pleno
- Sênior
- Especialista
| test | python back end developer na facily o primeiro social commerce da américa latina caso a vaga for remoto durante a pandemia informar no texto remoto durante o covid por favor só poste se a vaga for para back end não faça distinção de gênero no título da vaga use back end developer ao invés de desenvolvedor back end o exemplo back end developer nome da empresa caso a vaga for remoto durante a pandemia deixar a linha abaixo nossa empresa a facily é o primeiro social commerce da américa latina somos uma plataforma em que os usuários fazem compras em grupo para terem acesso a produtos de qualidade pelo menor preço por meio de uma experiência de compra interativa são dezenas de milhares de produtos aproximadamente milhões de downloads e preço baixo sempre estamos crescendo e nosso time também descrição da vaga estamos crescendo e nosso time também buscamos python software engineers que atuarão na construção de aplicativos da facily garantindo que nossos usuários tenham uma experiência incrível com os nossos produtos você participará de todo o processo de desenvolvimento de uma aplicação design das soluções construídas desenvolvimento até o deploy em produção para se dar bem no nosso time você precisa gostar de ambientes dinâmicos ter muita proatividade e autonomia para resolver grandes desafios procuramos pessoas movidas por propósito que queiram nos ajudar a construir o futuro do social commerce e impactar a vida de milhões de brasileiros local remoto requisitos obrigatórios django e fastapi ou frameworks semelhantes arquitetura de microsserviços gcp e ou aws programação orientada a objetos bancos de dados relacionais preferencialmente mysql e postgresql monitoramento de serviços distribuídos diferenciais projetos opensource experiência com php node js ou golang suas atribuições serão construir aplicações usando django e fastapi participar de definições e melhorias de arquitetura garantir um código de qualidade e escalável seguindo boas práticas de 
desenvolvimento pensando em performance e segurança desenvolver testes unitários e de integração criar documentações e garantir que elas estarão atualizadas com as últimas versões dos sistemas e ferramentas promover discussões e iniciativas para a evolução técnica do produto e do time benefícios plano de saúde com cobertura nacional vale alimentação ou refeição gympass day off no dia do seu aniversário horários flexíveis trabalho full remote contratação clt como se candidatar por favor envie um email para renan bordin faci ly ou bruno queiroz faci ly com o link do seu linkedin enviar no assunto python back end developer na facily tempo médio de feedbacks costumamos enviar feedbacks em até dias após cada processo e mail para contato em caso de não haver resposta renan bordin faci ly labels alocação remoto regime clt nível pleno sênior especialista | 1 |
289,602 | 24,999,134,010 | IssuesEvent | 2022-11-03 05:34:41 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: network/authentication/nodes=4 failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.network/authentication/nodes=4 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7288517?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7288517?buildTab=artifacts#/network/authentication/nodes=4) on master @ [9c9d55d707ad9a768027e9b7a3775c7c7cde8de7](https://github.com/cockroachdb/cockroach/commits/9c9d55d707ad9a768027e9b7a3775c7c7cde8de7):
```
test artifacts and logs in: /artifacts/network/authentication/nodes=4/run_1
(test_impl.go:291).Fatal: monitor failure: monitor task failed: Non-zero exit code: 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*network/authentication/nodes=4.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: network/authentication/nodes=4 failed - roachtest.network/authentication/nodes=4 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7288517?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7288517?buildTab=artifacts#/network/authentication/nodes=4) on master @ [9c9d55d707ad9a768027e9b7a3775c7c7cde8de7](https://github.com/cockroachdb/cockroach/commits/9c9d55d707ad9a768027e9b7a3775c7c7cde8de7):
```
test artifacts and logs in: /artifacts/network/authentication/nodes=4/run_1
(test_impl.go:291).Fatal: monitor failure: monitor task failed: Non-zero exit code: 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*network/authentication/nodes=4.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest network authentication nodes failed roachtest network authentication nodes with on master test artifacts and logs in artifacts network authentication nodes run test impl go fatal monitor failure monitor task failed non zero exit code parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb kv triage | 1 |
753,395 | 26,346,400,812 | IssuesEvent | 2023-01-10 22:32:22 | pdx-blurp/blurp-frontend | https://api.github.com/repos/pdx-blurp/blurp-frontend | closed | [Practice] Resolve this ticket 2/4 | new feature medium priority | Practice resolving tickets using github keywords:
ex.: git commit -m "resolve #9999, I did a thing!" | 1.0 | [Practice] Resolve this ticket 2/4 - Practice resolving tickets using github keywords:
ex.: git commit -m "resolve #9999, I did a thing!" | non_test | resolve this ticket practice resolving tickets using github keywords ex git commit m resolve i did a thing | 0 |
76,490 | 9,458,237,738 | IssuesEvent | 2019-04-17 04:14:19 | NAVCoin/NavHub | https://api.github.com/repos/NAVCoin/NavHub | closed | Footer Partial | NavHub Redesign | The footer which will be reused across the site. All links are hardcoded
See design for full details, images below are for quick reference.



| 1.0 | Footer Partial - The footer which will be reused across the site. All links are hardcoded
See design for full details, images below are for quick reference.



| non_test | footer partial the footer which will be reused across the site all links are hardcoded see design for full details images below are for quick reference | 0 |
14,350 | 3,392,129,491 | IssuesEvent | 2015-11-30 18:15:43 | mesosphere/kubernetes-mesos | https://api.github.com/repos/mesosphere/kubernetes-mesos | reopened | Skipped e2e patch for v0.7-v1.1 branch | tests/conformance tests/e2e WIP | Do we need your patch for our release branch in mesosphere/kubernetes to be conformant? | 2.0 | Skipped e2e patch for v0.7-v1.1 branch - Do we need your patch for our release branch in mesosphere/kubernetes to be conformant? | test | skipped patch for branch do we need your patch for our release branch in mesosphere kubernetes to be conformant | 1 |
153,012 | 19,702,393,414 | IssuesEvent | 2022-01-12 17:55:48 | jgeraigery/analyticsapi-engines-java-sdk | https://api.github.com/repos/jgeraigery/analyticsapi-engines-java-sdk | opened | CVE-2021-22569 (Medium) detected in protobuf-java-3.12.2.jar | security vulnerability | ## CVE-2021-22569 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-3.12.2.jar</b></p></summary>
<p>Core Protocol Buffers library. Protocol Buffers are a way of encoding structured data in an
efficient yet extensible format.</p>
<p>Library home page: <a href="https://developers.google.com/protocol-buffers/">https://developers.google.com/protocol-buffers/</a></p>
<p>Path to dependency file: /auto-generated-sdk/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/google/protobuf/protobuf-java/3.12.2/protobuf-java-3.12.2.jar,/Resource_YKNZPN/20211018192116/protobuf-java-3.12.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **protobuf-java-3.12.2.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions.
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22569>CVE-2021-22569</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-22569">https://nvd.nist.gov/vuln/detail/CVE-2021-22569</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: com.google.protobuf:protobuf-java - 3.19.2,3.18.2,3.16.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.protobuf","packageName":"protobuf-java","packageVersion":"3.12.2","packageFilePaths":["/auto-generated-sdk/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.google.protobuf:protobuf-java:3.12.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.google.protobuf:protobuf-java - 3.19.2,3.18.2,3.16.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-22569","vulnerabilityDetails":"An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22569","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-22569 (Medium) detected in protobuf-java-3.12.2.jar - ## CVE-2021-22569 - Medium Severity Vulnerability