| Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
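Each row below carries the raw issue `body`, a `text_combine` field (title plus body), a normalized `text` field, and a `binary_label` (1 = process, 0 = non_process). A minimal sketch for loading and inspecting a dump with this schema, assuming a CSV export named `issue_events.csv` (the file name is an assumption):
```
# A minimal sketch (the file name is an assumption): load the dump and
# inspect the column types and label balance described in the header row.
import pandas as pd

df = pd.read_csv("issue_events.csv")
print(df.dtypes)                          # should match the header row above
print(df["label"].value_counts())         # process vs. non_process
print(df["binary_label"].value_counts())  # 1 = process, 0 = non_process
```
Note that `pd.read_csv` handles the embedded newlines in `body` as long as the export quotes its fields.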
28,684 | 13,783,268,785 | IssuesEvent | 2020-10-08 18:56:45 | MaterializeInc/materialize | https://api.github.com/repos/MaterializeInc/materialize | closed | Graphical UX for dataflows | C-feature T-UX T-memory T-performance | Materialize produces raw tabular data about the dataflow graphs that it manages, their structure, and resources associated with each of the operators. These data are valuable, but hard to consume in their raw form.
Materialize would likely benefit from a more graphical, navigable interface to this information. For example, we could present the list of running dataflows with their accumulated resource usage, with the ability to expand each dataflow into its constituent operators and nested regions. We could also present dataflow graphs for each of the dataflows, perhaps annotated with resource usage, progress frontiers, or other useful information.
There is some design work to do to determine what facets are useful, but it feels like several of these options would be excellent starting points even if we quickly conclude they should be improved.
---
Information about dataflow structure can be found in the `mz_catalog` sources:
```
mz_dataflow_channels
mz_dataflow_operator_addresses
mz_dataflow_operators
```
The `_operators` and `_channels` collections name operators and the edges between them. The `_addresses` relation describes the hierarchical relationship these share. There are some idiosyncrasies, mainly that dataflow edges that cross levels of the hierarchy go through a phantom "operator zero" in each scope, whose inputs and outputs correspond to the outputs and inputs (respectively) of the containing scope.
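For a concrete starting point, here is a minimal sketch of pulling the operator graph out of those sources over the pgwire endpoint. The connection parameters and the `source_node`/`target_node` column names are assumptions; verify them with `SHOW COLUMNS FROM mz_dataflow_channels`.
```
# A minimal sketch (connection parameters and column names are
# assumptions): join channels to operator names to list graph edges.
import psycopg2

conn = psycopg2.connect(host="localhost", port=6875,
                        user="materialize", dbname="materialize")
with conn.cursor() as cur:
    cur.execute("""
        SELECT src.name, tgt.name
        FROM mz_dataflow_channels AS c
        JOIN mz_dataflow_operators AS src ON c.source_node = src.id
        JOIN mz_dataflow_operators AS tgt ON c.target_node = tgt.id
    """)
    for source, target in cur.fetchall():
        print(source, "->", target)
conn.close()
```
Edges into or out of "operator zero" would then mark hops across scope boundaries, per the note above.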
---
For prior work on graph drawing for timely dataflow graphs, consider https://github.com/TimelyDataflow/diagnostics | True | Graphical UX for dataflows - Materialize produces raw tabular data about the dataflow graphs that it manages, their structure, and resources associated with each of the operators. These data are valuable, but hard to consume in their raw form.
Materialize would likely benefit from a more graphical, navigable interface to this information. For example, we could present the list of running dataflows with their accumulated resource usage, with the ability to expand each dataflow into its constituent operators and nested regions. We could also present dataflow graphs for each of the dataflows, perhaps annotated with resource usage, progress frontiers, or other useful information.
There is some design work to do to determine what facets are useful, but it feels like several of these options would be excellent starting points even if we quickly conclude they should be improved.
---
Information about dataflow structure can be found in the `mz_catalog` sources:
```
mz_dataflow_channels
mz_dataflow_operator_addresses
mz_dataflow_operators
```
The `_operators` and `_channels` collections name operators and the edges between them. The `_addresses` relation describes the hierarchical relationship these share. There are some idiosyncrasies, mainly that dataflow edges that cross levels of the hierarchy go through a phantom "operator zero" in each scope, whose inputs and outputs correspond to the outputs and inputs (respectively) of the containing scope.
---
For prior work on graph drawing timely dataflow graphs, consider https://github.com/TimelyDataflow/diagnostics | non_process | graphical ux for dataflows materialize produces raw tabular data about the dataflow graphs that it manages their structure and resources associated with each of the operators these data are valuable but hard to consume in their raw form materialize would likely benefit from a more graphical navigable interface to this information for example we could present the list of running dataflows with their accumulated resource uses with the ability to expand out dataflow to their constituent operators and nested regions we could also present dataflow graphs for each of the dataflows perhaps annotated with resource usage or progress frontiers or other useful information there is some design work to do to determine what facets are useful but it feels like several of these options would be excellent starting points even if we quickly conclude they should be improved information about dataflow structure can be found in the mz catalog sources mz dataflow channels mz dataflow operator addresses mz dataflow operators the operators and channels collections name operators and the edges between them the addresses relation describes the hierarchical relationship these share there are some idiosyncrasies mainly that dataflow edges that cross levels of the hierarchy go through a phantom operator zero in each scope whose inputs and outputs correspond to the outputs and inputs respectively of the containing scope for prior work on graph drawing timely dataflow graphs consider | 0 |
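An aside on the schema, visible in the row above: the `text` column reads as a normalization of `text_combine` (lowercased, with URLs, digits, and punctuation stripped, and whitespace collapsed). A minimal sketch of such a cleaning step, with the exact rules inferred from the rows rather than documented anywhere:
```
import re

def normalize(text: str) -> str:
    # Drop URLs first so their characters don't leak into the tokens,
    # then keep only lowercase letters, mapping runs of anything else
    # (digits, punctuation, underscores) to single spaces.
    text = re.sub(r"https?://\S+", " ", text.lower())
    text = re.sub(r"[^a-z]+", " ", text)
    return " ".join(text.split())

# e.g. normalize("Graphical UX for dataflows - raw tabular data (v0.2)")
# -> "graphical ux for dataflows raw tabular data v"
```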
7,220 | 3,520,138,913 | IssuesEvent | 2016-01-12 19:35:30 | umts/detours | https://api.github.com/repos/umts/detours | opened | Enable CodeClimate pull request integration | codeclimate enhancement | Once we determine that it works properly for jobapps. | 1.0 | Enable CodeClimate pull request integration - Once we determine that it works properly for jobapps. | non_process | enable codeclimate pull request integration once we determine that it works properly for jobapps | 0 |
155,827 | 12,279,097,066 | IssuesEvent | 2020-05-08 11:25:45 | WoWManiaUK/Blackwing-Lair | https://api.github.com/repos/WoWManiaUK/Blackwing-Lair | reopened | [NPC] John J. Keeshan - Missing (issue3) - Redridge Mountains | Confirmed By Tester Missing NPC Starting Zone | **Links:**
NPC http://cata.cavernoftime.com/npc=43611
**What is happening:**
- missing
**What should happen:**
Need him spawned for
Delivery of http://cata.cavernoftime.com/quest=26651
and pick up of
https://www.wowhead.com/quest=26668/detonation
https://www.wowhead.com/quest=26693/the-dark-tower
Reported by a GM
| 1.0 | [NPC] John J. Keeshan - Missing (issue3) - Redridge Mountains - **Links:**
NPC http://cata.cavernoftime.com/npc=43611
**What is happening:**
- missing
**What should happen:**
Need him spawned for
Delivery of http://cata.cavernoftime.com/quest=26651
and pick up of
https://www.wowhead.com/quest=26668/detonation
https://www.wowhead.com/quest=26693/the-dark-tower
Reported by a GM
| non_process | john j keeshan missing redridge mountains links npc what is happening missing what should happen need him spawned for delivery of and pick up of reported from gm | 0 |
87,330 | 8,071,911,882 | IssuesEvent | 2018-08-06 14:31:37 | GTNewHorizons/NewHorizons | https://api.github.com/repos/GTNewHorizons/NewHorizons | closed | witchery doesn't recognize special mobs blazes which causes problems | FixedInDev need to be tested | #### Which modpack version are you using?
2.0.4.6
#
#### If in multiplayer; On which server does this happen?
delta server
#
#### What did you try to do, and what did you expect to happen?
I am a level 5 vampire from the Witchery mod attempting to level up to level 6, which requires me to kill 20 blazes.
#
#### What happened instead? (Attach screenshots if needed)
Witchery does not recognize blazes from the Special Mobs mod, so I cannot level up my vampire level; I am stuck at vampire level 5 until I can kill 20 vanilla Minecraft blazes.
> I also tried buying blazes from one of the quests in the Coins Coins Coins quest tab, which also spawned Special Mobs blazes, and we also tried the obvious route of using Ender IO spawners, which likewise failed to spawn vanilla blazes.
> Using a Tablet of Necromancy, whose ritual specifically spawns a blaze, also doesn't work, as it summons the Special Mobs blaze.
> Tested in a singleplayer world using commands: `/summon blaze` gave the same result, spawning the Special Mobs one.
#
#### What do you suggest instead/what changes do you propose?
Honestly, I think either both vanilla and Special Mobs blazes should spawn, or we find some way to spawn the vanilla blaze; but considering even the Tablet of Necromancy didn't work, the first option may be the only one. Unless you could somehow get Witchery to make sense of these blazes, I don't think I'll be able to level up my vampire level.
Besides, I'm still in LV and Witchery is a long way to go, not to mention that clearing vampirism is endgame Witchery. | 1.0 | witchery doesn't recognize special mobs blazes which causes problems - #### Which modpack version are you using?
2.0.4.6
#
#### If in multiplayer; On which server does this happen?
delta server
#
#### What did you try to do, and what did you expect to happen?
I am a level 5 vampire from the Witchery mod attempting to level up to level 6, which requires me to kill 20 blazes.
#
#### What happened instead? (Attach screenshots if needed)
Witchery does not recognize blazes from the Special Mobs mod, so I cannot level up my vampire level; I am stuck at vampire level 5 until I can kill 20 vanilla Minecraft blazes.
> I also tried buying blazes from one of the quests in the Coins Coins Coins quest tab, which also spawned Special Mobs blazes, and we also tried the obvious route of using Ender IO spawners, which likewise failed to spawn vanilla blazes.
> Using a Tablet of Necromancy, whose ritual specifically spawns a blaze, also doesn't work, as it summons the Special Mobs blaze.
> Tested in a singleplayer world using commands: `/summon blaze` gave the same result, spawning the Special Mobs one.
#
#### What do you suggest instead/what changes do you propose?
Honestly, I think either both vanilla and Special Mobs blazes should spawn, or we find some way to spawn the vanilla blaze; but considering even the Tablet of Necromancy didn't work, the first option may be the only one. Unless you could somehow get Witchery to make sense of these blazes, I don't think I'll be able to level up my vampire level.
besides im still in lv and witchery is a long way to go not to mention that clearing vampirism is a endgame witchery | non_process | witchery doesn t recognize special mobs blazes which causes problems which modpack version are you using if in multiplayer on which server does this happen delta server what did you try to do and what did you expect to happen i am a level vampire from the witchery mod attempting to level up to level which requires me to kill blazes what happened instead attach screenshots if needed witchery does not recognize blazes from the special mobs mod so i cannot level up my vampire level so i am stuck on vampire level until i can kill vanilla minecraft blazes i also tried buying blazes from one of the quests in coins coins coins quest tab which also spawned special mobs blazes and we also tried the obvious of using ender io spawners which also failed to spawn vanilla blazes using a tablet of necromancy the ritual specifically spawns a blaze also doesn t work as it also summons the special mobs blaze tested in a singleplayer world using commands to summon a blaze summon blaze gave the same results of spawning the special mobs one what do you suggest instead what changes do you propose honestly i think its either both vanilla and special mobs blazes spawn or we find a certain way to spawn the vanilla blaze but considering even the tablet of necromancy didn t work the first option may be the only one i also unless somehow you could get witchery to make sense of the blazes i don t think i ll be able to level up my vampire level besides im still in lv and witchery is a long way to go not to mention that clearing vampirism is a endgame witchery | 0 |
223,702 | 7,459,941,276 | IssuesEvent | 2018-03-30 17:32:56 | knowmetools/km-api | https://api.github.com/repos/knowmetools/km-api | closed | Ansible fails after infrastructure provisioning | Priority: Low Status: Available Type: Bug | ### Bug Report
#### Expected Behavior
We should be able to run the Ansible playbook.
#### Actual Behavior
The playbook fails to install `python3-pip`. Running `apt-get update` fixes the issue.
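A minimal sketch of the corresponding playbook fix, assuming the install uses Ansible's `apt` module (the task name is illustrative): refresh the package cache as part of the task instead of relying on the image's stale index.
```
# A minimal sketch (task layout is an assumption): update the apt cache
# when installing python3-pip, so fresh hosts resolve the package.
- name: Install pip for Python 3
  apt:
    name: python3-pip
    state: present
    update_cache: yes
```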
| 1.0 | Ansible fails after infrastructure provisioning - ### Bug Report
#### Expected Behavior
We should be able to run the Ansible playbook.
#### Actual Behavior
The playbook fails to install `python3-pip`. Running `apt-get update` fixes the issue.
| non_process | ansible fails after infrastructure provisioning bug report expected behavior we should be able to run the ansible playbook actual behavior the playbook fails to install pip running apt get update fixes the issue | 0 |
6,601 | 9,683,184,179 | IssuesEvent | 2019-05-23 10:52:47 | linnovate/root | https://api.github.com/repos/linnovate/root | opened | after creating some items, and pressing enter, instead of creating a new item, an already created item is selected | 2.0.7 Process bug | create some items
click on enter to try to create a new item
instead of creating a new item it goes down the list to an unspecified item | 1.0 | after creating some items, and pressing enter, instead of creating a new item, an already created item is selected - create some items
click on enter to try to create a new item
instead of creating a new item it goes down the list to an unspecified item | process | after creating some items and pressing enter instead of creating a new item an already created item is selected create some items click on enter to try to create a new item instead of creating a new item it goes down the list to an unspecified item | 1 |
80,781 | 3,574,383,989 | IssuesEvent | 2016-01-27 11:33:20 | jlowe64/moodle-logstore_xapi | https://api.github.com/repos/jlowe64/moodle-logstore_xapi | closed | AttemptCompleted score additions break tests | priority:medium status:confirmed type:bug | @garemoko, your additions to the AttemptCompleted class break compatibility with AttemptAbandoned and AttemptReviewed. They can't seem to find the new indexes that you've added, possibly because the tests haven't been updated to reflect this.
This is the output from the unit test:
```
There was 1 error:
1) Tests\AttemptAbandonedTest::testCreateEvent
Undefined index: attempt_score_raw
/var/www/sandbox/moodle/admin/tool/log/store/xapi/vendor/learninglocker/xapi-recipe-emitter/src/Events/AttemptCompleted.php:23
/var/www/sandbox/moodle/admin/tool/log/store/xapi/vendor/learninglocker/xapi-recipe-emitter/src/Controller.php:40
/var/www/sandbox/moodle/admin/tool/log/store/xapi/tests/TestCase.php:40
/usr/share/php/PHPUnit/TextUI/Command.php:147
/usr/share/php/PHPUnit/TextUI/Command.php:99
FAILURES!
Tests: 16, Assertions: 53, Errors: 1.
```
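One way to keep the other recipes working while the tests are updated might be to read the new indexes defensively rather than unconditionally; a minimal sketch in plain PHP (the variable and key names are assumptions, not the emitter's actual code):
```
<?php
// A minimal sketch (names are assumptions): read the new score index
// defensively so events that carry no score, such as abandoned
// attempts, don't trigger "Undefined index" notices.
$opts = ['attempt_result' => 1]; // abandoned attempt: no attempt_score_raw
$scoreRaw = isset($opts['attempt_score_raw']) ? $opts['attempt_score_raw'] : null;
if ($scoreRaw !== null) {
    echo "raw score: $scoreRaw\n";
} else {
    echo "no raw score on this event\n";
}
```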
The event from the translator to the emitter looks like this:
```
Array
(
[user_id] => 1
[user_url] => http://www.example.com
[user_name] => test_username
[context_lang] => en
[context_platform] => Moodle
[context_ext] => Array
(
[userid] => 1
[relateduserid] => 1
[courseid] => 1
[timecreated] => 1433946701
[eventname] => \mod_quiz\event\attempt_abandoned
[objecttable] => quiz_attempts
[objectid] => 1
)
[context_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_logstore_standard_log
[context_info] => stdClass Object
(
[https://moodle.org/] => 1.0.0
[https://github.com/LearningLocker/Moodle-Log-Expander] => 0.5.0
[https://github.com/LearningLocker/Moodle-xAPI-Translator] => 0.5.0
)
[time] => 2015-06-10T14:31:41+00:00
[app_url] => http://www.example.com
[app_name] => test_fullname
[app_description] => test_summary
[app_type] => http://id.tincanapi.com/activitytype/site
[app_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => test_name
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => site
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com
)
[app_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_course
[source_url] => http://moodle.org
[source_name] => Moodle
[source_description] => Moodle is a open source learning platform designed to provide educators, administrators and learners with a single robust, secure and integrated system to create personalised learning environments.
[source_type] => http://id.tincanapi.com/activitytype/source
[recipe] => attempt_completed
[course_url] => http://www.example.com/course/view.php?id=1
[course_name] => test_fullname
[course_description] => test_summary
[course_type] => http://lrs.learninglocker.net/define/type/moodle/object
[course_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => test_name
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => object
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com/course/view.php?id=1
)
[course_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_course
[module_url] => http://www.example.com/mod/quiz/view.php?id=1
[module_name] => test_name
[module_description] => test_intro
[module_type] => http://lrs.learninglocker.net/define/type/moodle/object
[module_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => test_name
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => object
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com/mod/quiz/view.php?id=1
)
[module_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_module
[attempt_url] => http://www.example.com/mod/quiz/attempt.php?attempt=1
[attempt_type] => http://lrs.learninglocker.net/define/type/moodle/object
[attempt_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => Attempt 1
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => object
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com/mod/quiz/attempt.php?attempt=1
)
[attempt_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_attempt
[attempt_name] => Attempt 1
[attempt_result] => 1
[attempt_completed] => 1
[attempt_duration] => P00Y00M00DT00H00M01S
)
```
Could you look into this please? I think it may just be tests. | 1.0 | AttemptCompleted score additions break tests - @garemoko, your additions to the AttemptCompleted class break compatibility with AttemptAbandoned and AttemptReviewed. They can't seem to find the new indexes that you've added, possibly because the tests haven't been updated to reflect this.
This is the output from the unit test:
```
There was 1 error:
1) Tests\AttemptAbandonedTest::testCreateEvent
Undefined index: attempt_score_raw
/var/www/sandbox/moodle/admin/tool/log/store/xapi/vendor/learninglocker/xapi-recipe-emitter/src/Events/AttemptCompleted.php:23
/var/www/sandbox/moodle/admin/tool/log/store/xapi/vendor/learninglocker/xapi-recipe-emitter/src/Controller.php:40
/var/www/sandbox/moodle/admin/tool/log/store/xapi/tests/TestCase.php:40
/usr/share/php/PHPUnit/TextUI/Command.php:147
/usr/share/php/PHPUnit/TextUI/Command.php:99
FAILURES!
Tests: 16, Assertions: 53, Errors: 1.
```
The event from the translator to the emitter looks like this:
```
Array
(
[user_id] => 1
[user_url] => http://www.example.com
[user_name] => test_username
[context_lang] => en
[context_platform] => Moodle
[context_ext] => Array
(
[userid] => 1
[relateduserid] => 1
[courseid] => 1
[timecreated] => 1433946701
[eventname] => \mod_quiz\event\attempt_abandoned
[objecttable] => quiz_attempts
[objectid] => 1
)
[context_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_logstore_standard_log
[context_info] => stdClass Object
(
[https://moodle.org/] => 1.0.0
[https://github.com/LearningLocker/Moodle-Log-Expander] => 0.5.0
[https://github.com/LearningLocker/Moodle-xAPI-Translator] => 0.5.0
)
[time] => 2015-06-10T14:31:41+00:00
[app_url] => http://www.example.com
[app_name] => test_fullname
[app_description] => test_summary
[app_type] => http://id.tincanapi.com/activitytype/site
[app_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => test_name
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => site
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com
)
[app_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_course
[source_url] => http://moodle.org
[source_name] => Moodle
[source_description] => Moodle is a open source learning platform designed to provide educators, administrators and learners with a single robust, secure and integrated system to create personalised learning environments.
[source_type] => http://id.tincanapi.com/activitytype/source
[recipe] => attempt_completed
[course_url] => http://www.example.com/course/view.php?id=1
[course_name] => test_fullname
[course_description] => test_summary
[course_type] => http://lrs.learninglocker.net/define/type/moodle/object
[course_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => test_name
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => object
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com/course/view.php?id=1
)
[course_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_course
[module_url] => http://www.example.com/mod/quiz/view.php?id=1
[module_name] => test_name
[module_description] => test_intro
[module_type] => http://lrs.learninglocker.net/define/type/moodle/object
[module_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => test_name
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => object
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com/mod/quiz/view.php?id=1
)
[module_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_module
[attempt_url] => http://www.example.com/mod/quiz/attempt.php?attempt=1
[attempt_type] => http://lrs.learninglocker.net/define/type/moodle/object
[attempt_ext] => stdClass Object
(
[id] => 1
[username] => test_username
[lang] => en
[fullname] => test_fullname
[summary] => test_summary
[name] => Attempt 1
[intro] => test_intro
[timestart] => 1433946701
[timefinish] => 1433946702
[state] => finished
[course] => 1
[sumgrades] => 1
[grade] => 2
[quiz] => 1
[assignment] => 1
[userid] => 1
[forum] => 1
[type] => object
[scorm] => 1
[grademax] => 5.00000
[grademin] => 0.00000
[gradepass] => 5.00000
[commenttext] => <p>test comment</p>
[url] => http://www.example.com/mod/quiz/attempt.php?attempt=1
)
[attempt_ext_key] => http://lrs.learninglocker.net/define/extensions/moodle_attempt
[attempt_name] => Attempt 1
[attempt_result] => 1
[attempt_completed] => 1
[attempt_duration] => P00Y00M00DT00H00M01S
)
```
Could you look into this please? I think it may just be tests. | non_process | attemptcompleted score additions break tests garemoko your additions to the attemptcompleted class breaks compatibility with attemptabandoned and attemptreviewed they can t seem to find the new index s that you ve added possibly because the test s haven t been updated to reflect this this is the output from the unit test there was error tests attemptabandonedtest testcreateevent undefined index attempt score raw var www sandbox moodle admin tool log store xapi vendor learninglocker xapi recipe emitter src events attemptcompleted php var www sandbox moodle admin tool log store xapi vendor learninglocker xapi recipe emitter src controller php var www sandbox moodle admin tool log store xapi tests testcase php usr share php phpunit textui command php usr share php phpunit textui command php failures tests assertions errors the event from the translator to the emitter looks like this array test username en moodle array mod quiz event attempt abandoned quiz attempts stdclass object test fullname test summary stdclass object test username en test fullname test summary test name test intro finished site test comment moodle moodle is a open source learning platform designed to provide educators administrators and learners with a single robust secure and integrated system to create personalised learning environments attempt completed test fullname test summary stdclass object test username en test fullname test summary test name test intro finished object test comment test name test intro stdclass object test username en test fullname test summary test name test intro finished object test comment stdclass object test username en test fullname test summary attempt test intro finished object test comment attempt could you look into this please i think it may just be tests | 0 |
163,026 | 13,908,385,086 | IssuesEvent | 2020-10-20 13:45:19 | airctic/icevision | https://api.github.com/repos/airctic/icevision | closed | Remove Restart Runtime Warning in Colab | documentation good first issue hacktoberfest | ## 📓 Documentation Update
Thanks to the update in version 0.2.1, we no longer need the Restart Runtime warning in Colab. We should remove both the warning and the corresponding code snippet. | 1.0 | Remove Restart Runtime Warning in Colab - ## 📓 Documentation Update
Thanks to the update in version 0.2.1, we no longer need the Restart Runtime warning in Colab. We should remove both the warning and the corresponding code snippet. | non_process | remove restart runtime warning in colab 📓 documentation update thanks to the update in version we don t need anymore to restart runtime warning in colab we should remove both the warning and the corresponding code snippet | 0 |
416,615 | 12,149,425,770 | IssuesEvent | 2020-04-24 16:07:54 | ChainSafe/chainbridge-solidity | https://api.github.com/repos/ChainSafe/chainbridge-solidity | closed | Internal audit | Priority: 3 - Medium | Similar to ChainBridge, we should take a step back and do a brief audit of our work so far.
We should look at:
- cleanup tasks (unused code, duplicated code, outdated comments)
- structure/contract layout
- over-engineering of components (things were easier than anticipated, perhaps)
- etc.
This should result in a documented summary of the findings, from which we can then extract any necessary tasks to be completed.
(Here's what I came up with for ChainBridge: https://github.com/ChainSafe/ChainBridge/issues/305#issuecomment-612628358) | 1.0 | Internal audit - Similar to ChainBridge, we should take a step back and do a brief audit of our work so far.
We should look at:
- cleanup tasks (unused code, duplicated code, outdated comments)
- structure/contract layout
- over-engineering of components (things were easier than anticipated, perhaps)
- etc.
This should result in a documented summary of the findings, from which we can then extract any necessary tasks to be completed.
(Here's what I came up with for ChainBridge: https://github.com/ChainSafe/ChainBridge/issues/305#issuecomment-612628358) | non_process | internal audit similar to chainbridge we should take a step back and do a brief audit of our work so far we should look at cleanup tasks unused code duplicated code outdated comments structure contract layout over engineering of components things were easier than anticipated perhaps etc this should result in documented summary of the findings from which we can then extract any necessary tasks to be completed here s what i came up with for chainbridge | 0 |
2,653 | 5,430,470,388 | IssuesEvent | 2017-03-03 21:20:29 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Ubuntu 16.04 outerloop debug - System.Diagnostics.Tests.ProcessWaitingTests.WaitChain failed with "Xunit.Sdk.EqualException" | area-System.Diagnostics.Process test bug test-run-core | Failed test: System.Diagnostics.Tests.ProcessWaitingTests.WaitChain
Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_ubuntu16.04_debug/92/consoleText
Message:
~~~
System.Diagnostics.Tests.ProcessWaitingTests.WaitChain [FAIL]
Assert.Equal() Failure
Expected: 42
Actual: 145
~~~
Stack Trace:
~~~
/mnt/j/workspace/dotnet_corefx/master/outerloop_ubuntu16.04_debug/src/System.Diagnostics.Process/tests/ProcessWaitingTests.cs(190,0): at System.Diagnostics.Tests.ProcessWaitingTests.WaitChain()
~~~
Configuration:
OuterLoop_Ubuntu16.04_debug (build#92) | 1.0 | Ubuntu 16.04 outerloop debug - System.Diagnostics.Tests.ProcessWaitingTests.WaitChain failed with "Xunit.Sdk.EqualException" - Failed test: System.Diagnostics.Tests.ProcessWaitingTests.WaitChain
Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_ubuntu16.04_debug/92/consoleText
Message:
~~~
System.Diagnostics.Tests.ProcessWaitingTests.WaitChain [FAIL]
Assert.Equal() Failure
Expected: 42
Actual: 145
~~~
Stack Trace:
~~~
/mnt/j/workspace/dotnet_corefx/master/outerloop_ubuntu16.04_debug/src/System.Diagnostics.Process/tests/ProcessWaitingTests.cs(190,0): at System.Diagnostics.Tests.ProcessWaitingTests.WaitChain()
~~~
Configuration:
OuterLoop_Ubuntu16.04_debug (build#92) | process | ubuntu outerloop debug system diagnostics tests processwaitingtests waitchain failed with xunit sdk equalexception failed test system diagnostics tests processwaitingtests waitchain detail message system diagnostics tests processwaitingtests waitchain assert equal failure expected actual stack trace mnt j workspace dotnet corefx master outerloop debug src system diagnostics process tests processwaitingtests cs at system diagnostics tests processwaitingtests waitchain configuration outerloop debug build | 1 |
98,282 | 20,628,518,911 | IssuesEvent | 2022-03-08 02:34:10 | ProjectSidewalk/SidewalkWebpage | https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage | opened | Use configs instead of duplicated images for different servers | EasyFix! Code cleanup | ##### Brief description of problem/feature
We have a folder in public/assets for each city we are deployed in, and in each of those folders we have logo and skyline images. There are really only 2-3 versions of each of those, so at this point it would be better not to have multiple copies of the same image in those different folders, and to just indicate which image the city should use in the configs. | 1.0 | Use configs instead of duplicated images for different servers - ##### Brief description of problem/feature
We have a folder in public/assets for each city we are deployed in, and in each of those folders we have logo and skyline images. There are really only 2-3 versions of each of those, so at this point it would be better not to have multiple copies of the same image in those different folders, and to just indicate which image the city should use in the configs. | non_process | use configs instead of duplicated images for different servers brief description of problem feature we have a folder in public assets for each city we are deployed in and in each of those folders we have logo and skyline images there are really only version of each of those so at this point it would be better to not have multiple copies of the same image in those different folders and just indicate which image the city should use in the configs | 0 |
19,494 | 25,801,252,718 | IssuesEvent | 2022-12-11 02:00:06 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Fri, 9 Dec 22 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### RainUNet for Super-Resolution Rain Movie Prediction under Spatio-temporal Shifts
- **Authors:** Jinyoung Park, Minseok Son, Seungju Cho, Inyoung Lee, Changick Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04005
- **Pdf link:** https://arxiv.org/pdf/2212.04005
- **Abstract**
This paper presents a solution to the Weather4cast 2022 Challenge Stage 2. The goal of the challenge is to forecast future high-resolution rainfall events obtained from ground radar using low-resolution multiband satellite images. We suggest a solution that performs data preprocessing appropriate to the challenge and then predicts rainfall movies using a novel RainUNet. RainUNet is a hierarchical U-shaped network with temporal-wise separable block (TS block) using a decoupled large kernel 3D convolution to improve the prediction performance. Various evaluation metrics show that our solution is effective compared to the baseline method. The source codes are available at https://github.com/jinyxp/Weather4cast-2022
### Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection
- **Authors:** Chen Zhang, Guorong Li, Yuankai Qi, Shuhui Wang, Laiyun Qing, Qingming Huang, Ming-Hsuan Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04090
- **Pdf link:** https://arxiv.org/pdf/2212.04090
- **Abstract**
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels. Recently, two-stage self-training methods have achieved significant improvements by self-generating pseudo labels and self-refining anomaly scores with these labels. As the pseudo labels play a crucial role, we propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training. Specifically, we first design a multi-head classification module (each head serves as a classifier) with a diversity loss to maximize the distribution differences of predicted pseudo labels across heads. This encourages the generated pseudo labels to cover as many abnormal events as possible. We then devise an iterative uncertainty pseudo label refinement strategy, which improves not only the initial pseudo labels but also the updated ones obtained by the desired classifier in the second stage. Extensive experimental results demonstrate the proposed method performs favorably against state-of-the-art approaches on the UCF-Crime, TAD, and XD-Violence benchmark datasets.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Elixir: A system to enhance data quality for multiple analytics on a video stream
- **Authors:** Sibendu Paul, Kunal Rao, Giuseppe Coviello, Murugan Sankaradas, Oliver Po, Y. Charlie Hu, Srimat T. Chakradhar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multiagent Systems (cs.MA)
- **Arxiv link:** https://arxiv.org/abs/2212.04061
- **Pdf link:** https://arxiv.org/pdf/2212.04061
- **Abstract**
IoT sensors, especially video cameras, are ubiquitously deployed around the world to perform a variety of computer vision tasks in several verticals including retail, healthcare, safety and security, transportation, manufacturing, etc. To amortize their high deployment effort and cost, it is desirable to perform multiple video analytics tasks, which we refer to as Analytical Units (AUs), off the video feed coming out of every camera. In this paper, we first show that in a multi-AU setting, changing the camera setting has disproportionate impact on different AUs performance. In particular, the optimal setting for one AU may severely degrade the performance for another AU, and further the impact on different AUs varies as the environmental condition changes. We then present Elixir, a system to enhance the video stream quality for multiple analytics on a video stream. Elixir leverages Multi-Objective Reinforcement Learning (MORL), where the RL agent caters to the objectives from different AUs and adjusts the camera setting to simultaneously enhance the performance of all AUs. To define the multiple objectives in MORL, we develop new AU-specific quality estimator values for each individual AU. We evaluate Elixir through real-world experiments on a testbed with three cameras deployed next to each other (overlooking a large enterprise parking lot) running Elixir and two baseline approaches, respectively. Elixir correctly detects 7.1% (22,068) and 5.0% (15,731) more cars, 94% (551) and 72% (478) more faces, and 670.4% (4975) and 158.6% (3507) more persons than the default-setting and time-sharing approaches, respectively. It also detects 115 license plates, far more than the time-sharing approach (7) and the default setting (0).
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Executing your Commands via Motion Diffusion in Latent Space
- **Authors:** Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2212.04048
- **Pdf link:** https://arxiv.org/pdf/2212.04048
- **Abstract**
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and have a property of quite different distribution from conditional modalities, such as textual descriptors in natural languages, it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences. Besides, the raw motion data from the motion capture system might be redundant in sequences and contain noises; directly modeling the joint distribution over the raw motion sequences and conditional modalities would need a heavy computational overhead and might result in artifacts introduced by the captured noises. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) and arrive at a representative and low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish the connections between the raw motion sequences and the conditional inputs, we perform a diffusion process on the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) could produce vivid motion sequences conforming to the given conditional inputs and substantially reduce the computational overhead in both the training and inference stages. Extensive experiments on various human motion generation tasks demonstrate that our MLD achieves significant improvements over the state-of-the-art methods among extensive human motion generation tasks, with two orders of magnitude faster than previous diffusion models on raw motion sequences.
### Cross-view Geo-localization via Learning Disentangled Geometric Layout Correspondence
- **Authors:** Xiaohan Zhang, Xingyu Li, Waqas Sultani, Yi Zhou, Safwan Wshah
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04074
- **Pdf link:** https://arxiv.org/pdf/2212.04074
- **Abstract**
Cross-view geo-localization aims to estimate the location of a query ground image by matching it to a reference geo-tagged aerial images database. As an extremely challenging task, its difficulties root in the drastic view changes and different capturing time between two views. Despite these difficulties, recent works achieve outstanding progress on cross-view geo-localization benchmarks. However, existing methods still suffer from poor performance on the cross-area benchmarks, in which the training and testing data are captured from two different regions. We attribute this deficiency to the lack of ability to extract the spatial configuration of visual feature layouts and models' overfitting on low-level details from the training set. In this paper, we propose GeoDTR which explicitly disentangles geometric information from raw features and learns the spatial correlations among visual features from aerial and ground pairs with a novel geometric layout extractor module. This module generates a set of geometric layout descriptors, modulating the raw features and producing high-quality latent representations. In addition, we elaborate on two categories of data augmentations, (i) Layout simulation, which varies the spatial configuration while keeping the low-level details intact. (ii) Semantic augmentation, which alters the low-level details and encourages the model to capture spatial configurations. These augmentations help to improve the performance of the cross-view geo-localization models, especially on the cross-area benchmarks. Moreover, we propose a counterfactual-based learning process to benefit the geometric layout extractor in exploring spatial information. Extensive experiments show that GeoDTR not only achieves state-of-the-art results but also significantly boosts the performance on same-area and cross-area benchmarks.
### NRTR: Neuron Reconstruction with Transformer from 3D Optical Microscopy Images
- **Authors:** Yijun Wang, Rui Lang, Rui Li, Junsong Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.04163
- **Pdf link:** https://arxiv.org/pdf/2212.04163
- **Abstract**
The neuron reconstruction from raw Optical Microscopy (OM) image stacks is the basis of neuroscience. Manual annotation and semi-automatic neuron tracing algorithms are time-consuming and inefficient. Existing deep learning neuron reconstruction methods, although demonstrating exemplary performance, greatly demand complex rule-based components. Therefore, a crucial challenge is designing an end-to-end neuron reconstruction method that makes the overall framework simpler and model training easier. We propose a Neuron Reconstruction Transformer (NRTR) that, discarding the complex rule-based components, views neuron reconstruction as a direct set-prediction problem. To the best of our knowledge, NRTR is the first image-to-set deep learning model for end-to-end neuron reconstruction. In experiments using the BigNeuron and VISoR-40 datasets, NRTR achieves excellent neuron reconstruction results for comprehensive benchmarks and outperforms competitive baselines. Results of extensive experiments indicate that NRTR is effective at showing that neuron reconstruction is viewed as a set-prediction problem, which makes end-to-end model training available.
### Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning
- **Authors:** Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, Yu-Gang Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04500
- **Pdf link:** https://arxiv.org/pdf/2212.04500
- **Abstract**
Benefiting from masked visual modeling, self-supervised video representation learning has achieved remarkable progress. However, existing methods focus on learning representations from scratch through reconstructing low-level features like raw pixel RGB values. In this paper, we propose masked video distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: firstly we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates different teachers produce different learned patterns for students. Motivated by this observation, to leverage the advantage of different teachers, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill student models from both video teachers and image teachers by masked feature modeling. Extensive experimental results demonstrate that video transformers pretrained with spatial-temporal co-teaching outperform models distilled with a single teacher on a multitude of video datasets. Our MVD with vanilla ViT achieves state-of-the-art performance compared with previous supervised or self-supervised methods on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 75.9% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 1.6% respectively. Code will be available at \url{https://github.com/ruiwang2021/mvd}.
## Keyword: raw image
There is no result
| 2.0 | New submissions for Fri, 9 Dec 22 - ## Keyword: events
### RainUNet for Super-Resolution Rain Movie Prediction under Spatio-temporal Shifts
- **Authors:** Jinyoung Park, Minseok Son, Seungju Cho, Inyoung Lee, Changick Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04005
- **Pdf link:** https://arxiv.org/pdf/2212.04005
- **Abstract**
This paper presents a solution to the Weather4cast 2022 Challenge Stage 2. The goal of the challenge is to forecast future high-resolution rainfall events obtained from ground radar using low-resolution multiband satellite images. We suggest a solution that performs data preprocessing appropriate to the challenge and then predicts rainfall movies using a novel RainUNet. RainUNet is a hierarchical U-shaped network with temporal-wise separable block (TS block) using a decoupled large kernel 3D convolution to improve the prediction performance. Various evaluation metrics show that our solution is effective compared to the baseline method. The source codes are available at https://github.com/jinyxp/Weather4cast-2022
### Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection
- **Authors:** Chen Zhang, Guorong Li, Yuankai Qi, Shuhui Wang, Laiyun Qing, Qingming Huang, Ming-Hsuan Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04090
- **Pdf link:** https://arxiv.org/pdf/2212.04090
- **Abstract**
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels. Recently, two-stage self-training methods have achieved significant improvements by self-generating pseudo labels and self-refining anomaly scores with these labels. As the pseudo labels play a crucial role, we propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training. Specifically, we first design a multi-head classification module (each head serves as a classifier) with a diversity loss to maximize the distribution differences of predicted pseudo labels across heads. This encourages the generated pseudo labels to cover as many abnormal events as possible. We then devise an iterative uncertainty pseudo label refinement strategy, which improves not only the initial pseudo labels but also the updated ones obtained by the desired classifier in the second stage. Extensive experimental results demonstrate the proposed method performs favorably against state-of-the-art approaches on the UCF-Crime, TAD, and XD-Violence benchmark datasets.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Elixir: A system to enhance data quality for multiple analytics on a video stream
- **Authors:** Sibendu Paul, Kunal Rao, Giuseppe Coviello, Murugan Sankaradas, Oliver Po, Y. Charlie Hu, Srimat T. Chakradhar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multiagent Systems (cs.MA)
- **Arxiv link:** https://arxiv.org/abs/2212.04061
- **Pdf link:** https://arxiv.org/pdf/2212.04061
- **Abstract**
IoT sensors, especially video cameras, are ubiquitously deployed around the world to perform a variety of computer vision tasks in several verticals including retail, healthcare, safety and security, transportation, manufacturing, etc. To amortize their high deployment effort and cost, it is desirable to perform multiple video analytics tasks, which we refer to as Analytical Units (AUs), off the video feed coming out of every camera. In this paper, we first show that in a multi-AU setting, changing the camera setting has disproportionate impact on different AUs performance. In particular, the optimal setting for one AU may severely degrade the performance for another AU, and further the impact on different AUs varies as the environmental condition changes. We then present Elixir, a system to enhance the video stream quality for multiple analytics on a video stream. Elixir leverages Multi-Objective Reinforcement Learning (MORL), where the RL agent caters to the objectives from different AUs and adjusts the camera setting to simultaneously enhance the performance of all AUs. To define the multiple objectives in MORL, we develop new AU-specific quality estimator values for each individual AU. We evaluate Elixir through real-world experiments on a testbed with three cameras deployed next to each other (overlooking a large enterprise parking lot) running Elixir and two baseline approaches, respectively. Elixir correctly detects 7.1% (22,068) and 5.0% (15,731) more cars, 94% (551) and 72% (478) more faces, and 670.4% (4975) and 158.6% (3507) more persons than the default-setting and time-sharing approaches, respectively. It also detects 115 license plates, far more than the time-sharing approach (7) and the default setting (0).
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Executing your Commands via Motion Diffusion in Latent Space
- **Authors:** Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2212.04048
- **Pdf link:** https://arxiv.org/pdf/2212.04048
- **Abstract**
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and have a property of quite different distribution from conditional modalities, such as textual descriptors in natural languages, it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences. Besides, the raw motion data from the motion capture system might be redundant in sequences and contain noises; directly modeling the joint distribution over the raw motion sequences and conditional modalities would need a heavy computational overhead and might result in artifacts introduced by the captured noises. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) and arrive at a representative and low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish the connections between the raw motion sequences and the conditional inputs, we perform a diffusion process on the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) could produce vivid motion sequences conforming to the given conditional inputs and substantially reduce the computational overhead in both the training and inference stages. Extensive experiments on various human motion generation tasks demonstrate that our MLD achieves significant improvements over the state-of-the-art methods among extensive human motion generation tasks, with two orders of magnitude faster than previous diffusion models on raw motion sequences.
### Cross-view Geo-localization via Learning Disentangled Geometric Layout Correspondence
- **Authors:** Xiaohan Zhang, Xingyu Li, Waqas Sultani, Yi Zhou, Safwan Wshah
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04074
- **Pdf link:** https://arxiv.org/pdf/2212.04074
- **Abstract**
Cross-view geo-localization aims to estimate the location of a query ground image by matching it to a reference geo-tagged aerial images database. As an extremely challenging task, its difficulties root in the drastic view changes and different capturing time between two views. Despite these difficulties, recent works achieve outstanding progress on cross-view geo-localization benchmarks. However, existing methods still suffer from poor performance on the cross-area benchmarks, in which the training and testing data are captured from two different regions. We attribute this deficiency to the lack of ability to extract the spatial configuration of visual feature layouts and models' overfitting on low-level details from the training set. In this paper, we propose GeoDTR which explicitly disentangles geometric information from raw features and learns the spatial correlations among visual features from aerial and ground pairs with a novel geometric layout extractor module. This module generates a set of geometric layout descriptors, modulating the raw features and producing high-quality latent representations. In addition, we elaborate on two categories of data augmentations, (i) Layout simulation, which varies the spatial configuration while keeping the low-level details intact. (ii) Semantic augmentation, which alters the low-level details and encourages the model to capture spatial configurations. These augmentations help to improve the performance of the cross-view geo-localization models, especially on the cross-area benchmarks. Moreover, we propose a counterfactual-based learning process to benefit the geometric layout extractor in exploring spatial information. Extensive experiments show that GeoDTR not only achieves state-of-the-art results but also significantly boosts the performance on same-area and cross-area benchmarks.
### NRTR: Neuron Reconstruction with Transformer from 3D Optical Microscopy Images
- **Authors:** Yijun Wang, Rui Lang, Rui Li, Junsong Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.04163
- **Pdf link:** https://arxiv.org/pdf/2212.04163
- **Abstract**
The neuron reconstruction from raw Optical Microscopy (OM) image stacks is the basis of neuroscience. Manual annotation and semi-automatic neuron tracing algorithms are time-consuming and inefficient. Existing deep learning neuron reconstruction methods, although demonstrating exemplary performance, greatly demand complex rule-based components. Therefore, a crucial challenge is designing an end-to-end neuron reconstruction method that makes the overall framework simpler and model training easier. We propose a Neuron Reconstruction Transformer (NRTR) that, discarding the complex rule-based components, views neuron reconstruction as a direct set-prediction problem. To the best of our knowledge, NRTR is the first image-to-set deep learning model for end-to-end neuron reconstruction. In experiments using the BigNeuron and VISoR-40 datasets, NRTR achieves excellent neuron reconstruction results for comprehensive benchmarks and outperforms competitive baselines. Results of extensive experiments indicate that NRTR is effective at showing that neuron reconstruction is viewed as a set-prediction problem, which makes end-to-end model training available.
### Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning
- **Authors:** Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, Yu-Gang Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.04500
- **Pdf link:** https://arxiv.org/pdf/2212.04500
- **Abstract**
Benefiting from masked visual modeling, self-supervised video representation learning has achieved remarkable progress. However, existing methods focus on learning representations from scratch through reconstructing low-level features like raw pixel RGB values. In this paper, we propose masked video distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: firstly we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates different teachers produce different learned patterns for students. Motivated by this observation, to leverage the advantage of different teachers, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill student models from both video teachers and image teachers by masked feature modeling. Extensive experimental results demonstrate that video transformers pretrained with spatial-temporal co-teaching outperform models distilled with a single teacher on a multitude of video datasets. Our MVD with vanilla ViT achieves state-of-the-art performance compared with previous supervised or self-supervised methods on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 75.9% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 1.6% respectively. Code will be available at \url{https://github.com/ruiwang2021/mvd}.
## Keyword: raw image
There is no result
| process | new submissions for fri dec keyword events rainunet for super resolution rain movie prediction under spatio temporal shifts authors jinyoung park minseok son seungju cho inyoung lee changick kim subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper presents a solution to the challenge stage the goal of the challenge is to forecast future high resolution rainfall events obtained from ground radar using low resolution multiband satellite images we suggest a solution that performs data preprocessing appropriate to the challenge and then predicts rainfall movies using a novel rainunet rainunet is a hierarchical u shaped network with temporal wise separable block ts block using a decoupled large kernel convolution to improve the prediction performance various evaluation metrics show that our solution is effective compared to the baseline method the source codes are available at exploiting completeness and uncertainty of pseudo labels for weakly supervised video anomaly detection authors chen zhang guorong li yuankai qi shuhui wang laiyun qing qingming huang ming hsuan yang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract weakly supervised video anomaly detection aims to identify abnormal events in videos using only video level labels recently two stage self training methods have achieved significant improvements by self generating pseudo labels and self refining anomaly scores with these labels as the pseudo labels play a crucial role we propose an enhancement framework by exploiting completeness and uncertainty properties for effective self training specifically we first design a multi head classification module each head serves as a classifier with a diversity loss to maximize the distribution differences of predicted pseudo labels across heads this encourages the generated pseudo labels to cover as many abnormal events as possible we then devise an iterative uncertainty pseudo label refinement strategy which improves not only the initial pseudo labels but also the updated ones obtained by the desired classifier in the second stage extensive experimental results demonstrate the proposed method performs favorably against state of the art approaches on the ucf crime tad and xd violence benchmark datasets keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp elixir a system to enhance data quality for multiple analytics on a video stream authors sibendu paul kunal rao giuseppe coviello murugan sankaradas oliver po y charlie hu srimat t chakradhar subjects computer vision and pattern recognition cs cv multiagent systems cs ma arxiv link pdf link abstract iot sensors especially video cameras are ubiquitously deployed around the world to perform a variety of computer vision tasks in several verticals including retail healthcare safety and security transportation manufacturing etc to amortize their high deployment effort and cost it is desirable to perform multiple video analytics tasks which we refer to as analytical units aus off the video feed coming out of every camera in this paper we first show that in a multi au setting changing the camera setting has disproportionate impact on different aus performance in particular the optimal setting for one au may severely degrade the performance for another au and further the impact on different aus varies as the 
environmental condition changes we then present elixir a system to enhance the video stream quality for multiple analytics on a video stream elixir leverages multi objective reinforcement learning morl where the rl agent caters to the objectives from different aus and adjusts the camera setting to simultaneously enhance the performance of all aus to define the multiple objectives in morl we develop new au specific quality estimator values for each individual au we evaluate elixir through real world experiments on a testbed with three cameras deployed next to each other overlooking a large enterprise parking lot running elixir and two baseline approaches respectively elixir correctly detects and more cars and more faces and and more persons than the default setting and time sharing approaches respectively it also detects license plates far more than the time sharing approach and the default setting keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw executing your commands via motion diffusion in latent space authors xin chen biao jiang wen liu zilong huang bin fu tao chen jingyi yu gang yu subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract we study a challenging task conditional human motion generation which produces plausible human motion sequences according to various conditional inputs such as action classes or textual descriptors since human motions are highly diverse and have a property of quite different distribution from conditional modalities such as textual descriptors in natural languages it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences besides the raw motion data from the motion capture system might be redundant in sequences and contain noises directly modeling the joint distribution over the raw motion sequences and conditional modalities would need a heavy computational overhead and might result in artifacts introduced by the captured noises to learn a better representation of the various human motion sequences we first design a powerful variational autoencoder vae and arrive at a representative and low dimensional latent code for a human motion sequence then instead of using a diffusion model to establish the connections between the raw motion sequences and the conditional inputs we perform a diffusion process on the motion latent space our proposed motion latent based diffusion model mld could produce vivid motion sequences conforming to the given conditional inputs and substantially reduce the computational overhead in both the training and inference stages extensive experiments on various human motion generation tasks demonstrate that our mld achieves significant improvements over the state of the art methods among extensive human motion generation tasks with two orders of magnitude faster than previous diffusion models on raw motion sequences cross view geo localization via learning disentangled geometric layout correspondence authors xiaohan zhang xingyu li waqas sultani yi zhou safwan wshah subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract cross view geo localization aims to estimate the location of a query ground image by matching it to a reference geo tagged aerial images database as an extremely challenging task its difficulties root in the drastic view changes and different capturing time between two views despite these 
difficulties recent works achieve outstanding progress on cross view geo localization benchmarks however existing methods still suffer from poor performance on the cross area benchmarks in which the training and testing data are captured from two different regions we attribute this deficiency to the lack of ability to extract the spatial configuration of visual feature layouts and models overfitting on low level details from the training set in this paper we propose geodtr which explicitly disentangles geometric information from raw features and learns the spatial correlations among visual features from aerial and ground pairs with a novel geometric layout extractor module this module generates a set of geometric layout descriptors modulating the raw features and producing high quality latent representations in addition we elaborate on two categories of data augmentations i layout simulation which varies the spatial configuration while keeping the low level details intact ii semantic augmentation which alters the low level details and encourages the model to capture spatial configurations these augmentations help to improve the performance of the cross view geo localization models especially on the cross area benchmarks moreover we propose a counterfactual based learning process to benefit the geometric layout extractor in exploring spatial information extensive experiments show that geodtr not only achieves state of the art results but also significantly boosts the performance on same area and cross area benchmarks nrtr neuron reconstruction with transformer from optical microscopy images authors yijun wang rui lang rui li junsong zhang subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract the neuron reconstruction from raw optical microscopy om image stacks is the basis of neuroscience manual annotation and semi automatic neuron tracing algorithms are time consuming and inefficient existing deep learning neuron reconstruction methods although demonstrating exemplary performance greatly demand complex rule based components therefore a crucial challenge is designing an end to end neuron reconstruction method that makes the overall framework simpler and model training easier we propose a neuron reconstruction transformer nrtr that discarding the complex rule based components views neuron reconstruction as a direct set prediction problem to the best of our knowledge nrtr is the first image to set deep learning model for end to end neuron reconstruction in experiments using the bigneuron and visor datasets nrtr achieves excellent neuron reconstruction results for comprehensive benchmarks and outperforms competitive baselines results of extensive experiments indicate that nrtr is effective at showing that neuron reconstruction is viewed as a set prediction problem which makes end to end model training available masked video distillation rethinking masked feature modeling for self supervised video representation learning authors rui wang dongdong chen zuxuan wu yinpeng chen xiyang dai mengchen liu lu yuan yu gang jiang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract benefiting from masked visual modeling self supervised video representation learning has achieved remarkable progress however existing methods focus on learning representations from scratch through reconstructing low level features like raw pixel rgb values in this paper we propose masked video distillation mvd a simple yet effective two 
stage masked feature modeling framework for video representation learning firstly we pretrain an image or video model by recovering low level features of masked patches then we use the resulting features as targets for masked feature modeling for the choice of teacher models we observe that students taught by video teachers perform better on temporally heavy video tasks while image teachers transfer stronger spatial representations for spatially heavy video tasks visualization analysis also indicates different teachers produce different learned patterns for students motivated by this observation to leverage the advantage of different teachers we design a spatial temporal co teaching method for mvd specifically we distill student models from both video teachers and image teachers by masked feature modeling extensive experimental results demonstrate that video transformers pretrained with spatial temporal co teaching outperform models distilled with a single teacher on a multitude of video datasets our mvd with vanilla vit achieves state of the art performance compared with previous supervised or self supervised methods on several challenging video downstream tasks for example with the vit large model our mvd achieves and top accuracy on kinetics and something something outperforming videomae by and respectively code will be available at url keyword raw image there is no result | 1 |
20,014 | 26,486,707,652 | IssuesEvent | 2023-01-17 18:40:06 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | [processor/resourcedetectionprocessor] support openshift as metadata provider | enhancement Stale processor/resourcedetection | ### Is your feature request related to a problem? Please describe.
I would like to be able to add [openshift infrastructure data](https://docs.openshift.com/container-platform/4.11/rest_api/config_apis/infrastructure-config-openshift-io-v1.html) to transmitted telemetry data.
### Describe the solution you'd like
Another detector:
```yaml
processors:
resourcedetection/openshift:
detectors: ["openshift"]
timeout: 2s
override: false
```
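For context, here is a rough sketch of how the proposed processor could be wired into a collector pipeline; it assumes the `processors` block above, and the `otlp` receiver and `logging` exporter are illustrative choices, not part of this request:

```yaml
# Hypothetical end-to-end wiring for the proposed detector (sketch only);
# the receiver and exporter choices are illustrative.
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  logging:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection/openshift]
      exporters: [logging]
```

With `override: false`, resource attributes already present on the telemetry would take precedence over whatever the openshift detector reports.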
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 1.0 | [processor/resourcedetectionprocessor] support openshift as metadata provider - ### Is your feature request related to a problem? Please describe.
I would like to be able to add [openshift infrastructure data](https://docs.openshift.com/container-platform/4.11/rest_api/config_apis/infrastructure-config-openshift-io-v1.html) to transmitted telemetry data.
### Describe the solution you'd like
Another detector:
```yaml
processors:
resourcedetection/openshift:
detectors: ["openshift"]
timeout: 2s
override: false
```
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | process | support openshift as metadata provider is your feature request related to a problem please describe i would like to be able to add to transmitted telemetry data describe the solution you d like another detector yaml processors resourcedetection openshift detectors timeout override false describe alternatives you ve considered no response additional context no response | 1 |
8,101 | 11,277,521,782 | IssuesEvent | 2020-01-15 03:08:44 | boycgit/fe-program-tips | https://api.github.com/repos/boycgit/fe-program-tips | opened | #4 Use process.hrtime for nanosecond-precision timing - JSCON Column · Front-end Tips | Gitalk nodejs/process_hrtime | https://boycgit.github.io/fe-program-tips/#/nodejs/process_hrtime
## Issue 4 - Use process.hrtime for nanosecond-precision timing
## Video explanation
[Video explanation on bilibili](https://player.bilibili.com/player.html?aid=83455727&cid=142771778&page=1)
## Written explanation
If you want to measure how long a piece of code runs, which time function would you pick? The first ones that usually come to mind are `Date.now` or `Date.getTime`.
### 1. The conclusion first
In a Node.js program, prefer [process.hrtime](http://nodejs.cn/api/process/process_hrtime_time.html), then [performance.now](https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_now), and only as a last resort [Date.now](https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Global_Objects/Date).
This ordering is based on two considerations: **precision** and **clock synchronization**.
### 2. Background
First, the drawbacks of `Date.now`:
1. The returned time has only **millisecond** (10^-3) precision, which is not enough;
2. It follows the system time, so it can be shifted when other software adjusts the clock.
To obtain a higher-precision time that is independent of the system clock, the W3C defined the [High Resolution Time Level 2](https://www.w3.org/TR/hr-time-2/) standard, whose [6. Monotonic Clock](https://www.w3.org/TR/hr-time-2/#sec-monotonic-clock) section requires implementations to provide a "monotonically increasing" global system clock:
![Monotonically increasing clock](https://img.alicdn.com/tfs/TB1l7ypt2b2gK0jSZK9XXaEgFXa-881-323.png)
Both Node.js and browsers implement this standard; the concrete implementation is the `performance` object. Through [performance.now](https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_now) we can get a timestamp relative to a fixed origin, with the following properties:
1. Unlike other time functions available in JS (such as `Date.now`), `performance.now()` returns a floating-point number with **microsecond (10^-6)** precision;
2. The time increases at a **constant rate** and is not affected by the system time (it cannot be adjusted by other software);
3. By the standard's definition, `clock drift` is allowed.
![Clock drift](https://img.alicdn.com/tfs/TB17uWst.T1gK0jSZFhXXaAtVXa-870-268.png)
> A quick note on the concept of `clock drift`: it originates from **clock synchronization**, a concept in computer science and engineering that aims to coordinate multiple independent clocks. In reality, even if several clocks are set to the same time, after a while they will show different times because of clock drift, since their tick rates differ slightly.
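As a small aside (not from the original article), a minimal Node.js sketch of measuring an interval with the monotonic clock from `perf_hooks` might look like this:

```js
// Minimal sketch: interval measurement with the monotonic performance.now()
// from Node's perf_hooks module (a floating-point millisecond value with
// microsecond-level precision, unaffected by system clock adjustments).
const { performance } = require('perf_hooks');

const start = performance.now();
setTimeout(() => {
  const elapsedMs = performance.now() - start; // floating-point milliseconds
  console.log(`Elapsed: ${elapsedMs.toFixed(3)} ms`);
}, 1000);
```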
Is there an even finer-grained clock? Yes: the Node.js environment provides the [process.hrtime](http://nodejs.cn/api/process/process_hrtime_time.html) method:
1. Added in Node v0.7.6, so compatibility is excellent (we are already at the v12 LTS);
2. Precision up to the **nanosecond (10^-9)** level;
3. No **clock drift**.
You could say the `process.hrtime` method is **built specifically for measuring time intervals**.
> Note: the browser environment has no `hrtime` method, so the best precision available in browsers is the microsecond level of `performance.now` (and implementations differ across browsers).
One thing to watch when using it: the `time` returned by the first call has to be passed as the argument to the later call:

```js
const NS_PER_SEC = 1e9;
const time = process.hrtime(); // first call, returns the time tuple
// [ 1800216, 25 ]
setTimeout(() => {
  const diff = process.hrtime(time); // pass the first result back in to get the time difference
  // [ 1, 552 ]
  console.log(`Benchmark took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`);
  // Benchmark took 1000000552 nanoseconds
}, 1000);
```

That covers the main content of this issue, and it leads naturally to the conclusion stated at the beginning.
### 3. Bonus tip
If you are on Node.js **v10.7.0** or later, you can also use [hrtime.bigint](https://nodejs.org/api/process.html#process_process_hrtime_bigint), the `bigint` version of `process.hrtime` (the `bigint` type has been supported since v10.4); it returns the current high-resolution time.
It is even more convenient than `process.hrtime`, because it needs no extra `time` argument: just subtract the results of two calls to get the elapsed time:

```js
const start = process.hrtime.bigint();
// 191051479007711n
setTimeout(() => {
  const end = process.hrtime.bigint();
  // 191052633396993n
  console.log(`Benchmark took ${end - start} nanoseconds`);
  // Benchmark took 1154389282 nanoseconds
}, 1000);
```

### 4. References
- [High Resolution Time Level 2](https://www.w3.org/TR/hr-time-2/): the W3C standard for high-resolution time
- [MDN - Performance.now](https://developer.mozilla.org/zh-CN/docs/Web/API/Performance/now): the MDN API documentation for performance.now
- [Creating a timestamp](https://mythbusters.js.org/#/date/timestamp): a concise summary; the original inspiration for this tip
- [How should clock drift and clock skew be understood?](https://www.zhihu.com/question/274957596): a Zhihu discussion of the two concepts
- [Measure process time with Node Js?](https://stackoverflow.com/questions/48768758/measure-process-time-with-node-js): a Stack Overflow answer that also uses process.hrtime for high-precision measurement
- [How to get micro-time in Node.js?](https://cloud.tencent.com/developer/ask/84570): a related question worth a look
- [A first look at performance - monitoring page and program performance](http://www.alloyteam.com/2015/09/explore-performance/): window.performance exposes a set of precise data from which page performance metrics can be derived with simple calculations
- [Page performance monitoring with performance](https://juejin.im/post/5d53a1056fb9a06b1d213ac7): a detailed introduction to evaluating page performance with the performance object | 1.0 | #4 Use process.hrtime for nanosecond-precision timing - JSCON Column · Front-end Tips
<h2 id="文字讲解"><a href="#/nodejs/process_hrtime?id=%e6%96%87%e5%ad%97%e8%ae%b2%e8%a7%a3" data-id="文字讲解" class="anchor"><span>文字讲解</span></a></h2><p>如果去<strong>测试代码运行的时长</strong>,你会选择哪个<strong>时间函数</strong>? 一般第一时间想到的函数是 <code>Date.now</code> 或 <code>Date.getTime</code>。</p><h3 id="_1、先讲结论"><a href="#/nodejs/process_hrtime?id=_1%e3%80%81%e5%85%88%e8%ae%b2%e7%bb%93%e8%ae%ba" data-id="_1、先讲结论" class="anchor"><span>1、先讲结论</span></a></h3><p>在 Node.js 程序中,优先选 <a href="http://nodejs.cn/api/process/process_hrtime_time.html" target="_blank" rel="noopener">process.hrtime</a>,其次选 <a href="https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_now" target="_blank" rel="noopener">performance.now</a>,最后才会是 <a href="https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Global_Objects/Date" target="_blank" rel="noopener">Date.now</a></p><p>之所以这么选,是基于 <strong>精度</strong> 和 <strong>时钟同步</strong> 两方面考虑的。</p><h3 id="_2、知识讲解"><a href="#/nodejs/process_hrtime?id=_2%e3%80%81%e7%9f%a5%e8%af%86%e8%ae%b2%e8%a7%a3" data-id="_2、知识讲解" class="anchor"><span>2、知识讲解</span></a></h3><p>首先看一下 <code>Date.now</code> 的缺点</p><ol><li>返回的时间精度为 <strong>毫秒</strong>(10^-3)级别,精度不够;</li><li>受到系统时间影响,也有可能被其他软件调整所影响</li></ol><p>为了获得更高精度、且和系统时间无关的时间,W3C 制定了 <a href="https://www.w3.org/TR/hr-time-2/" target="_blank" rel="noopener">High Resolution Time Level 2</a> 标准,其中的 <a href="https://www.w3.org/TR/hr-time-2/#sec-monotonic-clock" target="_blank" rel="noopener">6. Monotonic Clock</a> 章节就规定了标准实现方需要提供 “单调递增” 的全局系统时钟:</p><p><img src="https://img.alicdn.com/tfs/TB1l7ypt2b2gK0jSZK9XXaEgFXa-881-323.png" data-origin="https://img.alicdn.com/tfs/TB1l7ypt2b2gK0jSZK9XXaEgFXa-881-323.png" alt="单调递增时钟"></p><p>在 Node.js 和 浏览器中都实现了该标准,具体的实现就是 <code>performance</code> 对象。我们可以通过 <a href="https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_now" target="_blank" rel="noopener">performance.now</a> 获取相对起点的时间戳,具备以下几个特性:</p><ol><li>和 JS 中其他可用的时间类函数(比如 <code>Date.now</code> )不同的是,<code>performance.now()</code> 返回的时间使用了一个浮点数来达到 <strong>微秒(10^-6)</strong> 级别的精确度</li><li>时间以一个 <strong>恒定的速率</strong> 慢慢 <strong>增加</strong> 的,它不会受到系统时间的影响(不会被其他软件所调整)</li><li>从标准定义看,可以存在 <code>clock drift</code> (允许时钟漂移)</li></ol><p><img src="https://img.alicdn.com/tfs/TB17uWst.T1gK0jSZFhXXaAtVXa-870-268.png" data-origin="https://img.alicdn.com/tfs/TB17uWst.T1gK0jSZFhXXaAtVXa-870-268.png" alt="时钟漂移"></p><blockquote>
<p>这里大致说一下 <code>clock drift</code> 的概念,它是源于 <strong>时钟同步</strong> 概念。时钟同步(<code>Clock synchronization</code>)是计算机科学与工程学中的一个概念,旨在协调多个独立的时钟。现实中的多个时钟,即使时间已调至一致,但在一段时间后依然会因为时钟漂移(即<code>clock drift</code>)而显示不同的时间,因为它们计时的速率会略有差异。</p></blockquote>
<p>是否有更精细的时钟存在呢?</p><p>有的,在 Node.js 环境中就提供了 <a href="http://nodejs.cn/api/process/process_hrtime_time.html" target="_blank" rel="noopener">process.hrtime</a> 方法:</p><ol><li>在 node v0.7.6 版本中新增,兼容性很好(毕竟现在都 v12 LTS 版本了)</li><li>精度高达 <strong>纳秒(10^-9)</strong> 级别</li><li>不存在 <strong>时钟漂移 (clock drift)</strong></li></ol><p>可以说 <code>process.hrtime</code> 方法是 <strong>专为测量时间间隔而打造</strong> 的。</p><blockquote>
<p>注:浏览器环境没有这个 <code>hrtime</code> 方法,因此浏览器环境所能达到的最高精度也就用 <code>performance.now</code> 的微秒级别(当然各个浏览器实现也是有差异)</p></blockquote>
<p>只不过这个方法使用需要注意一下,首次调用返回的 <code>time</code> 需要作为后面调用的入参:</p><pre v-pre="" data-lang="js"><code class="lang-js"><span class="token keyword">const</span> <span class="token constant">NS_PER_SEC</span> <span class="token operator">=</span> <span class="token number">1e9</span><span class="token punctuation">;</span>
<span class="token keyword">const</span> time <span class="token operator">=</span> process<span class="token punctuation">.</span><span class="token function">hrtime</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token comment">// 这里第一次调用,返回 time 变量</span>
<span class="token comment">// [ 1800216, 25 ]</span>
<span class="token function">setTimeout</span><span class="token punctuation">(</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token operator">=></span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> diff <span class="token operator">=</span> process<span class="token punctuation">.</span><span class="token function">hrtime</span><span class="token punctuation">(</span>time<span class="token punctuation">)</span><span class="token punctuation">;</span> <span class="token comment">// 用第一次返回的 time 变量作为入参放在第二次调用中,从而获取 diff 时间差值</span>
<span class="token comment">// [ 1, 552 ]</span>
console<span class="token punctuation">.</span><span class="token function">log</span><span class="token punctuation">(</span><span class="token template-string"><span class="token template-punctuation string">`</span><span class="token string">Benchmark took </span><span class="token interpolation"><span class="token interpolation-punctuation punctuation">${</span>diff<span class="token punctuation">[</span><span class="token number">0</span><span class="token punctuation">]</span> <span class="token operator">*</span> <span class="token constant">NS_PER_SEC</span> <span class="token operator">+</span> diff<span class="token punctuation">[</span><span class="token number">1</span><span class="token punctuation">]</span><span class="token interpolation-punctuation punctuation">}</span></span><span class="token string"> nanoseconds</span><span class="token template-punctuation string">`</span></span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token comment">// Benchmark took 1000000552 nanoseconds</span>
<span class="token punctuation">}</span><span class="token punctuation">,</span> <span class="token number">1000</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code><button class="docsify-copy-code-button"><span class="label">点击复制</span><span class="error">错误</span><span class="success">复制成功</span></button></pre><p>到这里本节主要内容讲完了,也就自然而然获得本节刚开始的结论。</p><h3 id="_3、小知识"><a href="#/nodejs/process_hrtime?id=_3%e3%80%81%e5%b0%8f%e7%9f%a5%e8%af%86" data-id="_3、小知识" class="anchor"><span>3、小知识</span></a></h3><p>如果你使用 Node.js <strong>V10.7.0</strong> 以上的版本,还可以使用 <a href="https://nodejs.org/api/process.html#process_process_hrtime_bigint" target="_blank" rel="noopener">hrtime.bigint</a> 方法,它是 <code>process.hrtime</code> 的 <code>bigint</code> 版本(<code>bigint</code> 类型从 v10.4 开始支持),返回当前的高精度实际时间。</p><p>这方法使用起来比 <code>process.hrtime</code> 更加方便,因为它不用额外的 <code>time</code> 入参,直接通过两次调用结果相减就能获得计算时间差:</p><pre v-pre="" data-lang="js"><code class="lang-js"><span class="token keyword">const</span> start <span class="token operator">=</span> process<span class="token punctuation">.</span>hrtime<span class="token punctuation">.</span><span class="token function">bigint</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token comment">// 191051479007711n</span>
<span class="token function">setTimeout</span><span class="token punctuation">(</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token operator">=></span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> end <span class="token operator">=</span> process<span class="token punctuation">.</span>hrtime<span class="token punctuation">.</span><span class="token function">bigint</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token comment">// 191052633396993n</span>
console<span class="token punctuation">.</span><span class="token function">log</span><span class="token punctuation">(</span><span class="token template-string"><span class="token template-punctuation string">`</span><span class="token string">基准测试耗时 </span><span class="token interpolation"><span class="token interpolation-punctuation punctuation">${</span>end <span class="token operator">-</span> start<span class="token interpolation-punctuation punctuation">}</span></span><span class="token string"> 纳秒</span><span class="token template-punctuation string">`</span></span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token comment">// 基准测试耗时 1154389282 纳秒</span>
<span class="token punctuation">}</span><span class="token punctuation">,</span> <span class="token number">1000</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code><button class="docsify-copy-code-button"><span class="label">点击复制</span><span class="error">错误</span><span class="success">复制成功</span></button></pre><h3 id="_4、参考文章"><a href="#/nodejs/process_hrtime?id=_4%e3%80%81%e5%8f%82%e8%80%83%e6%96%87%e7%ab%a0" data-id="_4、参考文章" class="anchor"><span>4、参考文章</span></a></h3><ul><li><a href="https://www.w3.org/TR/hr-time-2/" target="_blank" rel="noopener">High Resolution Time Level 2</a>:w3c 中高精度时间的标准</li><li><a href="https://developer.mozilla.org/zh-CN/docs/Web/API/Performance/now" target="_blank" rel="noopener">MDN - Performance.now</a>:MDN 上 performance.now 的 API 文档</li><li><a href="https://mythbusters.js.org/#/date/timestamp" target="_blank" rel="noopener">Creating a timestamp</a>:言简意赅的总结,本文的选材最初就是来源于此</li><li><a href="https://www.zhihu.com/question/274957596" target="_blank" rel="noopener">如何理解 clock drift 和 clock skew 这两个概念呢?</a>:知乎上对这两个概念的回答</li><li><a href="https://stackoverflow.com/questions/48768758/measure-process-time-with-node-js" target="_blank" rel="noopener">Measure process time with Node Js?</a>:SOF 上对该问题的解答,也是用 process.hrtime 进行高精度时间测量</li><li><a href="https://cloud.tencent.com/developer/ask/84570" target="_blank" rel="noopener">如何在Node.js中获得微时间?</a>:可以看一下这个问题</li><li><a href="http://www.alloyteam.com/2015/09/explore-performance/" target="_blank" rel="noopener">初探 performance – 监控网页与程序性能</a>:window.performance 提供了一组精确的数据,经过简单的计算就能得出一些网页性能数据</li><li><a href="https://juejin.im/post/5d53a1056fb9a06b1d213ac7" target="_blank" rel="noopener">页面性能监测之performance</a>:详细介绍如何利用 performance 对象来评测页面性能</li></ul><section id="content-qr-footer" style="display: block;"><p style="text-align: center">关注微信公众号,回复 “tips” + “期号” 获取往期 tip。(比如可以通过回复 “tips25” 获取 第 25 期 tips)</p> <section style="text-align: center"><img src="https://boycgit.github.io/fe-program-tips/images/blogqrcode2.jpg"></section></section> | process | 用 process hrtime 获取纳秒级的计时精度 jscon专栏﹒前端tips id 第 期 用 processhrtime 获取纳秒级的计时精度 第 期 用 process hrtime 获取纳秒级的计时精度 视频讲解 文字讲解 如果去 测试代码运行的时长 ,你会选择哪个 时间函数 ? 一般第一时间想到的函数是 date now 或 date gettime 。 、先讲结论 在 node js 程序中,优先选 process hrtime ,其次选 performance now ,最后才会是 date now 之所以这么选,是基于 精度 和 时钟同步 两方面考虑的。 、知识讲解 首先看一下 date now 的缺点 返回的时间精度为 毫秒 ( )级别,精度不够; 受到系统时间影响,也有可能被其他软件调整所影响 为了获得更高精度、且和系统时间无关的时间, 制定了 high resolution time level 标准,其中的 monotonic clock 章节就规定了标准实现方需要提供 “单调递增” 的全局系统时钟: 在 node js 和 浏览器中都实现了该标准,具体的实现就是 performance 对象。我们可以通过 performance now 获取相对起点的时间戳,具备以下几个特性: 和 js 中其他可用的时间类函数(比如 date now )不同的是, performance now 返回的时间使用了一个浮点数来达到 微秒( ) 级别的精确度 时间以一个 恒定的速率 慢慢 增加 的,它不会受到系统时间的影响(不会被其他软件所调整) 从标准定义看,可以存在 clock drift (允许时钟漂移) 这里大致说一下 clock drift 的概念,它是源于 时钟同步 概念。时钟同步( clock synchronization )是计算机科学与工程学中的一个概念,旨在协调多个独立的时钟。现实中的多个时钟,即使时间已调至一致,但在一段时间后依然会因为时钟漂移(即 clock drift )而显示不同的时间,因为它们计时的速率会略有差异。 是否有更精细的时钟存在呢? 
有的,在 node js 环境中就提供了 process hrtime 方法: 在 node 版本中新增,兼容性很好(毕竟现在都 lts 版本了) 精度高达 纳秒( ) 级别 不存在 时钟漂移 clock drift 可以说 process hrtime 方法是 专为测量时间间隔而打造 的。 注:浏览器环境没有这个 hrtime 方法,因此浏览器环境所能达到的最高精度也就用 performance now 的微秒级别(当然各个浏览器实现也是有差异) 只不过这个方法使用需要注意一下,首次调用返回的 time 需要作为后面调用的入参: const ns per sec const time process hrtime 这里第一次调用,返回 time 变量 settimeout gt const diff process hrtime time 用第一次返回的 time 变量作为入参放在第二次调用中,从而获取 diff 时间差值 console log benchmark took diff ns per sec diff nanoseconds benchmark took nanoseconds 点击复制 错误 复制成功 到这里本节主要内容讲完了,也就自然而然获得本节刚开始的结论。 、小知识 如果你使用 node js 以上的版本,还可以使用 hrtime bigint 方法,它是 process hrtime 的 bigint 版本( bigint 类型从 开始支持),返回当前的高精度实际时间。 这方法使用起来比 process hrtime 更加方便,因为它不用额外的 time 入参,直接通过两次调用结果相减就能获得计算时间差: const start process hrtime bigint settimeout gt const end process hrtime bigint console log 基准测试耗时 end start 纳秒 基准测试耗时 纳秒 点击复制 错误 复制成功 、参考文章 high resolution time level : 中高精度时间的标准 mdn performance now :mdn 上 performance now 的 api 文档 creating a timestamp :言简意赅的总结,本文的选材最初就是来源于此 如何理解 clock drift 和 clock skew 这两个概念呢? :知乎上对这两个概念的回答 measure process time with node js :sof 上对该问题的解答,也是用 process hrtime 进行高精度时间测量 如何在node js中获得微时间? :可以看一下这个问题 初探 performance – 监控网页与程序性能 :window performance 提供了一组精确的数据,经过简单的计算就能得出一些网页性能数据 页面性能监测之performance :详细介绍如何利用 performance 对象来评测页面性能 关注微信公众号,回复 “tips” “期号” 获取往期 tip。(比如可以通过回复 “ ” 获取 第 期 tips) img src | 1 |
122,432 | 17,703,901,736 | IssuesEvent | 2021-08-25 04:04:31 | Chiencc/Sample_Webgoat | https://api.github.com/repos/Chiencc/Sample_Webgoat | opened | CVE-2013-4316 (Medium) detected in struts2-core-2.3.15.jar | security vulnerability | ## CVE-2013-4316 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>struts2-core-2.3.15.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: Sample_Webgoat/pom.xml</p>
<p>Path to vulnerable library: sitory/org/apache/struts/struts2-core/2.3.15/struts2-core-2.3.15.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.15.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Chiencc/Sample_Webgoat/commit/5cef7c1658e063cd48f915c9038805975aeeab84">5cef7c1658e063cd48f915c9038805975aeeab84</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Struts 2.0.0 through 2.3.15.1 enables Dynamic Method Invocation by default, which has unknown impact and attack vectors.
<p>Publish Date: 2013-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4316>CVE-2013-4316</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://struts.apache.org/release/2.3.x/docs/s2-019.html">http://struts.apache.org/release/2.3.x/docs/s2-019.html</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Developers should immediately upgrade to Struts 2.3.15.2</p>
</p>
</details>
<p></p>
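In Maven terms, the suggested fix would roughly amount to bumping the dependency in the consuming project's `pom.xml`; this snippet is a sketch based on the fix version named above, not part of the original report:

```xml
<!-- Sketch: pin struts2-core to the fixed release named in the advisory. -->
<dependency>
  <groupId>org.apache.struts</groupId>
  <artifactId>struts2-core</artifactId>
  <version>2.3.15.2</version>
</dependency>
```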
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2013-4316 (Medium) detected in struts2-core-2.3.15.jar - ## CVE-2013-4316 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>struts2-core-2.3.15.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: Sample_Webgoat/pom.xml</p>
<p>Path to vulnerable library: sitory/org/apache/struts/struts2-core/2.3.15/struts2-core-2.3.15.jar</p>
<p>
Dependency Hierarchy:
- :x: **struts2-core-2.3.15.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Chiencc/Sample_Webgoat/commit/5cef7c1658e063cd48f915c9038805975aeeab84">5cef7c1658e063cd48f915c9038805975aeeab84</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Struts 2.0.0 through 2.3.15.1 enables Dynamic Method Invocation by default, which has unknown impact and attack vectors.
<p>Publish Date: 2013-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4316>CVE-2013-4316</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://struts.apache.org/release/2.3.x/docs/s2-019.html">http://struts.apache.org/release/2.3.x/docs/s2-019.html</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Developers should immediately upgrade to Struts 2.3.15.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in core jar cve medium severity vulnerability vulnerable library core jar apache struts path to dependency file sample webgoat pom xml path to vulnerable library sitory org apache struts core core jar dependency hierarchy x core jar vulnerable library found in head commit a href found in base branch master vulnerability details apache struts through enables dynamic method invocation by default which has unknown impact and attack vectors publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution developers should immediately upgrade to struts step up your open source security game with whitesource | 0 |
85,223 | 7,964,250,247 | IssuesEvent | 2018-07-13 20:40:49 | dojot/dojot | https://api.github.com/repos/dojot/dojot | closed | Create template - no alert message is displayed when the type field is not selected | Priority:Medium Status:ToTest Team:Frontend Type:Bug | If the **type** field is not selected for the configuration parameters or attributes, they are still added, but the template is not created.

Expected result:
- the attributes should not be added, and help messages for field selection should be displayed | 1.0 | Create template - no alert message is displayed when the type field is not selected - If the **type** field is not selected for the configuration parameters or attributes, they are still added, but the template is not created.

Expected result:
- the attributes should not be added, and help messages for field selection should be displayed | non_process | create template no alert message is displayed when the type field is not selected if the type field is not selected in the configuration parameters or attributes they are added but the template is not created expected result the attributes should not be added and help messages for field selection should be displayed | 0 |
149,743 | 5,724,859,347 | IssuesEvent | 2017-04-20 15:22:08 | LDMW/app | https://api.github.com/repos/LDMW/app | closed | Configure wagtail to use bcrypt | priority-2 T1d technical | The default is PBKDF2 with SHA256 (`pbkdf2_sha256`)
https://docs.djangoproject.com/en/1.11/topics/auth/passwords/ seems to explain how this can be configured in Django | 1.0 | Configure wagtail to use bcrypt - The default is PBKDF2 with SHA256 (`pbkdf2_sha256`)
https://docs.djangoproject.com/en/1.11/topics/auth/passwords/ seems to explain how this can be configured in Django | non_process | configure wagtail to use bcrypt the default is seems to explain how this can be configured in django | 0
7,863 | 11,042,165,817 | IssuesEvent | 2019-12-09 08:33:15 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | Improve algorithm parameters description using tables | 3.4 Guidelines Processing Alg enhancement | ## Description
This is a placeholder to discuss the possible use of tables to describe the processing algorithms' parameters instead of headings and bold syntax.
As discussed at the Nødebo hackfest, tables are hard to maintain in Sphinx. Meanwhile, while discussing #2738, I found some nice directives that may make this easier:
https://rest-sphinx-memo.readthedocs.io/en/latest/ReST.html#csv-table
I will try to come up with a structure for using them with the parameters. And see if it works.
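For concreteness, one possible shape for such a table is sketched below; the column set and the parameter rows are illustrative assumptions, not a settled format:

```rst
.. csv-table:: Algorithm parameters (illustrative layout)
   :header: "Label", "Name", "Type", "Description"
   :widths: 20, 20, 20, 40

   "Input layer", "``INPUT``", "[vector: any]", "Layer to process"
   "Output layer", "``OUTPUT``", "[same as input]", "The generated output layer"
```

Being plain rows of quoted values, a `csv-table` avoids the column-alignment maintenance burden of grid tables.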
Other improvements:
- [ ] See whether some data inputs should be explained in more detail in a particular chapter
- [x] Add the Python identifier for each parameter
- [ ] Add the Python call for the algorithm
- [ ] Find where to place these "new" features in the formatting
- [ ] Study whether a table structure is suited to the algorithm description
- [ ] (optional) Add the custom icon of the algorithm when available - not a priority, but if we come up with a nice organization, it could be a plus
- [ ] Information on whether the algorithm produces a new file or modifies the input one is not consistently present. It could be nice to have a dedicated section (instead of an inline comment in the description, as now) | 1.0 | Improve algorithm parameters description using tables - ## Description
This is a placeholder to discuss the possible use of tables to describe the processing algorithms' parameters instead of headings and bold syntax.
As discussed at the Nødebo hackfest, tables are hard to maintain in Sphinx. Meanwhile, while discussing #2738, I found some nice directives that may make this easier:
https://rest-sphinx-memo.readthedocs.io/en/latest/ReST.html#csv-table
I will try to come up with a structure for using them with the parameters. And see if it works.
Other improvements:
- [ ] See whether some data inputs should be explained in more detail in a particular chapter
- [x] Add the Python identifier for each parameter
- [ ] Add the Python call for the algorithm
- [ ] Find where to place these "new" features in the formatting
- [ ] Study whether a table structure is suited to the algorithm description
- [ ] (optional) Add the custom icon of the algorithm when available - not a priority, but if we come up with a nice organization, it could be a plus
- [ ] Information on whether the algorithm produces a new file or modifies the input one is not consistently present. It could be nice to have a dedicated section (instead of an inline comment in the description, as now) | process | improve algorithm parameters description using tables description this is a place holder to discuss the possible use of tables to describe the processing algorithms parameters instead of headings and bold syntax as discussed in nodebo hackfest tables are hard to maintain in sphinx meanwhile while discussing i found some nice directives that may make it easier i will try to come up with a structure for using them with the parameters and see if it works other improvements see whether some data inputs should be more explained in a particular chapter add the python identifier for each parameter add the python call for the algorithm find where to place these new features in the formating study whether a table structure is adapted for the alg description optional add the custom icon of the algorithm when available not a priority but if we come with a nice organization it could be a plus information whether the algorithm produces a new file or modifies the input one is inequally present could be nice to have a dedicated section instead of inline comment in the description as now | 1
11,925 | 14,704,082,379 | IssuesEvent | 2021-01-04 15:58:28 | GetTerminus/terminus-oss | https://api.github.com/repos/GetTerminus/terminus-oss | closed | Icons: Make paid icons optional | Focus: component Goal: Process Improvement Needs: exploration Type: feature | Pro FontAwesome icons must be made available to the end consumer. But the current implementation _requires_ all consumers to install the pro version. This means that any components that use our icon component cannot be spun up in something like StackBlitz etc.
- [ ] Determine a way to make the Pro icons optional
- Possibly use injection token as the charts component does
- [ ] Move pro icons to optional peer deps
- [ ] Add free icons to optional peer deps | 1.0 | Icons: Make paid icons optional - Pro FontAwesome icons must be made available to the end consumer. But the current implementation _requires_ all consumers to install the pro version. This means that any components that use our icon component cannot be spun up in something like StackBlitz etc.
- [ ] Determine a way to make the Pro icons optional
- Possibly use injection token as the charts component does
- [ ] Move pro icons to optional peer deps
- [ ] Add free icons to optional peer deps | process | icons make paid icons optional pro fontawesome icons must be made available to the end consumer but the current implementation requires all consumers to install the pro version this means that any components that use our icon component cannot be spun up in something like stackblitz etc determine a way to make the pro icons optional possibly use injection token as the charts component does move pro icons to optional peer deps add free icons to optional peer deps | 1 |
5,395 | 8,227,742,633 | IssuesEvent | 2018-09-07 00:49:23 | pelias/pelias | https://api.github.com/repos/pelias/pelias | closed | Autocomplete for "nashville" against sources=wof doesn't return Nashville, TN | processed | If I hit autocomplete with the string "nashville" I don't get back Nashville, Tennessee.
Example URL: http://search.mapzen.com/v1/autocomplete?sources=wof&layers=locality,county,neighbourhood,borough,localadmin&api_key=mapzen-XXX&text=nashville
Results (reformatted to just ID and label):
85816591 East Nashville, Nashville, TN, USA
85855987 West Nashville, Nashville, TN, USA
85893481 South Nashville, Nashville, TN, USA
85980525 Nashville, NC, USA
85936711 Nashville, GA, USA
85920767 Nashville, AR, USA
404497063 Nashville, IL, USA
420529213 Nashville, USA
85893473 North Nashville, Nashville, TN, USA
404494819 Nashville, WI, USA
None of these are Nashville, TN. | 1.0 | Autocomplete for "nashville" against sources=wof doesn't return Nashville, TN - If I hit autocomplete with the string "nashville" I don't get back Nashville, Tennessee.
Example URL: http://search.mapzen.com/v1/autocomplete?sources=wof&layers=locality,county,neighbourhood,borough,localadmin&api_key=mapzen-XXX&text=nashville
Results (reformatted to just ID and label):
85816591 East Nashville, Nashville, TN, USA
85855987 West Nashville, Nashville, TN, USA
85893481 South Nashville, Nashville, TN, USA
85980525 Nashville, NC, USA
85936711 Nashville, GA, USA
85920767 Nashville, AR, USA
404497063 Nashville, IL, USA
420529213 Nashville, USA
85893473 North Nashville, Nashville, TN, USA
404494819 Nashville, WI, USA
None of these are Nashville, TN. | process | autocomplete for nashville against sources wof doesn t return nashville tn if i hit autocomplete with the string nashville i don t get back nashville tennessee example url results reformatted to just id and label east nashville nashville tn usa west nashville nashville tn usa south nashville nashville tn usa nashville nc usa nashville ga usa nashville ar usa nashville il usa nashville usa north nashville nashville tn usa nashville wi usa none of these are nashville tn | 1 |
13,964 | 16,740,047,872 | IssuesEvent | 2021-06-11 08:40:47 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field Trade_OrderToTrade on model Order. | bug/1-repro-available kind/bug process/candidate team/migrations | <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.23.0`
Binary Version: `adf5e8cba3daf12d456d911d72b6e9418681b28b`
Report: https://prisma-errors.netlify.app/report/13330
OS: `x64 darwin 20.4.0`
JS Stacktrace:
```
Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field Trade_OrderToTrade on model Order.
at ChildProcess.<anonymous> (/<....>/core/node_modules/prisma/build/index.js:39885:28)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:467:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::option::expect_failed
10: dml::model::Model::find_relation_field_mut
11: sql_introspection_connector::re_introspection::enrich
12: sql_introspection_connector::calculate_datamodel::calculate_datamodel
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
16: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: introspection_engine::main::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: std::rt::lang_start_internal
24: std::rt::lang_start
```
| 1.0 | Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field Trade_OrderToTrade on model Order. - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.23.0`
Binary Version: `adf5e8cba3daf12d456d911d72b6e9418681b28b`
Report: https://prisma-errors.netlify.app/report/13330
OS: `x64 darwin 20.4.0`
JS Stacktrace:
```
Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field Trade_OrderToTrade on model Order.
at ChildProcess.<anonymous> (/<....>/core/node_modules/prisma/build/index.js:39885:28)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:467:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::option::expect_failed
10: dml::model::Model::find_relation_field_mut
11: sql_introspection_connector::re_introspection::enrich
12: sql_introspection_connector::calculate_datamodel::calculate_datamodel
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
16: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: introspection_engine::main::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: std::rt::lang_start_internal
24: std::rt::lang_start
```
| process | error could not find relation field trade ordertotrade on model order command prisma introspect version binary version report os darwin js stacktrace error could not find relation field trade ordertotrade on model order at childprocess core node modules prisma build index js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js rust stacktrace backtrace backtrace trace backtrace capture backtrace new user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core option expect failed dml model model find relation field mut sql introspection connector re introspection enrich sql introspection connector calculate datamodel calculate datamodel as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll introspection engine main closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure std rt lang start internal std rt lang start | 1 |
143,303 | 21,996,024,392 | IssuesEvent | 2022-05-26 06:26:42 | stores-cedcommerce/Anthony-Store-Design | https://api.github.com/repos/stores-cedcommerce/Anthony-Store-Design | opened | Inside the hamburger menu, the currency converter is cropped. | Header section Mobile Design / UI / UX | **Actual result:**
Inside the hamburger menu, the currency converter is cropped.

**Expected result:**
The currency converter should not be overlapped. | 1.0 | Inside the hamburger menu, the currency converter is cropped. - **Actual result:**
Inside the hamburger menu, the currency converter is cropped.

**Expected result:**
The currency converter should not be overlapped. | non_process | inside the hamburger menu the currency converter is cropped actual result inside the hamburger menu the currency converter is cropped expected result the currency converter should not be overlapped | 0
21,641 | 30,056,080,239 | IssuesEvent | 2023-06-28 06:55:06 | 0xPolygonMiden/miden-vm | https://api.github.com/repos/0xPolygonMiden/miden-vm | closed | Procedure context tracking in the processor | assembly processor on hold | Currently, the processor is unaware of the procedure context of an executing program. That is, the processor doesn't know which procedure is being executed, which procedure called the current procedure etc. This limits usefulness of debug info. Thus, it would be very useful if the processor maintained a "procedure stack".
To make the processor aware of which procedures are executing, we could introduce two new decorators. Something like:
* `ProcStart` - which the assembler would add at the start of each procedure.
* `ProcEnd` - which the assembler would add right before a procedure returns.
Then, the processor would push a procedure onto a stack when it sees `ProcStart` decorator in the instruction stream, and remove a procedure from the stack when it sees `ProcEnd` decorator. This should be very lightweight - so, I don't think we should worry about performance implications. | 1.0 | Procedure context tracking in the processor - Currently, the processor is unaware of the procedure context of an executing program. That is, the processor doesn't know which procedure is being executed, which procedure called the current procedure etc. This limits usefulness of debug info. Thus, it would be very useful if the processor maintained a "procedure stack".
To make the processor aware of which procedures are executing, we could introduce two new decorators. Something like:
* `ProcStart` - which the assembler would add at the start of each procedure.
* `ProcEnd` - which the assembler would add right before a procedure returns.
Then, the processor would push a procedure onto a stack when it sees `ProcStart` decorator in the instruction stream, and remove a procedure from the stack when it sees `ProcEnd` decorator. This should be very lightweight - so, I don't think we should worry about performance implications. | process | procedure context tracking in the processor currently the processor is unaware of the procedure context of an executing program that is the processor doesn t know which procedure is being executed which procedure called the current procedure etc this limits usefulness of debug info thus it would be very useful if the processor maintained a procedure stack to make the processor aware of which procedures are executing we could introduce two new decorators something like procstart which the assembler would add at the start of each procedure procend which the assembler would add right before a procedure returns then the processor would push a procedure onto a stack when it sees procstart decorator in the instruction stream and remove a procedure from the stack when it sees procend decorator this should be very lightweight so i don t think we should worry about performance implications | 1
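A minimal sketch of the decorator-driven procedure stack described in the record above, written in Python for illustration only (Miden itself is Rust); the `ProcStart`/`ProcEnd` names come from the issue, while everything else, including the placeholder procedure name, is an assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProcStack:
    """Tracks the currently executing procedure and its callers."""
    frames: List[str] = field(default_factory=list)

    def on_decorator(self, kind: str, proc_name: Optional[str] = None) -> None:
        # ProcStart/ProcEnd are the two proposed decorators from the issue.
        if kind == "ProcStart":
            self.frames.append(proc_name)  # entering a procedure
        elif kind == "ProcEnd":
            self.frames.pop()              # procedure is about to return

    def current(self) -> Optional[str]:
        return self.frames[-1] if self.frames else None

# Usage: the processor would call on_decorator while consuming the
# instruction stream, so debug output can always name the active procedure.
stack = ProcStack()
stack.on_decorator("ProcStart", "std::math::u64::add")  # placeholder name
assert stack.current() == "std::math::u64::add"
stack.on_decorator("ProcEnd")
assert stack.current() is None
```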
775,797 | 27,237,431,622 | IssuesEvent | 2023-02-21 17:21:11 | Baystation12/Baystation12 | https://api.github.com/repos/Baystation12/Baystation12 | closed | SCBA masks lack species spritesheets | Oversight Priority: Low Could Reproduce Sprites | <!--
Anything inside tags like these is a comment and will not be displayed in the final issue.
Be careful not to write inside them!
Every field other than 'specific information for locating' is required.
If you do not fill out the 'specific information' field, please delete the header.
/!\ Omitting or not answering a required field will result in your issue being closed. /!\
Repeated violation of this rule, or joke or spam issues, will result in punishment.
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them - it's part of the formatting
-->
#### Description of issue
Just a simple oversight, even though all emergency boxes have SCBA masks there only seem to be sprites for humans.
#### Difference between expected and actual behavior
All crew species (that breathe air) should have sprites for SCBAs
Currently SCBA masks are invisible on alien species
#### Steps to reproduce
N/A
#### Specific information for locating
<!-- e.g. an object name, paste specific message outputs... -->
Baystation12/code/modules/clothing/masks/breath.dm
/obj/item/clothing/mask/breath/scba
#### Length of time in which bug has been known to occur
<!--
Be specific if you approximately know the time it's been occurring
for—this can speed up finding the source. If you're not sure
about it, tell us too!
-->
9 months-ish
#### Client version, Server revision & Game ID
<!-- Found with the "Show server revision" verb in the OOC tab in game. -->
Client Version: 513
Server Revision: 757d796fbcbe445a7618c35df8eedc160dc25566 - 7618c35df8eedc160dc25566 - 2020-08-09
Game ID: b8z-c7IG
Current map: SEV Torch
#### Issue bingo
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x] Issue could be reproduced at least once
- [x] Issue could be reproduced by different players
- [x] Issue could be reproduced in multiple rounds
- [x] Issue happened in a recent (less than 7 days ago) round
- [x] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
| 1.0 | SCBA masks lack species spritesheets - <!--
Anything inside tags like these is a comment and will not be displayed in the final issue.
Be careful not to write inside them!
Every field other than 'specific information for locating' is required.
If you do not fill out the 'specific information' field, please delete the header.
/!\ Omitting or not answering a required field will result in your issue being closed. /!\
Repeated violation of this rule, or joke or spam issues, will result in punishment.
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them - it's part of the formatting
-->
#### Description of issue
Just a simple oversight, even though all emergency boxes have SCBA masks there only seem to be sprites for humans.
#### Difference between expected and actual behavior
All crew species (that breathe air) should have sprites for SCBAs
Currently SCBA masks are invisible on alien species
#### Steps to reproduce
N/A
#### Specific information for locating
<!-- e.g. an object name, paste specific message outputs... -->
Baystation12/code/modules/clothing/masks/breath.dm
/obj/item/clothing/mask/breath/scba
#### Length of time in which bug has been known to occur
<!--
Be specific if you approximately know the time it's been occurring
for—this can speed up finding the source. If you're not sure
about it, tell us too!
-->
9 months-ish
#### Client version, Server revision & Game ID
<!-- Found with the "Show server revision" verb in the OOC tab in game. -->
Client Version: 513
Server Revision: 757d796fbcbe445a7618c35df8eedc160dc25566 - 7618c35df8eedc160dc25566 - 2020-08-09
Game ID: b8z-c7IG
Current map: SEV Torch
#### Issue bingo
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x] Issue could be reproduced at least once
- [x] Issue could be reproduced by different players
- [x] Issue could be reproduced in multiple rounds
- [x] Issue happened in a recent (less than 7 days ago) round
- [x] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
| non_process | scba masks lack species spritesheets anything inside tags like these is a comment and will not be displayed in the final issue be careful not to write inside them every field other than specific information for locating is required if you do not fill out the specific information field please delete the header omitting or not answering a required field will result in your issue being closed repeated violation of this rule or joke or spam issues will result in punishment put your answers on the blank lines below the headers the lines with four s don t edit them or delete them it s part of the formatting description of issue just a simple oversight even though all emergency boxes have scba masks there only seem to be sprites for humans difference between expected and actual behavior all crew species that breathe air should have sprites for scbas currently scba masks are invisible on alien species steps to reproduce n a specific information for locating code modules clothing masks breath dm obj item clothing mask breath scba length of time in which bug has been known to occur be specific if you approximately know the time it s been occurring for—this can speed up finding the source if you re not sure about it tell us too months ish client version server revision game id client version server revision game id current map sev torch issue bingo issue could be reproduced at least once issue could be reproduced by different players issue could be reproduced in multiple rounds issue happened in a recent less than days ago round | 0 |
21,760 | 30,277,675,250 | IssuesEvent | 2023-07-07 21:28:07 | h4sh5/npm-auto-scanner | https://api.github.com/repos/h4sh5/npm-auto-scanner | opened | mongodb-memory-server-core 8.13.0 has 1 guarddog issues | npm-silent-process-execution | ```{"npm-silent-process-execution":[{"code":" const killer = (0, child_process_1.fork)(path.resolve(__dirname, '../../scripts/mongo_killer.js'), [parentPid.toString(), childPid.toString()], {\n detached: true,\n stdio: 'ignore', // stdio cannot be done with ... });","location":"package/lib/util/MongoInstance.js:258","message":"This package is silently executing another executable"}]}``` | 1.0 | mongodb-memory-server-core 8.13.0 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" const killer = (0, child_process_1.fork)(path.resolve(__dirname, '../../scripts/mongo_killer.js'), [parentPid.toString(), childPid.toString()], {\n detached: true,\n stdio: 'ignore', // stdio cannot be done with ... });","location":"package/lib/util/MongoInstance.js:258","message":"This package is silently executing another executable"}]}``` | process | mongodb memory server core has guarddog issues npm silent process execution n detached true n stdio ignore stdio cannot be done with location package lib util mongoinstance js message this package is silently executing another executable | 1 |
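For context, the pattern flagged above is a child process forked with its stdio suppressed and detached from its parent (Node's `child_process.fork` in the JSON). A rough Python equivalent of that shape, assuming a hypothetical `mongo_killer.py` standing in for the package's real `mongo_killer.js` helper:

```python
import subprocess

# Spawn a helper with all stdio discarded and detached from the parent --
# the same "silent process execution" shape GuardDog reports above.
proc = subprocess.Popen(
    ["python3", "mongo_killer.py", "1234", "5678"],  # hypothetical helper + pids
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # rough analogue of Node's detached: true
)
print("spawned silently, pid:", proc.pid)
```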
735,459 | 25,399,289,736 | IssuesEvent | 2022-11-22 10:51:45 | geneontology/go-annotation | https://api.github.com/repos/geneontology/go-annotation | closed | PTHR12864 - PTN002489953 | PAINT annotation PomBase low priority |
GO:0007166 | cell surface receptor signaling pathway | IBA with FBgn0262114 , PTN002489953 | Gaudet P et al. (2011)
this term isn't very useful. I'm picking up a couple of mappings that seem incorrect
GO:0007166 | cell surface receptor signaling pathway | IBA with FBgn0262114 , PTN002489953
for gid complex
@pgaudet can we make
GO:0007166 | cell surface receptor signaling pathway not for direct annotation (we would just use signalling, or the precise pathway)
| 1.0 | PTHR12864 - PTN002489953 -
GO:0007166 | cell surface receptor signaling pathway | IBA with FBgn0262114 , PTN002489953 | Gaudet P et al. (2011)
this term isn't very useful. I'm picking up a couple of mappings that seem incorrect
GO:0007166 | cell surface receptor signaling pathway | IBA with FBgn0262114 , PTN002489953
for gid complex
@pgaudet can we make
GO:0007166 | cell surface receptor signaling pathway not for direct annotation (we would just use signalling, or the precise pathway)
| non_process | go cell surface receptor signaling pathway iba with gaudet p et al this term isn t very useful i m picking up a couple of mappings that seem incorrect go cell surface receptor signaling pathway iba with for gid complex pgaudet can we make go cell surface receptor signaling pathway not for direct annotation we would just use signalling or the precise pathway | 0 |
9,350 | 12,364,961,395 | IssuesEvent | 2020-05-18 08:02:33 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [FEATURE][processing] New algorithm "Repair Shapefile" | 3.12 Automatic new feature Processing Alg | Original commit: https://github.com/qgis/QGIS/commit/7746061b466ecce84b76165be2947e7ff56cadf5 by nyalldawson
Uses GDAL to repair shapefiles which have a broken or missing .SHX file | 1.0 | [FEATURE][processing] New algorithm "Repair Shapefile" - Original commit: https://github.com/qgis/QGIS/commit/7746061b466ecce84b76165be2947e7ff56cadf5 by nyalldawson
Uses GDAL to repair shapefiles which have a broken or missing .SHX file | process | new algorithm repair shapefile original commit by nyalldawson uses gdal to repair shapefiles which have a broken or missing shx file | 1 |
16,599 | 21,656,269,841 | IssuesEvent | 2022-05-06 14:23:52 | elastic/beats | https://api.github.com/repos/elastic/beats | reopened | add_kubernetes_metadata processors supports more k8s resources | enhancement :Processors Stalled needs_team | Now add_kubernetes_metadata only supports pods, which is fine for enriching logs with pod meta.
But when using metricbeats to persist k8s events, I also want to enrich the event item with the involved object's labels and annotations, so metadata caches for other resources are needed too.
I think it can be achieved by adding some resource watchers in add_kubernetes_metadata processors.
Is there any plan about it?
If it is ok to add it, I can try to make a PR for it. | 1.0 | add_kubernetes_metadata processors supports more k8s resources - Now add_kubernetes_metadata only supports pods, which is fine for enriching logs with pod meta.
But when using metricbeats to persist k8s events, I also want to enrich the event item with the involved object's labels and annotations, so metadata caches for other resources are needed too.
I think it can be achieved by adding some resource watchers in add_kubernetes_metadata processors.
Is there any plan about it?
If it is ok to add it, I can try to make a PR for it. | process | add kubernetes metadata processors supports more resources now add kubernetes metadata only supports pods which is fine for enriching logs with pod meta but when using metricbeats to persist events i also want to enrich the event item with the involved object s labels and annotations so metadata caches for other resources are needed too i think it can be achieved by adding some resource watchers in add kubernetes metadata processors is there any plan about it if it is ok to add it i can try to make a pr for it | 1
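A minimal sketch of the watcher-backed cache this request describes; Beats processors are written in Go, so this Python version, with a made-up field layout, only illustrates the enrichment flow:

```python
# Watcher-backed metadata cache, keyed by (kind, namespace, name).
# A real implementation would fill it from k8s watch events
# (ADDED / MODIFIED / DELETED) for every resource kind it follows.
meta_cache = {}

def on_watch_event(kind, namespace, name, labels, annotations):
    meta_cache[(kind, namespace, name)] = {"labels": labels, "annotations": annotations}

def enrich(event):
    """Attach the involved object's cached metadata to a k8s event document."""
    obj = event["involvedObject"]  # standard field on k8s Event objects
    meta = meta_cache.get((obj["kind"], obj["namespace"], obj["name"]))
    if meta:
        event.setdefault("kubernetes", {})[obj["kind"].lower()] = meta
    return event

# Usage
on_watch_event("Deployment", "default", "web", {"app": "web"}, {"team": "core"})
evt = {"involvedObject": {"kind": "Deployment", "namespace": "default", "name": "web"}}
print(enrich(evt)["kubernetes"]["deployment"]["labels"])  # {'app': 'web'}
```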
411,006 | 27,809,836,663 | IssuesEvent | 2023-03-18 01:52:54 | amm33/mywebclass-simulation | https://api.github.com/repos/amm33/mywebclass-simulation | opened | As a website user, I want to be able to easily give and withdraw my consent for the use of cookies and other data collection methods, so I have control over my personal data | documentation Website Legal | GDPR user story
| 1.0 | As a website user, I want to be able to easily give and withdraw my consent for the use of cookies and other data collection methods, so I have control over my personal data - GDPR user story
| non_process | as a website user i want to be able to easily give and withdraw my consent for the use of cookies and other data collection methods so i have control over my personal data gdpr user story | 0 |
95,780 | 3,959,966,668 | IssuesEvent | 2016-05-02 01:14:23 | jpchanson/SpendNoEvil | https://api.github.com/repos/jpchanson/SpendNoEvil | opened | Reports | Business Function High Priority | As: any user
I want: the application to generate reports such as profit and loss statements and balance sheets for me.
So that: I do not have to do them manually. | 1.0 | Reports - As: any user
I want: the application to generate reports such as profit and loss statements and balance sheets for me.
So that: I do not have to do them manually. | non_process | reports as any user i want the application to generate reports such as profit and loss statements and balance sheets for me so that i do not have to do them manually | 0 |
673,692 | 23,027,806,814 | IssuesEvent | 2022-07-22 10:56:29 | hovgaardgames/bigambitions | https://api.github.com/repos/hovgaardgames/bigambitions | closed | Switching off a map filter has a visible delay | confirmed low-priority | ### Build number
1002
### Bug description
https://feedback.hovgaard.com/report/db7a446a-2d4c-42d3-bb14-4289af7d1e9c
### Steps to reproduce the bug
_No response_
### Savegame file
_No response_
### Screenshots or videos
_No response_ | 1.0 | Switching off a map filter has a visible delay - ### Build number
1002
### Bug description
https://feedback.hovgaard.com/report/db7a446a-2d4c-42d3-bb14-4289af7d1e9c
### Steps to reproduce the bug
_No response_
### Savegame file
_No response_
### Screenshots or videos
_No response_ | non_process | switching off a map filter has a visible delay build number bug description steps to reproduce the bug no response savegame file no response screenshots or videos no response | 0 |
15,097 | 18,820,853,869 | IssuesEvent | 2021-11-10 08:07:58 | streamnative/pulsar-flink | https://api.github.com/repos/streamnative/pulsar-flink | closed | [FEATURE] Support for consumer subscriptions | type/feature platform/data-processing | **Is your feature request related to a problem? Please describe.**
I am upgrading from a previous unsupported implementation of a Pulsar Flink connector and in migrating the code, I have not found a way to support consumer subscriptions to streams. This is a rather fundamental feature set that is very useful and should exist.
**Describe the solution you'd like**
The connector should support the underlying Pulsar feature of multiple consumer subscriptions to a stream
**Describe alternatives you've considered**
I have looked at sequential processing of the stream in a daisy-chain mode, but some processing is for lower latency and other processing requires more effort, so the two should not be on the same flow.
**Additional context**
None | 1.0 | [FEATURE] Support for consumer subscriptions - **Is your feature request related to a problem? Please describe.**
I am upgrading from a previous unsupported implementation of a Pulsar Flink connector and in migrating the code, I have not found a way to support consumer subscriptions to streams. This is a rather fundamental feature set that is very useful and should exist.
**Describe the solution you'd like**
The connector should support the underlying Pulsar feature of multiple consumer subscriptions to a stream
**Describe alternatives you've considered**
I have looked at sequential processing of the stream in a daisy-chain mode, but some processing is for lower latency and other processing requires more effort, so the two should not be on the same flow.
**Additional context**
None | process | support for consumer subscriptions is your feature request related to a problem please describe i am upgrading from a previous unsupported implementation of a pulsar flink connector and in migrating the code i have not found a way to support consumer subscriptions to streams this is a rather fundamental feature set that is very useful and should exist describe the solution you d like the connector should support the underlying pulsar feature of multiple consumer subscriptions to a stream describe alternatives you ve considered i have looked at sequential processing of the stream in a daisy chain mode but some processing is for lower latency and other processing requires more effort so the two should not be on the same flow additional context none | 1 |
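For reference, the Pulsar feature this request leans on is that one topic can carry several independent subscriptions. A sketch using the `pulsar-client` Python library, with placeholder broker URL, topic, and subscription names:

```python
import pulsar

client = pulsar.Client("pulsar://localhost:6650")  # placeholder broker URL

# Two independent subscriptions on one topic: each keeps its own cursor,
# so a low-latency consumer and a heavier one never block each other.
fast = client.subscribe("persistent://public/default/events", "low-latency-sub")
slow = client.subscribe("persistent://public/default/events", "heavy-processing-sub")

msg = fast.receive()
fast.acknowledge(msg)  # acking here does not advance the other subscription

client.close()
```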
274,503 | 23,844,059,559 | IssuesEvent | 2022-09-06 12:46:02 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/spaces_only/tests/alerting/event_log·ts - alerting api integration spaces only Alerting eventLog in space default should generate events for execution errors | failed-test Team:ResponseOps | A test failed on a tracked branch
```
Error: expected 'failure' to equal undefined
at Assertion.assert (/opt/local-ssd/buildkite/builds/kb-n2-4-e183d965eba9b985/elastic/kibana-hourly/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.equal (/opt/local-ssd/buildkite/builds/kb-n2-4-e183d965eba9b985/elastic/kibana-hourly/kibana/node_modules/@kbn/expect/expect.js:227:8)
at validateEvent (test/alerting_api_integration/spaces_only/tests/alerting/event_log.ts:615:36)
at Context.<anonymous> (test/alerting_api_integration/spaces_only/tests/alerting/event_log.ts:499:11)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (/opt/local-ssd/buildkite/builds/kb-n2-4-e183d965eba9b985/elastic/kibana-hourly/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - 8.0](https://buildkite.com/elastic/kibana-hourly/builds/9761#9a01f77a-aaac-4555-8034-9c4ba05ad7ae)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/spaces_only/tests/alerting/event_log·ts","test.name":"alerting api integration spaces only Alerting eventLog in space default should generate events for execution errors","test.failCount":1}} --> | 1.0 | Failing test: X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/spaces_only/tests/alerting/event_log·ts - alerting api integration spaces only Alerting eventLog in space default should generate events for execution errors - A test failed on a tracked branch
```
Error: expected 'failure' to equal undefined
at Assertion.assert (/opt/local-ssd/buildkite/builds/kb-n2-4-e183d965eba9b985/elastic/kibana-hourly/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.equal (/opt/local-ssd/buildkite/builds/kb-n2-4-e183d965eba9b985/elastic/kibana-hourly/kibana/node_modules/@kbn/expect/expect.js:227:8)
at validateEvent (test/alerting_api_integration/spaces_only/tests/alerting/event_log.ts:615:36)
at Context.<anonymous> (test/alerting_api_integration/spaces_only/tests/alerting/event_log.ts:499:11)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (/opt/local-ssd/buildkite/builds/kb-n2-4-e183d965eba9b985/elastic/kibana-hourly/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - 8.0](https://buildkite.com/elastic/kibana-hourly/builds/9761#9a01f77a-aaac-4555-8034-9c4ba05ad7ae)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Alerting API Integration Tests.x-pack/test/alerting_api_integration/spaces_only/tests/alerting/event_log·ts","test.name":"alerting api integration spaces only Alerting eventLog in space default should generate events for execution errors","test.failCount":1}} --> | non_process | failing test x pack alerting api integration tests x pack test alerting api integration spaces only tests alerting event log·ts alerting api integration spaces only alerting eventlog in space default should generate events for execution errors a test failed on a tracked branch error expected failure to equal undefined at assertion assert opt local ssd buildkite builds kb elastic kibana hourly kibana node modules kbn expect expect js at assertion equal opt local ssd buildkite builds kb elastic kibana hourly kibana node modules kbn expect expect js at validateevent test alerting api integration spaces only tests alerting event log ts at context test alerting api integration spaces only tests alerting event log ts at runmicrotasks at processticksandrejections node internal process task queues at object apply opt local ssd buildkite builds kb elastic kibana hourly kibana node modules kbn test target node functional test runner lib mocha wrap function js first failure | 0 |
22,407 | 31,142,292,071 | IssuesEvent | 2023-08-16 01:44:45 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Flaky test: expected exit code 3 but got 1: expected 1 to equal 3 | OS: linux process: flaky test topic: flake ❄️ stage: flake stale | ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41370/workflows/b5842bda-87dd-488c-ac98-4855e959a900/jobs/1713324
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/system-tests/test/record_spec.js#L346
### Analysis
<img width="696" alt="Screen Shot 2022-08-05 at 12 27 20 PM" src="https://user-images.githubusercontent.com/26726429/183147389-c3748432-218f-4eae-b214-64f6c0307ceb.png">
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed | 1.0 | Flaky test: expected exit code 3 but got 1: expected 1 to equal 3 - ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/41370/workflows/b5842bda-87dd-488c-ac98-4855e959a900/jobs/1713324
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/system-tests/test/record_spec.js#L346
### Analysis
<img width="696" alt="Screen Shot 2022-08-05 at 12 27 20 PM" src="https://user-images.githubusercontent.com/26726429/183147389-c3748432-218f-4eae-b214-64f6c0307ceb.png">
### Cypress Version
10.4.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed | process | flaky test expected exit code but got expected to equal link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed | 1 |
19,423 | 25,573,306,953 | IssuesEvent | 2022-11-30 19:38:31 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Garbled text in default embedded terminal | bug confirmation-pending terminal-process | Issue Type: <b>Bug</b>
1. Open VSC on remote host
VS Code version: Code 1.60.1 (83bd43bc519d15e50c4272c6cf5c1479df196a4d, 2021-09-10T17:07:10.714Z)
OS version: Windows_NT x64 10.0.18363
Restricted Mode: No
Remote OS version: Linux x64 3.10.0-1160.49.1.el7.x86_64
Remote OS version: Linux x64 3.10.0-1160.49.1.el7.x86_64
<!-- generated by issue reporter -->
When I open VSC on a remote host, I often see garbled text in the embedded terminal as shown in the screenshot that follows:

When I kill and restart the terminal, the issue disappears.
I'm running VSC on VirtualBox guest VM running Windows 10 Pro. The underlying host is a RockyLinux v8.5 system.
I'm using the "Remote SSH" extension, and the remote host is running a reasonably current version CentOS 7. Each remote host is an AWS EC2 instance. I've been noticing this behavior for the last month or so. It seems to happen on each remote host (though I'm not sure).
Here is the same VSC instance after killing the garbled terminal and starting a new one by invoking the "Terminal -> New Terminal" command:

| 1.0 | Garbled text in default embedded terminal - Issue Type: <b>Bug</b>
1. Open VSC on remote host
VS Code version: Code 1.60.1 (83bd43bc519d15e50c4272c6cf5c1479df196a4d, 2021-09-10T17:07:10.714Z)
OS version: Windows_NT x64 10.0.18363
Restricted Mode: No
Remote OS version: Linux x64 3.10.0-1160.49.1.el7.x86_64
Remote OS version: Linux x64 3.10.0-1160.49.1.el7.x86_64
<!-- generated by issue reporter -->
When I open VSC on a remote host, I often see garbled text in the embedded terminal as shown in the screenshot that follows:

When I kill and restart the terminal, the issue disappears.
I'm running VSC on VirtualBox guest VM running Windows 10 Pro. The underlying host is a RockyLinux v8.5 system.
I'm using the "Remote SSH" extension, and the remote host is running a reasonably current version CentOS 7. Each remote host is an AWS EC2 instance. I've been noticing this behavior for the last month or so. It seems to happen on each remote host (though I'm not sure).
Here is the same VSC instance after killing the garbled terminal and starting a new one by invoking the "Terminal -> New Terminal" command:

| process | garbled text in default embedded terminal issue type bug open vsc on remote host vs code version code os version windows nt restricted mode no remote os version linux remote os version linux when i open vsc on a remote host i often see garbled text in the embedded terminal as shown in the screenshot that follows when i kill and restart the terminal the issue disappears i m running vsc on virtualbox guest vm running windows pro the underlying host is a rockylinux system i m using the remote ssh extension and the remote host is running a reasonably current version centos each remote host is an aws instance i ve been noticing this behavior for the last month or so it seems to happen on each remote host though i m not sure here is the same vsc instance after killing the garbled terminal and starting a new one by invoking the terminal new terminal command | 1 |
7,738 | 10,861,175,281 | IssuesEvent | 2019-11-14 10:32:38 | zammad/zammad | https://api.github.com/repos/zammad/zammad | closed | Mail processing fails if attachment meta exceeds 2500 characters | bug mail processing prioritized by payment verified | <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.1.x and earlier
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: MySQL and postgreSQL
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #1051341
### Expected behavior:
Zammad will process mails, no matter how much meta data is within the attachment directive of an attachment.
### Actual behavior:
Zammad will stop processing the mail if the attachment directive contains so much information that it exceeds 2500 characters.
Part of the exception:
```
Traceback (most recent call last):
22: from /opt/zammad/bin/rails:9:in `<main>'
21: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `require'
20: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:257:in `load_dependency'
19: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `block in require'
18: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
17: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
16: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
15: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
14: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
13: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands.rb:18:in `<main>'
12: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command.rb:46:in `invoke'
11: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command/base.rb:65:in `perform'
10: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
9: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
8: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
7: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
6: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
5: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
4: from /opt/zammad/app/models/channel/email_parser.rb:479:in `process_unprocessable_mails'
3: from /opt/zammad/app/models/channel/email_parser.rb:479:in `glob'
2: from /opt/zammad/app/models/channel/email_parser.rb:480:in `block in process_unprocessable_mails'
1: from /opt/zammad/app/models/channel/email_parser.rb:112:in `process'
/opt/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<ActiveRecord::ValueTooLong: Mysql2::Error: Data too long for column 'preferences' at row 1: INSERT INTO `stores` (`store_object_id`, `store_file_id`, `o_id`, `preferences`, `size`, `filename`, `created_by_id`, `created_at`, `updated_at`) VALUES (3, 202933, 99587, '--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess\nReturn-Path:
<<redacted>>
Inquiry 1119000526.pdf', 5530, '2019-08-12 12:24:41', '2019-08-12 12:24:41.650000')> (RuntimeError)
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:120:in `_query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:120:in `block in query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:119:in `handle_interrupt'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:119:in `query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:187:in `block (2 levels) in execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies/interlock.rb:48:in `block in permit_concurrent_loads'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/concurrency/share_lock.rb:187:in `yield_shares'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies/interlock.rb:47:in `permit_concurrent_loads'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:186:in `block in execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_adapter.rb:581:in `block (2 levels) in log'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_adapter.rb:580:in `block in log'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/notifications/instrumenter.rb:23:in `instrument'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_adapter.rb:571:in `log'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:185:in `execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/mysql/database_statements.rb:28:in `execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:196:in `execute_and_free'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/mysql/database_statements.rb:33:in `exec_query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:130:in `exec_insert'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/composite_primary_keys-11.2.0/lib/composite_primary_keys/connection_adapters/abstract/database_statements.rb:6:in `insert'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/query_cache.rb:21:in `insert'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:187:in `_insert_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/composite_primary_keys-11.2.0/lib/composite_primary_keys/persistence.rb:64:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/counter_cache.rb:184:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/locking/optimistic.rb:70:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/attribute_methods/dirty.rb:140:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:346:in `block in _create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:132:in `run_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:816:in `_run_create_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:346:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/timestamp.rb:102:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:705:in `create_or_update'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:342:in `block in create_or_update'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:132:in `run_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:816:in `_run_save_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:342:in `create_or_update'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:308:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/validations.rb:52:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:315:in `block in save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:387:in `block in with_transaction_returning_status'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:265:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:212:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:385:in `with_transaction_returning_status'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:315:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/suppressor.rb:48:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:53:in `create!'
/opt/zammad/app/models/store.rb:64:in `add'
/opt/zammad/app/models/channel/email_parser.rb:291:in `block (3 levels) in _process'
/opt/zammad/app/models/channel/email_parser.rb:286:in `each'
/opt/zammad/app/models/channel/email_parser.rb:286:in `block (2 levels) in _process'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/locking/pessimistic.rb:84:in `block in with_lock'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:265:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:212:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:301:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/locking/pessimistic.rb:82:in `with_lock'
/opt/zammad/app/models/channel/email_parser.rb:260:in `block in _process'
/opt/zammad/app/models/transaction.rb:15:in `block in execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:267:in `block in transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/transaction.rb:239:in `block in within_new_transaction'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/transaction.rb:236:in `within_new_transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:267:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:212:in `transaction'
/opt/zammad/app/models/transaction.rb:13:in `execute'
/opt/zammad/app/models/channel/email_parser.rb:174:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:115:in `block in process'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:93:in `block in timeout'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `block in catch'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:108:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:114:in `process'
/opt/zammad/app/models/channel/email_parser.rb:480:in `block in process_unprocessable_mails'
/opt/zammad/app/models/channel/email_parser.rb:479:in `glob'
/opt/zammad/app/models/channel/email_parser.rb:479:in `process_unprocessable_mails'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command/base.rb:65:in `perform'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command.rb:46:in `invoke'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands.rb:18:in `<main>'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `block in require'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:257:in `load_dependency'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `require'
/opt/zammad/bin/rails:9:in `<main>'
```
If needed, you can find the original affected eml files in the above linked Ticket. It contains too sensitive information to post them here.
### Steps to reproduce the behavior:
* create a mail that contains huge amounts of meta information (like hashes and so on) within the directive where an attachment is defined
* try to import that
The customer had 3500 to 7600 characters within the attachment definition, which caused the processing to fail. The sending system was SAP in combination with Exchange.
Yes I'm sure this is a bug and no feature request or a general question.
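One possible mitigation, sketched in Python rather than Zammad's Ruby: cap each attachment header kept in `preferences` before the row is written. The 2500-character limit comes from the report above; the helper and its per-header truncation strategy are assumptions, not Zammad's actual fix:

```python
MAX_PREFERENCE_CHARS = 2500  # column limit taken from the report

def trim_attachment_preferences(preferences, limit=MAX_PREFERENCE_CHARS):
    """Shorten oversized attachment meta so the stores insert cannot fail."""
    trimmed = {}
    for header, value in preferences.items():
        if isinstance(value, str) and len(value) > limit:
            value = value[:limit]  # keep a prefix instead of aborting the mail
        trimmed[header] = value
    return trimmed

# Usage: an SAP/Exchange mail carrying ~7600 characters of hashes in a header
prefs = {"Content-Description": "x" * 7600, "Content-Type": "application/pdf"}
safe = trim_attachment_preferences(prefs)
assert all(len(v) <= MAX_PREFERENCE_CHARS for v in safe.values())
```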
| 1.0 | Mail processing fails if attachment meta exceeds 2500 characters - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.1.x and earlier
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: MySQL and postgreSQL
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #1051341
### Expected behavior:
Zammad will process mails, no matter how much meta data is within the attachment directive of an attachment.
### Actual behavior:
Zammad will stop processing the mail if the attachment directive contains so much information that it exceeds 2500 characters.
Part of the exception:
```
Traceback (most recent call last):
22: from /opt/zammad/bin/rails:9:in `<main>'
21: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `require'
20: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:257:in `load_dependency'
19: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `block in require'
18: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
17: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
16: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
15: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
14: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
13: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands.rb:18:in `<main>'
12: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command.rb:46:in `invoke'
11: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command/base.rb:65:in `perform'
10: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
9: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
8: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
7: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
6: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
5: from /opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
4: from /opt/zammad/app/models/channel/email_parser.rb:479:in `process_unprocessable_mails'
3: from /opt/zammad/app/models/channel/email_parser.rb:479:in `glob'
2: from /opt/zammad/app/models/channel/email_parser.rb:480:in `block in process_unprocessable_mails'
1: from /opt/zammad/app/models/channel/email_parser.rb:112:in `process'
/opt/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<ActiveRecord::ValueTooLong: Mysql2::Error: Data too long for column 'preferences' at row 1: INSERT INTO `stores` (`store_object_id`, `store_file_id`, `o_id`, `preferences`, `size`, `filename`, `created_by_id`, `created_at`, `updated_at`) VALUES (3, 202933, 99587, '--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess\nReturn-Path:
<<redacted>>
Inquiry 1119000526.pdf', 5530, '2019-08-12 12:24:41', '2019-08-12 12:24:41.650000')> (RuntimeError)
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:120:in `_query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:120:in `block in query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:119:in `handle_interrupt'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/mysql2-0.4.10/lib/mysql2/client.rb:119:in `query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:187:in `block (2 levels) in execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies/interlock.rb:48:in `block in permit_concurrent_loads'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/concurrency/share_lock.rb:187:in `yield_shares'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies/interlock.rb:47:in `permit_concurrent_loads'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:186:in `block in execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_adapter.rb:581:in `block (2 levels) in log'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_adapter.rb:580:in `block in log'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/notifications/instrumenter.rb:23:in `instrument'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_adapter.rb:571:in `log'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:185:in `execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/mysql/database_statements.rb:28:in `execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:196:in `execute_and_free'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/mysql/database_statements.rb:33:in `exec_query'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:130:in `exec_insert'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/composite_primary_keys-11.2.0/lib/composite_primary_keys/connection_adapters/abstract/database_statements.rb:6:in `insert'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/query_cache.rb:21:in `insert'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:187:in `_insert_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/composite_primary_keys-11.2.0/lib/composite_primary_keys/persistence.rb:64:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/counter_cache.rb:184:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/locking/optimistic.rb:70:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/attribute_methods/dirty.rb:140:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:346:in `block in _create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:132:in `run_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:816:in `_run_create_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:346:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/timestamp.rb:102:in `_create_record'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:705:in `create_or_update'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:342:in `block in create_or_update'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:132:in `run_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/callbacks.rb:816:in `_run_save_callbacks'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/callbacks.rb:342:in `create_or_update'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:308:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/validations.rb:52:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:315:in `block in save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:387:in `block in with_transaction_returning_status'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:265:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:212:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:385:in `with_transaction_returning_status'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:315:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/suppressor.rb:48:in `save!'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/persistence.rb:53:in `create!'
/opt/zammad/app/models/store.rb:64:in `add'
/opt/zammad/app/models/channel/email_parser.rb:291:in `block (3 levels) in _process'
/opt/zammad/app/models/channel/email_parser.rb:286:in `each'
/opt/zammad/app/models/channel/email_parser.rb:286:in `block (2 levels) in _process'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/locking/pessimistic.rb:84:in `block in with_lock'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:265:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:212:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:301:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/locking/pessimistic.rb:82:in `with_lock'
/opt/zammad/app/models/channel/email_parser.rb:260:in `block in _process'
/opt/zammad/app/models/transaction.rb:15:in `block in execute'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:267:in `block in transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/transaction.rb:239:in `block in within_new_transaction'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/transaction.rb:236:in `within_new_transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/database_statements.rb:267:in `transaction'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.3/lib/active_record/transactions.rb:212:in `transaction'
/opt/zammad/app/models/transaction.rb:13:in `execute'
/opt/zammad/app/models/channel/email_parser.rb:174:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:115:in `block in process'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:93:in `block in timeout'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `block in catch'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/opt/zammad/vendor/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:108:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:114:in `process'
/opt/zammad/app/models/channel/email_parser.rb:480:in `block in process_unprocessable_mails'
/opt/zammad/app/models/channel/email_parser.rb:479:in `glob'
/opt/zammad/app/models/channel/email_parser.rb:479:in `process_unprocessable_mails'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `eval'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands/runner/runner_command.rb:41:in `perform'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command/base.rb:65:in `perform'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/command.rb:46:in `invoke'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/railties-5.2.3/lib/rails/commands.rb:18:in `<main>'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `block in require'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:257:in `load_dependency'
/opt/zammad/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.3/lib/active_support/dependencies.rb:291:in `require'
/opt/zammad/bin/rails:9:in `<main>'
```
If needed, you can find the original affected eml files in the above linked Ticket. It contains too sensitive information to post them here.
### Steps to reproduce the behavior:
* create a mail that contains huge amounts of meta information (like hashes and so on) within the directive where an attachment is defined
* try to import that
Customer had 3500 - 7600 characters within the attachment definition, which causes the processing to fail. The sending system was SAP in combination with Exchange.
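For illustration only (this sketch is not from the report; the header name, addresses, and sizes are invented), a minimal Python script that produces an `.eml` file whose attachment part carries several thousand characters of header metadata, matching the reproduction steps above:
```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sap-system@example.com"   # hypothetical addresses
msg["To"] = "helpdesk@example.com"
msg["Subject"] = "Inquiry with oversized attachment metadata"
msg.set_content("See the attached PDF.")

msg.add_attachment(
    b"%PDF-1.4 dummy payload",
    maintype="application",
    subtype="pdf",
    filename="Inquiry 1119000526.pdf",
)

# Inflate the attachment part's headers well past 2500 characters, the
# point at which the `preferences` column overflows per the report.
for part in msg.walk():
    if part.get_content_disposition() == "attachment":
        part["X-Oversized-Meta"] = "A" * 4000  # invented header name

with open("huge_attachment_meta.eml", "wb") as fh:
    fh.write(msg.as_bytes())
```
Importing the resulting file should hit the same `ActiveRecord::ValueTooLong` path shown in the exception above (untested assumption).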
Yes I'm sure this is a bug and no feature request or a general question.
| process | 1
8,495 | 11,659,773,547 | IssuesEvent | 2020-03-03 01:06:28 | googleapis/python-grafeas | https://api.github.com/repos/googleapis/python-grafeas | closed | test: drop magic coverage number for individual unit test sessions | type: process | Since coverage is cumulative, only the total coverage threshold should be retained. | 1.0 | process | 1
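As a hypothetical sketch of the resulting pattern (session names, the package name, and the 100% figure are illustrative, not taken from the repository's noxfile): each unit session appends coverage data without a threshold of its own, and only a final session enforces the cumulative total:
```python
import nox


@nox.session(python=["3.6", "3.7", "3.8"])
def unit(session):
    session.install("pytest", "pytest-cov", ".")
    # Accumulate coverage across sessions; the per-session "magic number"
    # threshold is gone.
    session.run(
        "pytest",
        "--cov=grafeas",
        "--cov-append",
        "--cov-fail-under=0",
        "tests/unit",
    )


@nox.session(python="3.8")
def cover(session):
    session.install("coverage")
    # Only the accumulated total is held to a threshold.
    session.run("coverage", "report", "--show-missing", "--fail-under=100")
    session.run("coverage", "erase")
```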
278,123 | 24,125,224,351 | IssuesEvent | 2022-09-20 23:13:28 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: clearrange/checks=true failed: MVCCStats divergence | C-test-failure O-robot O-roachtest branch-master T-storage | roachtest.clearrange/checks=true [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6181118?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6181118?buildTab=artifacts#/clearrange/checks=true) on master @ [aaf50e920ceff3c2863ab96b9e3614b8434b70a8](https://github.com/cockroachdb/cockroach/commits/aaf50e920ceff3c2863ab96b9e3614b8434b70a8):
```
| I220822 09:18:04.571921 361 workload/pgx_helpers.go:79 [-] 33 pgx logger [error]: Exec logParams=map[args:[2579421732937982817 f9] err:unexpected EOF pid:5510701 sql:kv-2 time:13.509937ms]
| I220822 09:18:04.571954 370 workload/pgx_helpers.go:79 [-] 34 pgx logger [error]: Exec logParams=map[args:[2264071080716551703 c2] err:unexpected EOF pid:6281931 sql:kv-2 time:23.314191ms]
| I220822 09:18:04.571983 364 workload/pgx_helpers.go:79 [-] 35 pgx logger [error]: Exec logParams=map[args:[4863347191029465407 1d] err:unexpected EOF pid:5518672 sql:kv-2 time:23.41852ms]
| I220822 09:18:04.572012 156 workload/pgx_helpers.go:79 [-] 36 pgx logger [error]: Exec logParams=map[args:[-1175214458801366906 2c] err:unexpected EOF pid:5328080 sql:kv-2 time:22.889022ms]
| Error: unexpected EOF
| COMMAND_PROBLEM: exit status 1
| 6:
| UNCLASSIFIED_PROBLEM: context canceled
| 7:
| UNCLASSIFIED_PROBLEM: context canceled
| 8:
| UNCLASSIFIED_PROBLEM: context canceled
| 9:
| UNCLASSIFIED_PROBLEM: context canceled
| 10:
| UNCLASSIFIED_PROBLEM: context canceled
Wraps: (4) secondary error attachment
| COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 5. Command with error:
| | ``````
| | ./cockroach workload run kv --concurrency=32 --duration=1h
| | ``````
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
Wraps: (5) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString
monitor.go:127,clearrange.go:206,clearrange.go:39,test_runner.go:896: monitor failure: monitor command failure: unexpected node event: 5: dead (exit status 7)
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runClearRange
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/clearrange.go:206
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerClearRange.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/clearrange.go:39
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func3
| main/pkg/cmd/roachtest/monitor.go:202
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1571
Wraps: (4) monitor command failure
Wraps: (5) unexpected node event: 5: dead (exit status 7)
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *errors.errorString
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=16</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/storage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*clearrange/checks=true.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-18804
| 2.0 | non_process | 0
6,748 | 9,875,344,598 | IssuesEvent | 2019-06-23 10:56:27 | NottingHack/hms2 | https://api.github.com/repos/NottingHack/hms2 | closed | Team Management | 2.1 Process question | Need to work out how we deal with adding and removing team roles to users, (this should be the preferred way rather than #250 )
* direct add for trustees?
* Trustees puts a person in a team
* request add for a user?
* Member can request to join a team, Who approves? (trustees, existing team members)
* User opts to leave a team
* notify team and trustees
* trustees direct removal
| 1.0 | process | 1
21,172 | 28,143,872,746 | IssuesEvent | 2023-04-02 08:52:20 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Add `input` option to async child_process methods | child_process feature request | <!--
Thank you for suggesting an idea to make Node.js better.
Please fill in as much of the template below as you're able.
-->
**Is your feature request related to a problem? Please describe.**
When spawning a child process, it's common to want to send some data to the process' stdin. Usually, a string or buffer. This can be done by writing to the stdin stream:
```js
const {execFile} = require('child_process');
const child = execFile('node', (error, stdout, stderr) => {
if (error) {
throw error;
}
console.log(stdout);
});
child.stdin.write('foo');
child.stdin.end();
```
However, that's a bit verbose and not that obvious. It's also easy to forget to call `.end()`, see https://github.com/nodejs/node/issues/2339.
**Describe the solution you'd like**
I propose the async `child_process` methods get an `input` option for convenience. Just like the [synchronous methods already have](https://nodejs.org/api/child_process.html#child_process_child_process_execfilesync_file_args_options).
```js
const {execFile} = require('child_process');
execFile('node', {input: 'foo'}, (error, stdout, stderr) => {
if (error) {
throw error;
}
console.log(stdout);
});
```
This will also improve the stdin situation when a `child_process` method is promisified, as it then returns a `Promise<Object>` with `stdout` and `stderr` instead of the `ChildProcess` object.
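As an aside for comparison (not part of the issue or the proposal): Python's `subprocess.run` already ships this exact convenience, which suggests the requested ergonomics are well precedented:
```python
import subprocess

# `input=` writes the bytes to the child's stdin and closes the pipe,
# so there is no stream to manage and no `.end()` call to forget.
# Assumes `node` is on PATH.
result = subprocess.run(
    ["node", "-e", "process.stdin.pipe(process.stdout)"],
    input=b"foo",
    capture_output=True,
    check=True,
)
print(result.stdout)  # b'foo'
```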
**Describe alternatives you've considered**
I could create a `child_process` wrapper that does this; however, I have already done that: https://github.com/sindresorhus/execa#input. But I would like to upstream some of the most useful ideas from that package.
---
// @floatdrop @Qix- | 1.0 | process | 1
17,200 | 22,777,304,182 | IssuesEvent | 2022-07-08 15:38:00 | GoogleCloudPlatform/getting-started-python | https://api.github.com/repos/GoogleCloudPlatform/getting-started-python | closed | Update primary reviewer to python-samples-reviewers | priority: p2 type: process | python-sample-owners looks only after the python-docs-samples repo, so we should update it here to ask reviews for python-sample-reviewers instead. | 1.0 | Update primary reviewer to python-samples-reviewers - python-sample-owners looks only after the python-docs-samples repo, so we should update it here to ask reviews for python-sample-reviewers instead. | process | update primary reviewer to python samples reviewers python sample owners looks only after the python docs samples repo so we should update it here to ask reviews for python sample reviewers instead | 1 |
8,419 | 11,583,901,527 | IssuesEvent | 2020-02-22 14:11:18 | arunkumar9t2/scabbard | https://api.github.com/repos/arunkumar9t2/scabbard | closed | Support more formats like SVG | enhancement module:gradle-plugin module:intellij module:processor | [GraphViz-java](https://github.com/nidi3/graphviz-java) already supports multiple formats. Current implementation should be abstracted out to support multiple formats based on input received from Gradle plugin. | 1.0 | Support more formats like SVG - [GraphViz-java](https://github.com/nidi3/graphviz-java) already supports multiple formats. Current implementation should be abstracted out to support multiple formats based on input received from Gradle plugin. | process | support more formats like svg already supports multiple formats current implementation should be abstracted out to support multiple formats based on input received from gradle plugin | 1 |
457,983 | 13,166,254,626 | IssuesEvent | 2020-08-11 08:14:02 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Setting event hub service url does not apply. | Priority/Normal Type/Bug | ### Description:
The service_url config does not apply.
### Steps to reproduce:
### Affected Product Version:
3.2.0-Beta
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
| 1.0 | Setting event hub service url does not apply. - ### Description:
The service_url config does not apply.
### Steps to reproduce:
### Affected Product Version:
3.2.0-Beta
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
| non_process | setting event hub service url does not apply description the service url config does not apply steps to reproduce affected product version beta environment details with versions os client env docker optional fields related issues suggested labels suggested assignees | 0 |
28,699 | 4,113,787,678 | IssuesEvent | 2016-06-07 15:06:03 | linode/manager | https://api.github.com/repos/linode/manager | closed | Linode details summary page changes wrt Chris's feedback | design in progress | * [x] Rename "launch" button
* [x] Make all launchy buttons same color & less prevalent
* [x] Add SSH button next to SSH command
* [x] Change "Lish" to "Text Console" or so
* [x] Change "Glish" to "Graphical Console" or so
* [x] Make the Linode's current state bigger
* [x] Move state management near the state (i.e. power)
* [ ] Too much text on the detail page (who cares) | 1.0 | non_process | 0
12,569 | 14,985,126,342 | IssuesEvent | 2021-01-28 19:33:51 | tc39-transfer/proposal-regex-escaping | https://api.github.com/repos/tc39-transfer/proposal-regex-escaping | closed | Advance to stage 1 | process | From [the tc39 process](https://docs.google.com/document/d/1QbEE0BsO4lvl7NFTn5WXWeiEIBfaVUF7Dk0hpPpPDzU/edit)
- [x] Identified “champion” who will advance the addition
This would be me, with the much-appreciated help of @domenic. Others (Uri Shaked and Elad Kats) have offered help with the process.
- [x] Prose outlining the problem or need and the general shape of a solution
Done in the readme.
- [x] Illustrative examples of usage
Done in the readme.
- [x] High-level API
Done in the readme, including semantics and a polyfill and a spec to JS file. The semantics spec will likely be moved to ECMarkup format.
- [X] Discussion of key algorithms, abstractions and semantics
This is a relatively simple proposal - covered in https://github.com/benjamingr/RexExp.escape/issues/3 and https://github.com/benjamingr/RexExp.escape/issues/4. A template tag is still under consideration, but based on #4 it seems inferior.
- [X] Identification of potential “cross-cutting” concerns and implementation challenges/complexity.
Done, and updated the readme file to address these.
- [ ] Get TC39 to agree that we have advanced to stage 1, after meeting all the above requirements.
| 1.0 | process | 1
18,907 | 3,734,151,976 | IssuesEvent | 2016-03-08 04:42:20 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Machine Driver section starts with two that go to error | area/setting area/ui kind/bug release/v1.0.0 status/blocker status/resolved status/to-test | Version - master UI 3/3
Steps to Reproduce:
1. Go to Settings
Results: The Machine Drivers section starts with two entries, but they are showing up as Error

Expected: Don't have two error'ed machine drivers?
| 1.0 | non_process | 0
259,107 | 19,586,265,462 | IssuesEvent | 2022-01-05 07:22:44 | devoption/workflows | https://api.github.com/repos/devoption/workflows | opened | Update Readme - Fix Installation Instructions | bug documentation good first issue | ## Context:
As a Laravel Developer, when following the installation instructions in the readme, the commands are not correct.
## Acceptance Criteria:
- Code changes are made to support the readme instructions to install the package | 1.0 | non_process | 0
15,686 | 19,847,939,560 | IssuesEvent | 2022-01-21 09:02:40 | ooi-data/CE02SHBP-LJ01D-06-CTDBPN106-streamed-ctdbp_no_sample | https://api.github.com/repos/ooi-data/CE02SHBP-LJ01D-06-CTDBPN106-streamed-ctdbp_no_sample | opened | 🛑 Processing failed: ValueError | process | ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:02:39.603670.
## Details
Flow name: `CE02SHBP-LJ01D-06-CTDBPN106-streamed-ctdbp_no_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
</details>
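A contrived reproduction of the same `ValueError` with zarr-python 2.x (the version family in the traceback): a stored chunk holds 1,209,600 values while the array metadata claims chunks of 2,777,778, so `_decode_chunk` cannot reshape the decompressed buffer. Whether the pipeline's mismatch came from a partially rewritten store, a changed chunk encoding, or concurrent writers is an assumption; the sketch only demonstrates the mechanism:
```python
import json
import zarr  # zarr-python 2.x, matching the traceback

store = zarr.storage.MemoryStore()
arr = zarr.create(shape=(1_209_600,), chunks=(1_209_600,), dtype="f8",
                  store=store)
arr[:] = 0.0  # writes chunk "0" holding 1_209_600 values

# Simulate a chunk-grid mismatch: rewrite the array metadata so the same
# stored chunk is now expected to contain 2_777_778 values.
meta = json.loads(store[".zarray"])
meta["shape"] = [2_777_778]
meta["chunks"] = [2_777_778]
store[".zarray"] = json.dumps(meta).encode()

broken = zarr.open(store)
broken[:10]  # ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
If that is what happened here, the affected chunks (or the final store's metadata) would need to be rewritten so the chunk grid and the stored chunks agree again.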
| 1.0 | 🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:02:39.603670.
## Details
Flow name: `CE02SHBP-LJ01D-06-CTDBPN106-streamed-ctdbp_no_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
</details>
| process | 🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed ctdbp no sample task name processing task error type valueerror error message cannot reshape array of size into shape traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync self data file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in process for setitem chunk self decode chunk cdata file srv conda envs notebook lib site packages zarr core py line in decode chunk chunk chunk reshape expected shape or self chunks order self order valueerror cannot reshape array of size into shape | 1 |
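The reshape failure above is raised while zarr decodes an already-stored chunk during `append`, which usually points at a mismatch between the store's chunk metadata and the data actually on disk. As a purely illustrative sketch (not part of the ooi_harvester pipeline; `store_path` and `new_values` are invented names), a pre-append sanity check can at least rule out caller-side dtype/shape mismatches:
```
# Hypothetical guard around zarr's append(); illustrative only.
import numpy as np
import zarr

def safe_append(store_path: str, new_values: np.ndarray) -> None:
    arr = zarr.open(store_path, mode="a")  # open the existing on-disk array
    # append() concatenates along axis 0, so dtype and trailing dims must match.
    if new_values.dtype != arr.dtype:
        raise TypeError(f"dtype mismatch: {new_values.dtype} vs {arr.dtype}")
    if new_values.shape[1:] != arr.shape[1:]:
        raise ValueError(
            f"trailing-shape mismatch: {new_values.shape[1:]} vs {arr.shape[1:]}"
        )
    arr.append(new_values)
```
A guard like this would not catch corruption that is already on disk (which is what the traceback suggests here), but it makes the failure surface before any partial write happens.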
11,897 | 14,689,863,461 | IssuesEvent | 2021-01-02 12:17:50 | MrPeterJin/MrPeterJin.github.io | https://api.github.com/repos/MrPeterJin/MrPeterJin.github.io | closed | A Review of the Mathematical Principles Behind Traditional Image Processing - Mr. P's Memo | /post/conventional-imaging-process-review/ Gitalk | https://www.601b.codes/post/conventional-imaging-process-review/
Cover image credit: From Digital Image Processing 4E, Global Edition
As is well known, in the field of image processing, back when neural networks were not yet in large-scale use, it was all kinds of mathematical principles that held up half of this field (although math is still very important today). This... | 1.0 | A Review of the Mathematical Principles Behind Traditional Image Processing - Mr. P's Memo - https://www.601b.codes/post/conventional-imaging-process-review/
Cover image credit: From Digital Image Processing 4E, Global Edition
As is well known, in the field of image processing, back when neural networks were not yet in large-scale use, it was all kinds of mathematical principles that held up half of this field (although math is still very important today). This... | process | a review of the mathematical principles behind traditional image processing mr p s memo cover image credit from digital image processing global edition as is well known in the field of image processing back when neural networks were not yet in large scale use it was all kinds of mathematical principles that held up half of this field although math is still very important today this | 1
13,285 | 3,136,416,413 | IssuesEvent | 2015-09-10 19:47:12 | bmcfee/crema | https://api.github.com/repos/bmcfee/crema | opened | Data sampler | design | An object to be constructed for each source of data (i.e., a pair of track/feature and JAMS object).
The data sampler will be responsible for constructing all the corresponding task transformers, and aligning feature samples and annotations. | 1.0 | Data sampler - An object to be constructed for each source of data (i.e., a pair of track/feature and JAMS object).
The data sampler will be responsible for constructing all the corresponding task transformers, and aligning feature samples and annotations. | non_process | data sampler an object to be constructed for each source of data ie pair of track feature and jams object the data sampler will be responsible for constructing all the corresponding task transformers and aligning feature samples and annotations | 0 |
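As a rough sketch of the design described in this record (the class and method names below are invented for illustration, not taken from the crema codebase), such a sampler could look like:
```
# Speculative sketch only; the real interface may differ.
class DataSampler:
    """One sampler per (track feature, JAMS annotation) pair."""

    def __init__(self, features, jam, task_transformers):
        self.features = features        # e.g. a time-by-dimension feature matrix
        self.jam = jam                  # the JAMS annotation object for this track
        self.tasks = task_transformers  # mapping of task name -> transformer

    def sample(self, start, duration):
        """Return an aligned (features, targets) pair for one excerpt."""
        x = self.features[start:start + duration]
        y = {name: t.transform(self.jam, start, duration)
             for name, t in self.tasks.items()}
        return x, y
```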
5,223 | 8,026,882,448 | IssuesEvent | 2018-07-27 06:53:55 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Piping both outputs to goaccess (multiple pipes) | log-processing question | Hello,
I'm quite new to linux, and found myself stuck trying to pipe two things at the same time from the access_log into the goaccess report.
For example:
I'm trying to pipe statistics only for certain url, like this:
grep -h -i 'example.com' /var/log/httpd/access_log | goaccess -a -o /usr/local/goaccess/testreport.html -
But I also want to, at the same time, pipe the statistics for only the previous day.
I've seen in the documentation something like:
sed -n '/'$(date '+%d\/%b\/%Y' -d 'yesterday')'/,$ p' /var/log/httpd/access_log | goaccess -a -o /usr/local/goaccess/testreport.html -
Is there a good way to combine both of these options so I can generate html output for only the example.com url and only for the previous day? | 1.0 | Piping both outputs to goaccess (multiple pipes) - Hello,
I'm quite new to linux, and found myself stuck trying to pipe two things at the same time from the access_log into the goaccess report.
For example:
I'm trying to pipe statistics only for certain url, like this:
grep -h -i 'example.com' /var/log/httpd/access_log | goaccess -a -o /usr/local/goaccess/testreport.html -
But I also want to, at the same time, pipe the statistics for only the previous day.
I've seen in the documentation something like:
sed -n '/'$(date '+%d\/%b\/%Y' -d 'yesterday')'/,$ p' /var/log/httpd/access_log | goaccess -a -o /usr/local/goaccess/testreport.html -
Is there a good way to combine both of these options so I can generate html output for only the example.com url and only for the previous day? | process | piping both outputs to goaccess multiple pipes hello i m quite new to linux and found myself stuck trying to pipe two things at the same time from the access log into the goaccess report for example i m trying to pipe statistics only for certain url like this grep h i example com var log httpd access log goaccess a o usr local goaccess testreport html but i also want to at the same time pipe the statistics for only previous day i ve seen in the documentation something like sed n date d b y d yesterday p var log httpd access log goaccess a o usr local goaccess testreport html is there a good way to combine both of these options so i can generate html output for only example com url and only for previous day | 1
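For what it's worth, the two filters in this record chain directly — grep's output can feed sed, and sed's output can feed goaccess — and the sketch below spells out the same logic in Python so the combination is explicit. It is only an illustration: the paths and options are copied from the question above, and it assumes `goaccess` is on the PATH.
```
# Replicates: grep -h -i 'example.com' access_log | sed -n '/<yesterday>/,$ p' | goaccess -
import subprocess
from datetime import date, timedelta

yesterday = (date.today() - timedelta(days=1)).strftime("%d/%b/%Y")

goaccess = subprocess.Popen(
    ["goaccess", "-a", "-o", "/usr/local/goaccess/testreport.html", "-"],
    stdin=subprocess.PIPE, text=True,
)
started = False
with open("/var/log/httpd/access_log") as log:
    for line in log:
        if not started and yesterday in line:
            started = True  # sed's /pattern/,$p prints from the first match onward
        if started and "example.com" in line.lower():
            goaccess.stdin.write(line)
goaccess.stdin.close()
goaccess.wait()
```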
1,474 | 4,053,689,349 | IssuesEvent | 2016-05-24 09:31:51 | XENON1T/pax | https://api.github.com/repos/XENON1T/pax | closed | Crashing with Multicore in cax.process.py on Midway | bug processed data io | Jobs submitted via cax.process.py on Midway using 4 cores crash:
https://gist.github.com/pdeperio/44281f9deb8c680d750c22fc48ab4909
Dataset: 160501_1331
Run: 338
https://gist.github.com/pdeperio/67b715e74ca2c95e89599f374771d222
Dataset: 160501_1314
Run: 337
etc... | 1.0 | Crashing with Multicore in cax.process.py on Midway - Jobs submitted via cax.process.py on Midway using 4 cores crash:
https://gist.github.com/pdeperio/44281f9deb8c680d750c22fc48ab4909
Dataset: 160501_1331
Run: 338
https://gist.github.com/pdeperio/67b715e74ca2c95e89599f374771d222
Dataset: 160501_1314
Run: 337
etc... | process | crashing with multicore in cax process py on midway jobs submitted via cax process py on midway using cores crashes dataset run dataset run etc | 1 |
299,280 | 9,205,312,417 | IssuesEvent | 2019-03-08 10:13:29 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | Fixed typos, added i18n | Component: Easy fix? Component: Pull Request or Patch supplied Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Feature request | ---
Author Name: **Redmine Admin** (Redmine Admin)
Original Redmine Issue: 685, https://issues.qgis.org/issues/685
Original Assignee: nobody -
---
Fixed several typos and added some tr() in Grass plugin.
---
- [typos.diff](https://issues.qgis.org/attachments/download/1901/typos.diff) (Redmine Admin) | 1.0 | Fixed typos, added i18n - ---
Author Name: **Redmine Admin** (Redmine Admin)
Original Redmine Issue: 685, https://issues.qgis.org/issues/685
Original Assignee: nobody -
---
Fixed several typos and added some tr() in Grass plugin.
---
- [typos.diff](https://issues.qgis.org/attachments/download/1901/typos.diff) (Redmine Admin) | non_process | fixed typos added author name redmine admin redmine admin original redmine issue original assignee nobody fixed several typos and added some tr in grass plugin redmine admin | 0 |
78,083 | 7,620,222,596 | IssuesEvent | 2018-05-03 01:15:13 | Spooky-Action-Developers/Project-Ironclad | https://api.github.com/repos/Spooky-Action-Developers/Project-Ironclad | closed | Store Secret (from command line) | backend high requires test user story | As a Mozilla Employee, I want to securely store a secret credential from the command line through Amazon Web Services to guarantee privacy of my information. | 1.0 | Store Secret (from command line) - As a Mozilla Employee, I want to securely store a secret credential from the command line through Amazon Web Services to guarantee privacy of my information. | non_process | store secret from command line as a mozilla employee i want to securely store a secret credential from the command line through amazon web services to guarantee privacy of my information | 0 |
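The record above doesn't say which AWS service would back the store, so the following is a speculative sketch only — it assumes AWS Secrets Manager via boto3, and the function name and invocation are invented, not Project-Ironclad's actual interface:
```
# Hypothetical sketch; Project-Ironclad's real command may look quite different.
import sys
import boto3

def store_secret(name: str, value: str, region: str = "us-west-2") -> None:
    client = boto3.client("secretsmanager", region_name=region)
    client.create_secret(Name=name, SecretString=value)  # AWS encrypts it at rest

if __name__ == "__main__":
    # e.g. python store_secret.py db/password s3cr3t
    store_secret(sys.argv[1], sys.argv[2])
```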
11,387 | 14,223,677,430 | IssuesEvent | 2020-11-17 18:32:48 | google/eventid-js | https://api.github.com/repos/google/eventid-js | opened | configure for automated releases | type: process | This will not be automatically published when we create a release; one option would be using the release-please-action, with an `npm publish` step. | 1.0 | configure for automated releases - This will not be automatically published when we create a release; one option would be using the release-please-action, with an `npm publish` step. | process | configure for automated releases this will not be automatically published when we create a release one option would be using the release please action with an npm publish step | 1 |
6,339 | 9,380,126,589 | IssuesEvent | 2019-04-04 16:18:56 | fablabbcn/fablabs.io | https://api.github.com/repos/fablabbcn/fablabs.io | opened | Alert on updated LAB profile on Approval Process | Approval Process enhancement | **Is your feature request related to a problem? Please describe.**
Once you have sent the email to the creator of the Lab to add more information to their application form, it is hard for referees to tell whether the LAB actually responded to the request, especially when the application started 2 years ago and they are still waiting for more information to be added.
**Describe the solution you'd like**
We should have a limited time after we request_more_info from the LAB; a suggestion is 1-2 weeks, tops! If after a week they don't reply, the system automatically sends a reminder about updating their information, and if after the 2nd week we still have no updates, the system will ask them to apply again with all the correct information for it to be approved. That application will be removed.
**Describe alternatives you've considered**
- archive these non-responding labs
- have a more extended time - 1 month
- status as remove lab
**Additional context**
--
| 1.0 | Alert on updated LAB profile on Approval Process - **Is your feature request related to a problem? Please describe.**
Once you have sent the email to the creator of the Lab to add more information to their application form, it is hard for referees to tell whether the LAB actually responded to the request, especially when the application started 2 years ago and they are still waiting for more information to be added.
**Describe the solution you'd like**
We should have a limited time after we request_more_info from the LAB; a suggestion is 1-2 weeks, tops! If after a week they don't reply, the system automatically sends a reminder about updating their information, and if after the 2nd week we still have no updates, the system will ask them to apply again with all the correct information for it to be approved. That application will be removed.
**Describe alternatives you've considered**
- archive these non-responding labs
- have a more extended time - 1 month
- status as remove lab
**Additional context**
--
| process | alert on updated lab profile on approval process is your feature request related to a problem please describe once you have sent the email to the creator of the lab to add more information to their application form it is hard for referees to identify if the lab actually respond to the request especially when the application started years ago and they are still waiting for more information to be added describe the solution you d like we should have a limited time after we resquest more info to the lab suggestions can be weeks top if after a week they don t reply the system automatically sends a reminder on updating their information and if after the week we have no updates the system will request to apply again with all the correct information for it to be approved that application will be removed describe alternatives you ve considered archive this not responding labs have a more extended time month status as remove lab additional context | 1 |
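One compact way to read the proposed escalation (the timings are the suggestions from the record above; the function is a hypothetical sketch, not fablabs.io code):
```
# Hypothetical escalation rule for a pending request_more_info.
from datetime import datetime, timedelta

def escalation_action(requested_at: datetime, now: datetime) -> str:
    """What should the system do about a request_more_info sent at requested_at?"""
    age = now - requested_at
    if age < timedelta(weeks=1):
        return "wait"                   # still inside the response window
    if age < timedelta(weeks=2):
        return "send_reminder"          # week 1 passed with no reply
    return "remove_and_ask_to_reapply"  # week 2 passed: drop the application
```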
145,459 | 13,151,550,570 | IssuesEvent | 2020-08-09 17:19:06 | xatkit-bot-platform/xatkit | https://api.github.com/repos/xatkit-bot-platform/xatkit | closed | Update the animated gif showing the test bot | documentation | Replace it with one showing the new and prettier widget | 1.0 | Update the animated gif showing the test bot - Replace it with one showing the new and prettier widget | non_process | update the animated gif showing the test bot replace it with one showing the new and prettier widget | 0 |
12,583 | 14,991,268,217 | IssuesEvent | 2021-01-29 08:03:29 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] [Audit Logs] Location ID is incorrect in description for the events | Bug P2 Participant manager datastore Process: Fixed Process: Tested dev | **Events:**
1. NEW_LOCATION_ADDED
2. LOCATION_EDITED
3. LOCATION_DECOMMISSIONED
4. LOCATION_ACTIVATED
**Actual:** Location ID displays the DB value
**Expected:** Location ID should be the custom Location ID entered by the PM admin
```
{
"insertId": "1ny0gu3fbsrww7",
"jsonPayload": {
"appVersion": "v0.1",
"userAccessLevel": null,
"participantId": null,
"occurred": 1611745121710,
"studyVersion": null,
"sourceApplicationVersion": "1.0",
"siteId": null,
"eventCode": "LOCATION_EDITED",
"platformVersion": "1.0",
"destination": "PARTICIPANT USER DATASTORE",
"appId": "PARTICIPANT MANAGER",
"description": "Location details edited (location ID - 2c91808976db9bcc0176dc9a63390019).",
"source": "PARTICIPANT MANAGER",
"studyId": null,
"destinationApplicationVersion": "1.0",
"mobilePlatform": "UNKNOWN",
"correlationId": "3dfdbb6e-97ee-4c02-98e1-f78df5f961ae",
"userId": "2c9180897689364401768a08f0060000",
"userIp": "117.216.170.157",
"resourceServer": null
}
```
| 2.0 | [PM] [Audit Logs] Location ID is incorrect in description for the events - **Events:**
1. NEW_LOCATION_ADDED
2. LOCATION_EDITED
3. LOCATION_DECOMMISSIONED
4. LOCATION_ACTIVATED
**Actual:** Location ID displays the DB value
**Expected:** Location ID should be the custom Location ID entered by the PM admin
```
{
"insertId": "1ny0gu3fbsrww7",
"jsonPayload": {
"appVersion": "v0.1",
"userAccessLevel": null,
"participantId": null,
"occurred": 1611745121710,
"studyVersion": null,
"sourceApplicationVersion": "1.0",
"siteId": null,
"eventCode": "LOCATION_EDITED",
"platformVersion": "1.0",
"destination": "PARTICIPANT USER DATASTORE",
"appId": "PARTICIPANT MANAGER",
"description": "Location details edited (location ID - 2c91808976db9bcc0176dc9a63390019).",
"source": "PARTICIPANT MANAGER",
"studyId": null,
"destinationApplicationVersion": "1.0",
"mobilePlatform": "UNKNOWN",
"correlationId": "3dfdbb6e-97ee-4c02-98e1-f78df5f961ae",
"userId": "2c9180897689364401768a08f0060000",
"userIp": "117.216.170.157",
"resourceServer": null
}
```
| process | location id is incorrect in description for the events events new location added location edited location decommissioned location activated actual location id displaying db value expected location id should be custom location id entered by pm admin insertid jsonpayload appversion useraccesslevel null participantid null occurred studyversion null sourceapplicationversion siteid null eventcode location edited platformversion destination participant user datastore appid participant manager description location details edited location id source participant manager studyid null destinationapplicationversion mobileplatform unknown correlationid userid userip resourceserver null | 1 |
467,606 | 13,451,363,320 | IssuesEvent | 2020-09-08 20:07:36 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Update server urls in accounts website | Category: Web Priority: High Status: Fixed | Sorry @denysaw, the fact that you would need to change this hadn't crossed my mind!
old urls:
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServer_v0.9.0.0-beta.zip
new urls (similar in format to client urls):
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerPC_v0.9.0.0-beta.zip
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerPC32_v0.9.0.0-beta.zip
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerOSX_v0.9.0.0-beta.zip
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerLinux_v0.9.0.0-beta.zip | 1.0 | Update server urls in accounts website - Sorry @denysaw, the fact that you would need to change this hadn't crossed my mind!
old urls:
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServer_v0.9.0.0-beta.zip
new urls (similar in format to client urls):
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerPC_v0.9.0.0-beta.zip
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerPC32_v0.9.0.0-beta.zip
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerOSX_v0.9.0.0-beta.zip
https://s3-us-west-2.amazonaws.com/eco-releases/EcoServerLinux_v0.9.0.0-beta.zip | non_process | update server urls in accounts website sorry denysaw the fact that you would need to change this hadn t crossed my mind old urls new urls similar in format to client urls | 0 |
19,625 | 25,979,737,730 | IssuesEvent | 2022-12-19 17:41:35 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Security Policy violation Binary Artifacts | type: process P1 allstar | _This issue was automatically created by [Allstar](https://github.com/ossf/allstar/)._
**Security Policy Violation**
Project is out of compliance with Binary Artifacts policy: binaries present in source code
**Rule Description**
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the [Security Scorecards Documentation](https://github.com/ossf/scorecard/blob/main/docs/checks.md#binary-artifacts) for Binary Artifacts.
**Remediation Steps**
To remediate, remove the generated executable artifacts from the repository.
**Artifacts Found**
- third_party/jdk8/jpda-jdi.jar
**Additional Information**
This policy is drawn from [Security Scorecards](https://github.com/ossf/scorecard/), which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.
---
<!-- Edit section #updates --><!-- Current result text hash: aeb085fbfedd012c0aa4bf980be4e8b58ec76597157625e61078c22719f31eb2 --><!-- Edit section #updates -->
Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar
This issue will auto resolve when the policy is in compliance.
Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer. | 1.0 | Security Policy violation Binary Artifacts - _This issue was automatically created by [Allstar](https://github.com/ossf/allstar/)._
**Security Policy Violation**
Project is out of compliance with Binary Artifacts policy: binaries present in source code
**Rule Description**
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the [Security Scorecards Documentation](https://github.com/ossf/scorecard/blob/main/docs/checks.md#binary-artifacts) for Binary Artifacts.
**Remediation Steps**
To remediate, remove the generated executable artifacts from the repository.
**Artifacts Found**
- third_party/jdk8/jpda-jdi.jar
**Additional Information**
This policy is drawn from [Security Scorecards](https://github.com/ossf/scorecard/), which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.
---
<!-- Edit section #updates --><!-- Current result text hash: aeb085fbfedd012c0aa4bf980be4e8b58ec76597157625e61078c22719f31eb2 --><!-- Edit section #updates -->
Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar
This issue will auto resolve when the policy is in compliance.
Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer. | process | security policy violation binary artifacts this issue was automatically created by security policy violation project is out of compliance with binary artifacts policy binaries present in source code rule description binary artifacts are an increased security risk in your repository binary artifacts cannot be reviewed allowing the introduction of possibly obsolete or maliciously subverted executables for more information see the for binary artifacts remediation steps to remediate remove the generated executable artifacts from the repository artifacts found third party jpda jdi jar additional information this policy is drawn from which is a tool that scores a project s adherence to security best practices you may wish to run a scorecards scan directly on this repository for more details allstar has been installed on all google managed github orgs policies are gradually being rolled out and enforced by the gosst and ospo teams learn more at this issue will auto resolve when the policy is in compliance issue created by allstar see for more information for questions specific to the repository please contact the owner or maintainer | 1 |
178,046 | 13,759,071,742 | IssuesEvent | 2020-10-07 01:56:15 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | SslStream renegotiate Outerloop tests are failing | area-System.Net.Security test-run-core | ```
SslStream_NetworkStream_Renegotiation_Succeeds
SslStream_AllowRenegotiation_True_Succeeds
```
are failing with `Assert.InRange() Failure\r\nRange: (2 - 2147483647)\r\nActual: 1`
this is a regression caused by #42836
| 1.0 | SslStream renegotiate Outerloop tests are failing - ```
SslStream_NetworkStream_Renegotiation_Succeeds
SslStream_AllowRenegotiation_True_Succeeds
```
are failing with `Assert.InRange() Failure\r\nRange: (2 - 2147483647)\r\nActual: 1`
this is a regression caused by #42836
| non_process | sslstream renegotiate outerloop tests are failing sslstream networkstream renegotiation succeeds sslstream allowrenegotiation true succeeds are failing with assert inrange failure r nrange r nactual this is regression caused by | 0 |
1,550 | 4,155,916,345 | IssuesEvent | 2016-06-16 16:16:26 | altoxml/schema | https://api.github.com/repos/altoxml/schema | opened | Processing history | 1 submitted processing history | Recently, several feature requests were submitted that relate to the recording of processing information in ALTO (see #13, #27, #36, #35). In an attempt to consolidate and harmonize the requests, this issue shall serve as the main point of discussion from now on.
Features requested:
- [x] Change _OCRProcessing_ to generic _Processing_ (#13, #35).
- [x] Change _preProcessingStep_, _ocrProcessingStep_, _postProcessingStep_ to generic _processingStep_ with _processingStepType_ element to record the type of processing performed (#13).
- [x] Add required attribute _ID_ to _ProcessingStepType_ (#13, #27, #35).
- [x] Add optional attributes _COR_ (CORRECTEDBY), _VER_ (VERIFIEDBY) for all elements. The attributes are holding a list of references (using the _ID_ attribute) to all _processingStepType_ entries which have changed the original value (#27).
- [ ] Being able to link elements to a particular _processingStep_ (#35).
Example: Use Tesseract's page segmentation with Ocropus's recognition, so that _TextLine_ elements are sourced from one _ProcessingStep_ (Tesseract), but their text content from another one (Ocropus).
- [ ] Common vocabulary of _processingStepDescription_ attribute values to increase interoperability (#36) | 1.0 | Processing history - Recently, several feature requests were submitted that relate to the recording of processing information in ALTO (see #13, #27, #36, #35). In an attempt to consolidate and harmonize the requests, this issue shall serve as the main point of discussion from now on.
Features requested:
- [x] Change _OCRProcessing_ to generic _Processing_ (#13, #35).
- [x] Change _preProcessingStep_, _ocrProcessingStep_, _postProcessingStep_ to generic _processingStep_ with _processingStepType_ element to record the type of processing performed (#13).
- [x] Add required attribute _ID_ to _ProcessingStepType_ (#13, #27, #35).
- [x] Add optional attributes _COR_ (CORRECTEDBY), _VER_ (VERIFIEDBY) for all elements. The attributes are holding a list of references (using the _ID_ attribute) to all _processingStepType_ entries which have changed the original value (#27).
- [ ] Being able to link elements to a particular _processingStep_ (#35).
Example: Use Tesseract's page segmentation with Ocropus's recognition, so that _TextLine_ elements are sourced from one _ProcessingStep_ (Tesseract), but their text content from another one (Ocropus).
- [ ] Common vocabulary of _processingStepDescription_ attribute values to increase interoperability (#36) | process | processing history recently several feature requests were submitted that relate to the recording of processing information in alto see in an attempt to consolidate and harmonize the requests this issue shall serve as the main point of discussion from now on features requested change ocrprocessing to generic processing change preprocessingstep ocrprocessingstep postprocessingstep to generic processingstep with processingsteptype element to record the type of processing performed add required attribute id to processingsteptype add optional attributes cor correctedby ver verifiedby for all elements the attributes are holding a list of references using the id attribute to all processingsteptype entries which have changed the original value being able to link elements to a particular processingstep example use tesseract s page segmentation with ocropus s recognition so that textline elements are sourced from one processingstep ocropus but their text content from another one tesseract common vocabulary of processingstepdescription attribute values to increase interoperability | 1 |
350,960 | 31,932,556,208 | IssuesEvent | 2023-09-19 08:25:07 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix gradients.test_unset_with_grads | Sub Task Failing Test | | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4781650459/jobs/8500301506"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4796624963/jobs/8532610389"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="null"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4796624963/jobs/8532616775"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | Fix gradients.test_unset_with_grads - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4781650459/jobs/8500301506"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4796624963/jobs/8532610389"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="null"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4796624963/jobs/8532616775"><img src=https://img.shields.io/badge/-failure-red></a>
| non_process | fix gradients test unset with grads jax a href src numpy a href src tensorflow img src torch a href src | 0 |
309,987 | 23,315,737,212 | IssuesEvent | 2022-08-08 12:21:59 | astronomer/astro-sdk | https://api.github.com/repos/astronomer/astro-sdk | closed | Add example DAG and documentation for cleanup | documentation feature priority/critical improvement | **Please describe the feature you'd like to see**
In the past, we had a tutorial which illustrated how to use each of our operators/decorators:
https://github.com/astronomer/astro-sdk/blob/be6280df00ccff0d7a1c0dfb099b2065303dbe88/REFERENCE.md
**Describe the solution you'd like**
Have a reference page per operator/decorator similar to
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/ecs.html#howto-operator-ecsoperator
In which we reference parts of an (automated tested) example DAG which illustrates the usage of that operator/decorator.
Many of these use cases already exist in our example DAGs - we should reference them.
* [ ] cleanup
- single example
If no example DAGs or operators are illustrating the functionalities, we should create them.
**Acceptance Criteria**
* [ ] cleanup
- single example
- [ ] All checks and tests in the CI should pass
- [ ] Unit tests (90% code coverage or more, [once available](https://github.com/astronomer/astro-sdk/issues/191))
- [ ] Integration tests (if the feature relates to a new database or external service)
- [ ] Example DAG
- [ ] Docstrings in [reStructuredText](https://peps.python.org/pep-0287/) for each of methods, classes, functions and module-level attributes (including Example DAG on how it should be used)
- [ ] Exception handling in case of errors
- [ ] Logging (are we exposing useful information to the user? e.g. source and destination)
- [ ] Improve the documentation (README, Sphinx, and any other relevant)
- [ ] How to use Guide for the feature ([example](https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/operators/postgres_operator_howto_guide.html))
| 1.0 | Add example DAG and documentation for cleanup - **Please describe the feature you'd like to see**
In the past, we had a tutorial which illustrated how to use each of our operators/decorators:
https://github.com/astronomer/astro-sdk/blob/be6280df00ccff0d7a1c0dfb099b2065303dbe88/REFERENCE.md
**Describe the solution you'd like**
Have a reference page per operator/decorator similar to
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/ecs.html#howto-operator-ecsoperator
In which we reference parts of an (automated tested) example DAG which illustrates the usage of that operator/decorator.
Many of these use cases already exist in our example DAGs - we should reference them.
* [ ] cleanup
- single example
If no example DAGs or operators are illustrating the functionalities, we should create them.
**Acceptance Criteria**
* [ ] cleanup
- single example
- [ ] All checks and tests in the CI should pass
- [ ] Unit tests (90% code coverage or more, [once available](https://github.com/astronomer/astro-sdk/issues/191))
- [ ] Integration tests (if the feature relates to a new database or external service)
- [ ] Example DAG
- [ ] Docstrings in [reStructuredText](https://peps.python.org/pep-0287/) for each of methods, classes, functions and module-level attributes (including Example DAG on how it should be used)
- [ ] Exception handling in case of errors
- [ ] Logging (are we exposing useful information to the user? e.g. source and destination)
- [ ] Improve the documentation (README, Sphinx, and any other relevant)
- [ ] How to use Guide for the feature ([example](https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/operators/postgres_operator_howto_guide.html))
| non_process | add example dag and documentation for cleanup please describe the feature you d like to see in the past we had a tutorial which illustrated how to use each of our operators decorators describe the solution you d like have a reference page per operator decorator similar to in which we reference parts of an automated tested example dag which illustrates the usage of that operator decorator many of these use cases already exist in our example dags we should reference them cleanup single example if no example dags or operators are illustrating the functionalities we should create them acceptance criteria cleanup single example all checks and tests in the ci should pass unit tests code coverage or more integration tests if the feature relates to a new database or external service example dag docstrings in for each of methods classes functions and module level attributes including example dag on how it should be used exception handling in case of errors logging are we exposing useful information to the user e g source and destination improve the documentation readme sphinx and any other relevant how to use guide for the feature | 0 |
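For reference, a minimal example DAG for this could look roughly like the sketch below. It assumes the SDK exposes `aql.cleanup()` the way other astro-sdk examples do; the import path and operator name should be double-checked against the actual package:
```
# Minimal sketch of an example DAG ending in cleanup(); illustrative only.
from datetime import datetime

from airflow import DAG
from astro import sql as aql

with DAG(
    dag_id="example_cleanup",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
) as dag:
    # ... upstream load_file / @aql.transform tasks that create temporary tables ...
    aql.cleanup()  # drops every temporary table created earlier in this DAG run
```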
12,736 | 15,102,853,081 | IssuesEvent | 2021-02-08 09:33:29 | Maximus5/ConEmu | https://api.github.com/repos/Maximus5/ConEmu | closed | Run command under FAR fails with error code 57 | processes | When a user tries to run a command under FAR, the command fails with error code 57
Example
C:\Users\AGorlov\Documents\projects\workspaces\m51>git status
Can't create process, ErrCode=0x00000057, Description:
The parameter is incorrect.
Current directory:
C:\Users\AGorlov\Documents\projects\workspaces\m51
Command to be executed:
""C:\tools\vc\git\cmd\git.exe" status
When the user uses FAR by itself, the command works without problems
ConEmu version 11.210.2020
Far version : build 3.0 build 5700 x64 | 1.0 | Run command under FAR fails with error code 57 - When a user tries to run a command under FAR, the command fails with error code 57
Example
C:\Users\AGorlov\Documents\projects\workspaces\m51>git status
Can't create process, ErrCode=0x00000057, Description:
The parameter is incorrect.
Current directory:
C:\Users\AGorlov\Documents\projects\workspaces\m51
Command to be executed:
""C:\tools\vc\git\cmd\git.exe" status
When the user uses FAR by itself, the command works without problems
ConEmu version 11.210.2020
Far version : build 3.0 build 5700 x64 | process | run command under far fails with error code in case user try to run command under far command fail wity error code example c users agorlov documents projects workspaces git status can t create process errcode description the parameter is incorrect current directory c users agorlov documents projects workspaces command to be executed c tools vc git cmd git exe status in case user uses far itself command works without problem conemu version far version build build | 1 |
10,598 | 13,426,178,166 | IssuesEvent | 2020-09-06 13:18:44 | threefoldtech/js-sdk | https://api.github.com/repos/threefoldtech/js-sdk | closed | adding and removing a package doesn't take effect in the package list in the jsng shell | process_wontfix type_bug | ### Version information
* OS: Ubuntu 20.04, 18.04
* Branch: development
* commit: 270a4e0c8e0821ad6918273f0fd4dc4ea0f9f6ef
### Steps to reproduce:
1- Try to add a package from the admin dashboard.
2- Check the package list from the `jsng` shell using this command: `j.servers.threebot.default.packages.list_all()`

3- You have to log out of the jsng shell and log in again to see whether the package was added or removed.

| 1.0 | adding and removing a package doesn't take effect in the package list in the jsng shell - ### Version information
* OS: Ubuntu 20.04, 18.04
* Branch: development
* commit: 270a4e0c8e0821ad6918273f0fd4dc4ea0f9f6ef
### Steps to reproduce:
1- Try to add a package from the admin dashboard.
2- Check the package list from the `jsng` shell using this command: `j.servers.threebot.default.packages.list_all()`

3- You have to log out of the jsng shell and log in again to see whether the package was added or removed.

| process | adding and removing package doesn t effect in package list in jsng shell version information os ubuntu branch development commit steps to reproduce trying to add package from the admin dashboard check the package list from the jsng shell using this command j servers threebot default packages list all you should logout from jsng shell and log in again to see the package if added or not | 1 |
707,306 | 24,301,733,413 | IssuesEvent | 2022-09-29 14:19:58 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.nytimes.com - see bug description | browser-firefox priority-critical type-tracking-protection-standard engine-gecko | <!-- @browser: Firefox 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111556 -->
**URL**: https://www.nytimes.com/games/wordle/index.html
**Browser / Version**: Firefox 105.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Site works with Edge, unable to play game.
**Steps to Reproduce**:
Unable to scroll down to 'keyboard' to play game using FF. No issues using MS Edge.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.nytimes.com - see bug description - <!-- @browser: Firefox 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111556 -->
**URL**: https://www.nytimes.com/games/wordle/index.html
**Browser / Version**: Firefox 105.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Site works with Edge, unable to play game.
**Steps to Reproduce**:
Unable to scroll down to 'keyboard' to play game using FF. No issues using MS Edge.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | see bug description url browser version firefox operating system windows tested another browser yes edge problem type something else description site works with edge unable to play game steps to reproduce unable to scroll down to keyboard to play game using ff no issues using ms edge browser configuration none from with ❤️ | 0 |
278,104 | 30,702,200,636 | IssuesEvent | 2023-07-27 01:10:55 | hshivhare67/kernel_v4.1.15 | https://api.github.com/repos/hshivhare67/kernel_v4.1.15 | closed | CVE-2022-42895 (Medium) detected in linuxlinux-4.6 - autoclosed | Mend: dependency security vulnerability | ## CVE-2022-42895 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/bluetooth/l2cap_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/bluetooth/l2cap_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
There is an infoleak vulnerability in the Linux kernel's net/bluetooth/l2cap_core.c's l2cap_parse_conf_req function which can be used to leak kernel pointers remotely. We recommend upgrading past commit https://github.com/torvalds/linux/commit/b1a2cd50c0357f243b7435a732b4e62ba3157a2e
<p>Publish Date: 2022-11-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42895>CVE-2022-42895</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-42895">https://www.linuxkernelcves.com/cves/CVE-2022-42895</a></p>
<p>Release Date: 2022-11-23</p>
<p>Fix Resolution: v4.9.333,v4.14.299,v4.19.265,v5.4.224,v5.10.154,v5.15.78,v6.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-42895 (Medium) detected in linuxlinux-4.6 - autoclosed - ## CVE-2022-42895 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/bluetooth/l2cap_core.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/bluetooth/l2cap_core.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
There is an infoleak vulnerability in the Linux kernel's net/bluetooth/l2cap_core.c's l2cap_parse_conf_req function which can be used to leak kernel pointers remotely. We recommend upgrading past commit https://github.com/torvalds/linux/commit/b1a2cd50c0357f243b7435a732b4e62ba3157a2e
<p>Publish Date: 2022-11-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42895>CVE-2022-42895</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-42895">https://www.linuxkernelcves.com/cves/CVE-2022-42895</a></p>
<p>Release Date: 2022-11-23</p>
<p>Fix Resolution: v4.9.333,v4.14.299,v4.19.265,v5.4.224,v5.10.154,v5.15.78,v6.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files net bluetooth core c net bluetooth core c vulnerability details there is an infoleak vulnerability in the linux kernel s net bluetooth core c s parse conf req function which can be used to leak kernel pointers remotely we recommend upgrading past commit publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
26,897 | 13,149,751,938 | IssuesEvent | 2020-08-09 07:20:55 | gumpcha/good-reads-clone | https://api.github.com/repos/gumpcha/good-reads-clone | closed | element-icons | performance | ## Performance-degrading factors
- Preload key requests
- Ensure text remains visible during webfont load
## Alternatives
- `font-family: element-icons` defaults to `font-display: auto`, so it cannot be changed | True | element-icons - ## Performance-degrading factors
- Preload key requests
- Ensure text remains visible during webfont load
## Alternatives
- `font-family: element-icons` defaults to `font-display: auto`, so it cannot be changed | non_process | element icons performance degrading factors preload key requests ensure text remains visible during webfont load alternatives font family element icons defaults to font display auto so it cannot be changed | 0
7,532 | 10,608,182,851 | IssuesEvent | 2019-10-11 06:49:26 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | FX files for OBS projects | enhancement preprocessor | https://github.com/ESMValGroup/ESMValCore/issues/294 and #295 describe a somewhat more general problem - how to find and use fx files from OBS projects when applying various masking preprocessors. The cut-throat approach in #295 needs to be generalized into a more flexible approach and, if the fx files from OBS projects obey the same file naming rules as the OBS files themselves, it shouldn't be much of a problem, but we need to set these right in `_recipe.py`.
I will do this now, but I need to know whether the naming rule differs from one OBS project to another, and what these files are called :beer: | 1.0 | FX files for OBS projects - https://github.com/ESMValGroup/ESMValCore/issues/294 and #295 describe a somewhat more general problem - how to find and use fx files from OBS projects when applying various masking preprocessors. The cut-throat approach in #295 needs to be generalized into a more flexible approach and, if the fx files from OBS projects obey the same file naming rules as the OBS files themselves, it shouldn't be much of a problem, but we need to set these right in `_recipe.py`.
I will do this now, but I need to know whether the naming rule differs from one OBS project to another, and what these files are called :beer: | process | fx files for obs projects and describe a somewhat more general problem how to find and use fx files from obs projects when applying various masking preprocessors the cut throat approach in needs to be generalized in a more flexible approach and if the fx files from obs projects obey the same file naming rules as the obs files themselves it shouldn t be much of a problem but we need to set these right in recipe py i will do this now but i need to know if the naming rule differs from obs project to obs project what these files are called beer | 1
21,654 | 30,089,353,257 | IssuesEvent | 2023-06-29 11:05:13 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | Retouch module issue when using bilateral filters on amd opencl | scope: image processing scope: hardware support | ### Describe the bug
When using bilateral filters (in wavelet decomposition mode) amd opencl drivers are again causing problems when zooming. This goes to @jenshannoschwalm I guess to make you happy again :D
DEVICE: 0: 'gfx803'
PLATFORM NAME & VENDOR: AMD Accelerated Parallel Processing, Advanced Micro Devices, Inc.
CANONICAL NAME: amdacceleratedparallelprocessinggfx803
DRIVER VERSION: 3558.0 (HSA1.1,LC)
DEVICE VERSION: OpenCL 1.2
DEVICE_TYPE: GPU
GLOBAL MEM SIZE: 8192 MB
MAX MEM ALLOC: 6963 MB
MAX IMAGE SIZE: 16384 x 16384
### Steps to reproduce
Use retouch module and use bilateral softening (I am using it in the 5th decomposition scale) and zoom the image. Strange artifacts appear. This stops when disabling the retouch module
### Expected behavior
Not showing strange artifacts when zooming
### Logfile | Screenshot | Screencast

### Commit
_No response_
### Where did you install darktable from?
Arch repos
### darktable version
4.4.0
### What OS are you using?
Linux
### What is the version of your OS?
Arch Linux
### Describe your system?
Linux 6.3.9, Wayland, 64 GB Ram, Ryzen 7950X
### Are you using OpenCL GPU in darktable?
Yes
### If yes, what is the GPU card and driver?
AMD Polaris RX 580 GPU (all time favorite)
### Please provide additional context if applicable. You can attach files too, but might need to rename to .txt or .zip
_No response_ | 1.0 | Retouch module issue when using bilateral filters on amd opencl - ### Describe the bug
When using bilateral filters (in wavelet decomposition mode) amd opencl drivers are again causing problems when zooming. This goes to @jenshannoschwalm I guess to make you happy again :D
DEVICE: 0: 'gfx803'
PLATFORM NAME & VENDOR: AMD Accelerated Parallel Processing, Advanced Micro Devices, Inc.
CANONICAL NAME: amdacceleratedparallelprocessinggfx803
DRIVER VERSION: 3558.0 (HSA1.1,LC)
DEVICE VERSION: OpenCL 1.2
DEVICE_TYPE: GPU
GLOBAL MEM SIZE: 8192 MB
MAX MEM ALLOC: 6963 MB
MAX IMAGE SIZE: 16384 x 16384
### Steps to reproduce
Use retouch module and use bilateral softening (I am using it in the 5th decomposition scale) and zoom the image. Strange artifacts appear. This stops when disabling the retouch module
### Expected behavior
Not showing strange artifacts when zooming
### Logfile | Screenshot | Screencast

### Commit
_No response_
### Where did you install darktable from?
Arch repos
### darktable version
4.4.0
### What OS are you using?
Linux
### What is the version of your OS?
Arch Linux
### Describe your system?
Linux 6.3.9, Wayland, 64 GB Ram, Ryzen 7950X
### Are you using OpenCL GPU in darktable?
Yes
### If yes, what is the GPU card and driver?
AMD Polaris RX 580 GPU (all time favorite)
### Please provide additional context if applicable. You can attach files too, but might need to rename to .txt or .zip
_No response_ | process | retouch module issue when using bilateral filters on amd opencl describe the bug when using bliteral filters in wavelet decomposition mode amd opencl drivers are again causing problems when zooming this goes to jenshannoschwalm i guess to make you happy again d device platform name vendor amd accelerated parallel processing advanced micro devices inc canonical name driver version lc device version opencl device type gpu global mem size mb max mem alloc mb max image size x steps to reproduce use retouch module and use bilateral softening i am using it in the decomposition scale and zoom the image strange artifacts appear this stops when disabling the retouch module expected behavior not showing strange artifacts when zooming logfile screenshot screencast commit no response where did you install darktable from arch repos darktable version what os are you using linux what is the version of your os arch liux describe your system linux wayland gb ram ryzen are you using opencl gpu in darktable yes if yes what is the gpu card and driver amd polaris rx gpu all time favorite please provide additional context if applicable you can attach files too but might need to rename to txt or zip no response | 1 |
719,661 | 24,766,618,462 | IssuesEvent | 2022-10-22 16:01:21 | AY2223S1-CS2103T-T12-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-T12-2/tp | closed | :bug: Bug: `add_task` doesn't add task to the relative index of what's displayed on screen | type.Bug priority.High | ## How to Reproduce
1. type `find Bernice Yu`
2. `add_task n/test c/1`
3. The task would've been associated with the wrong person
## Expected Behaviour
- The task should've been associated with Bernice Yu | 1.0 | :bug: Bug: `add_task` doesn't add task to the relative index of what's displayed on screen - ## How to Reproduce
1. type `find Bernice Yu`
2. `add_task n/test c/1`
3. The task would've been associated with the wrong person
## Expected Behaviour
- The task should've been associated with Bernice Yu | non_process | bug bug add task doesn t add task to the relative index of what s displayed on screen how to reproduce type find bernice yu add task n test c the task would ve been associated with the wrong person expected behaviour the task should ve been associated with bernice yu | 0 |
9,041 | 12,130,107,985 | IssuesEvent | 2020-04-23 00:30:40 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | closed | remove gcp-devrel-py-tools from appengine/standard/firebase/firetactoe/requirements-test.txt | priority: p2 remove-gcp-devrel-py-tools type: process | remove gcp-devrel-py-tools from appengine/standard/firebase/firetactoe/requirements-test.txt | 1.0 | remove gcp-devrel-py-tools from appengine/standard/firebase/firetactoe/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/firebase/firetactoe/requirements-test.txt | process | remove gcp devrel py tools from appengine standard firebase firetactoe requirements test txt remove gcp devrel py tools from appengine standard firebase firetactoe requirements test txt | 1 |
57,127 | 7,034,758,678 | IssuesEvent | 2017-12-27 18:49:12 | D0tNet4Fun/Automation.TestFramework | https://api.github.com/repos/D0tNet4Fun/Automation.TestFramework | closed | Execute test case methods in the correct order | design | Consider a test case template such as the one in http://www.softwaretestinghelp.com/test-case-template-examples. This can be mapped to a test class in which the methods are the test case steps.
Acceptance:
The test case methods should be executed in the correct order: first the preconditions, then for every test step: input then expected result (if any). | 1.0 | Execute test case methods in the correct order - Consider a test case template such as the one in http://www.softwaretestinghelp.com/test-case-template-examples. This can be mapped to a test class in which the methods are the test case steps.
Acceptance:
The test case methods should be executed in the correct order: first the preconditions, then for every test step: input then expected result (if any). | non_process | execute test case methods in the correct order consider a test case template such as the one in this can be mapped to a test class in which the methods are the test case steps acceptance the test case methods should be executed in the correct order first the preconditions then for every test step input then expected result if any | 0 |
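The framework in the record above is C#, but the acceptance criterion translates to a small, language-agnostic sketch, shown here in Python with hypothetical method names: run precondition methods first, then the remaining step methods in declaration order.

```python
# Hypothetical sketch of the required ordering: preconditions first, then
# the test steps (input, then expected result) in declaration order.
class LoginTestCase:
    def precondition_open_login_page(self):
        print("precondition: open login page")

    def step_1_input_credentials(self):
        print("input: type user name and password")

    def step_2_expected_result(self):
        print("expected result: user is logged in")

def run(test_case_cls):
    case = test_case_cls()
    # A class __dict__ preserves declaration order in Python 3.
    names = [n for n in vars(test_case_cls) if not n.startswith("_")]
    preconditions = [n for n in names if n.startswith("precondition")]
    steps = [n for n in names if not n.startswith("precondition")]
    for name in preconditions + steps:
        getattr(case, name)()

run(LoginTestCase)
```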
222,103 | 17,393,790,004 | IssuesEvent | 2021-08-02 10:50:37 | NuGet/Home | https://api.github.com/repos/NuGet/Home | opened | The Alternate Package’s name shows the redundant “packageid:” in the search box in “Browse” tab | Found:ManualTests | Affected Branch: Main
Affected Build: 31601.14
Affected Product Language: ENU
## Installation
1. Install VS Main\31601.14 from: https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=5044845&view=ms.vss-build-web.run-extensions-tab
2. Install NuGet Client Dev\6.0.0.169: https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=5039908&view=artifacts&pathAsName=false&type=publishedArtifacts
## Steps to reproduce
1. Create a C# Console Application (.NET core) project.
2. Open PM UI, add a deprecated package source https://apidev.nugettest.org/v3-index/index.json and select it.
3. Search the deprecated package (e.g. afine) in “Browse” tab.
4. Click the link of the Alternate package (e.g. plastic=3.0.0).
## Expected
Only the Alternate Package’s name shows in the search box in “Browse” tab.
## Actual
The Alternate Package’s name shows the redundant “packageid:” in the search box in “Browse” tab
as below screenshot.

## Notes
1. The issue also reproes on “test10k” package.
2. The issue does not repro on “ComplexLicense/ComplexLicenseExpressionWithException” package.
3. The issue starts from Dev\6.0.0.169. | 1.0 | The Alternate Package’s name shows the redundant “packageid:” in the search box in “Browse” tab - Affected Branch: Main
Affected Build: 31601.14
Affected Product Language: ENU
## Installation
1. Install VS Main\31601.14 from: https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=5044845&view=ms.vss-build-web.run-extensions-tab
2. Install NuGet Client Dev\6.0.0.169: https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=5039908&view=artifacts&pathAsName=false&type=publishedArtifacts
## Steps to reproduce
1. Create a C# Console Application (.NET core) project.
2. Open PM UI, add a deprecated package source https://apidev.nugettest.org/v3-index/index.json and select it.
3. Search the deprecated package (e.g. afine) in “Browse” tab.
4. Click the link of the Alternate package (e.g. plastic=3.0.0).
## Expected
Only the Alternate Package’s name shows in the search box in “Browse” tab.
## Actual
The Alternate Package’s name shows the redundant “packageid:” in the search box in “Browse” tab
as below screenshot.

## Notes
1. The issue also reproes on “test10k” package.
2. The issue does not repro on “ComplexLicense/ComplexLicenseExpressionWithException” package.
3. The issue starts from Dev\6.0.0.169. | non_process | the alternate package’s name shows the redundant “packageid ” in the search box in “browse” tab affected branch main affected build affected product language enu installation install vs main from install nuget client dev steps to reproduce create a c console application net core project open pm ui add a deprecated package source and select it search the deprecated package e g afine in “browse” tab click the link of the alternate package e g plastic expected only the alternate package’s name shows in the search box in “browse” tab actual the alternate package’s name shows the redundant “packageid ” in the search box in “browse” tab as below screenshot notes the issue also reproes on “ ” package the issue does not repro on “complexlicense complexlicenseexpressionwithexception” package the issue starts from dev | 0 |
358 | 2,794,898,370 | IssuesEvent | 2015-05-11 19:07:28 | scieloorg/search-journals | https://api.github.com/repos/scieloorg/search-journals | opened | Change the import format of the processing | Processamento Tarefa | With the latest refactoring we have a pipeline that converts JSON into XML; however, starting with version 3.1 Solr accepts JSON as input, see: https://wiki.apache.org/solr/UpdateJSONrelease.
Since JSON is a less verbose and more performant format, and one currently more aligned with new technologies, it is important to have both output format types in the search system's processing.
It would be useful to have a `-e` parameter whose options would be `XML` or `JSON`. | 1.0 | Change the import format of the processing - With the latest refactoring we have a pipeline that converts JSON into XML; however, starting with version 3.1 Solr accepts JSON as input, see: https://wiki.apache.org/solr/UpdateJSONrelease.
Since JSON is a less verbose and more performant format, and one currently more aligned with new technologies, it is important to have both output format types in the search system's processing.
It would be useful to have a `-e` parameter whose options would be `XML` or `JSON`. | process | change the import format of the processing with the latest refactoring we have a pipeline that converts json into xml however starting with version solr accepts json as input see since json is a less verbose and more performant format and one currently more aligned with new technologies it is important to have both output format types in the search system s processing it would be useful to have a e parameter whose options would be xml or json | 1
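A rough sketch of what the requested `-e` switch could look like, assuming the pipeline already holds each record as a dictionary; the serializers below are illustrative, not the project's actual code.

```python
# Hypothetical sketch of the proposed `-e` option: pick XML or JSON output
# for the documents sent to Solr (Solr >= 3.1 accepts JSON updates).
import argparse
import json
from xml.etree.ElementTree import Element, SubElement, tostring

def to_solr_xml(record):
    doc = Element("doc")
    for name, value in record.items():
        field = SubElement(doc, "field", name=name)
        field.text = str(value)
    return tostring(doc, encoding="unicode")

parser = argparse.ArgumentParser()
parser.add_argument("-e", choices=["xml", "json"], default="xml",
                    help="export format for the Solr update payload")
args = parser.parse_args(["-e", "json"])

record = {"id": "art-0001", "journal": "example-journal"}
payload = json.dumps([record]) if args.e == "json" else to_solr_xml(record)
print(payload)
```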
345,065 | 10,352,711,972 | IssuesEvent | 2019-09-05 09:50:16 | jenkins-x/jx | https://api.github.com/repos/jenkins-x/jx | closed | Race condition causes PipelineRun failure sporadically | area/prow area/tekton kind/bug priority/important-soon | 1-2% of the time on the prod cluster, we see `PipelineRun`s fail with messages like this:
```
Pipeline jx/jenkins-x-jx-pr-5266-images-16 can't be found:pipeline.tekton.dev
"jenkins-x-jx-pr-5266-images-16" not found
```
This seems to be a race condition in the `PipelineRun` controller where it gets informed of a new `PipelineRun` and tries to run it, but its lister cache doesn't yet have the corresponding `Pipeline`. I haven't yet managed to reproduce this effectively so can't figure out how to fix it properly in Tekton yet. | 1.0 | Race condition causes PipelineRun failure sporadically - 1-2% of the time on the prod cluster, we see `PipelineRun`s fail with messages like this:
```
Pipeline jx/jenkins-x-jx-pr-5266-images-16 can't be found:pipeline.tekton.dev
"jenkins-x-jx-pr-5266-images-16" not found
```
This seems to be a race condition in the `PipelineRun` controller where it gets informed of a new `PipelineRun` and tries to run it, but its lister cache doesn't yet have the corresponding `Pipeline`. I haven't yet managed to reproduce this effectively so can't figure out how to fix it properly in Tekton yet. | non_process | race condition causes pipelinerun failure sporadically of the time on the prod cluster we see pipelinerun s fail with messages like this pipeline jx jenkins x jx pr images can t be found pipeline tekton dev jenkins x jx pr images not found this seems to be a race condition in the pipelinerun controller where it gets informed of a new pipelinerun and tries to run it but its lister cache doesn t yet have the corresponding pipeline i haven t yet managed to reproduce this effectively so can t figure out how to fix it properly in tekton yet | 0 |
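This is not the actual Tekton fix (which would be in Go), just a sketch of the usual mitigation for this class of informer-cache race: retry a "not found" lookup briefly instead of failing the run on the first miss.

```python
# Hypothetical sketch: retry a lookup that races a slowly-filling cache.
import time

def get_with_retry(lookup, key, attempts=5, delay=0.2):
    for attempt in range(attempts):
        try:
            return lookup(key)
        except KeyError:
            if attempt == attempts - 1:
                raise  # still missing after all retries: a real failure
            time.sleep(delay)  # give the lister cache time to catch up

cache = {"jenkins-x-jx-pr-5266-images-16": "pipeline-spec"}
print(get_with_retry(lambda k: cache[k], "jenkins-x-jx-pr-5266-images-16"))
```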
2,265 | 3,367,581,074 | IssuesEvent | 2015-11-22 09:30:58 | RDFLib/rdflib | https://api.github.com/repos/RDFLib/rdflib | closed | Iteratively adding triples to an rdflib graph and UPDATE query issue | performance SPARQL | Apparently, when a more complicated SPARQL UPDATE query which uses the transitivity operators * and + is applied to an rdflib graph (constructed iteratively by adding triples in Python code), the application of the SPARQL UPDATE query returns a RuntimeError: maximum recursion depth exceeded while calling a Python object.
This happens if the rdflib graph has more than about 500 triples, number established by a number of tests performed by me.
If a system recursion limit is set before the application of the SPARQL UPDATE query (using sys.setrecursionlimit(PREFERRED INTEGER NUMBER)), then the query works perfectly fine.
A suggestion is to include a higher depth for recursion by default rather than letting a user of the library handle this manually. | True | Iteratively adding triples to an rdflib graph and UPDATE query issue - Apparently, when a more complicated SPARQL UPDATE query which uses the transitivity operators * and + is applied to an rdflib graph (constructed iteratively by adding triples in Python code), the application of the SPARQL UPDATE query returns a RuntimeError: maximum recursion depth exceeded while calling a Python object.
This happens if the rdflib graph has more than about 500 triples, number established by a number of tests performed by me.
If a system recursion limit is set before the application of the SPARQL UPDATE query (using sys.setrecursionlimit(PREFERRED INTEGER NUMBER)), then the query works perfectly fine.
A suggestion is to include a higher depth for recursion by default rather than letting a user of the library handle this manually. | non_process | iteratively adding triples to an rdflib graph and update query issue apparently when a more complicated sparql update query which uses the transitivity operators and is applied to an rdflib graph constructed iteratively by adding triples in python code the application of the sparql update query returns a runtimeerror maximum recursion depth exceeded while calling a python object this happens if the rdflib graph has more than about triples number established by a number of tests performed by me if a system recursion limit is set before the application of the sparql update query using sys setrecursionlimit preffered integer number then the query works perfectly fine a suggestion is to include a higher depth for recursion by default rather than letting a user of the library handle this manually | 0 |
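The workaround described in the report above, sketched end to end; the graph contents and the concrete UPDATE are placeholders, and the recursion limit value is arbitrary.

```python
# Reproduces the setup from the report: build a graph past the ~500-triple
# threshold, raise the recursion limit, then run a transitive (`+`) UPDATE.
import sys
import rdflib

g = rdflib.Graph()
ex = rdflib.Namespace("http://example.org/")
for i in range(600):  # past the ~500-triple threshold mentioned above
    g.add((ex[f"n{i}"], ex.linksTo, ex[f"n{i + 1}"]))

sys.setrecursionlimit(10_000)  # the manual workaround the report describes
g.update("""
    PREFIX ex: <http://example.org/>
    INSERT { ?a ex:reaches ?b } WHERE { ?a ex:linksTo+ ?b }
""")
print(len(g))
```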
21,680 | 30,122,023,993 | IssuesEvent | 2023-06-30 15:55:56 | metabase/metabase | https://api.github.com/repos/metabase/metabase | opened | [MLv2] [Bug] `replace-clause` doesn't handle metrics | .Regression/master .metabase-lib .Team/QueryProcessor :hammer_and_wrench: | When trying to replace a regular aggregation with a metric, `replace-clause` fails with the following error:
```js
core.cljs:1942 Uncaught (in promise) Error: nth not supported on this type cljs.core/PersistentHashMap
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:1942:1)
at convert.cljc:428:1
at core.cljs:4416:1
at Function.G__30280__2 [as cljs$core$IFn$_invoke$arity$2] (core.cljs:4416:1)
at core.cljs:5686:1
at core.cljs:5686:1
at Object.cljs$core$IReduce$_reduce$arity$3 (core.cljs:5690:1)
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:2570:1)
at Function.cljs$core$IFn$_invoke$arity$4 (core.cljs:2626:1)
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:5275:1)
```
### Steps to reproduce
1. Sign in as admin
2. Create a metric for the sample Orders table at `/admin/datamodel/metrics`
3. Start a question from the Orders table
4. Add an aggregation like Count
5. Try to replace the Count aggregation with the metric from step 2
Code:
```
import * as Lib from "metabase-lib"
const aggregation = Lib.aggregations(query, 0)[0]
const metric = Lib.availableMetrics(query, 0)[0]
const query2 = Lib.replaceClause(query, 0, aggregation, metric)
// The error shows up when trying to convert this back to MLv1 format:
Lib.toLegacyQuery(query2)
``` | 1.0 | [MLv2] [Bug] `replace-clause` doesn't handle metrics - When trying to replace a regular aggregation with a metric, `replace-clause` fails with the following error:
```js
core.cljs:1942 Uncaught (in promise) Error: nth not supported on this type cljs.core/PersistentHashMap
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:1942:1)
at convert.cljc:428:1
at core.cljs:4416:1
at Function.G__30280__2 [as cljs$core$IFn$_invoke$arity$2] (core.cljs:4416:1)
at core.cljs:5686:1
at core.cljs:5686:1
at Object.cljs$core$IReduce$_reduce$arity$3 (core.cljs:5690:1)
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:2570:1)
at Function.cljs$core$IFn$_invoke$arity$4 (core.cljs:2626:1)
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:5275:1)
```
### Steps to reproduce
1. Sign in as admin
2. Create a metric for the sample Orders table at `/admin/datamodel/metrics`
3. Start a question from the Orders table
4. Add an aggregation like Count
5. Try to replace the Count aggregation with the metric from step 2
Code:
```
import * as Lib from "metabase-lib"
const aggregation = Lib.aggregations(query, 0)[0]
const metric = Lib.availableMetrics(query, 0)[0]
const query2 = Lib.replaceClause(query, 0, aggregation, metric)
// The error shows up when trying to convert this back to MLv1 format:
Lib.toLegacyQuery(query2)
``` | process | replace clause doesn t handle metrics when trying to replace a regular aggregation with a metric replace clause fails with the following error js core cljs uncaught in promise error nth not supported on this type cljs core persistenthashmap at function cljs core ifn invoke arity core cljs at convert cljc at core cljs at function g core cljs at core cljs at core cljs at object cljs core ireduce reduce arity core cljs at function cljs core ifn invoke arity core cljs at function cljs core ifn invoke arity core cljs at function cljs core ifn invoke arity core cljs steps to reproduce sign in as admin create a metric for the sample orders table at admin datamodel metrics start a question from the orders table add an aggregation like count try to replace the count aggregation with the metric from step code import as lib from metabase lib const aggregation lib aggregations query const metric lib availablemetrics query const lib replaceclause query aggregation metric the error shows up when trying to convert this back to format lib tolegacyquery | 1 |
757,722 | 26,526,402,493 | IssuesEvent | 2023-01-19 09:06:01 | libscie/ResearchEquals.com | https://api.github.com/repos/libscie/ResearchEquals.com | closed | 🐛 `collection-submission-mailer` error | bug Priority | ### URL of the Page
_No response_
### What happened?
Seems like the collection submission mailer crashed the site for a few seconds. Got a 503 after submitting a DOI.
### Steps to reproduce
_No response_
### What browser(s) did you use?
_No response_
### Anything else?
This is the error from runtime logs
```
Received job to api/collection-submission-mailer 39
/var/www/app/node_modules/postmark/dist/client/errors/Errors.js:124
return new ApiInputError(message, code, statusCode);
ApiInputError: The Template's 'Alias' associated with this request is not valid or was not found.
``` | 1.0 | 🐛 `collection-submission-mailer` error - ### URL of the Page
_No response_
### What happened?
Seems like the collection submission mailer crashed the site for a few seconds. Got a 503 after submitting a DOI.
### Steps to reproduce
_No response_
### What browser(s) did you use?
_No response_
### Anything else?
This is the error from runtime logs
```
Received job to api/collection-submission-mailer 39
/var/www/app/node_modules/postmark/dist/client/errors/Errors.js:124
return new ApiInputError(message, code, statusCode);
ApiInputError: The Template's 'Alias' associated with this request is not valid or was not found.
``` | non_process | 🐛 collection submission mailer error url of the page no response what happened seems like the collection submission mailer crashed the site for a few seconds got a after submitting a doi steps to reproduce no response what browser s did you use no response anything else this is the error from runtime logs received job to api collection submission mailer var www app node modules postmark dist client errors errors js return new apiinputerror message code statuscode apiinputerror the template s alias associated with this request is not valid or was not found | 0 |
427,698 | 12,397,948,928 | IssuesEvent | 2020-05-21 00:15:30 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | asadmin set/get works even if an incorrect server-name is specified | Component: admin ERR: Assignee Priority: Minor Stale Type: Bug | asadmin set/get works unexpectedly even if an incorrect server name is specified.
When an invalid server name is used for mail-resource, the command works. It is not expected to work this way. For example, aaaaaaa is an invalid server-name, but the command works for the 'server'.
C:\>asadmin set aaaaaaa.resources.mail-resource.test.description=bbbbbb
aaaaaaa.resources.mail-resource.test.description=bbbbbb
Command set executed successfully.
C:\>asadmin get aaaaaaa.resources.mail-resource.test.description
aaaaaaa.resources.mail-resource.test.description=bbbbbb
Command get executed successfully.
C:\>asadmin get ccccccc.resources.mail-resource.test.description
ccccccc.resources.mail-resource.test.description=bbbbbb
Command get executed successfully.
Similarly, the command works even if an incorrect server name is used for jdbc-connection-pool.
C:\>asadmin set bbbb.resources.jdbc-connection-pool.DerbyPool.property.PortNumber=9999
bbbb.resources.jdbc-connection-pool.DerbyPool.property.PortNumber=9999
Command set executed successfully.
C:\>asadmin get cccc.resources.jdbc-connection-pool.DerbyPool.property.PortNumber
cccc.resources.jdbc-connection-pool.DerbyPool.property.PortNumber=9999
Command get executed successfully.
#### Affected Versions
[4.0_dev] | 1.0 | asadmin set/get works even if an incorrect server-name is specified - asadmin set/get works unexpectedly even if an incorrect server name is specified.
When an invalid server name is used for mail-resource, the command works. It is not expected to work this way. For example, aaaaaaa is an invalid server-name, but the command works for the 'server'.
C:\>asadmin set aaaaaaa.resources.mail-resource.test.description=bbbbbb
aaaaaaa.resources.mail-resource.test.description=bbbbbb
Command set executed successfully.
C:\>asadmin get aaaaaaa.resources.mail-resource.test.description
aaaaaaa.resources.mail-resource.test.description=bbbbbb
Command get executed successfully.
C:\>asadmin get ccccccc.resources.mail-resource.test.description
ccccccc.resources.mail-resource.test.description=bbbbbb
Command get executed successfully.
Similarly, the command works even if an incorrect server name is used for jdbc-connection-pool.
C:\>asadmin set bbbb.resources.jdbc-connection-pool.DerbyPool.property.PortNumber=9999
bbbb.resources.jdbc-connection-pool.DerbyPool.property.PortNumber=9999
Command set executed successfully.
C:\>asadmin get cccc.resources.jdbc-connection-pool.DerbyPool.property.PortNumber
cccc.resources.jdbc-connection-pool.DerbyPool.property.PortNumber=9999
Command get executed successfully.
#### Affected Versions
[4.0_dev] | non_process | asadmin set get works even if an incorrect server name is specified asadmin set get works unexpectedly even if incorrect server name is specified when an invalid server name is used for mail resource the command works it is not expected to work this way for example aaaaaaa is an invalid server name but the command works for the server c asadmin set aaaaaaa resources mail resource test description bbbbbb aaaaaaa resources mail resource test description bbbbbb command set executed successfully c asadmin get aaaaaaa resources mail resource test description aaaaaaa resources mail resource test description bbbbbb command get executed successfully c asadmin get ccccccc resources mail resource test description ccccccc resources mail resource test description bbbbbb command get executed successfully similarly the command works even if incorrect server name used for jdbc connection pool c asadmin set bbbb resources jdbc connection pool derbypool property portnumber bbbb resources jdbc connection pool derbypool property portnumber command set executed successfully c asadmin get cccc resources jdbc connection pool derbypool property portnumber cccc resources jdbc connection pool derbypool property portnumber command get executed successfully affected versions | 0 |
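The missing check, sketched in Python for illustration only (GlassFish itself is Java): reject a dotted name whose leading token is not a recognized configuration target. The target set below is illustrative, not GlassFish's real list.

```python
# Hypothetical sketch of the validation `set`/`get` appear to skip.
KNOWN_TARGETS = {"server", "domain", "configs", "resources"}

def validate_dotted_name(dotted_name):
    target = dotted_name.split(".", 1)[0]
    if target not in KNOWN_TARGETS:
        raise ValueError(f"no such target: {target!r}")
    return dotted_name

validate_dotted_name("server.resources.mail-resource.test.description")  # ok
try:
    validate_dotted_name("aaaaaaa.resources.mail-resource.test.description")
except ValueError as err:
    print(err)  # no such target: 'aaaaaaa'
```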
644 | 3,104,749,317 | IssuesEvent | 2015-08-31 17:28:03 | sysown/proxysql-0.2 | https://api.github.com/repos/sysown/proxysql-0.2 | opened | Extend stats_mysql_query_digest with more statistics | ADMIN CONNECTION POOL PROTOCOL QUERY PROCESSOR STATISTICS | ## Why
stats_mysql_query_digest is a great source of information on what is running inside ProxySQL. More statistics are welcome.
## What
* [ ] add columns to identify backend: hostgroup_id , address, port
* [ ] add columns to identify the amount of data sent to backend: min_data_sent , max_data_sent , sum_data_sent
* [ ] add columns to identify the amount of data received from backend: min_data_recv , max_data_recv , sum_data_recv | 1.0 | Extend stats_mysql_query_digest with more statistics - ## Why
stats_mysql_query_digest is a great source of information on what is running inside ProxySQL. More statistics are welcome.
## What
* [ ] add columns to identify backend: hostgroup_id , address, port
* [ ] add columns to identify the amount of data sent to backend: min_data_sent , max_data_sent , sum_data_sent
* [ ] add columns to identify the amount of data received from backend: min_data_recv , max_data_recv , sum_data_recv | process | extend stats mysql query digest with more statistics why stats mysql query digest is a great source of information on what it is running inside proxysql more statistics are welcome what add columns to identify backend hostgroup id address port add columns to identify the amount of data sent to backend min data sent max data sent sum data sent add columns to identify the amount of data received from backend min data recv max data recv sum data recv | 1 |
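How the extended table might be read once the columns exist, sketched against the admin interface; note that `hostgroup_id` and the `*_data_*` columns are the additions this issue proposes, so the query below is speculative until they are implemented.

```python
# Speculative query: hostgroup_id, sum_data_sent and sum_data_recv are the
# proposed columns, so this only runs once the feature lands.
import pymysql

admin = pymysql.connect(host="127.0.0.1", port=6032,
                        user="admin", password="admin")
with admin.cursor() as cur:
    cur.execute("""
        SELECT hostgroup_id, digest_text, sum_data_sent, sum_data_recv
        FROM stats_mysql_query_digest
        ORDER BY sum_data_recv DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
admin.close()
```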
591,095 | 17,794,726,807 | IssuesEvent | 2021-08-31 20:31:01 | workcraft/workcraft | https://api.github.com/repos/workcraft/workcraft | opened | Incorrect handling of symbolic link for the environment file | bug priority:high tag:model:circuit status:confirmed | The full path to the environment file is obtained via `file.getCanonicalPath()`. In Linux this method follows symbolic links, which may produce the target file without `.work` extension, even if the original file has such an extension. For example, consider a valid `env.work` file that is first renamed into `file_without_extension` and then symlinked back to `env.work`:
mv env.work file_without_extension
ln -s file_without_extension env.work
If the symlink `env.work` is selected as the environment file, then the full path will point to `file_without_extension`, which will cause problems.
A solution is to get the absolute path without following symbolic links:
* Option A: This can be done via `getAbsolutePath()` method. This, however, does not guarantee unique representation.
* Option B: Use `Path.toRealPath()` method with `LinkOption.NOFOLLOW_LINKS` option.
| 1.0 | Incorrect handling of symbolic link for the environment file - The full path to the environment file is obtained via `file.getCanonicalPath()`. In Linux this method follows symbolic links, which may produce the target file without `.work` extension, even if the original file has such an extension. For example, consider a valid `env.work` file that is first renamed into `file_without_extension` and then symlinked back to `env.work`:
mv env.work file_without_extension
ln -s file_without_extension env.work
If the symlink `env.work` is selected as the environment file, then the full path will point to `file_without_extension`, which will cause problems.
A solution is to get the absolute path without following symbolic links:
* Option A: This can be done via `getAbsolutePath()` method. This, however, does not guarantee unique representation.
* Option B: Use `Path.toRealPath()` method with `LinkOption.NOFOLLOW_LINKS` option.
| non_process | incorrect handling of symbolic link for the environment file the full path to the environment file is obtained via file getcanonicalpath in linux this method follows symbolic links which may produce the target file without work extension even if the original file has such an extension for example consider a valid env work file that is first renamed into file without extension and then symlinked back to env work mv env work file without extension ln s file without extension env work if the symlink env work is selected as the environment file then the full path will point to file without extension which will cause problems a solution is to get the absolute path without following symbolic links option a this can be done via getabsolutepath method this however does not guarantee unique representation option b use path torealpath method with linkoption nofollow links option | 0
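The same distinction demonstrated in Python for illustration only (Workcraft itself is Java): `os.path.abspath` keeps the symlink's own name, which is the behavior both options above aim for, while `os.path.realpath` resolves the link the way `getCanonicalPath()` does.

```python
# Illustration of the two behaviors using a throwaway symlink (Linux).
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "file_without_extension")
open(target, "w").close()
link = os.path.join(d, "env.work")
os.symlink(target, link)

print(os.path.abspath(link))   # .../env.work  (keeps the .work extension)
print(os.path.realpath(link))  # .../file_without_extension  (the problem)
```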
4,016 | 6,950,523,051 | IssuesEvent | 2017-12-06 11:05:52 | DevExpress/testcafe-hammerhead | https://api.github.com/repos/DevExpress/testcafe-hammerhead | closed | Proxy is not able to format code which contains the 'super' expression | AREA: server SYSTEM: resource processing TYPE: bug | Example:
```js
class x extends y {
method() {
return super[a];
}
}
```
Related to https://github.com/DevExpress/testcafe-hammerhead/issues/1389 | 1.0 | Proxy is not able to format code which contains the 'super' expression - Example:
```js
class x extends y {
method() {
return super[a];
}
}
```
Related to https://github.com/DevExpress/testcafe-hammerhead/issues/1389 | process | proxy is not able to format code which contains the super expression example js class x extends y method return super related to | 1
20,486 | 27,144,231,531 | IssuesEvent | 2023-02-16 18:36:04 | FTBTeam/FTB-App | https://api.github.com/repos/FTBTeam/FTB-App | closed | FTB App linux installer not working | bug app installer os/linux priority/urgent subprocess bug:functionality status: confirmed | ### What Operating System
Linux (Debian)
### App Version
FTBA_unix_202203251547-a5e4f8b1e5-release
### UI Version
_No response_
### Log Files
```
void@v0id:~/test$ ./FTBA_unix_202203251547-a5e4f8b1e5-release.sh
No suitable Java Virtual Machine could be found on your system.
Downloading JRE with wget ...
--2022-03-30 18:57:57-- https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.1%2B12/OpenJDK17U-jre_x64_linux_hotspot_17.0.1_12.tar.gz
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/372925194/43de5f79-f8c4-49dd-9e63-97446b1c8ebf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220330%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220330T165757Z&X-Amz-Expires=300&X-Amz-Signature=5aaa06df950974dc4feca227bb7989afc3f75ea6abb9cc8afd3e251cf01f29a2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=372925194&response-content-disposition=attachment%3B%20filename%3DOpenJDK17U-jre_x64_linux_hotspot_17.0.1_12.tar.gz&response-content-type=application%2Foctet-stream [following]
--2022-03-30 18:57:57-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/372925194/43de5f79-f8c4-49dd-9e63-97446b1c8ebf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220330%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220330T165757Z&X-Amz-Expires=300&X-Amz-Signature=5aaa06df950974dc4feca227bb7989afc3f75ea6abb9cc8afd3e251cf01f29a2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=372925194&response-content-disposition=attachment%3B%20filename%3DOpenJDK17U-jre_x64_linux_hotspot_17.0.1_12.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46151359 (44M) [application/octet-stream]
Saving to: ‘jre.tar.gz’
jre.tar.gz 100%[===================>] 44.01M 8.06MB/s in 5.5s
2022-03-30 18:58:03 (8.07 MB/s) - ‘jre.tar.gz’ saved [46151359/46151359]
Unpacking JRE ...
Starting Installer ...
./FTBA_unix_202203251547-a5e4f8b1e5-release.sh: 683: /home/void/test/FTBA_unix_202203251547-a5e4f8b1e5-release.sh.27635.dir/jre/bin/java: not found
void@v0id:~/test$
```
### Debug Code
-
### Describe the bug
The install script can't find something in the downloaded JRE and is therefore crashing.
### Steps to reproduce
1. Download install script for linux on https://www.feed-the-beast.com/app
2. Make bash script executable
3. Execute install script
### Expected behaviour
Installs FTB App
### Screenshots
_No response_
### Additional information
I managed to install it with some tweaks.
I looked at the tar.gz the script is downloading and it appears that the downloaded file doesn't have the expected structure anymore. The files are not at the root level of the tar.gz, but the script expects them to be there.

My solution was to download the tar.gz separately, extract it, and create a new one where the files are on the root level of the tar.gz. Then I just put this newly created tar.gz next to the install script and ran it again. After that everything worked fine.
| 1.0 | FTB App linux installer not working - ### What Operating System
Linux (Debian)
### App Version
FTBA_unix_202203251547-a5e4f8b1e5-release
### UI Version
_No response_
### Log Files
```
void@v0id:~/test$ ./FTBA_unix_202203251547-a5e4f8b1e5-release.sh
No suitable Java Virtual Machine could be found on your system.
Downloading JRE with wget ...
--2022-03-30 18:57:57-- https://github.com/adoptium/temurin17-binaries/releases/download/jdk-17.0.1%2B12/OpenJDK17U-jre_x64_linux_hotspot_17.0.1_12.tar.gz
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/372925194/43de5f79-f8c4-49dd-9e63-97446b1c8ebf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220330%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220330T165757Z&X-Amz-Expires=300&X-Amz-Signature=5aaa06df950974dc4feca227bb7989afc3f75ea6abb9cc8afd3e251cf01f29a2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=372925194&response-content-disposition=attachment%3B%20filename%3DOpenJDK17U-jre_x64_linux_hotspot_17.0.1_12.tar.gz&response-content-type=application%2Foctet-stream [following]
--2022-03-30 18:57:57-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/372925194/43de5f79-f8c4-49dd-9e63-97446b1c8ebf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220330%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220330T165757Z&X-Amz-Expires=300&X-Amz-Signature=5aaa06df950974dc4feca227bb7989afc3f75ea6abb9cc8afd3e251cf01f29a2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=372925194&response-content-disposition=attachment%3B%20filename%3DOpenJDK17U-jre_x64_linux_hotspot_17.0.1_12.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46151359 (44M) [application/octet-stream]
Saving to: ‘jre.tar.gz’
jre.tar.gz 100%[===================>] 44.01M 8.06MB/s in 5.5s
2022-03-30 18:58:03 (8.07 MB/s) - ‘jre.tar.gz’ saved [46151359/46151359]
Unpacking JRE ...
Starting Installer ...
./FTBA_unix_202203251547-a5e4f8b1e5-release.sh: 683: /home/void/test/FTBA_unix_202203251547-a5e4f8b1e5-release.sh.27635.dir/jre/bin/java: not found
void@v0id:~/test$
```
### Debug Code
-
### Describe the bug
The install script can't find something in the downloaded JRE and is therefore crashing.
### Steps to reproduce
1. Download install script for linux on https://www.feed-the-beast.com/app
2. Make bash script executable
3. Execute install script
### Expected behaviour
Installs FTB App
### Screenshots
_No response_
### Additional information
I managed to install it with some tweaks.
I looked at the tar.gz the script is downloading and it appears that the downloaded file doesn't have the expected structure anymore. The files are not at the root level of the tar.gz, but the script expects them to be there.

My solution was to download the tar.gz separately, extract it, and create a new one where the files are on the root level of the tar.gz. Then I just put this newly created tar.gz next to the install script and ran it again. After that everything worked fine.
| process | ftb app linux installer not working what operating system linux debian app version ftba unix release ui version no response log files void test ftba unix release sh no suitable java virtual machine could be found on your system downloading jre with wget resolving github com github com connecting to github com github com connected http request sent awaiting response found location resolving objects githubusercontent com objects githubusercontent com connecting to objects githubusercontent com objects githubusercontent com connected http request sent awaiting response ok length saving to ‘jre tar gz’ jre tar gz s in mb s ‘jre tar gz’ saved unpacking jre starting installer ftba unix release sh home void test ftba unix release sh dir jre bin java not found void test debug code describe the bug the install script can t find something in the downloaded jre and is therefore crashing steps to reproduce download install script for linux on make bash script excecutable excecute install script expected behaviour installs ftb app screenshots no response additional information i managed to install it with some tweaks i looked at the tar gz thescript is downloading and it appears that the downloaded file does t have the excpeted structure anymore the files are not at the root level of the tar gz but the script expects them to be there my solution was to download the tar gz seperatly extract it and create a new one where the files are on the root level of the tar gz then i just put this newly created tar gz next to the install script and ran it again after that erverything worked fine | 1 |
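The reporter's workaround above, scripted as a hedged sketch: rewrite the downloaded archive so every entry sits at the root level, which is the layout the installer expects. The file names here are placeholders.

```python
# Hypothetical sketch: strip the top-level directory from the JRE archive
# so its contents land at the root, as the installer expects.
import tarfile

with tarfile.open("jre-download.tar.gz") as src, \
        tarfile.open("jre.tar.gz", "w:gz") as dst:
    for member in src.getmembers():
        parts = member.name.split("/", 1)
        if len(parts) == 2:  # drop the leading jdk-.../ directory component
            member.name = parts[1]
            dst.addfile(member, src.extractfile(member))
```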
14,815 | 18,150,283,933 | IssuesEvent | 2021-09-26 06:27:48 | CATcher-org/CATcher | https://api.github.com/repos/CATcher-org/CATcher | closed | Automated release: 'Build' step fails as electron builder expects GH_TOKEN | aspect-Process | We're using the electron-builder tool to build the Electron application for all 3 major OS, in our automated release workflow.
Electron-builder has a default "publish" feature, where it tries to upload the built application to GitHub if it detects that we're building the application in a GitHub CI environment.
It requires the "GH_TOKEN" to be defined in the environment, in order to use this feature.
We're not actually using this publish feature, as we use another GitHub action to upload the applications.
But since we recently removed the "GH_TOKEN" from our deployment workflow file, electron-builder complains about the missing "GH_TOKEN" and stops building the application.
Possible solutions:
- Add back the "GH_TOKEN" definition into our deployment workflow
- Configure the `publish` option in `electron-builder.json` to `never` (relevant [docs](https://www.electron.build/configuration/publish))
Here are the logs from electron-builder from the failed "Build" step when I tried releasing v3.3.9 of CATcher:
```
• electron-builder version=22.2.0 os=5.8.0-1041-azure
• artifacts will be published if draft release exists reason=CI detected
• loaded configuration file=/home/runner/work/CATcher/CATcher/electron-builder.json
• description is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• author is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• packaging platform=linux arch=x64 electron=11.4.8 appOutDir=release/linux-unpacked
• downloading url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip size=76 MB parts=4
• downloaded url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip duration=1.022s
• building target=AppImage arch=x64 file=release/CATcher-3.3.9.AppImage
• application Linux category is set to default "Utility" reason=linux.category is not set and cannot map from macOS docs=https://www.electron.build/configuration/linux
• downloading url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z size=1.6 MB parts=1
• downloaded url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z duration=212ms
⨯ GitHub Personal Access Token is not set, neither programmatically, nor using env "GH_TOKEN"
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! CATcher@3.3.9 electron:linux: `npm run build:prod && electron-builder build --linux`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the CATcher@3.3.9 electron:linux script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/runner/.npm/_logs/2021-09-26T03_27_26_143Z-debug.log
Error: Process completed with exit code 1.
```
Here are the logs from the "Build" step when we released v3.3.8 (successful run):
```
• electron-builder version=22.2.0 os=5.8.0-1039-azure
• artifacts will be published if draft release exists reason=CI detected
• loaded configuration file=/home/runner/work/CATcher/CATcher/electron-builder.json
• description is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• author is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• packaging platform=linux arch=x64 electron=11.4.8 appOutDir=release/linux-unpacked
• downloading url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip size=76 MB parts=4
• downloaded url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip duration=1.338s
• building target=AppImage arch=x64 file=release/CATcher-3.3.8.AppImage
• application Linux category is set to default "Utility" reason=linux.category is not set and cannot map from macOS docs=https://www.electron.build/configuration/linux
• downloading url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z size=1.6 MB parts=1
• downloaded url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z duration=347ms
• publishing publisher=Github (owner: CATcher-org, project: CATcher, version: 3.3.8)
• uploading file=CATcher-3.3.8.AppImage provider=GitHub
• skipped publishing file=CATcher-3.3.8.AppImage reason=release doesn't exist and not created because "publish" is not "always" and build is not on tag tag=v3.3.8 version=3.3.8
• skipped publishing file=latest-linux.yml reason=release doesn't exist and not created because "publish" is not "always" and build is not on tag tag=v3.3.8 version=3.3.8
```
| 1.0 | Automated release: 'Build' step fails as electron builder expects GH_TOKEN - We're using the electron-builder tool to build the Electron application for all 3 major OS, in our automated release workflow.
Electron-builder has a default "publish" feature, where it tries to upload the built application to GitHub if it detects that we're building the application in a GitHub CI environment.
It requires the "GH_TOKEN" to be defined in the environment, in order to use this feature.
We're not actually using this publish feature, as we use another GitHub action to upload the applications.
But since we recently removed the "GH_TOKEN" from our deployment workflow file, electron-builder complains about the missing "GH_TOKEN" and stops building the application.
Possible solutions:
- Add back the "GH_TOKEN" definition into our deployment workflow
- Configure the `publish` option in `electron-builder.json` to `never` (relevant [docs](https://www.electron.build/configuration/publish))
Here are the logs from electron-builder from the failed "Build" step when I tried releasing v3.3.9 of CATcher:
```
• electron-builder version=22.2.0 os=5.8.0-1041-azure
• artifacts will be published if draft release exists reason=CI detected
• loaded configuration file=/home/runner/work/CATcher/CATcher/electron-builder.json
• description is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• author is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• packaging platform=linux arch=x64 electron=11.4.8 appOutDir=release/linux-unpacked
• downloading url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip size=76 MB parts=4
• downloaded url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip duration=1.022s
• building target=AppImage arch=x64 file=release/CATcher-3.3.9.AppImage
• application Linux category is set to default "Utility" reason=linux.category is not set and cannot map from macOS docs=https://www.electron.build/configuration/linux
• downloading url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z size=1.6 MB parts=1
• downloaded url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z duration=212ms
⨯ GitHub Personal Access Token is not set, neither programmatically, nor using env "GH_TOKEN"
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! CATcher@3.3.9 electron:linux: `npm run build:prod && electron-builder build --linux`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the CATcher@3.3.9 electron:linux script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/runner/.npm/_logs/2021-09-26T03_27_26_143Z-debug.log
Error: Process completed with exit code 1.
```
Here are the logs from the "Build" step when we released v3.3.8 (successful run):
```
• electron-builder version=22.2.0 os=5.8.0-1039-azure
• artifacts will be published if draft release exists reason=CI detected
• loaded configuration file=/home/runner/work/CATcher/CATcher/electron-builder.json
• description is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• author is missed in the package.json appPackageFile=/home/runner/work/CATcher/CATcher/package.json
• packaging platform=linux arch=x64 electron=11.4.8 appOutDir=release/linux-unpacked
• downloading url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip size=76 MB parts=4
• downloaded url=https://github.com/electron/electron/releases/download/v11.4.8/electron-v11.4.8-linux-x64.zip duration=1.338s
• building target=AppImage arch=x64 file=release/CATcher-3.3.8.AppImage
• application Linux category is set to default "Utility" reason=linux.category is not set and cannot map from macOS docs=https://www.electron.build/configuration/linux
• downloading url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z size=1.6 MB parts=1
• downloaded url=https://github.com/electron-userland/electron-builder-binaries/releases/download/appimage-12.0.1/appimage-12.0.1.7z duration=347ms
• publishing publisher=Github (owner: CATcher-org, project: CATcher, version: 3.3.8)
• uploading file=CATcher-3.3.8.AppImage provider=GitHub
• skipped publishing file=CATcher-3.3.8.AppImage reason=release doesn't exist and not created because "publish" is not "always" and build is not on tag tag=v3.3.8 version=3.3.8
• skipped publishing file=latest-linux.yml reason=release doesn't exist and not created because "publish" is not "always" and build is not on tag tag=v3.3.8 version=3.3.8
```
| process | automated release build step fails as electron builder expects gh token we re using the electron builder tool to build the electron application for all major os in our automated release workflow electron builder has a default publish feature where it tries to upload the built application to github if it detects that we re building the application in a github ci environment it requires the gh token to be defined in the environment in order to use this feature we re not actually using this publish feature as we use another github action to upload the applications but since we recently removed the gh token from our deployment workflow file electron builder complains about the missing gh token and stops building the application possible solutions add back the gh token definition into our deployment workflow configure the publish option in electron builder json to never relevant here are the logs from electron builder from the failed build step when i tried releasing of catcher • electron builder version os azure • artifacts will be published if draft release exists reason ci detected • loaded configuration file home runner work catcher catcher electron builder json • description is missed in the package json apppackagefile home runner work catcher catcher package json • author is missed in the package json apppackagefile home runner work catcher catcher package json • packaging platform linux arch electron appoutdir release linux unpacked • downloading url size mb parts • downloaded url duration • building target appimage arch file release catcher appimage • application linux category is set to default utility reason linux category is not set and cannot map from macos docs • downloading url size mb parts • downloaded url duration ⨯ github personal access token is not set neither programmatically nor using env gh token npm err code elifecycle npm err errno npm err catcher electron linux npm run build prod electron builder build linux npm err exit status npm err npm err failed at the catcher electron linux script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err home runner npm logs debug log error process completed with exit code here are the logs from the build step when we released successful run • electron builder version os azure • artifacts will be published if draft release exists reason ci detected • loaded configuration file home runner work catcher catcher electron builder json • description is missed in the package json apppackagefile home runner work catcher catcher package json • author is missed in the package json apppackagefile home runner work catcher catcher package json • packaging platform linux arch electron appoutdir release linux unpacked • downloading url size mb parts • downloaded url duration • building target appimage arch file release catcher appimage • application linux category is set to default utility reason linux category is not set and cannot map from macos docs • downloading url size mb parts • downloaded url duration • publishing publisher github owner catcher org project catcher version • uploading file catcher appimage provider github • skipped publishing file catcher appimage reason release doesn t exist and not created because publish is not always and build is not on tag tag version • skipped publishing file latest linux yml reason release doesn t exist and not created because publish is not always and build is not on tag tag version | 1 |
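The second fix listed in the record above, sketched as a small script for convenience (editing the file by hand works just as well, and the project itself is Node rather than Python): set the `publish` option in `electron-builder.json` to `never` so the builder stops expecting `GH_TOKEN`.

```python
# Sketch of the proposed configuration change to electron-builder.json.
import json

with open("electron-builder.json") as fh:
    config = json.load(fh)

config["publish"] = "never"  # value taken from the report's proposed fix

with open("electron-builder.json", "w") as fh:
    json.dump(config, fh, indent=2)
```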
684,295 | 23,413,715,820 | IssuesEvent | 2022-08-12 20:43:40 | googleapis/nodejs-bigtable | https://api.github.com/repos/googleapis/nodejs-bigtable | closed | Cluster: "after each" hook for "should create an instance with clusters for manual scaling" failed | type: bug priority: p1 :rotating_light: api: bigtable flakybot: issue | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 75c1a301cd3ec91c7b251b384307687d081525b9
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/60238819-33a6-4b7a-ac00-3045e8231bee), [Sponge](http://sponge2/60238819-33a6-4b7a-ac00-3045e8231bee)
status: failed
<details><summary>Test output</summary><br><pre>8 RESOURCE_EXHAUSTED: Quota exceeded for quota metric 'Instance and cluster write requests' and limit 'Instance and cluster write requests per day' of service 'bigtableadmin.googleapis.com' for consumer 'project_number:1046198160504'.
Error: 8 RESOURCE_EXHAUSTED: Quota exceeded for quota metric 'Instance and cluster write requests' and limit 'Instance and cluster write requests per day' of service 'bigtableadmin.googleapis.com' for consumer 'project_number:1046198160504'.
at Object.callErrorFromStatus (node_modules/@grpc/grpc-js/build/src/call.js:31:26)
-> /workspace/node_modules/@grpc/grpc-js/src/call.ts:81:24
at Object.onReceiveStatus (node_modules/@grpc/grpc-js/build/src/client.js:189:52)
-> /workspace/node_modules/@grpc/grpc-js/src/client.ts:351:36
at Object.onReceiveStatus (node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
-> /workspace/node_modules/@grpc/grpc-js/src/client-interceptors.ts:462:34
at Object.onReceiveStatus (node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
-> /workspace/node_modules/@grpc/grpc-js/src/client-interceptors.ts:424:48
at /workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
-> /workspace/node_modules/@grpc/grpc-js/src/call-stream.ts:330:24
at processTicksAndRejections (internal/process/task_queues.js:79:11)</pre></details> | 1.0 | Cluster: "after each" hook for "should create an instance with clusters for manual scaling" failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 75c1a301cd3ec91c7b251b384307687d081525b9
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/60238819-33a6-4b7a-ac00-3045e8231bee), [Sponge](http://sponge2/60238819-33a6-4b7a-ac00-3045e8231bee)
status: failed
<details><summary>Test output</summary><br><pre>8 RESOURCE_EXHAUSTED: Quota exceeded for quota metric 'Instance and cluster write requests' and limit 'Instance and cluster write requests per day' of service 'bigtableadmin.googleapis.com' for consumer 'project_number:1046198160504'.
Error: 8 RESOURCE_EXHAUSTED: Quota exceeded for quota metric 'Instance and cluster write requests' and limit 'Instance and cluster write requests per day' of service 'bigtableadmin.googleapis.com' for consumer 'project_number:1046198160504'.
at Object.callErrorFromStatus (node_modules/@grpc/grpc-js/build/src/call.js:31:26)
-> /workspace/node_modules/@grpc/grpc-js/src/call.ts:81:24
at Object.onReceiveStatus (node_modules/@grpc/grpc-js/build/src/client.js:189:52)
-> /workspace/node_modules/@grpc/grpc-js/src/client.ts:351:36
at Object.onReceiveStatus (node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
-> /workspace/node_modules/@grpc/grpc-js/src/client-interceptors.ts:462:34
at Object.onReceiveStatus (node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
-> /workspace/node_modules/@grpc/grpc-js/src/client-interceptors.ts:424:48
at /workspace/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
-> /workspace/node_modules/@grpc/grpc-js/src/call-stream.ts:330:24
at processTicksAndRejections (internal/process/task_queues.js:79:11)</pre></details> | non_process | cluster after each hook for should create an instance with clusters for manual scaling failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output resource exhausted quota exceeded for quota metric instance and cluster write requests and limit instance and cluster write requests per day of service bigtableadmin googleapis com for consumer project number error resource exhausted quota exceeded for quota metric instance and cluster write requests and limit instance and cluster write requests per day of service bigtableadmin googleapis com for consumer project number at object callerrorfromstatus node modules grpc grpc js build src call js workspace node modules grpc grpc js src call ts at object onreceivestatus node modules grpc grpc js build src client js workspace node modules grpc grpc js src client ts at object onreceivestatus node modules grpc grpc js build src client interceptors js workspace node modules grpc grpc js src client interceptors ts at object onreceivestatus node modules grpc grpc js build src client interceptors js workspace node modules grpc grpc js src client interceptors ts at workspace node modules grpc grpc js build src call stream js workspace node modules grpc grpc js src call stream ts at processticksandrejections internal process task queues js | 0 |
5,307 | 8,125,238,906 | IssuesEvent | 2018-08-16 20:16:01 | MetaMask/metamask-extension | https://api.github.com/repos/MetaMask/metamask-extension | closed | Create a CLA bot | L09-process P3-soon | https://github.com/clabot/clabot
> The purpose of a CLA is to ensure that the guardian of a project's outputs has the necessary ownership or grants of rights over all contributions to allow them to distribute under the chosen licence. Wikipedia | 1.0 | Create a CLA bot - https://github.com/clabot/clabot
> The purpose of a CLA is to ensure that the guardian of a project's outputs has the necessary ownership or grants of rights over all contributions to allow them to distribute under the chosen licence. Wikipedia | process | create a cla bot the purpose of a cla is to ensure that the guardian of a project s outputs has the necessary ownership or grants of rights over all contributions to allow them to distribute under the chosen licence wikipedia | 1 |
872 | 3,330,812,446 | IssuesEvent | 2015-11-11 12:50:33 | DevExpress/testcafe-hammerhead | https://api.github.com/repos/DevExpress/testcafe-hammerhead | closed | Add support for sites with 'Content-Security-Policy' | AREA: client AREA: server SYSTEM: resource processing TYPE: enhancement | The 'Content-Security-Policy' header allows a site to restrict resource loading (scripts, styles).
After proxying such sites we can get 'Refused to load the script' errors.
Example - https://www.dropbox.com/
Details:
http://www.w3.org/TR/CSP1/
https://en.wikipedia.org/wiki/Content_Security_Policy
http://content-security-policy.com/
http://www.html5rocks.com/en/tutorials/security/content-security-policy/ | 1.0 | Add support for sites with 'Content-Security-Policy' - The 'Content-Security-Policy' header allows a site to restrict resource loading (scripts, styles).
After proxying such sites we can get 'Refused to load the script' errors.
Example - https://www.dropbox.com/
Details:
http://www.w3.org/TR/CSP1/
https://en.wikipedia.org/wiki/Content_Security_Policy
http://content-security-policy.com/
http://www.html5rocks.com/en/tutorials/security/content-security-policy/ | process | add support for sites with content security policy the content security policy header allows a site to restrict resource loading scripts styles after proxying such sites we can get refused to load the script errors example details | 1
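One common approach for a rewriting proxy, sketched below as a generic illustration; this is not necessarily how hammerhead implements it: drop the CSP headers from proxied responses so injected scripts and styles are not blocked.

```python
# Generic sketch: filter CSP headers out of a proxied response.
CSP_HEADERS = {
    "content-security-policy",
    "content-security-policy-report-only",
}

def strip_csp(headers):
    """headers: list of (name, value) pairs from the destination server."""
    return [(name, value) for name, value in headers
            if name.lower() not in CSP_HEADERS]

assert strip_csp([("Content-Security-Policy", "script-src 'self'"),
                  ("Server", "nginx")]) == [("Server", "nginx")]
```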
91,606 | 10,723,890,882 | IssuesEvent | 2019-10-27 21:51:32 | FreshRSS/FreshRSS | https://api.github.com/repos/FreshRSS/FreshRSS | closed | "subscribe" bookmarklet not working for me | Documentation :books: | Hello,
the "subscribe" bookmarklet does not work for me (and never has, as long as I can remember)
It always fails with the following error:
```
Error 404 - Not found
You are looking for a page which doesn’t exist
← Go back to your RSS feeds
```
Clicking on the "go back" link, I get redirected to a login form (URL: http://<my.site.com>/FreshRSS/p/i/?c=auth&a=login) which fails after entering a valid user password:
```
Error 403 - Forbidden
You don’t have permission to access this page
← Go back to your RSS feeds
``` | 1.0 | "subscribe" bookmarklet not working for me - Hello,
the "subscribe" bookmarklet does not work for me (and never has, as long as I can remember)
It always fails with the following error:
```
Error 404 - Not found
You are looking for a page which doesn’t exist
← Go back to your RSS feeds
```
Clicking on the "go back" link I get redirected to a login form (URL: http://<my.site.com>/FreshRSS/p/i/?c=auth&a=login), which fails after entering a valid user password:
```
Error 403 - Forbidden
You don’t have permission to access this page
← Go back to your RSS feeds
``` | non_process | subscribe bookmarklet not working for me hello the subscribe bookmarklet does not work for me and never has as long as i can remember it always fails with the following error error not found you are looking for a page which doesn’t exist ← go back to your rss feeds clicking on the go back link i get redirected to a login form url which fails after entering a valid user password error forbidden you don’t have permission to access this page ← go back to your rss feeds | 0
232,519 | 18,885,306,577 | IssuesEvent | 2021-11-15 06:57:36 | CeronMayo/Twitter_GrupoAzul | https://api.github.com/repos/CeronMayo/Twitter_GrupoAzul | closed | [Interaction with a third party] Give a like - Fail | bug TestQuality Medium | #### Steps to Reproduce:
| Step | Action | Expected | Status |
| -------- | -------- | -------- | -------- |
| 1| Find the "INICIO" (Home) button to see tweets from third parties</p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27866.png" alt="" />| | Pass |
| 2| Choose a tweet to test</p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27868.png" alt="" />| | Pass |
| 3| Click the heart or the tweet's counter number</p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27894.png" alt="" />| | Pass |
| 4| After the click, the like is highlighted and the counter increases </p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27896.png" alt="" />| | Fail |
| 1.0 | [Interaction with a third party] Give a like - Fail - #### Steps to Reproduce:
| Step | Action | Expected | Status |
| -------- | -------- | -------- | -------- |
| 1| Find the "INICIO" (Home) button to see tweets from third parties</p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27866.png" alt="" />| | Pass |
| 2| Choose a tweet to test</p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27868.png" alt="" />| | Pass |
| 3| Click the heart or the tweet's counter number</p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27894.png" alt="" />| | Pass |
| 4| After the click, the like is highlighted and the counter increases </p><br><p><img src="https://bitmodern-testquality-server-storage.s3.us-west-2.amazonaws.com/attachment_Test_27896.png" alt="" />| | Fail |
| non_process | give a like fail steps to reproduce step action expected status find the inicio home button to see tweets from third parties pass choose a tweet to test pass click the heart or the tweet s counter number pass after the click the like is highlighted and the counter increases fail | 0
2,573 | 5,329,535,560 | IssuesEvent | 2017-02-15 15:06:14 | paulkornikov/Pragonas | https://api.github.com/repos/paulkornikov/Pragonas | opened | Add tax-deductible indicator to operation, installment and contract | a-new feature processus workload V | To then be able to produce the XL report of tax-deductible operations. | 1.0 | Add tax-deductible indicator to operation, installment and contract - To then be able to produce the XL report of tax-deductible operations. | process | add tax deductible indicator to operation installment and contract to then be able to produce the xl report of tax deductible operations | 1
2,675 | 5,495,375,615 | IssuesEvent | 2017-03-15 03:58:59 | ngnclht1102/VietPhuongCo | https://api.github.com/repos/ngnclht1102/VietPhuongCo | closed | Admin: add news does not work | In Process | - [ ] Admin: add news does not work
- [ ] Admin: add news should show product type for choosing | 1.0 | Admin: add news does not work - - [ ] Admin: add news does not work
- [ ] Admin: add news should show product type for choosing | process | admin add news does not work admin add news does not work admin add news should show product type for choosing | 1
188,914 | 22,046,947,670 | IssuesEvent | 2022-05-30 03:35:35 | nanopathi/linux-4.19.72_CVE-2020-14381 | https://api.github.com/repos/nanopathi/linux-4.19.72_CVE-2020-14381 | closed | CVE-2020-36322 (Medium) detected in multiple libraries - autoclosed | security vulnerability | ## CVE-2020-36322 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the FUSE filesystem implementation in the Linux kernel before 5.10.6, aka CID-5d069dbe8aaf. fuse_do_getattr() calls make_bad_inode() in inappropriate situations, causing a system crash. NOTE: the original fix for this vulnerability was incomplete, and its incompleteness is tracked as CVE-2021-28950.
<p>Publish Date: 2021-04-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36322>CVE-2020-36322</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-36322">https://www.linuxkernelcves.com/cves/CVE-2020-36322</a></p>
<p>Release Date: 2021-04-14</p>
<p>Fix Resolution: v5.4.88,v5.10.6,v5.11-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36322 (Medium) detected in multiple libraries - autoclosed - ## CVE-2020-36322 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b>, <b>linuxlinux-4.19.237</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the FUSE filesystem implementation in the Linux kernel before 5.10.6, aka CID-5d069dbe8aaf. fuse_do_getattr() calls make_bad_inode() in inappropriate situations, causing a system crash. NOTE: the original fix for this vulnerability was incomplete, and its incompleteness is tracked as CVE-2021-28950.
<p>Publish Date: 2021-04-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36322>CVE-2020-36322</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-36322">https://www.linuxkernelcves.com/cves/CVE-2020-36322</a></p>
<p>Release Date: 2021-04-14</p>
<p>Fix Resolution: v5.4.88,v5.10.6,v5.11-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries linuxlinux linuxlinux linuxlinux linuxlinux linuxlinux linuxlinux vulnerability details an issue was discovered in the fuse filesystem implementation in the linux kernel before aka cid fuse do getattr calls make bad inode in inappropriate situations causing a system crash note the original fix for this vulnerability was incomplete and its incompleteness is tracked as cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
64,691 | 14,677,222,778 | IssuesEvent | 2020-12-30 22:34:13 | GooseWSS/ksa | https://api.github.com/repos/GooseWSS/ksa | opened | CVE-2018-1199 (Medium) detected in spring-core-3.1.1.RELEASE.jar | security vulnerability | ## CVE-2018-1199 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-core-3.1.1.RELEASE.jar</b></p></summary>
<p>Spring Framework Parent</p>
<p>Path to dependency file: ksa/ksa-core/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,ksa/ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-core-3.1.1.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/GooseWSS/ksa/commit/92b4fc1a7755c8d454d53e9ae803447b86a9521a">92b4fc1a7755c8d454d53e9ae803447b86a9521a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security (Spring Security 4.1.x before 4.1.5, 4.2.x before 4.2.4, and 5.0.x before 5.0.1; and Spring Framework 4.3.x before 4.3.14 and 5.0.x before 5.0.3) does not consider URL path parameters when processing security constraints. By adding a URL path parameter with special encodings, an attacker may be able to bypass a security constraint. The root cause of this issue is a lack of clarity regarding the handling of path parameters in the Servlet Specification. Some Servlet containers include path parameters in the value returned for getPathInfo() and some do not. Spring Security uses the value returned by getPathInfo() as part of the process of mapping requests to security constraints. In this particular attack, different character encodings used in path parameters allows secured Spring MVC static resource URLs to be bypassed.
<p>Publish Date: 2018-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1199>CVE-2018-1199</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1199">https://nvd.nist.gov/vuln/detail/CVE-2018-1199</a></p>
<p>Release Date: 2018-03-16</p>
<p>Fix Resolution: org.springframework.security:spring-security-web:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework.security:spring-security-config:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework:spring-core:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,4.3.14.RELEASE</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-core","packageVersion":"3.1.1.RELEASE","isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-core:3.1.1.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.security:spring-security-web:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework.security:spring-security-config:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework:spring-core:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,4.3.14.RELEASE"}],"vulnerabilityIdentifier":"CVE-2018-1199","vulnerabilityDetails":"Spring Security (Spring Security 4.1.x before 4.1.5, 4.2.x before 4.2.4, and 5.0.x before 5.0.1; and Spring Framework 4.3.x before 4.3.14 and 5.0.x before 5.0.3) does not consider URL path parameters when processing security constraints. By adding a URL path parameter with special encodings, an attacker may be able to bypass a security constraint. The root cause of this issue is a lack of clarity regarding the handling of path parameters in the Servlet Specification. Some Servlet containers include path parameters in the value returned for getPathInfo() and some do not. Spring Security uses the value returned by getPathInfo() as part of the process of mapping requests to security constraints. In this particular attack, different character encodings used in path parameters allows secured Spring MVC static resource URLs to be bypassed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1199","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-1199 (Medium) detected in spring-core-3.1.1.RELEASE.jar - ## CVE-2018-1199 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-core-3.1.1.RELEASE.jar</b></p></summary>
<p>Spring Framework Parent</p>
<p>Path to dependency file: ksa/ksa-core/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,ksa/ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-core/3.1.1.RELEASE/spring-core-3.1.1.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-core-3.1.1.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/GooseWSS/ksa/commit/92b4fc1a7755c8d454d53e9ae803447b86a9521a">92b4fc1a7755c8d454d53e9ae803447b86a9521a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security (Spring Security 4.1.x before 4.1.5, 4.2.x before 4.2.4, and 5.0.x before 5.0.1; and Spring Framework 4.3.x before 4.3.14 and 5.0.x before 5.0.3) does not consider URL path parameters when processing security constraints. By adding a URL path parameter with special encodings, an attacker may be able to bypass a security constraint. The root cause of this issue is a lack of clarity regarding the handling of path parameters in the Servlet Specification. Some Servlet containers include path parameters in the value returned for getPathInfo() and some do not. Spring Security uses the value returned by getPathInfo() as part of the process of mapping requests to security constraints. In this particular attack, different character encodings used in path parameters allows secured Spring MVC static resource URLs to be bypassed.
<p>Publish Date: 2018-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1199>CVE-2018-1199</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1199">https://nvd.nist.gov/vuln/detail/CVE-2018-1199</a></p>
<p>Release Date: 2018-03-16</p>
<p>Fix Resolution: org.springframework.security:spring-security-web:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework.security:spring-security-config:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework:spring-core:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,4.3.14.RELEASE</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-core","packageVersion":"3.1.1.RELEASE","isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-core:3.1.1.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.security:spring-security-web:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework.security:spring-security-config:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,org.springframework:spring-core:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE,5.0.3.RELEASE,4.3.14.RELEASE"}],"vulnerabilityIdentifier":"CVE-2018-1199","vulnerabilityDetails":"Spring Security (Spring Security 4.1.x before 4.1.5, 4.2.x before 4.2.4, and 5.0.x before 5.0.1; and Spring Framework 4.3.x before 4.3.14 and 5.0.x before 5.0.3) does not consider URL path parameters when processing security constraints. By adding a URL path parameter with special encodings, an attacker may be able to bypass a security constraint. The root cause of this issue is a lack of clarity regarding the handling of path parameters in the Servlet Specification. Some Servlet containers include path parameters in the value returned for getPathInfo() and some do not. Spring Security uses the value returned by getPathInfo() as part of the process of mapping requests to security constraints. In this particular attack, different character encodings used in path parameters allows secured Spring MVC static resource URLs to be bypassed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1199","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in spring core release jar cve medium severity vulnerability vulnerable library spring core release jar spring framework parent path to dependency file ksa ksa core pom xml path to vulnerable library canner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core 
release spring core release jar home wss scanner repository org springframework spring core release spring core release jar ksa ksa web root ksa web target root web inf lib spring core release jar home wss scanner repository org springframework spring core release spring core release jar home wss scanner repository org springframework spring core release spring core release jar dependency hierarchy x spring core release jar vulnerable library found in head commit a href found in base branch master vulnerability details spring security spring security x before x before and x before and spring framework x before and x before does not consider url path parameters when processing security constraints by adding a url path parameter with special encodings an attacker may be able to bypass a security constraint the root cause of this issue is a lack of clarity regarding the handling of path parameters in the servlet specification some servlet containers include path parameters in the value returned for getpathinfo and some do not spring security uses the value returned by getpathinfo as part of the process of mapping requests to security constraints in this particular attack different character encodings used in path parameters allows secured spring mvc static resource urls to be bypassed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security web release release release release org springframework security spring security config release release release release org springframework spring core release release release release release rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails spring security spring security x before x before and x before and spring framework x before and x before does not consider url path parameters when processing security constraints by adding a url path parameter with special encodings an attacker may be able to bypass a security constraint the root cause of this issue is a lack of clarity regarding the handling of path parameters in the servlet specification some servlet containers include path parameters in the value returned for getpathinfo and some do not spring security uses the value returned by getpathinfo as part of the process of mapping requests to security constraints in this particular attack different character encodings used in path parameters allows secured spring mvc static resource urls to be bypassed vulnerabilityurl | 0 |
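The CVE-2018-1199 record above describes a mismatch: the security matcher sees the raw request path including ";" path parameters, while resource resolution strips them, so a rule for `/admin` can be bypassed by `/admin;a=b`. A toy Python model of that mechanism (illustrative only, not Spring code):

```python
# Toy model of the CVE-2018-1199 mechanism, not Spring code: the security
# check matches the raw request path, while resource resolution strips
# ";" path parameters, so the two layers disagree about "/admin;a=b".

def strip_path_params(path: str) -> str:
    """Drop RFC 3986 path parameters per segment: '/a;x=1/b' -> '/a/b'."""
    return "/".join(seg.split(";")[0] for seg in path.split("/"))

def naive_is_protected(path: str) -> bool:
    # Flaw: the rule is matched against the raw, unnormalized path.
    return path == "/admin" or path.startswith("/admin/")

request = "/admin;jsessionid=123"
print(naive_is_protected(request))                     # False: check bypassed
print(strip_path_params(request))                      # '/admin': still served
print(naive_is_protected(strip_path_params(request)))  # True: normalize first
```

The fix the advisory implies is the last line: normalize the path the same way the dispatcher does before matching it against security constraints.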
3,735 | 4,677,749,084 | IssuesEvent | 2016-10-07 15:59:33 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | Verify updates of apps and core | enhancement research security | Currently downloaded apps and core are not verified with a public key cryptography system.
In order to ensure the security of automatic updates we should sign our updates from within an air-gapped system. Apps should be signed by the maintainer. | True | Verify updates of apps and core - Currently downloaded apps and core are not verified with a public key cryptography system.
In order to ensure the security of automatic updates we should sign our updates from within an air-gapped system. Apps should be signed by the maintainer. | non_process | verify updates of apps and core currently downloaded apps and core are not verified with a public key cryptography system in order to ensure the security of automatic updates we should sign our updates from within an air gapped system apps should be signed by the maintainer | 0 |
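The ownCloud record proposes signing updates with public-key cryptography; the core building block on the client side is signature verification of the downloaded artifact. A minimal sketch using the third-party `cryptography` package, assuming an RSA/SHA-256 scheme and a PEM public key bundled with the installer (the actual ownCloud scheme may differ):

```python
# Minimal verification sketch for a signed update, assuming RSA/SHA-256
# and a PEM public key bundled with the installer; the real ownCloud
# scheme may differ. Requires the third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_update(archive: bytes, signature: bytes, pem_public_key: bytes) -> bool:
    """Return True only if the signature over the archive checks out."""
    key = serialization.load_pem_public_key(pem_public_key)
    try:
        key.verify(signature, archive, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```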
142,574 | 13,034,486,680 | IssuesEvent | 2020-07-28 08:47:21 | IBM/ibm-spectrum-scale-csi | https://api.github.com/repos/IBM/ibm-spectrum-scale-csi | closed | PVC fails when using uid and gid in storage class | Component: Documentation Phase: Field Severity: 3 Target: Driver Type: Documentation | For my storage class I used `uid` and `gid` params as stated in the Knowledge Center.
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-remotefs-gid
provisioner: spectrumscale.csi.ibm.com
parameters:
volBackendFs: "remote_fs0_1m"
clusterId: "4473793006880872527"
uid: "3000"
gid: "3000"
reclaimPolicy: Delete
```
However, when I create a PVC with this class, it fails with an error `[Remote call completed with error [400 Bad Request]]`. I can see these messages in the logs:
```
I0129 20:29:09.676230 1 rest_v2.go:64] rest_v2 isRequestAccepted. url: https://10.10.1.52:443/scalemgmt/v2/filesystems/fs0_1m/filesets, response: {{202 The request was accepted for processing.} [{{[] [] 0 [] []} {POST /scalemgmt/v2/filesystems/fs0_1m/filesets map[comment:Fileset created by IBM Container Storage Interface driver filesetName:pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f inodeSpace:new owner:3000:3000]} 3000000000126 2020-01-29 21:29:09,672 N/A RUNNING}]}
I0129 20:29:09.676299 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 202
I0129 20:29:09.676313 1 rest_v2.go:77] rest_v2 waitForJobCompletion. jobID: 3000000000126, statusCode: 202
I0129 20:29:09.676324 1 rest_v2.go:55] rest_v2 checkAsynchronousJob. statusCode: 202
I0129 20:29:09.676336 1 http_utils.go:60] http_utils FormatURL. url: https://10.10.1.52:443/
I0129 20:29:09.676351 1 rest_v2.go:90] rest_v2 AsyncJobCompletion. jobURL: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:
I0129 20:29:09.676363 1 rest_v2.go:534] rest_v2 doHTTP. endpoint: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, method: GET, param: <nil>
I0129 20:29:09.676378 1 http_utils.go:74] http_utils HttpExecuteUserAuth. type: GET, url: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, user: csiadmin
I0129 20:29:09.704182 1 http_utils.go:44] http_utils UnmarshalResponse. response: &{0xc0001cc4c0 {0 0} false <nil> 0x6e1a80 0x6e1a10}
I0129 20:29:09.704308 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 200
I0129 20:29:11.704552 1 rest_v2.go:534] rest_v2 doHTTP. endpoint: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, method: GET, param: <nil>
I0129 20:29:11.704595 1 http_utils.go:74] http_utils HttpExecuteUserAuth. type: GET, url: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, user: csiadmin
I0129 20:29:11.736244 1 http_utils.go:44] http_utils UnmarshalResponse. response: &{0xc0001cc640 {0 0} false <nil> 0x6e1a80 0x6e1a10}
I0129 20:29:11.736455 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 200
E0129 20:29:11.736475 1 rest_v2.go:111] Async Job failed: {{200 The request finished successfully.} [{{[] [] 9 [EFSSP0010C CLI parser: The object "3000" specified for "user" does not exist.] []} {POST /scalemgmt/v2/filesystems/fs0_1m/filesets map[comment:Fileset created by IBM Container Storage Interface driver filesetName:pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f inodeSpace:new owner:3000:3000]} 3000000000126 2020-01-29 21:29:09,672 2020-01-29 21:29:09,705 FAILED}]}
E0129 20:29:11.736536 1 rest_v2.go:295] Unable to create fileset pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f: [EFSSP0010C CLI parser: The object "3000" specified for "user" does not exist.]
I0129 20:29:11.736553 1 rest_v2.go:390] rest_v2 ListFileset. filesystem: fs0_1m, fileset: pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f
I0129 20:29:11.736566 1 http_utils.go:60] http_utils FormatURL. url: https://10.10.1.52:443/
I0129 20:29:11.736582 1 rest_v2.go:534] rest_v2 doHTTP. endpoint: https://10.10.1.52:443/scalemgmt/v2/filesystems/fs0_1m/filesets/pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f, method: GET, param: <nil>
I0129 20:29:11.736594 1 http_utils.go:74] http_utils HttpExecuteUserAuth. type: GET, url: https://10.10.1.52:443/scalemgmt/v2/filesystems/fs0_1m/filesets/pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f, user: csiadmin
I0129 20:29:11.763780 1 http_utils.go:44] http_utils UnmarshalResponse. response: &{0xc00025aa40 {0 0} false <nil> 0x6e1a80 0x6e1a10}
I0129 20:29:11.763959 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 400
E0129 20:29:11.763995 1 rest_v2.go:397] Error in list fileset request: Remote call completed with error [400 Bad Request]
E0129 20:29:11.764019 1 utils.go:55] GRPC error: rpc error: code = Internal desc = Unable to create fileset [pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f] in FS [fs0_1m]. Error [Remote call completed with error [400 Bad Request]]
```
Please provide a fix for this problem.
| 2.0 | PVC fails when using uid and gid in storage class - For my storage class I used `uid` and `gid` params as stated in the Knowledge Center.
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-remotefs-gid
provisioner: spectrumscale.csi.ibm.com
parameters:
volBackendFs: "remote_fs0_1m"
clusterId: "4473793006880872527"
uid: "3000"
gid: "3000"
reclaimPolicy: Delete
```
However, when I create a PVC with this class, it fails with an error `[Remote call completed with error [400 Bad Request]]`. I can see these messages in the logs:
```
I0129 20:29:09.676230 1 rest_v2.go:64] rest_v2 isRequestAccepted. url: https://10.10.1.52:443/scalemgmt/v2/filesystems/fs0_1m/filesets, response: {{202 The request was accepted for processing.} [{{[] [] 0 [] []} {POST /scalemgmt/v2/filesystems/fs0_1m/filesets map[comment:Fileset created by IBM Container Storage Interface driver filesetName:pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f inodeSpace:new owner:3000:3000]} 3000000000126 2020-01-29 21:29:09,672 N/A RUNNING}]}
I0129 20:29:09.676299 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 202
I0129 20:29:09.676313 1 rest_v2.go:77] rest_v2 waitForJobCompletion. jobID: 3000000000126, statusCode: 202
I0129 20:29:09.676324 1 rest_v2.go:55] rest_v2 checkAsynchronousJob. statusCode: 202
I0129 20:29:09.676336 1 http_utils.go:60] http_utils FormatURL. url: https://10.10.1.52:443/
I0129 20:29:09.676351 1 rest_v2.go:90] rest_v2 AsyncJobCompletion. jobURL: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:
I0129 20:29:09.676363 1 rest_v2.go:534] rest_v2 doHTTP. endpoint: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, method: GET, param: <nil>
I0129 20:29:09.676378 1 http_utils.go:74] http_utils HttpExecuteUserAuth. type: GET, url: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, user: csiadmin
I0129 20:29:09.704182 1 http_utils.go:44] http_utils UnmarshalResponse. response: &{0xc0001cc4c0 {0 0} false <nil> 0x6e1a80 0x6e1a10}
I0129 20:29:09.704308 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 200
I0129 20:29:11.704552 1 rest_v2.go:534] rest_v2 doHTTP. endpoint: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, method: GET, param: <nil>
I0129 20:29:11.704595 1 http_utils.go:74] http_utils HttpExecuteUserAuth. type: GET, url: https://10.10.1.52:443/scalemgmt/v2/jobs/3000000000126?fields=:all:, user: csiadmin
I0129 20:29:11.736244 1 http_utils.go:44] http_utils UnmarshalResponse. response: &{0xc0001cc640 {0 0} false <nil> 0x6e1a80 0x6e1a10}
I0129 20:29:11.736455 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 200
E0129 20:29:11.736475 1 rest_v2.go:111] Async Job failed: {{200 The request finished successfully.} [{{[] [] 9 [EFSSP0010C CLI parser: The object "3000" specified for "user" does not exist.] []} {POST /scalemgmt/v2/filesystems/fs0_1m/filesets map[comment:Fileset created by IBM Container Storage Interface driver filesetName:pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f inodeSpace:new owner:3000:3000]} 3000000000126 2020-01-29 21:29:09,672 2020-01-29 21:29:09,705 FAILED}]}
E0129 20:29:11.736536 1 rest_v2.go:295] Unable to create fileset pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f: [EFSSP0010C CLI parser: The object "3000" specified for "user" does not exist.]
I0129 20:29:11.736553 1 rest_v2.go:390] rest_v2 ListFileset. filesystem: fs0_1m, fileset: pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f
I0129 20:29:11.736566 1 http_utils.go:60] http_utils FormatURL. url: https://10.10.1.52:443/
I0129 20:29:11.736582 1 rest_v2.go:534] rest_v2 doHTTP. endpoint: https://10.10.1.52:443/scalemgmt/v2/filesystems/fs0_1m/filesets/pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f, method: GET, param: <nil>
I0129 20:29:11.736594 1 http_utils.go:74] http_utils HttpExecuteUserAuth. type: GET, url: https://10.10.1.52:443/scalemgmt/v2/filesystems/fs0_1m/filesets/pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f, user: csiadmin
I0129 20:29:11.763780 1 http_utils.go:44] http_utils UnmarshalResponse. response: &{0xc00025aa40 {0 0} false <nil> 0x6e1a80 0x6e1a10}
I0129 20:29:11.763959 1 rest_v2.go:44] rest_v2 isStatusOK. statusCode: 400
E0129 20:29:11.763995 1 rest_v2.go:397] Error in list fileset request: Remote call completed with error [400 Bad Request]
E0129 20:29:11.764019 1 utils.go:55] GRPC error: rpc error: code = Internal desc = Unable to create fileset [pvc-00e5a5f3-42d6-11ea-b63b-7ed30a6ab53f] in FS [fs0_1m]. Error [Remote call completed with error [400 Bad Request]]
```
Please provide a fix for this problem.
| non_process | pvc fails when using uid and gid in storage class for my storage class i used uid and gid params as stated in the knowledge center apiversion storage io kind storageclass metadata name ibm spectrum scale csi remotefs gid provisioner spectrumscale csi ibm com parameters volbackendfs remote clusterid uid gid reclaimpolicy delete however when i create a pvc with this class it fails with an error i can see these messages in the logs rest go rest isrequestaccepted url response the request was accepted for processing post scalemgmt filesystems filesets map n a running rest go rest isstatusok statuscode rest go rest waitforjobcompletion jobid statuscode rest go rest checkasynchronousjob statuscode http utils go http utils formaturl url rest go rest asyncjobcompletion joburl rest go rest dohttp endpoint method get param http utils go http utils httpexecuteuserauth type get url user csiadmin http utils go http utils unmarshalresponse response false rest go rest isstatusok statuscode rest go rest dohttp endpoint method get param http utils go http utils httpexecuteuserauth type get url user csiadmin http utils go http utils unmarshalresponse response false rest go rest isstatusok statuscode rest go async job failed the request finished successfully post scalemgmt filesystems filesets map failed rest go unable to create fileset pvc rest go rest listfileset filesystem fileset pvc http utils go http utils formaturl url rest go rest dohttp endpoint method get param http utils go http utils httpexecuteuserauth type get url user csiadmin http utils go http utils unmarshalresponse response false rest go rest isstatusok statuscode rest go error in list fileset request remote call completed with error utils go grpc error rpc error code internal desc unable to create fileset in fs error please provide a fix for this problem | 0 |
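The EFSSP0010C error in the Spectrum Scale record means the numeric owner passed as `uid`/`gid` in the StorageClass does not exist on the storage cluster. A quick pre-check one could run on a cluster node, assuming owners are resolved against the local user database (an assumption; LDAP/AD setups resolve differently):

```python
# Pre-check for the EFSSP0010C failure: confirm the uid/gid named in the
# StorageClass exist on the storage cluster node. Assumes owners resolve
# against the local user database (LDAP/AD environments differ).
import grp
import pwd

def owner_exists(uid: int, gid: int) -> bool:
    try:
        pwd.getpwuid(uid)  # raises KeyError if the user does not exist
        grp.getgrgid(gid)  # raises KeyError if the group does not exist
        return True
    except KeyError:
        return False

print(owner_exists(3000, 3000))  # False reproduces the report's failure
```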
12,492 | 14,959,241,449 | IssuesEvent | 2021-01-27 02:42:24 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Display warning if access log is empty | log-processing question | I set up goaccess but somehow I forgot to add access_log to my nginx server block. When running
```
goaccess /var/log/nginx/access.log
# OR
goaccess -f /var/log/nginx/access.log
```
it didn't complain that the access.log file is empty. goaccess just quit after executing these commands.
```
goaccess /var/log/nginx/access.log -o report.html
```
The above command just echoes `/var/log/nginx/access.log` without any warning.
I'd love it if `goaccess` gave a warning when the log is empty, so we know what's wrong :) | 1.0 | Display warning if access log is empty - I set up goaccess but somehow I forgot to add access_log to my nginx server block. When running
```
goaccess /var/log/nginx/access.log
# OR
goaccess -f /var/log/nginx/access.log
```
it didn't complain that the access.log file is empty. goaccess just quit after executing these commands.
```
goaccess /var/log/nginx/access.log -o report.html
```
The above command just echoes `/var/log/nginx/access.log` without any warning.
I'd love it if `goaccess` gave a warning when the log is empty, so we know what's wrong :) | process | display warning if access log is empty i set up goaccess but somehow i forgot to add access log to my nginx server block when running goaccess var log nginx access log or goaccess f var log nginx access log it didn t complain that the access log file is empty goaccess just quit after executing these commands goaccess var log nginx access log o report html the above command just echoes var log nginx access log without any warning i d love it if goaccess gave a warning when the log is empty so we know what s wrong | 1
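Until goaccess itself warns about an empty log, a wrapper can do the check up front. A small sketch (assumes goaccess is on PATH; the report path is arbitrary):

```python
# Wrapper that warns about a missing/empty access log before invoking
# goaccess, since goaccess itself exits silently in that case (per the
# report). Assumes goaccess is on PATH; the report path is arbitrary.
import os
import subprocess
import sys

log = "/var/log/nginx/access.log"
if not os.path.exists(log) or os.path.getsize(log) == 0:
    sys.exit(f"warning: {log} is missing or empty -- nothing to analyze")
subprocess.run(["goaccess", log, "-o", "report.html"], check=True)
```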
21,484 | 29,577,389,421 | IssuesEvent | 2023-06-07 00:48:24 | dart-lang/linter | https://api.github.com/repos/dart-lang/linter | closed | remove stale entries from `rules.json` | P2 process | I'd like to remove a few things from our generated json descriptions of lint rules (https://github.com/dart-lang/linter/blob/gh-pages/lints/machine/rules.json).
Specifically, I'd like to remove:
* `"maturity"` (now state)
* `"sinceLinter"` (see: https://github.com/dart-lang/linter/issues/4426)
@parlough, @devoncarew: do you have any idea who (if anyone) is using this file?
(I feel like I put this together for @csells a while back but can't recall how its use has evolved...)
| 1.0 | remove stale entries from `rules.json` - I'd like to remove a few things from our generated json descriptions of lint rules (https://github.com/dart-lang/linter/blob/gh-pages/lints/machine/rules.json).
Specifically, I'd like to remove:
* `"maturity"` (now state)
* `"sinceLinter"` (see: https://github.com/dart-lang/linter/issues/4426)
@parlough, @devoncarew: do you have any idea who (if anyone) is using this file?
(I feel like I put this together for @csells a while back but can't recall how its use has evolved...)
| process | remove stale entries from rules json i d like to remove a few things from our generated json descriptions of lint rules specifically i d like to remove maturity now state sincelinter see parlough devoncarew do you have any idea who if anyone is using this file i feel like i put this together for csells a while back but can t recall how its use has evolved | 1
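The cleanup the linter issue asks for is a small JSON transform. A sketch, assuming `rules.json` is a flat JSON list of rule objects (the field names come from the issue; the file layout is an assumption):

```python
# Sketch of the proposed cleanup: strip the stale keys from every entry
# in rules.json. Field names come from the issue; treating the file as a
# flat JSON list of rule objects is an assumption.
import json

with open("rules.json") as f:
    rules = json.load(f)

for rule in rules:
    rule.pop("maturity", None)     # superseded by the rule's state
    rule.pop("sinceLinter", None)  # see dart-lang/linter#4426

with open("rules.json", "w") as f:
    json.dump(rules, f, indent=2)
```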
7,628 | 10,730,041,295 | IssuesEvent | 2019-10-28 16:39:48 | FreeUKGen/FreeBMD2 | https://api.github.com/repos/FreeUKGen/FreeBMD2 | opened | Update to master | Developer process | To ensure the code is kept up to date/moved into the Master code. Ideally monthly (to avoid the difficulties we faced when merging code over on FC/FR). | 1.0 | Update to master - To ensure the code is kept up to date/moved into the Master code. Ideally monthly (to avoid the difficulties we faced when merging code over on FC/FR). | process | update to master to ensure the code is kept up to date moved into the master code ideally monthly to avoid the difficulties we faced when merging code over on fc fr | 1 |