Dataset schema, as summarized by the viewer (string length ranges and class counts):

| Column | Dtype | Lengths / Values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | fixed length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
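A minimal sketch of loading a dump with this schema and checking it against the summary above, assuming the rows are stored in a CSV named `issues_events.csv` (the filename and the use of pandas are assumptions; nothing here documents how the dataset is shipped):

```python
import pandas as pd

# Load the dump; the filename is a placeholder assumption.
df = pd.read_csv("issues_events.csv")

# Dtypes should line up with the schema table above.
print(df.dtypes)

# Class counts for the categorical columns.
print(df["type"].value_counts())    # expected: a single class, IssuesEvent
print(df["action"].value_counts())  # expected: 3 classes, e.g. opened/closed
print(df["label"].value_counts())   # expected: process / non_process

# Length ranges for the free-text columns.
for col in ["title", "labels", "body", "text_combine", "text"]:
    lengths = df[col].astype(str).str.len()
    print(f"{col}: {lengths.min()} to {lengths.max()}")
```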
---

**Record 9,649** (id 12,620,745,379)

- **type:** IssuesEvent
- **created_at:** 2020-06-13 08:33:21
- **repo:** OUDcollective/twenty20times
- **repo_url:** https://api.github.com/repos/OUDcollective/twenty20times
- **action:** opened
- **title:** Operations strategist determined to provide total Customer Satisfaction by fostering and mentoring the people process, developing cross-functional collaborative environments; while using creative problem solving techniques and establishing easy to understand but increasingly difficult to achieve key performance indicators. - Google Search
- **labels:** help wanted question with-wind workflow-process

**body:**

# APPLICATION TRACKING SYSTEMS ARE TERRIBLY OUT-OF-TOUCH
```
{
Git Googlebot actionable;
ON (C BRENNAN POOLE);
FOR (RESUME);
CALCULATE...
..
.
```

## ERROR STATES
1. Harvard Business Review (Which I am a new member of the LinkedIn community - doubt they realize - just how un- and under - freakin' - employed I was - when the elitist took to acceptance.
2. Deloitte - Yeah - well landed on their SERP... AGAIN 10 June, 2020 wk (on a profile I barely am capable of maintaining.
3. Google SERP :
Google Bot - I OWN YOU!!!
Best,
x.____________
>Need an actionable and immediate value - add asset?
**LETS TALK!**
+1 678-338-7339
chief@chasingthewindllc.com
---
**Source URL**:
[https://www.google.com/search?q=Operations+strategist+determined+to+provide+total+Customer+Satisfaction+by+fostering+and+mentoring+the+people+process,+developing+cross-functional+collaborative+environments%3B+while+using+creative+problem+solving+techniques+and+establishing+easy+to+understand+but+increasingly+difficult+to+achieve+key+performance+indicators.&newwindow=1&sxsrf=ALeKk03MFeoCsAkjA6PCTuuab_QikImlcQ:1592030819081&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjGgcX7mP7pAhUGRjABHRnDC6sQ_AUoAnoECFkQBA&biw=2048&bih=710&dpr=1.25#imgrc=Ki1Fbe_2Z5HmJM](https://www.google.com/search?q=Operations+strategist+determined+to+provide+total+Customer+Satisfaction+by+fostering+and+mentoring+the+people+process,+developing+cross-functional+collaborative+environments%3B+while+using+creative+problem+solving+techniques+and+establishing+easy+to+understand+but+increasingly+difficult+to+achieve+key+performance+indicators.&newwindow=1&sxsrf=ALeKk03MFeoCsAkjA6PCTuuab_QikImlcQ:1592030819081&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjGgcX7mP7pAhUGRjABHRnDC6sQ_AUoAnoECFkQBA&biw=2048&bih=710&dpr=1.25#imgrc=Ki1Fbe_2Z5HmJM)
| Field | Value |
|---|---|
| Browser | Chrome 84.0.4147.45 |
| OS | Windows 10 64-bit |
| Screen Size | 2560x1080 |
| Viewport Size | 2560x888 |
| Pixel Ratio | @1x |
| Zoom Level | 125% |

**index:** 1.0

**text_combine:**
Operations strategist determined to provide total Customer Satisfaction by fostering and mentoring the people process, developing cross-functional collaborative environments; while using creative problem solving techniques and establishing easy to understand but increasingly difficult to achieve key performance indicators. - Google Search - 
# APPLICATION TRACKING SYSTEMS ARE TERRIBLY OUT-OF-TOUCH
```
{
Git Googlebot actionable;
ON (C BRENNAN POOLE);
FOR (RESUME);
CALCULATE...
..
.
```

## ERROR STATES
1. Harvard Business Review (Which I am a new member of the LinkedIn community - doubt they realize - just how un- and under - freakin' - employed I was - when the elitist took to acceptance.
2. Deloitte - Yeah - well landed on their SERP... AGAIN 10 June, 2020 wk (on a profile I barely am capable of maintaining.
3. Google SERP :
Google Bot - I OWN YOU!!!
Best,
x.____________
>Need an actionable and immediate value - add asset?
**LETS TALK!**
+1 678-338-7339
chief@chasingthewindllc.com
---
**Source URL**:
[https://www.google.com/search?q=Operations+strategist+determined+to+provide+total+Customer+Satisfaction+by+fostering+and+mentoring+the+people+process,+developing+cross-functional+collaborative+environments%3B+while+using+creative+problem+solving+techniques+and+establishing+easy+to+understand+but+increasingly+difficult+to+achieve+key+performance+indicators.&newwindow=1&sxsrf=ALeKk03MFeoCsAkjA6PCTuuab_QikImlcQ:1592030819081&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjGgcX7mP7pAhUGRjABHRnDC6sQ_AUoAnoECFkQBA&biw=2048&bih=710&dpr=1.25#imgrc=Ki1Fbe_2Z5HmJM](https://www.google.com/search?q=Operations+strategist+determined+to+provide+total+Customer+Satisfaction+by+fostering+and+mentoring+the+people+process,+developing+cross-functional+collaborative+environments%3B+while+using+creative+problem+solving+techniques+and+establishing+easy+to+understand+but+increasingly+difficult+to+achieve+key+performance+indicators.&newwindow=1&sxsrf=ALeKk03MFeoCsAkjA6PCTuuab_QikImlcQ:1592030819081&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjGgcX7mP7pAhUGRjABHRnDC6sQ_AUoAnoECFkQBA&biw=2048&bih=710&dpr=1.25#imgrc=Ki1Fbe_2Z5HmJM)
| Field | Value |
|---|---|
| Browser | Chrome 84.0.4147.45 |
| OS | Windows 10 64-bit |
| Screen Size | 2560x1080 |
| Viewport Size | 2560x888 |
| Pixel Ratio | @1x |
| Zoom Level | 125% |

**label:** process

**text:**
operations strategist determined to provide total customer satisfaction by fostering and mentoring the people process developing cross functional collaborative environments while using creative problem solving techniques and establishing easy to understand but increasingly difficult to achieve key performance indicators google search application tracking systems are terribly out of touch git googlebot actionable on c brennan poole for resume calculate error states harvard business review which i am a new member of the linkedin community doubt they realize just how un and under freakin employed i was when the elitist took to acceptance deloitte yeah well landed on their serp again june wk on a profile i barely am capable of maintaining google serp google bot i own you best x need an actionable and immediate value add asset lets talk chief chasingthewindllc com source url browser chrome os windows bit screen size viewport size pixel ratio zoom level

**binary_label:** 1

---
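In this record, `text` reads as a lowercased copy of `text_combine` with URLs, punctuation, digits, and markup stripped. A rough sketch of that normalization, assuming plain regex cleaning (the pipeline actually used to build the column is not documented here):

```python
import re

def normalize(text_combine: str) -> str:
    # Assumed reconstruction of the `text` column, not the dataset's
    # documented pipeline: lowercase, drop URLs, keep letters only,
    # collapse runs of whitespace.
    s = text_combine.lower()
    s = re.sub(r"https?://\S+", " ", s)  # URLs appear to be removed
    s = re.sub(r"[^a-z\s]", " ", s)      # punctuation and digits dropped
    return re.sub(r"\s+", " ", s).strip()
```

Applied to the `text_combine` above, this comes close to the stored `text` value; small differences (for example, exactly how numerals inside mixed tokens are dropped) suggest the real pipeline had extra steps.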
**Record 15,067** (id 18,764,909,239)

- **type:** IssuesEvent
- **created_at:** 2021-11-05 21:47:05
- **repo:** ORNL-AMO/AMO-Tools-Suite
- **repo_url:** https://api.github.com/repos/ORNL-AMO/AMO-Tools-Suite
- **action:** closed
- **title:** More Available heat issues
- **labels:** ASAP Needs Verification Process Heating important

**body:**

Issue overview
--------------
GasCompositions::ProcessHeatPropertiesResults GasCompositions::getProcessHeatProperties(const double flueGasTempF, const double flueGasO2, const double combAirTemperatureF, const double fuelTempF, const double ambientAirTempF, const double combAirMoisturePerc)
^ should take in excessAir instead of flueGasO2
1 - it is possible that the user actually knows their excessAir
2 - it is using an outdated method to calculate excessAir. GasCompositions::calculateExcessAir is more accurate than getExcessAir
will there need to be a change to GasCompositions::calculateExcessAir? will it work with the changes to the rest of the gas fuel method?
**index:** 1.0

**text_combine:**
More Available heat issues - Issue overview
--------------
GasCompositions::ProcessHeatPropertiesResults GasCompositions::getProcessHeatProperties(const double flueGasTempF, const double flueGasO2, const double combAirTemperatureF, const double fuelTempF, const double ambientAirTempF, const double combAirMoisturePerc)
^ should take in excessAir instead of flueGasO2
1 - it is possible that the user actually knows their excessAir
2 - it is using an outdated method to calculate excessAir. GasCompositions::calculateExcessAir is more accurate than getExcessAir
will there need to be a change to GasCompositions::calculateExcessAir? will it work with the changes to the rest of the gas fuel method?
**label:** process

**text:**
more available heat issues issue overview gascompositions processheatpropertiesresults gascompositions getprocessheatproperties const double fluegastempf const double const double combairtemperaturef const double fueltempf const double ambientairtempf const double combairmoistureperc should take in excessair instead of it is possible that the user actually knows their excessair it is using an outdated method to calculate excessair gascompositions calculateexcessair is more accurate than getexcessair will there need to be a change to gascompositions calculateexcessair will it work with the changes to the rest of the gas fuel method

**binary_label:** 1

---
**Record 6,971** (id 4,705,916,176)

- **type:** IssuesEvent
- **created_at:** 2016-10-13 15:42:17
- **repo:** bitsquare/bitsquare
- **repo_url:** https://api.github.com/repos/bitsquare/bitsquare
- **action:** closed
- **title:** Make all UI screens safe for min. window size
- **labels:** re: usability [ui]

**body:**

1. Find which min. window size we want to support (760x560)?
2. Check all screens if there are problems, add scroll panes if needed
<!---
@huboard:{"order":321.6875002384186,"milestone_order":154,"custom_state":""}
-->
**index:** True

**text_combine:**
Make all UI screens safe for min. window size - 1. Find which min. window size we want to support (760x560)?
2. Check all screens if there are problems, add scroll panes if needed
<!---
@huboard:{"order":321.6875002384186,"milestone_order":154,"custom_state":""}
-->
**label:** non_process

**text:**
make all ui screens safe for min window size find which min window size we want to support check all screens if there are problems add scroll panes if needed huboard order milestone order custom state

**binary_label:** 0

---
**Record 330,233** (id 24,251,915,060)

- **type:** IssuesEvent
- **created_at:** 2022-09-27 14:46:28
- **repo:** castor-software/deptrim
- **repo_url:** https://api.github.com/repos/castor-software/deptrim
- **action:** closed
- **title:** Draft README
- **labels:** documentation

**body:**

Today it is not clear what the purpose of DepTrim is.
Therefore, it is necessary to decide on:
- Input/Output
- Architecture
- Usage
**index:** 1.0

**text_combine:**
Draft README - Today it is not clear what the purpose of DepTrim is.
Therefore, it is necessary to decide on:
- Input/Output
- Architecture
- Usage
**label:** non_process

**text:**
draft readme today it is not clear what the purpose of deptrim is therefore it is necessary to decide on input output architecture usage

**binary_label:** 0

---
**Record 2,589** (id 5,346,608,884)

- **type:** IssuesEvent
- **created_at:** 2017-02-17 20:09:16
- **repo:** codurance/site
- **repo_url:** https://api.github.com/repos/codurance/site
- **action:** closed
- **title:** Add a bot to close stale pull requests
- **labels:** enhancement improve-process

**body:**

If we deploy separate sites (#418) or use other resources per pull request, it would be good to have something automatically close them if they become stale so we can free up those resources.
**index:** 1.0

**text_combine:**
Add a bot to close stale pull requests - If we deploy separate sites (#418) or use other resources per pull request, it would be good to have something automatically close them if they become stale so we can free up those resources.
**label:** process

**text:**
add a bot to close stale pull requests if we would deploy separate sites or use other resources per pull request it would be good to have something automatically close them if they become stale so we can free up those resources

**binary_label:** 1

---
**Record 20,722** (id 3,626,838,461)

- **type:** IssuesEvent
- **created_at:** 2016-02-10 03:53:25
- **repo:** edbirmingham/network
- **repo_url:** https://api.github.com/repos/edbirmingham/network
- **action:** closed
- **title:** Member administration
- **labels:** design needed Done

**body:**

Functionality:
* List members in alphabetical order by name with pagination.
* Create a member.
* Edit a member.
* Delete a member.
See: #18, #20
See the story of release [0.1.0](https://github.com/edbirmingham/network/wiki/Story-of-0.1.0-Release) for context.
**index:** 1.0

**text_combine:**
Member administration - Functionality:
* List members in alphabetical order by name with pagination.
* Create a member.
* Edit a member.
* Delete a member.
See: #18, #20
See the story of release [0.1.0](https://github.com/edbirmingham/network/wiki/Story-of-0.1.0-Release) for context.
**label:** non_process

**text:**
member administration functionality list members in alphabetical order by name with pagination create a member edit a member delete a member see see the story of release for context

**binary_label:** 0

---
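Across all of the records shown here, `binary_label` mirrors `label` (process maps to 1, non_process to 0). A one-line consistency check, assuming the `df` from the loading sketch above:

```python
# Derive the binary target from the string label and compare it
# against the stored column; this assumes the mapping observed above.
derived = (df["label"] == "process").astype(int)
assert (derived == df["binary_label"]).all(), "label/binary_label mismatch"
```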
**Record 104,974** (id 9,013,512,955)

- **type:** IssuesEvent
- **created_at:** 2019-02-05 19:42:18
- **repo:** elastic/elasticsearch
- **repo_url:** https://api.github.com/repos/elastic/elasticsearch
- **action:** closed
- **title:** [CI] RetentionLeaseSyncIT.testRetentionLeasesSyncOnExpiration failure on 6.x
- **labels:** :Distributed/Distributed >test-failure

**body:**

Logs: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/1239/console
```
REPRODUCE WITH: ./gradlew :server:integTest \
-Dtests.seed=A7802B12B16AAE08 \
-Dtests.class=org.elasticsearch.index.seqno.RetentionLeaseSyncIT \
-Dtests.method="testRetentionLeasesSyncOnExpiration" \
-Dtests.security.manager=true \
-Dtests.locale=ar-JO \
-Dtests.timezone=NET \
-Dcompiler.java=11 \
-Druntime.java=8
```
Unable to reproduce locally (50 runs)
```
11:34:54 1> [2019-01-29T13:34:53,919][INFO ][o.e.i.s.RetentionLeaseSyncIT] [testRetentionLeasesSyncOnExpiration] after test
11:34:54 FAILURE 10.6s J3 | RetentionLeaseSyncIT.testRetentionLeasesSyncOnExpiration <<< FAILURES!
11:34:54 > Throwable #1: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at __randomizedtesting.SeedInfo.seed([A7802B12B16AAE08:E044B073BE5A3A0]:0)
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:848)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:822)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.testRetentionLeasesSyncOnExpiration(RetentionLeaseSyncIT.java:152)
11:34:54 > at java.lang.Thread.run(Thread.java:748)
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: item 0: was <RetentionLease{id='uBdsaYWK', retainingSequenceNumber=5554481904067957458, timestamp=1548754483278, source='VNShmdtL'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: item 0: was <RetentionLease{id='uBdsaYWK', retainingSequenceNumber=5554481904067957458, timestamp=1548754483278, source='VNShmdtL'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: item 0: was <RetentionLease{id='uBdsaYWK', retainingSequenceNumber=5554481904067957458, timestamp=1548754483278, source='VNShmdtL'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+6.x+intake/server/build/testrun/integTest/J3/temp/org.elasticsearch.index.seqno.RetentionLeaseSyncIT_A7802B12B16AAE08-001
11:34:54 2> NOTE: test params are: codec=Asserting(Lucene70): {}, docValues:{}, maxPointsInLeafNode=576, maxMBSortInHeap=6.103012952259353, sim=RandomSimilarity(queryNorm=true): {}, locale=ar-JO, timezone=NET
11:34:54 2> NOTE: Linux 4.4.0-1061-aws amd64/Oracle Corporation 1.8.0_202 (64-bit)/cpus=16,threads=1,free=380326680,total=522715136
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 2> NOTE: All tests run in this JVM: [LegacyInnerHitsIT, CompletionSuggestSearchIT, HotThreadsIT, ClusterSearchShardsIT, RepositoriesServiceIT, IndicesExistsIT, ForceMergeBlocksIT, RetentionLeaseSyncIT]
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 1> [2019-01-29T13:34:53,945][INFO ][o.e.n.Node ] [suite] stopping ...
11:34:54 1> [2019-01-29T13:34:53,946][INFO ][o.e.c.s.MasterService ] [node_s2] zen-disco-node-left({node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482}), reason(left)[{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482} left], reason: removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}
11:34:54 1> [2019-01-29T13:34:53,948][INFO ][o.e.c.s.ClusterApplierService] [node_sc3] removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}, reason: apply cluster state (from master [master {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} committed version [19]])
11:34:54 1> [2019-01-29T13:34:53,948][INFO ][o.e.c.s.ClusterApplierService] [node_s1] removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}, reason: apply cluster state (from master [master {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} committed version [19]])
11:34:54 1> [2019-01-29T13:34:53,949][INFO ][o.e.n.Node ] [suite] stopped
11:34:54 1> [2019-01-29T13:34:53,949][INFO ][o.e.c.s.ClusterApplierService] [node_s2] removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}, reason: apply cluster state (from master [master {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} committed version [19] source [zen-disco-node-left({node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482}), reason(left)[{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482} left]]])
11:34:54 1> [2019-01-29T13:34:53,949][INFO ][o.e.n.Node ] [suite] closing ...
11:34:54 1> [2019-01-29T13:34:53,950][INFO ][o.e.n.Node ] [suite] closed
11:34:54 1> [2019-01-29T13:34:53,951][INFO ][o.e.n.Node ] [suite] stopping ...
11:34:54 1> [2019-01-29T13:34:53,951][WARN ][o.e.d.z.ZenDiscovery ] [node_s2] not enough master nodes (has [1], but needed [2]), current nodes: nodes:
11:34:54 1> {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}, local, master
11:34:54 1> {node_sc3}{TdQBPuFDRwWNKk34APUmmQ}{5V69ciVuTJa56yaCaYrvPQ}{127.0.0.1}{127.0.0.1:43056}
11:34:54 1> {node_s1}{RJiGJfpOQdGYVy-dHmhQhA}{Z7ejYTaiQ1S7MWrR1-jfsQ}{127.0.0.1}{127.0.0.1:45791}
11:34:54 1> [2019-01-29T13:34:53,952][INFO ][o.e.t.d.MockZenPing ] [node_s2] pinging using mock zen ping
11:34:54 1> [2019-01-29T13:34:53,953][INFO ][o.e.n.Node ] [suite] stopped
11:34:54 1> [2019-01-29T13:34:53,953][INFO ][o.e.n.Node ] [suite] closing ...
11:34:54 1> [2019-01-29T13:34:53,954][INFO ][o.e.n.Node ] [suite] closed
11:34:54 1> [2019-01-29T13:34:53,955][INFO ][o.e.n.Node ] [suite] stopping ...
11:34:54 1> [2019-01-29T13:34:53,955][WARN ][o.e.d.z.ZenDiscovery ] [node_s2] not enough master nodes discovered during pinging (found [[Candidate{node={node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}, clusterStateVersion=19}]], but needed [2]), pinging again
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.d.z.ZenDiscovery ] [node_sc3] master_left [{node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}], reason [transport disconnected]
11:34:54 1> [2019-01-29T13:34:53,956][WARN ][o.e.d.z.ZenDiscovery ] [node_sc3] master left (reason = transport disconnected), current nodes: nodes:
11:34:54 1> {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}, master
11:34:54 1> {node_sc3}{TdQBPuFDRwWNKk34APUmmQ}{5V69ciVuTJa56yaCaYrvPQ}{127.0.0.1}{127.0.0.1:43056}, local
11:34:54 1> {node_s1}{RJiGJfpOQdGYVy-dHmhQhA}{Z7ejYTaiQ1S7MWrR1-jfsQ}{127.0.0.1}{127.0.0.1:45791}
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.n.Node ] [suite] stopped
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.t.d.MockZenPing ] [node_sc3] pinging using mock zen ping
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.n.Node ] [suite] closing ...
11:34:54 1> [2019-01-29T13:34:53,958][INFO ][o.e.n.Node ] [suite] closed
11:34:54 1> [2019-01-29T13:34:53,958][WARN ][o.e.c.NodeConnectionsService] [node_sc3] failed to connect to node {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} (tried [1] times)
11:34:54 1> org.elasticsearch.transport.ConnectTransportException: [node_s2][127.0.0.1:37769] connect_exception
11:34:54 1> at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1308) ~[main/:?]
11:34:54 1> at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:100) ~[main/:?]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:195) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
11:34:54 1> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
11:34:54 1> Caused by: java.net.ConnectException: Connection refused (Connection refused)
11:34:54 1> at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_202]
11:34:54 1> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_202]
11:34:54 1> at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.access$101(MockSocket.java:32) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.lambda$connect$0(MockSocket.java:66) ~[mocksocket-1.2.jar:?]
11:34:54 1> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:65) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:59) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:190) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> ... 5 more
11:34:54 1> [2019-01-29T13:34:53,958][WARN ][o.e.c.NodeConnectionsService] [node_sc3] failed to connect to node {node_s1}{RJiGJfpOQdGYVy-dHmhQhA}{Z7ejYTaiQ1S7MWrR1-jfsQ}{127.0.0.1}{127.0.0.1:45791} (tried [1] times)
11:34:54 1> org.elasticsearch.transport.ConnectTransportException: [node_s1][127.0.0.1:45791] connect_exception
11:34:54 1> at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1308) ~[main/:?]
11:34:54 1> at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:100) ~[main/:?]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:195) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
11:34:54 1> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
11:34:54 1> Caused by: java.net.ConnectException: Connection refused (Connection refused)
11:34:54 1> at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_202]
11:34:54 1> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_202]
11:34:54 1> at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.access$101(MockSocket.java:32) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.lambda$connect$0(MockSocket.java:66) ~[mocksocket-1.2.jar:?]
11:34:54 1> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:65) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:59) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:190) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> ... 5 more
```
There is also this:
```
2.1/net.sf.jopt-simple/jopt-simple/5.0.2/98cafc6081d5632b61be2c9e60650b64ddbc637c/jopt-simple-5.0.2.jar:/var/lib/jenkins/workspace/elastic+elasticsearch+6.x+intake/client/rest/build/distributions/elasticsearch-rest-client-6.7.0-SNAPSHOT.ja at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1542)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:123)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:997)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:994)
11:39:14 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
11:39:14 at java.base/java.lang.Thread.run(Thread.java:834)
11:39:14 [ant:junit4] ERROR: JVM J7 ended with an exception: Forked process returned with error code: 137. Very likely a JVM crash. See process stderr at: /var/lib/jenkins/workspace/elastic+elasticsearch+6.x+intake/server/build/testrun/integTest/temp/junit4-J7-20190129_093305_81516079772723897455842.syserr
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1542)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:123)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:997)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:994)
11:39:14 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
11:39:14 at java.base/java.lang.Thread.run(Thread.java:834)
```
**index:** 1.0

**text_combine:**
[CI] RetentionLeaseSyncIT.testRetentionLeasesSyncOnExpiration failure on 6.x - Logs: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/1239/console
```
REPRODUCE WITH: ./gradlew :server:integTest \
-Dtests.seed=A7802B12B16AAE08 \
-Dtests.class=org.elasticsearch.index.seqno.RetentionLeaseSyncIT \
-Dtests.method="testRetentionLeasesSyncOnExpiration" \
-Dtests.security.manager=true \
-Dtests.locale=ar-JO \
-Dtests.timezone=NET \
-Dcompiler.java=11 \
-Druntime.java=8
```
Unable to reproduce locally (50 runs)
```
11:34:54 1> [2019-01-29T13:34:53,919][INFO ][o.e.i.s.RetentionLeaseSyncIT] [testRetentionLeasesSyncOnExpiration] after test
11:34:54 FAILURE 10.6s J3 | RetentionLeaseSyncIT.testRetentionLeasesSyncOnExpiration <<< FAILURES!
11:34:54 > Throwable #1: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at __randomizedtesting.SeedInfo.seed([A7802B12B16AAE08:E044B073BE5A3A0]:0)
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:848)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:822)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.testRetentionLeasesSyncOnExpiration(RetentionLeaseSyncIT.java:152)
11:34:54 > at java.lang.Thread.run(Thread.java:748)
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: item 0: was <RetentionLease{id='uBdsaYWK', retainingSequenceNumber=5554481904067957458, timestamp=1548754483278, source='VNShmdtL'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: item 0: was <RetentionLease{id='uBdsaYWK', retainingSequenceNumber=5554481904067957458, timestamp=1548754483278, source='VNShmdtL'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: item 0: was <RetentionLease{id='uBdsaYWK', retainingSequenceNumber=5554481904067957458, timestamp=1548754483278, source='VNShmdtL'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 > Suppressed: java.lang.AssertionError:
11:34:54 > Expected: iterable containing [<RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>]
11:34:54 > but: No item matched: <RetentionLease{id='wZvuRhEt', retainingSequenceNumber=1420806087916294428, timestamp=1548754483478, source='hotgIUTx'}>
11:34:54 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+6.x+intake/server/build/testrun/integTest/J3/temp/org.elasticsearch.index.seqno.RetentionLeaseSyncIT_A7802B12B16AAE08-001
11:34:54 2> NOTE: test params are: codec=Asserting(Lucene70): {}, docValues:{}, maxPointsInLeafNode=576, maxMBSortInHeap=6.103012952259353, sim=RandomSimilarity(queryNorm=true): {}, locale=ar-JO, timezone=NET
11:34:54 2> NOTE: Linux 4.4.0-1061-aws amd64/Oracle Corporation 1.8.0_202 (64-bit)/cpus=16,threads=1,free=380326680,total=522715136
11:34:54 > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
11:34:54 2> NOTE: All tests run in this JVM: [LegacyInnerHitsIT, CompletionSuggestSearchIT, HotThreadsIT, ClusterSearchShardsIT, RepositoriesServiceIT, IndicesExistsIT, ForceMergeBlocksIT, RetentionLeaseSyncIT]
11:34:54 > at org.elasticsearch.index.seqno.RetentionLeaseSyncIT.lambda$testRetentionLeasesSyncOnExpiration$5(RetentionLeaseSyncIT.java:162)
11:34:54 > at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:836)
11:34:54 > ... 39 more
11:34:54 1> [2019-01-29T13:34:53,945][INFO ][o.e.n.Node ] [suite] stopping ...
11:34:54 1> [2019-01-29T13:34:53,946][INFO ][o.e.c.s.MasterService ] [node_s2] zen-disco-node-left({node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482}), reason(left)[{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482} left], reason: removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}
11:34:54 1> [2019-01-29T13:34:53,948][INFO ][o.e.c.s.ClusterApplierService] [node_sc3] removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}, reason: apply cluster state (from master [master {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} committed version [19]])
11:34:54 1> [2019-01-29T13:34:53,948][INFO ][o.e.c.s.ClusterApplierService] [node_s1] removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}, reason: apply cluster state (from master [master {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} committed version [19]])
11:34:54 1> [2019-01-29T13:34:53,949][INFO ][o.e.n.Node ] [suite] stopped
11:34:54 1> [2019-01-29T13:34:53,949][INFO ][o.e.c.s.ClusterApplierService] [node_s2] removed {{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482},}, reason: apply cluster state (from master [master {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} committed version [19] source [zen-disco-node-left({node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482}), reason(left)[{node_s0}{0Bchj2avTK-08wk80p3kaA}{axMeSwkET7qvcESoSnjuEQ}{127.0.0.1}{127.0.0.1:38482} left]]])
11:34:54 1> [2019-01-29T13:34:53,949][INFO ][o.e.n.Node ] [suite] closing ...
11:34:54 1> [2019-01-29T13:34:53,950][INFO ][o.e.n.Node ] [suite] closed
11:34:54 1> [2019-01-29T13:34:53,951][INFO ][o.e.n.Node ] [suite] stopping ...
11:34:54 1> [2019-01-29T13:34:53,951][WARN ][o.e.d.z.ZenDiscovery ] [node_s2] not enough master nodes (has [1], but needed [2]), current nodes: nodes:
11:34:54 1> {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}, local, master
11:34:54 1> {node_sc3}{TdQBPuFDRwWNKk34APUmmQ}{5V69ciVuTJa56yaCaYrvPQ}{127.0.0.1}{127.0.0.1:43056}
11:34:54 1> {node_s1}{RJiGJfpOQdGYVy-dHmhQhA}{Z7ejYTaiQ1S7MWrR1-jfsQ}{127.0.0.1}{127.0.0.1:45791}
11:34:54 1> [2019-01-29T13:34:53,952][INFO ][o.e.t.d.MockZenPing ] [node_s2] pinging using mock zen ping
11:34:54 1> [2019-01-29T13:34:53,953][INFO ][o.e.n.Node ] [suite] stopped
11:34:54 1> [2019-01-29T13:34:53,953][INFO ][o.e.n.Node ] [suite] closing ...
11:34:54 1> [2019-01-29T13:34:53,954][INFO ][o.e.n.Node ] [suite] closed
11:34:54 1> [2019-01-29T13:34:53,955][INFO ][o.e.n.Node ] [suite] stopping ...
11:34:54 1> [2019-01-29T13:34:53,955][WARN ][o.e.d.z.ZenDiscovery ] [node_s2] not enough master nodes discovered during pinging (found [[Candidate{node={node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}, clusterStateVersion=19}]], but needed [2]), pinging again
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.d.z.ZenDiscovery ] [node_sc3] master_left [{node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}], reason [transport disconnected]
11:34:54 1> [2019-01-29T13:34:53,956][WARN ][o.e.d.z.ZenDiscovery ] [node_sc3] master left (reason = transport disconnected), current nodes: nodes:
11:34:54 1> {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769}, master
11:34:54 1> {node_sc3}{TdQBPuFDRwWNKk34APUmmQ}{5V69ciVuTJa56yaCaYrvPQ}{127.0.0.1}{127.0.0.1:43056}, local
11:34:54 1> {node_s1}{RJiGJfpOQdGYVy-dHmhQhA}{Z7ejYTaiQ1S7MWrR1-jfsQ}{127.0.0.1}{127.0.0.1:45791}
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.n.Node ] [suite] stopped
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.t.d.MockZenPing ] [node_sc3] pinging using mock zen ping
11:34:54 1> [2019-01-29T13:34:53,956][INFO ][o.e.n.Node ] [suite] closing ...
11:34:54 1> [2019-01-29T13:34:53,958][INFO ][o.e.n.Node ] [suite] closed
11:34:54 1> [2019-01-29T13:34:53,958][WARN ][o.e.c.NodeConnectionsService] [node_sc3] failed to connect to node {node_s2}{PhKdUp5qSKG-aUFUW3cIuQ}{vOb9YjDVR9-mXWf8skXriw}{127.0.0.1}{127.0.0.1:37769} (tried [1] times)
11:34:54 1> org.elasticsearch.transport.ConnectTransportException: [node_s2][127.0.0.1:37769] connect_exception
11:34:54 1> at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1308) ~[main/:?]
11:34:54 1> at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:100) ~[main/:?]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:195) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
11:34:54 1> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
11:34:54 1> Caused by: java.net.ConnectException: Connection refused (Connection refused)
11:34:54 1> at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_202]
11:34:54 1> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_202]
11:34:54 1> at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.access$101(MockSocket.java:32) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.lambda$connect$0(MockSocket.java:66) ~[mocksocket-1.2.jar:?]
11:34:54 1> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:65) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:59) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:190) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> ... 5 more
11:34:54 1> [2019-01-29T13:34:53,958][WARN ][o.e.c.NodeConnectionsService] [node_sc3] failed to connect to node {node_s1}{RJiGJfpOQdGYVy-dHmhQhA}{Z7ejYTaiQ1S7MWrR1-jfsQ}{127.0.0.1}{127.0.0.1:45791} (tried [1] times)
11:34:54 1> org.elasticsearch.transport.ConnectTransportException: [node_s1][127.0.0.1:45791] connect_exception
11:34:54 1> at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1308) ~[main/:?]
11:34:54 1> at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:100) ~[main/:?]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[elasticsearch-core-6.7.0-SNAPSHOT.jar:6.7.0-SNAPSHOT]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:195) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_202]
11:34:54 1> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_202]
11:34:54 1> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
11:34:54 1> Caused by: java.net.ConnectException: Connection refused (Connection refused)
11:34:54 1> at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_202]
11:34:54 1> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_202]
11:34:54 1> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_202]
11:34:54 1> at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.access$101(MockSocket.java:32) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.lambda$connect$0(MockSocket.java:66) ~[mocksocket-1.2.jar:?]
11:34:54 1> at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_202]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:65) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.mocksocket.MockSocket.connect(MockSocket.java:59) ~[mocksocket-1.2.jar:?]
11:34:54 1> at org.elasticsearch.transport.MockTcpTransport.lambda$initiateChannel$0(MockTcpTransport.java:190) ~[framework-6.7.0-SNAPSHOT.jar:?]
11:34:54 1> ... 5 more
```
There is also this:
```
2.1/net.sf.jopt-simple/jopt-simple/5.0.2/98cafc6081d5632b61be2c9e60650b64ddbc637c/jopt-simple-5.0.2.jar:/var/lib/jenkins/workspace/elastic+elasticsearch+6.x+intake/client/rest/build/distributions/elasticsearch-rest-client-6.7.0-SNAPSHOT.ja at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1542)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:123)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:997)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:994)
11:39:14 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
11:39:14 at java.base/java.lang.Thread.run(Thread.java:834)
11:39:14 [ant:junit4] ERROR: JVM J7 ended with an exception: Forked process returned with error code: 137. Very likely a JVM crash. See process stderr at: /var/lib/jenkins/workspace/elastic+elasticsearch+6.x+intake/server/build/testrun/integTest/temp/junit4-J7-20190129_093305_81516079772723897455842.syserr
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4.executeSlave(JUnit4.java:1542)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4.access$000(JUnit4.java:123)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:997)
11:39:14 at com.carrotsearch.ant.tasks.junit4.JUnit4$2.call(JUnit4.java:994)
11:39:14 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
11:39:14 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
11:39:14 at java.base/java.lang.Thread.run(Thread.java:834)
```
|
non_process
|
retentionleasesyncit testretentionleasessynconexpiration failure on x logs reproduce with gradlew server integtest dtests seed dtests class org elasticsearch index seqno retentionleasesyncit dtests method testretentionleasessynconexpiration dtests security manager true dtests locale ar jo dtests timezone net dcompiler java druntime java unable to reproduce locally runs after test failure retentionleasesyncit testretentionleasessynconexpiration failures throwable java lang assertionerror expected iterable containing but no item matched at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch test estestcase assertbusy estestcase java at org elasticsearch index seqno retentionleasesyncit testretentionleasessynconexpiration retentionleasesyncit java at java lang thread run thread java suppressed java lang assertionerror expected iterable containing but item was at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but item was at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but item was at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno 
retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more suppressed java lang assertionerror expected iterable containing but no item matched note leaving temporary files on disk at var lib jenkins workspace elastic elasticsearch x intake server build testrun integtest temp org elasticsearch index seqno retentionleasesyncit note test params are codec asserting docvalues maxpointsinleafnode maxmbsortinheap sim randomsimilarity querynorm true locale ar jo timezone net note linux aws oracle corporation bit cpus threads free total at org hamcrest matcherassert assertthat matcherassert java note all tests run in this jvm at org elasticsearch index seqno retentionleasesyncit lambda testretentionleasessynconexpiration retentionleasesyncit java at org elasticsearch test estestcase assertbusy estestcase java more stopping zen disco node left node reason left reason removed node removed node reason apply cluster state from master removed node reason apply cluster state from master stopped removed node reason apply cluster state from master source closing closed stopping not enough master nodes has but needed current nodes nodes node local master node node rjigjfpoqdgyvy dhmhqha jfsq pinging using mock zen ping stopped closing closed stopping not enough master nodes discovered during pinging found but needed pinging again master left reason master left reason transport disconnected current nodes nodes node master node local node rjigjfpoqdgyvy dhmhqha jfsq stopped pinging using mock zen ping closing closed failed to connect to node node tried times org elasticsearch transport connecttransportexception connect exception at org elasticsearch transport tcptransport channelsconnectedlistener onfailure tcptransport java at org elasticsearch action actionlistener lambda tobiconsumer actionlistener java at org elasticsearch common concurrent completablecontext lambda addlistener completablecontext java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util 
concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at org elasticsearch common concurrent completablecontext completeexceptionally completablecontext java at org elasticsearch transport mocktcptransport lambda initiatechannel mocktcptransport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java net connectexception connection refused connection refused at java net plainsocketimpl socketconnect native method at java net abstractplainsocketimpl doconnect abstractplainsocketimpl java at java net abstractplainsocketimpl connecttoaddress abstractplainsocketimpl java at java net abstractplainsocketimpl connect abstractplainsocketimpl java at java net sockssocketimpl connect sockssocketimpl java at java net socket connect socket java at org elasticsearch mocksocket mocksocket access mocksocket java at org elasticsearch mocksocket mocksocket lambda connect mocksocket java at java security accesscontroller doprivileged native method at org elasticsearch mocksocket mocksocket connect mocksocket java at org elasticsearch mocksocket mocksocket connect mocksocket java at org elasticsearch transport mocktcptransport lambda initiatechannel mocktcptransport java more failed to connect to node node rjigjfpoqdgyvy dhmhqha jfsq tried times org elasticsearch transport connecttransportexception connect exception at org elasticsearch transport tcptransport channelsconnectedlistener onfailure tcptransport java at org elasticsearch action actionlistener lambda tobiconsumer actionlistener java at org elasticsearch common concurrent completablecontext lambda addlistener completablecontext java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at org elasticsearch common concurrent completablecontext completeexceptionally completablecontext java at org elasticsearch transport mocktcptransport lambda initiatechannel mocktcptransport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java net connectexception connection refused connection refused at java net plainsocketimpl socketconnect native method at java net abstractplainsocketimpl doconnect abstractplainsocketimpl java at java net abstractplainsocketimpl connecttoaddress abstractplainsocketimpl java at java net abstractplainsocketimpl connect abstractplainsocketimpl java at java net sockssocketimpl connect sockssocketimpl java at java net socket connect socket java at org elasticsearch mocksocket mocksocket access mocksocket java at org elasticsearch mocksocket mocksocket lambda connect mocksocket java at java security accesscontroller doprivileged native method at org elasticsearch mocksocket mocksocket connect mocksocket java at 
org elasticsearch mocksocket mocksocket connect mocksocket java at org elasticsearch transport mocktcptransport lambda initiatechannel mocktcptransport java more there is also this net sf jopt simple jopt simple jopt simple jar var lib jenkins workspace elastic elasticsearch x intake client rest build distributions elasticsearch rest client snapshot ja at com carrotsearch ant tasks executeslave java at com carrotsearch ant tasks access java at com carrotsearch ant tasks call java at com carrotsearch ant tasks call java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java error jvm ended with an exception forked process returned with error code very likely a jvm crash see process stderr at var lib jenkins workspace elastic elasticsearch x intake server build testrun integtest temp syserr at com carrotsearch ant tasks executeslave java at com carrotsearch ant tasks access java at com carrotsearch ant tasks call java at com carrotsearch ant tasks call java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java
| 0
|
9,620
| 12,556,198,108
|
IssuesEvent
|
2020-06-07 08:54:43
|
HackYourFutureBelgium/class-9-10
|
https://api.github.com/repos/HackYourFutureBelgium/class-9-10
|
opened
|
Process Week 1, Sunday Roll Call
|
process-week roll-call
|
## Roll Call!
Leave us a comment with these two things:
1. A summary of your week using only emojis
1. Something you'd like to share, anything goes! (within respect)
|
1.0
|
Process Week 1, Sunday Roll Call - ## Roll Call!
Leave us a comment with these two things:
1. A summary of your week using only emojis
1. Something you'd like to share, anything goes! (within respect)
|
process
|
process week sunday roll call roll call leave us a comment with these two things a summary of your week using only emojis something you d like to share anything goes within respect
| 1
|
773,997
| 27,179,715,600
|
IssuesEvent
|
2023-02-18 13:16:55
|
azerothcore/azerothcore-wotlk
|
https://api.github.com/repos/azerothcore/azerothcore-wotlk
|
closed
|
[Opening the Dark Portal] Wipe Event is incorrect
|
Confirmed Priority-Low Instance - Dungeon - Outland 65-69
|
### Current Behaviour
Medivh drops dead instantly without animation once 0% integrity is reached.
Black Portal Dummy never despawns.
### Expected Blizzlike Behaviour
On 0% reached:
Play Sound 10441
Text: No! Damn this feeble, mortal coil! (14)
Wait 4000ms
Medivh Dies, performing the death animation
All Medivh Auras removed
All Entries 18625, 21862, 17838 Despawn
Wait 2000ms
All Adds cast spell 7791 on self and despawn 1200ms later
### Source
_No response_
### Steps to reproduce the problem
`.go c id 15608`
Let the adds destroy Medivh's Shield
### Extra Notes
_No response_
### AC rev. hash/commit
https://github.com/azerothcore/azerothcore-wotlk/commit/c3dd1b7a5ca0a9945d609213fdd3c4a19e0106e2
### Operating system
Windows 10
### Custom changes or Modules
_No response_
|
1.0
|
[Opening the Dark Portal] Wipe Event is incorrect - ### Current Behaviour
Medivh drops dead instantly without animation once 0% integrity is reached.
Black Portal Dummy never despawns.
### Expected Blizzlike Behaviour
On 0% reached:
Play Sound 10441
Text: No! Damn this feeble, mortal coil! (14)
Wait 4000ms
Medivh Dies, performing the death animation
All Medivh Auras removed
All Entries 18625, 21862, 17838 Despawn
Wait 2000ms
All Adds cast spell 7791 on self and despawn 1200ms later
### Source
_No response_
### Steps to reproduce the problem
`.go c id 15608`
Let the adds destroy Medivh's Shield
### Extra Notes
_No response_
### AC rev. hash/commit
https://github.com/azerothcore/azerothcore-wotlk/commit/c3dd1b7a5ca0a9945d609213fdd3c4a19e0106e2
### Operating system
Windows 10
### Custom changes or Modules
_No response_
|
non_process
|
wipe event is incorrect current behaviour medivh drops dead instantly without animation once integrity is reached black portal dummy never despawns expected blizzlike behaviour on reached play sound text no damn this feeble mortal coil wait medivh dies performing the death animation all medivh auras removed all entries despawn wait all adds cast spell on self and despawn later source no response steps to reproduce the problem go c id let the adds destroy medivh s shield extra notes no response ac rev hash commit operating system windows custom changes or modules no response
| 0
|
14,421
| 10,853,276,949
|
IssuesEvent
|
2019-11-13 14:26:22
|
gammapy/gammapy
|
https://api.github.com/repos/gammapy/gammapy
|
closed
|
Introduce YAML in Gammapy
|
feature infrastructure
|
I'd like us to use YAML in Gammapy for configuration files.
- [ ] Update [this docs page](https://gammapy.readthedocs.org/en/latest/dataformats/file_formats.html) with some info on JSON and YAML and explain why we prefer YAML.
- [x] Add PyYAML to `.travis.yml` and some simple test to show that it works.
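A minimal sketch of the kind of round-trip check that "simple test" could perform (hypothetical keys, not Gammapy's actual schema; assumes PyYAML is installed):
```
# Sketch only: parse a small YAML config and check a nested key.
# The key names below are illustrative, not Gammapy's real config layout.
import yaml

text = """
general:
  logging: INFO
observations:
  obs_ids: [23523, 23526]
"""

config = yaml.safe_load(text)
assert config["general"]["logging"] == "INFO"
print(config["observations"]["obs_ids"])  # [23523, 23526]
```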
|
1.0
|
Introduce YAML in Gammapy - I'd like us to use YAML in Gammapy for configuration files.
- [ ] Update [this docs page](https://gammapy.readthedocs.org/en/latest/dataformats/file_formats.html) with some info on JSON and YAML and explain why we prefer YAML.
- [x] Add PyYAML to `.travis.yml` and some simple test to show that it works.
|
non_process
|
introduce yaml in gammapy i d like us to use yaml in gammapy for configuration files update with some info on json and yaml and explain why we prefer yaml add pyyaml to travis yml and some simple test to show that it works
| 0
|
64,924
| 18,960,805,483
|
IssuesEvent
|
2021-11-19 04:24:21
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Decrypting a series of events, which happen to contain a Jitsi widget, leads to a 1/4 second call notification
|
T-Defect S-Minor A-E2EE A-Jitsi O-Uncommon
|
1. Have someone start a call in an encrypted room
2. Finish the call
3. Give Riot some time to store the events
4. Experience https://github.com/vector-im/riot-web/issues/7116 (easiest way to reproduce)
|
1.0
|
Decrypting a series of events, which happen to contain a Jitsi widget, leads to a 1/4 second call notification - 1. Have someone start a call in an encrypted room
2. Finish the call
3. Give Riot some time to store the events
4. Experience https://github.com/vector-im/riot-web/issues/7116 (easiest way to reproduce)
|
non_process
|
decrypting a series of events which happen to contain a jitsi widget leads to a second call notification have someone start a call in an encrypted room finish the call give riot some time to store the events experience easiest way to reproduce
| 0
|
1,080
| 3,541,665,109
|
IssuesEvent
|
2016-01-19 02:49:23
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
closed
|
add color controls to the edge module
|
ui video processing
|
see #52 (edge effect) and also #108 (multiple levels for the edge effect)
may or may not depend on #54 (color multi-param component)
|
1.0
|
add color controls to the edge module - see #52 (edge effect) and also #108 (multiple levels for the edge effect)
may or may not depend on #54 (color multi-param component)
|
process
|
add color controls to the edge module see edge effect and also multiple levels for the edge effect may or may not depend on color multi param component
| 1
|
6,740
| 9,872,921,720
|
IssuesEvent
|
2019-06-22 09:27:55
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Sort input parameters in QGIS modeler
|
Feature Request Processing
|
Author Name: **Maxime RIOU** (Maxime RIOU)
Original Redmine Issue: [22121](https://issues.qgis.org/issues/22121)
Redmine category:processing/modeller
---
This refers to a question on GIS Stack exchange (https://gis.stackexchange.com/questions/322831/sort-inputs-parameters-in-qgis-modeler).
I created a model in QGIS 3.6 with three input parameters (two vector layers and one vector field). When I run the model, the inputs are not presented in the correct order: the vector field is requested before the vector layers.
I searched on previous feature requests and I found this #19944 but it is closed now.
It would be nice to be able to choose the order of the input parameters. For example, if the input parameters were sorted in alphabetical order, the user could name inputs "1_inputXX, 2_inputYY, etc.".
The proposed answer on GIS Stack Exchange is the following workaround:
Export Model as Python Algorithm to get a translated script of your model. Here you can order the parameters however you like in the `initAlgorithm()` function, e.g.:
```
def initAlgorithm(self, config=None):
self.addParameter(QgsProcessingParameterFeatureSource('1conduites', '1_Conduites', types=[QgsProcessing.TypeVectorLine], defaultValue=None))
self.addParameter(QgsProcessingParameterVectorLayer('2regard', '2_Regard', types=[QgsProcessing.TypeVectorPoint], defaultValue=None))
self.addParameter(QgsProcessingParameterFeatureSource('3idregard', '3_ID_regard', types=[QgsProcessing.TypeVector], defaultValue=None))
self.addParameter(QgsProcessingParameterFeatureSink('Result', 'result', type=QgsProcessing.TypeVectorPolygon, createByDefault=True, defaultValue=None))
```
|
1.0
|
Sort input parameters in QGIS modeler - Author Name: **Maxime RIOU** (Maxime RIOU)
Original Redmine Issue: [22121](https://issues.qgis.org/issues/22121)
Redmine category:processing/modeller
---
This refers to a question on GIS Stack exchange (https://gis.stackexchange.com/questions/322831/sort-inputs-parameters-in-qgis-modeler).
I created a model in QGIS 3.6 with three input parameters (two vector layers and one vector field). When I run the model, the inputs are not presented in the correct order: the vector field is requested before the vector layers.
I searched on previous feature requests and I found this #19944 but it is closed now.
It would be nice to be able to choose the order of the input parameters. For example, if the input parameters were sorted in alphabetical order, the user could name inputs "1_inputXX, 2_inputYY, etc.".
The proposed answer on GIS Stack Exchange is the following workaround:
Export Model as Python Algorithm to get a translated script of your model. Here you can order the parameters however you like in the `initAlgorithm()` function, e.g.:
```
def initAlgorithm(self, config=None):
self.addParameter(QgsProcessingParameterFeatureSource('1conduites', '1_Conduites', types=[QgsProcessing.TypeVectorLine], defaultValue=None))
self.addParameter(QgsProcessingParameterVectorLayer('2regard', '2_Regard', types=[QgsProcessing.TypeVectorPoint], defaultValue=None))
self.addParameter(QgsProcessingParameterFeatureSource('3idregard', '3_ID_regard', types=[QgsProcessing.TypeVector], defaultValue=None))
self.addParameter(QgsProcessingParameterFeatureSink('Result', 'result', type=QgsProcessing.TypeVectorPolygon, createByDefault=True, defaultValue=None))
```
|
process
|
sort inputs parameters in qgis modeler author name maxime riou maxime riou original redmine issue redmine category processing modeller this refers to a question on gis stack exchange i created a model in qgis with three inputs parameters two vector layer and one vector field when i run the model the inputs are not sorted in a correct way the vector field is requested before the vector layer i searched on previous feature requests and i found this but it is closed now it would be nice to choose the order of the inputs parameters for example if the input parameters are sorted in alphabetical order the user could name inputs inputxx inputyy etc the proposed answer on gis stackexchange is the following workaround export model as python algorithm to get a translated script of your model here you could order the parameters however you like in the initalgorithm function e g def initalgorithm self config none self addparameter qgsprocessingparameterfeaturesource conduites types defaultvalue none self addparameter qgsprocessingparametervectorlayer regard types defaultvalue none self addparameter qgsprocessingparameterfeaturesource id regard types defaultvalue none self addparameter qgsprocessingparameterfeaturesink result result type qgsprocessing typevectorpolygon createbydefault true defaultvalue none
| 1
|
49,201
| 13,185,289,384
|
IssuesEvent
|
2020-08-12 21:05:52
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
Steamshovel tries to build without GLUT (Trac #946)
|
Incomplete Migration Migrated from Trac cmake defect
|
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/946
, reported by richman and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-04-22T21:28:00",
"description": "Steamshovel tries to build, then fails because GLUT is not installed. It shouldn't try to build if GLUT's not there.",
"reporter": "richman",
"cc": "",
"resolution": "fixed",
"_ts": "1429738080648709",
"component": "cmake",
"summary": "Steamshovel tries to build without GLUT",
"priority": "normal",
"keywords": "",
"time": "2015-04-22T21:05:55",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Steamshovel tries to build without GLUT (Trac #946) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/946
, reported by richman and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-04-22T21:28:00",
"description": "Steamshovel tries to build, then fails because GLUT is not installed. It shouldn't try to build if GLUT's not there.",
"reporter": "richman",
"cc": "",
"resolution": "fixed",
"_ts": "1429738080648709",
"component": "cmake",
"summary": "Steamshovel tries to build without GLUT",
"priority": "normal",
"keywords": "",
"time": "2015-04-22T21:05:55",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
steamshovel tries to build without glut trac migrated from reported by richman and owned by david schultz json status closed changetime description steamshovel tries to build then fails because glut is not installed it shouldn t try to build if glut s not there reporter richman cc resolution fixed ts component cmake summary steamshovel tries to build without glut priority normal keywords time milestone owner david schultz type defect
| 0
|
378,067
| 11,195,404,804
|
IssuesEvent
|
2020-01-03 06:17:44
|
elementary/stylesheet
|
https://api.github.com/repos/elementary/stylesheet
|
closed
|
FileChooserDialog should not inherit brand styling
|
Priority: Medium Status: Confirmed
|
The `Gtk.FileChooserDialog` shouldn't inherit the `colorPrimary*` brand color properties set by the applications.

Screenshot from https://github.com/donadigo/eddy/issues/37
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/49056093-filechooserdialog-should-not-inherit-brand-styling?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
FileChooserDialog should not inherit brand styling - The `Gtk.FileChooserDialog` shouldn't inherit the `colorPrimary*` brand color properties set by the applications.

Screenshot from https://github.com/donadigo/eddy/issues/37
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/49056093-filechooserdialog-should-not-inherit-brand-styling?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_process
|
filechooserdialog should not inherit brand styling the gtk filechooserdialog shouldn t inherit the colorprimary brand color properties set by the applications screenshot from want to back this issue we accept bounties via
| 0
|
24,237
| 12,244,688,571
|
IssuesEvent
|
2020-05-05 11:37:34
|
Nemocas/AbstractAlgebra.jl
|
https://api.github.com/repos/Nemocas/AbstractAlgebra.jl
|
closed
|
performance regression in hash(::Perm) in Julia 0.7
|
performance
|
This was probably introduced during the transition to julia-0.7:
```julia
julia> versioninfo()
Julia Version 0.6.4
Commit 9d11f62bcb (2018-07-09 19:09 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=16)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.9.1 (ORCJIT, skylake)
julia> function test(n)
res = UInt(0)
for p in Generic.elements!(PermGroup(n))
res = hash(p, res)
end
return res
end
test (generic function with 1 method)
julia> @time test(5)
0.108565 seconds (34.74 k allocations: 1.877 MiB)
0x0f2cd8ba75f36698
julia> @time test(5)
0.000009 seconds (10 allocations: 528 bytes)
0x0f2cd8ba75f36698
julia> @time test(6)
0.000319 seconds (10 allocations: 528 bytes)
0xdedb7b56542cbe1a
julia> @time test(10)
0.346696 seconds (10 allocations: 592 bytes)
0xeba1b46577ab914c
```
```julia
julia> versioninfo()
Julia Version 1.0.3
Commit 099e826241 (2018-12-18 01:34 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.0 (ORCJIT, skylake)
Environment:
JULIA_NUM_THREADS = 2
julia> function test(n)
res = UInt(0)
for p in Generic.elements!(PermGroup(n))
res = hash(p, res)
end
return res
end
test (generic function with 1 method)
julia> @time test(5)
0.145024 seconds (236.19 k allocations: 11.687 MiB, 4.45% gc time)
0x3a8569f5ec15165c
julia> @time test(5)
0.000046 seconds (129 allocations: 4.234 KiB)
0x3a8569f5ec15165c
julia> @time test(6)
0.000185 seconds (729 allocations: 22.984 KiB)
0x1b1548da152f74a8
julia> @time test(10)
1.058082 seconds (3.63 M allocations: 110.743 MiB, 1.27% gc time)
0x009115240712e5c5
```
|
True
|
performance regression in hash(::Perm) in Julia 0.7 - This was probably introduced during the transition to julia-0.7:
```julia
julia> versioninfo()
Julia Version 0.6.4
Commit 9d11f62bcb (2018-07-09 19:09 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=16)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.9.1 (ORCJIT, skylake)
julia> function test(n)
res = UInt(0)
for p in Generic.elements!(PermGroup(n))
res = hash(p, res)
end
return res
end
test (generic function with 1 method)
julia> @time test(5)
0.108565 seconds (34.74 k allocations: 1.877 MiB)
0x0f2cd8ba75f36698
julia> @time test(5)
0.000009 seconds (10 allocations: 528 bytes)
0x0f2cd8ba75f36698
julia> @time test(6)
0.000319 seconds (10 allocations: 528 bytes)
0xdedb7b56542cbe1a
julia> @time test(10)
0.346696 seconds (10 allocations: 592 bytes)
0xeba1b46577ab914c
```
```julia
julia> versioninfo()
Julia Version 1.0.3
Commit 099e826241 (2018-12-18 01:34 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.0 (ORCJIT, skylake)
Environment:
JULIA_NUM_THREADS = 2
julia> function test(n)
res = UInt(0)
for p in Generic.elements!(PermGroup(n))
res = hash(p, res)
end
return res
end
test (generic function with 1 method)
julia> @time test(5)
0.145024 seconds (236.19 k allocations: 11.687 MiB, 4.45% gc time)
0x3a8569f5ec15165c
julia> @time test(5)
0.000046 seconds (129 allocations: 4.234 KiB)
0x3a8569f5ec15165c
julia> @time test(6)
0.000185 seconds (729 allocations: 22.984 KiB)
0x1b1548da152f74a8
julia> @time test(10)
1.058082 seconds (3.63 M allocations: 110.743 MiB, 1.27% gc time)
0x009115240712e5c5
```
|
non_process
|
performance regression in hash perm in julia this was probably introduced during transition to julia julia julia versioninfo julia version commit utc platform info os linux pc linux gnu cpu intel r core tm cpu word size blas libopenblas dynamic arch no affinity haswell max threads lapack libm libopenlibm llvm libllvm orcjit skylake julia function test n res uint for p in generic elements permgroup n res hash p res end return res end test generic function with method julia time test seconds k allocations mib julia time test seconds allocations bytes julia time test seconds allocations bytes julia time test seconds allocations bytes julia julia versioninfo julia version commit utc platform info os linux pc linux gnu cpu intel r core tm cpu word size libm libopenlibm llvm libllvm orcjit skylake environment julia num threads julia function test n res uint for p in generic elements permgroup n res hash p res end return res end test generic function with method julia time test seconds k allocations mib gc time julia time test seconds allocations kib julia time test seconds allocations kib julia time test seconds m allocations mib gc time
| 0
|
179,134
| 30,122,166,842
|
IssuesEvent
|
2023-06-30 16:02:46
|
Kwenta/kwenta
|
https://api.github.com/repos/Kwenta/kwenta
|
opened
|
Feat: Display smart margin account address
|
design core dev
|
### Description
Display the user's smart margin account.
### Motivation
Users often ask how they can view their smart margin address, for things like tracking transactions on Etherscan etc.
### Potential Solutions
We should explore how best to expose the account address to users. Some suggestions include:
- Make the wallet nav button a drop down to show wallet, account and other actions
- Include it in the trade panel account details drop down, with margin etc
- Replace the rainbowkit modal with a custom component which displays wallet info and account
- Display somewhere on the dashboard
|
1.0
|
Feat: Display smart margin account address - ### Description
Display the user's smart margin account.
### Motivation
Users often ask how they can view their smart margin address, for things like tracking transactions on Etherscan etc.
### Potential Solutions
We should explore how best to expose the account address to users. Some suggestions include:
- Make the wallet nav button a drop down to show wallet, account and other actions
- Include it in the trade panel account details drop down, with margin etc
- Replace the rainbowkit modal with a custom component which displays wallet info and account
- Display somewhere on the dashboard
|
non_process
|
feat display smart margin account address description display the users smart margin account motivation users often ask how they can view their smart margin address for things like tracking transactions on etherscan etc potential solutions we should explore how best to expose the account address to users some suggestions include make the wallet nav button a drop down to show wallet account and other actions include it in the trade panel account details drop down with margin etc replace the rainbowkit modal with a custom component which displays wallet info and account display somewhere on the dashboard
| 0
|
17,129
| 22,649,049,737
|
IssuesEvent
|
2022-07-01 11:40:31
|
PyCQA/pylint
|
https://api.github.com/repos/PyCQA/pylint
|
closed
|
Using --jobs affects monkey patching detection
|
Bug :beetle: topic-multiprocessing Needs PR
|
Originally reported by: **Pavel Roskin (BitBucket: [pavel_roskin](http://bitbucket.org/pavel_roskin))**
---
first.py:
```
import sys
sys.foo = 0
```
second.py:
```
import sys
sys.foo
```
`pylint -E first.py second.py`
No output
`pylint -E --jobs=2 first.py second.py`
```
************* Module second
E: 2, 0: Module 'sys' has no 'foo' member (no-member)
```
I'm actually fine if pylint reports that error in every case, as long as there is no `import first` in second.py before `sys.foo` is used. The original code that triggered the error message is overengineered and needs fixing.
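For readers unfamiliar with the pattern, here is a self-contained illustration (the two files collapsed into one runnable script for brevity) of the monkey patching that `--jobs` splits across workers:
```
# Collapses first.py and second.py into one script. With --jobs=2, pylint
# analyses the two files in separate worker processes, so the worker that
# checks second.py never observes the attribute that first.py adds to sys.
import sys

sys.foo = 0      # first.py: patch a stdlib module at import time
print(sys.foo)   # second.py: rely on the patched attribute (fine at runtime)
```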
---
- Bitbucket: https://bitbucket.org/logilab/pylint/issue/502
|
1.0
|
Using --jobs affects monkey patching detection - Originally reported by: **Pavel Roskin (BitBucket: [pavel_roskin](http://bitbucket.org/pavel_roskin))**
---
first.py:
```
import sys
sys.foo = 0
```
second.py:
```
import sys
sys.foo
```
`pylint -E first.py second.py`
No output
`pylint -E --jobs=2 first.py second.py`
```
************* Module second
E: 2, 0: Module 'sys' has no 'foo' member (no-member)
```
I'm actually fine if pylint reports that error in every case, as long as there is no `import first` in second.py before `sys.foo` is used. The original code that triggered the error message is overengineered and needs fixing.
---
- Bitbucket: https://bitbucket.org/logilab/pylint/issue/502
|
process
|
using jobs affects monkey patching detection originally reported by pavel roskin bitbucket first py import sys sys foo second py import sys sys foo pylint e first py second py no output pylint e jobs first py second py module second e module sys has no foo member no member i m actually fine if pylint reports that error in every case as long as there is no import first in second py before sys foo is used the original code that triggered the error message is overengineered and needs fixing bitbucket
| 1
|
378,351
| 26,293,228,230
|
IssuesEvent
|
2023-01-08 17:14:36
|
coq/coq
|
https://api.github.com/repos/coq/coq
|
closed
|
Same word "class" for both coercion/scope classes and type classes is source of confusion
|
kind: documentation kind: question
|
See discussion on coq-club [here](https://sympa.inria.fr/sympa/arc/coq-club/2017-11/msg00047.html).
Maybe these tags used for classifying types for coercion and scopes could be renamed "type pattern", "type skeleton", ...?
One issue however is that the names `FunClass` and `SortClass` are already in use. So, maybe rather something like "type pattern class", so as to keep the word "class" in the name?
Any good idea?
|
1.0
|
Same word "class" for both coercion/scope classes and type classes is source of confusion - See discussion on coq-club [here](https://sympa.inria.fr/sympa/arc/coq-club/2017-11/msg00047.html).
Maybe these tags used for classifying types for coercion and scopes could be renamed "type pattern", "type skeleton", ...?
One issue however is that the names `FunClass` and `SortClass` are already in use. So, maybe rather something like "type pattern class", so as to keep the word "class" in the name?
Any good idea?
|
non_process
|
same word class for both coercion scope classes and type classes is source of confusion see discussion on coq club maybe these tags used for classifying types for coercion and scopes could be renamed type pattern type skeleton one issue however is that the names funclass and sortclass are already in use so maybe rather something like type pattern class so as to keep the word class in the name any good idea
| 0
|
14,168
| 17,086,189,836
|
IssuesEvent
|
2021-07-08 12:10:33
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Responsive issue in Enrollment registry screen
|
Bug P2 Participant manager Process: Fixed Process: Reopened Process: Tested dev
|
The responsive issue in the Enrollment registry screen

|
3.0
|
[PM] Responsive issue in Enrollment registry screen - The responsive issue in the Enrollment registry screen

|
process
|
responsive issue in enrollment registry screen the responsive issue in the enrollment registry screen
| 1
|
129,171
| 18,071,072,618
|
IssuesEvent
|
2021-09-21 03:02:23
|
Dima2022/JS-Demo
|
https://api.github.com/repos/Dima2022/JS-Demo
|
opened
|
CVE-2012-6708 (Medium) detected in jquery-1.4.4.min.js
|
security vulnerability
|
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.4.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p>
<p>Path to dependency file: JS-Demo/node_modules/.staging/selenium-webdriver-059d9ca0/lib/test/data/draggableLists.html</p>
<p>Path to vulnerable library: /node_modules/.staging/selenium-webdriver-059d9ca0/lib/test/data/js/jquery-1.4.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.4.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/JS-Demo/commit/d2b50a157c9dcc579fb01370d66876e9f4472962">d2b50a157c9dcc579fb01370d66876e9f4472962</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
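To make the behavioural change concrete, here is a small Python rendering of the two detection heuristics described above (illustrative only; jQuery itself is JavaScript and this is not its source):
```
# The detection change described in the advisory text, transcribed to Python.
def looks_like_html_pre_190(s: str) -> bool:
    return "<" in s                 # '<' anywhere => treated as HTML

def looks_like_html_fixed(s: str) -> bool:
    return s.startswith("<")        # must start with '<' to be HTML

payload = "#menu <img src=x onerror=alert(1)>"
print(looks_like_html_pre_190(payload))  # True  -> parsed as HTML (XSS risk)
print(looks_like_html_fixed(payload))    # False -> treated as a selector
```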
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.4.4","packageFilePaths":["/node_modules/.staging/selenium-webdriver-059d9ca0/lib/test/data/draggableLists.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:1.4.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2012-6708","vulnerabilityDetails":"jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the \u0027\u003c\u0027 character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the \u0027\u003c\u0027 character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2012-6708 (Medium) detected in jquery-1.4.4.min.js - ## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.4.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p>
<p>Path to dependency file: JS-Demo/node_modules/.staging/selenium-webdriver-059d9ca0/lib/test/data/draggableLists.html</p>
<p>Path to vulnerable library: /node_modules/.staging/selenium-webdriver-059d9ca0/lib/test/data/js/jquery-1.4.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.4.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/JS-Demo/commit/d2b50a157c9dcc579fb01370d66876e9f4472962">d2b50a157c9dcc579fb01370d66876e9f4472962</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.4.4","packageFilePaths":["/node_modules/.staging/selenium-webdriver-059d9ca0/lib/test/data/draggableLists.html"],"isTransitiveDependency":false,"dependencyTree":"jquery:1.4.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2012-6708","vulnerabilityDetails":"jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the \u0027\u003c\u0027 character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the \u0027\u003c\u0027 character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file js demo node modules staging selenium webdriver lib test data draggablelists html path to vulnerable library node modules staging selenium webdriver lib test data js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree jquery isminimumfixversionavailable true minimumfixversion jquery basebranches vulnerabilityidentifier cve vulnerabilitydetails jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common vulnerabilityurl
| 0
|
229,227
| 18,286,662,431
|
IssuesEvent
|
2021-10-05 11:04:24
|
DILCISBoard/eark-ip-test-corpus
|
https://api.github.com/repos/DILCISBoard/eark-ip-test-corpus
|
closed
|
CSIP66 Test Case Description
|
test case corpus package
|
**Specification:**
- **Name:** E-ARK CSIP
- **Version:** 2.0-DRAFT
- **URL:** http://earkcsip.dilcis.eu/
**Requirement:**
- **Id:** CSIP66
- **Link:** http://earkcsip.dilcis.eu/#CSIP66
**Error Level:** ERROR
**Description:**
CSIP66 | File fileSec/fileGrp/file | File mets/fileSec/fileGrp/file The file group (<fileGrp>) contains the file elements which describe the file objects. | 1..n MUST
-- | -- | -- | --
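As a sketch of the structure this requirement describes, the fragment can be built like so (illustrative IDs and USE attribute, not mandated by CSIP66; assumes lxml):
```
# Sketch: build a mets/fileSec/fileGrp with 1..n file children using lxml.
from lxml import etree

METS_NS = "http://www.loc.gov/METS/"
M = "{%s}" % METS_NS

file_sec = etree.Element(M + "fileSec", nsmap={None: METS_NS})
file_grp = etree.SubElement(file_sec, M + "fileGrp", USE="Documentation")
for file_id in ("file-001", "file-002"):   # at least one <file> is required
    etree.SubElement(file_grp, M + "file", ID=file_id)

print(etree.tostring(file_sec, pretty_print=True).decode())
```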
|
1.0
|
CSIP66 Test Case Description - **Specification:**
- **Name:** E-ARK CSIP
- **Version:** 2.0-DRAFT
- **URL:** http://earkcsip.dilcis.eu/
**Requirement:**
- **Id:** CSIP66
- **Link:** http://earkcsip.dilcis.eu/#CSIP66
**Error Level:** ERROR
**Description:**
CSIP66 | File fileSec/fileGrp/file | File mets/fileSec/fileGrp/file The file group (<fileGrp>) contains the file elements which describe the file objects. | 1..n MUST
-- | -- | -- | --
|
non_process
|
test case description specification name e ark csip version draft url requirement id link error level error description file filesec filegrp file file mets filesec filegrp file the file group contains the file elements which describe the file objects n must
| 0
|
6,697
| 9,813,854,304
|
IssuesEvent
|
2019-06-13 08:56:58
|
cropmapteam/Scotland-crop-map
|
https://api.github.com/repos/cropmapteam/Scotland-crop-map
|
closed
|
Write script to mask out digitised radar interference RFI locations
|
GIS process
|
Team members will digitise, in QGIS, polygons describing the location of radar interference noise present in the radar data. The script should take the polygons and, in the associated image, mask out the pixels falling within the digitised polygons as nodata.
This is an alternative to #22.
The manually digitised polygons should be retained, as they could provide possible training data for #22.
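A minimal sketch of such a script (illustrative paths; assumes the polygons arrive as a shapefile, the imagery as GeoTIFF, and that rasterio/fiona are available):
```
# Sketch: set pixels under the digitised RFI polygons to nodata.
import fiona
import rasterio
from rasterio.mask import mask

with fiona.open("rfi_polygons.shp") as polygons:            # illustrative path
    shapes = [feature["geometry"] for feature in polygons]

with rasterio.open("radar_scene.tif") as src:               # illustrative path
    # invert=True masks the pixels *inside* the shapes, not outside them
    masked, _ = mask(src, shapes, invert=True, nodata=src.nodata)
    profile = src.profile

with rasterio.open("radar_scene_masked.tif", "w", **profile) as dst:
    dst.write(masked)
```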
|
1.0
|
Write script to mask out digitised radar interference RFI locations - Team members will digitise, in QGIS, polygons describing the location of radar interference noise present in the radar data. The script should take the polygons and, in the associated image, mask out the pixels falling within the digitised polygons as nodata.
This is an alternative to #22.
The manually digitised polygons should be retained, as they could provide possible training data for #22.
|
process
|
write script to mask out digitised radar interference rfi locations team members will digitise in qgis polygons describing the location of radar interference noise present in the radar data script should take the polygons and mask out in the associated image the pixels falling within the digitised polygons as nodata this is an alternative to the manually digitised polygons should be retained as they could provide possible training data for
| 1
|
129,146
| 5,089,356,298
|
IssuesEvent
|
2017-01-01 15:07:20
|
chartjs/Chart.js
|
https://api.github.com/repos/chartjs/Chart.js
|
closed
|
Possible to group a stacked bar chart?
|
Category: Enhancement Help wanted Priority: p1
|
Is it possible to group a set of stacked bars in this version of ChartJS? I've got a fiddle of a triple stacked bar with 3 datasets, and I would like to group them together so I can add another set of 3 stacked bars. Hopefully this makes sense.
https://jsfiddle.net/7oo4ugbj/

|
1.0
|
Possible to group a stacked bar chart? - Is it possible to group a set of stacked bars in this version of ChartJS? I've got a fiddle of a triple stacked bar with 3 datasets, and I would like to group them together so I can add another set of 3 stacked bars. Hopefully this makes sense.
https://jsfiddle.net/7oo4ugbj/

|
non_process
|
possible to group a stacked bar chart is it possible to group a set of stacked bars in this version of chartjs i ve got a fiddle of a triple stacked bar with datasets and i would like to group them together so i can add another set of stacked bars hopefully this makes sense
| 0
|
10,260
| 13,111,016,474
|
IssuesEvent
|
2020-08-04 21:55:00
|
SCIInstitute/Seg3D
|
https://api.github.com/repos/SCIInstitute/Seg3D
|
opened
|
Off-axis data handling and visualization
|
data renderer software processes usability volume rendering
|
Related to #365, but with the ability to actually handle and visualize the data in Seg3D.
Features to include:
Remapping
Arbitrary axis visualization
Handle going through slices when they aren't on the same plane
*VTK could help with these features. Related to #363.
|
1.0
|
Off-axis data handling and visualization - Related to #365, but with the ability to actually handle and visualize the data in Seg3D.
Features to include:
Remapping
Arbitrary axis visualization
Handle going through slices when they aren't on the same plane
*VTK could help with these features. Related to #363.
|
process
|
off axis data handling and visualization related to but with the ability to actually handle and visualize the data in features to include remapping arbitrary axis visualization handle going through slices when they aren t on the same plane vtk could help with these features related to
| 1
|
166,347
| 26,341,503,907
|
IssuesEvent
|
2023-01-10 18:03:34
|
dotnet/winforms
|
https://api.github.com/repos/dotnet/winforms
|
closed
|
ListView Item: Icon picture quality is too low
|
help wanted :construction: work in progress area: VS designer area: listview
|
### Environment
Visual Studio: 17.1.2
### .NET version
ALL
### Did this work in a previous version of Visual Studio and/or previous .NET release?
NO
### Issue description

### Steps to reproduce
Nothing
### Diagnostics
```text
Nothing
```
|
1.0
|
ListView Item: Icon picture quality is too low - ### Environment
Visual Studio: 17.1.2
### .NET version
ALL
### Did this work in a previous version of Visual Studio and/or previous .NET release?
NO
### Issue description

### Steps to reproduce
Nothing
### Diagnostics
```text
Nothing
```
|
non_process
|
listview item icon picture quality is too low environment visual studio net version all did this work in a previous version of visual studio and or previous net release no issue description steps to reproduce nothing diagnostics text nothing
| 0
|
5,531
| 8,390,888,659
|
IssuesEvent
|
2018-10-09 13:47:30
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
Centerline angle is biased on edge when using NURBS
|
bug sct_process_segmentation
|
Still need to find data... More to come later
|
1.0
|
Centerline angle is biased on edge when using NURBS - Still need to find data... More to come later
|
process
|
centerline angle is biased on edge when using nurbs still need to find data more to come later
| 1
|
3,309
| 6,412,340,597
|
IssuesEvent
|
2017-08-08 02:47:27
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child process : unexpected exit code when process is out of memory
|
in progress memory process
|
[edit by @refack]
suggested fix https://github.com/nodejs/node/issues/12271#issuecomment-304607842
[end edit]
Version: 7.8.0
Platform: Windows 10 64
Subsystem: Child processes
I fork some child processes to do some intensive calculations.
Because I likely have some memory leaks, I pass the following arguments to the forked processes :
```js
execArgv: process.execArgv.concat(['--expose-gc', '--max-executable-size=192', '--max-old-space-size=256', '--max-semi-space-size=2'])
```
and as expected, the processes run out of memory after a given time, with the following error :
```
<--- Last few GCs --->
[13196:000001C7DD2B3520] 120623 ms: Mark-sweep 249.8 (275.1) -> 249.8 (275.1) MB, 367.6 / 0.2 ms deserialize GC in old space requested
[13196:000001C7DD2B3520] 121001 ms: Mark-sweep 249.8 (275.1) -> 249.8 (275.1) MB, 376.2 / 0.4 ms deserialize GC in old space requested
[13196:000001C7DD2B3520] 121428 ms: Mark-sweep 249.8 (275.1) -> 249.8 (275.1) MB, 426.4 / 0.4 ms deserialize GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 000001994CFA66A1 <JS Object>
1: createContext [vm.js:~41] [pc=0000029DC91AFC50](this=00000344694A26E9 <an Object with map 0000015E4FEAACB9>,sandbox=00000164890109D1 <an Object with map 000002F68947DEB1>)
2: createSandbox(aka createSandbox) [C:\node-projects\payroll-app\server\payrollEngine\PayrollEngine.js:~750] [pc=0000029DC91C93C4](this=000001C562102201 <null>,bul=000000F3E1CB4639 <an Object with map 000000BA66...
FATAL ERROR: deserialize context Allocation failed - process out of memory
DEBUG 17:02:28 : manageCalcProcess.onChildExit() : child process 13196 exited with code 3
```
What has surprised me is that the child process exits with a code 3.
Per [docs](https://nodejs.org/api/process.html#process_exit_codes), code 3 seems pretty vague, and not related to memory shortage.
I wonder if it's intentional, and if it would be more useful to have a specific exit code in that case.
Edit: Perhaps it's because of the vm.createContext() call, which messes up the code?
|
1.0
|
child process : unexpected exit code when process is out of memory - [edit by @refack]
suggested fix https://github.com/nodejs/node/issues/12271#issuecomment-304607842
[end edit]
Version: 7.8.0
Platform: Windows 10 64
Subsystem: Child processes
I fork some child processes to do some intensive calculations.
Because I likely have some memory leaks, I pass the following arguments to the forked processes :
```js
execArgv: process.execArgv.concat(['--expose-gc', '--max-executable-size=192', '--max-old-space-size=256', '--max-semi-space-size=2'])
```
and as expected, the processes run out of memory after a given time, with the following error :
```
<--- Last few GCs --->
[13196:000001C7DD2B3520] 120623 ms: Mark-sweep 249.8 (275.1) -> 249.8 (275.1) MB, 367.6 / 0.2 ms deserialize GC in old space requested
[13196:000001C7DD2B3520] 121001 ms: Mark-sweep 249.8 (275.1) -> 249.8 (275.1) MB, 376.2 / 0.4 ms deserialize GC in old space requested
[13196:000001C7DD2B3520] 121428 ms: Mark-sweep 249.8 (275.1) -> 249.8 (275.1) MB, 426.4 / 0.4 ms deserialize GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 000001994CFA66A1 <JS Object>
1: createContext [vm.js:~41] [pc=0000029DC91AFC50](this=00000344694A26E9 <an Object with map 0000015E4FEAACB9>,sandbox=00000164890109D1 <an Object with map 000002F68947DEB1>)
2: createSandbox(aka createSandbox) [C:\node-projects\payroll-app\server\payrollEngine\PayrollEngine.js:~750] [pc=0000029DC91C93C4](this=000001C562102201 <null>,bul=000000F3E1CB4639 <an Object with map 000000BA66...
FATAL ERROR: deserialize context Allocation failed - process out of memory
DEBUG 17:02:28 : manageCalcProcess.onChildExit() : child process 13196 exited with code 3
```
What has surprised me is that the child process exits with a code 3.
Per [docs](https://nodejs.org/api/process.html#process_exit_codes), code 3 seems pretty vague, and not related to memory shortage.
I wonder if it's intentional, and if it would be more useful to have a specific exit code in that case.
Edit: Perhaps it's because of the vm.createContext() call, which messes up the code?
|
process
|
child process unexpected exit code when process is out of memory suggested fix version platform windows subsystem child processes i fork some child processes to do some intensive calculations because i likely have some memory leaks i pass the following arguments to the forked processes js execargv process execargv concat and as expected the processes run out of memory after a given time with the following error ms mark sweep mb ms deserialize gc in old space requested ms mark sweep mb ms deserialize gc in old space requested ms mark sweep mb ms deserialize gc in old space requested js stack trace security context createcontext this sandbox createsandbox aka createsandbox this bul an object with map fatal error deserialize context allocation failed process out of memory debug managecalcprocess onchildexit child process exited with code what has surprised me is that the child process exits with a code per code seems pretty vague and not related to memory shortage i wonder if it s intentional and if it would be more useful to have a specific exit code in that case edit perhaps it s because of the vm createcontext which mess the code
| 1
|
1,968
| 4,788,346,483
|
IssuesEvent
|
2016-10-30 14:30:06
|
dataproofer/Dataproofer
|
https://api.github.com/repos/dataproofer/Dataproofer
|
closed
|
Create JSON report export
|
engine: processing medium suite: core
|
Great idea from @fil here: https://github.com/dataproofer/Dataproofer/issues/64#issuecomment-218728978
When using Dataproofer in a CLI context, it is most useful in a workflow if the results are given in a way that can be automated or used by other applications.
One way to approach this is to create a JSON template that can be exported that contains the results of the tests that ran.
The other is to return extremely simple, repeatable values depending on command flags passed to the dataproofer CLI. For example "dataproofer --boolean-report file.csv" might simply return "1" or "0" depending on whether all the tests pass or fail.
|
1.0
|
Create JSON report export - Great idea from @fil here: https://github.com/dataproofer/Dataproofer/issues/64#issuecomment-218728978
When using Dataproofer in a CLI context, it is most useful in a workflow if the results are given in a way that can be automated or used by other applications.
One way to approach this is to create a JSON template that can be exported that contains the results of the tests that ran.
The other is to return extremely simple, repeatable values depending on command flags passed to the dataproofer CLI. For example "dataproofer --boolean-report file.csv" might simply return "1" or "0" depending on whether all the tests pass or fail.
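A rough sketch of the two proposed output modes (the real CLI is JavaScript, so this Python is illustrative only; the flag names and report shape are assumptions rather than the actual Dataproofer interface):
```python
import json
import sys

# pretend these came from running the test suites against file.csv
results = [
    {"test": "missing-values", "passed": True},
    {"test": "duplicate-rows", "passed": False},
]

if "--json-report" in sys.argv:
    # machine-readable template for use by other applications
    print(json.dumps({"file": "file.csv", "results": results}, indent=2))
elif "--boolean-report" in sys.argv:
    # "1" if every test passes, "0" otherwise
    print("1" if all(r["passed"] for r in results) else "0")
```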
|
process
|
create json report export great idea from fil here when using dataproofer in a cli context it is most useful in a workflow if the results are given in a way that can be automated or used by other applications one way to approach this is to create a json template that can be exported that contains the results of the tests that ran the other is to return extremely simple repeatable values depending on command flags passed to the dataproofer cli for example dataproofer boolean report file csv might simply return or depending on whether all the tests pass or fail
| 1
|
45,061
| 11,581,372,223
|
IssuesEvent
|
2020-02-21 22:26:57
|
rook/rook
|
https://api.github.com/repos/rook/rook
|
closed
|
Add CII Best Practices Badge (CNCF requirement)
|
CNCF requirement build
|
A CNCF graduation requirement is to follow the CII Best Practices Badging:
https://bestpractices.coreinfrastructure.org/
|
1.0
|
Add CII Best Practices Badge (CNCF requirement) - A CNCF graduation requirement is to follow the CII Best Practices Badging:
https://bestpractices.coreinfrastructure.org/
|
non_process
|
add cii best practices badge cncf requirement a cncf graduation requirement is to follow the cii best practices badging
| 0
|
74,240
| 9,007,880,326
|
IssuesEvent
|
2019-02-05 00:54:55
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Text cursor doesn't blink when the focus is requested programmatically within onSubmitted callback
|
a: text input f: material design framework
|
I wanted to retain keyboard focus in a `TextField` even after the text is submitted, so that I can type in multiple messages in a row in a chat app. The default behavior is to unfocus when the text is submitted, so I added a `FocusScope.of(context).requestFocus()` call in my `handleSubmit` callback.
With this change, after submitting the text, the text field goes into a weird state where it has focus (i.e. I can type text) but the cursor isn't blinking.
Below is the minimal example code that reproduces this issue.
```dart
import 'package:flutter/material.dart';
void main() {
runApp(new MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return new MaterialApp(
home: new MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key}) : super(key: key);
@override
_MyHomePageState createState() => new _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
final TextEditingController _controller = new TextEditingController();
final FocusNode _focusNode = new FocusNode();
@override
Widget build(BuildContext context) {
return new Scaffold(
body: new Center(
child: new Container(
width: 300.0,
child: new TextField(
controller: _controller,
focusNode: _focusNode,
onSubmitted: (String text) {
print(text);
_controller.clear();
FocusScope.of(context).requestFocus(_focusNode);
},
),
),
),
);
}
}
```
|
1.0
|
Text cursor doesn't blink when the focus is requested programmatically within onSubmitted callback - I wanted to retain keyboard focus in a `TextField` even after the text is submitted, so that I can type in multiple messages in a row in a chat app. The default behavior is to unfocus when the text is submitted, so I added a `FocusScope.of(context).requestFocus()` call in my `handleSubmit` callback.
With this change, after submitting the text, the text field goes into a weird state where it has focus (i.e. I can type text) but the cursor isn't blinking.
Below is the minimal example code that reproduces this issue.
```dart
import 'package:flutter/material.dart';
void main() {
runApp(new MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return new MaterialApp(
home: new MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key}) : super(key: key);
@override
_MyHomePageState createState() => new _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
final TextEditingController _controller = new TextEditingController();
final FocusNode _focusNode = new FocusNode();
@override
Widget build(BuildContext context) {
return new Scaffold(
body: new Center(
child: new Container(
width: 300.0,
child: new TextField(
controller: _controller,
focusNode: _focusNode,
onSubmitted: (String text) {
print(text);
_controller.clear();
FocusScope.of(context).requestFocus(_focusNode);
},
),
),
),
);
}
}
```
|
non_process
|
text cursor doesn t blink when the focus is requested programmatically within onsubmitted callback i wanted to retain keyboard focus in a textfield even after the text is submitted so that i can type in multiple messages in a row in a chat app the default behavior is to unfocus when the text is submitted so i added a focusscope of context requestfocus call in my handlesubmit callback with this change after submitting the text the text field goes into a weird state where it has focus i e i can type text but the cursor isn t blinking below is the minimal example code that reproduces this issue dart import package flutter material dart void main runapp new myapp class myapp extends statelesswidget override widget build buildcontext context return new materialapp home new myhomepage class myhomepage extends statefulwidget myhomepage key key super key key override myhomepagestate createstate new myhomepagestate class myhomepagestate extends state final texteditingcontroller controller new texteditingcontroller final focusnode focusnode new focusnode override widget build buildcontext context return new scaffold body new center child new container width child new textfield controller controller focusnode focusnode onsubmitted string text print text controller clear focusscope of context requestfocus focusnode
| 0
|
12,406
| 14,916,061,812
|
IssuesEvent
|
2021-01-22 17:37:34
|
amor71/LiuAlgoTrader
|
https://api.github.com/repos/amor71/LiuAlgoTrader
|
closed
|
market_miner attempt to import from 'examples' which is not part of liualgotrader's python package
|
bug in-process
|
**Describe the bug**
I finally had some time again to come back to this project :-)
Unfortunately it seems that the current `market_miner` script attempts to import from 'examples', which is not part of liualgotrader's python package. Even running from the root of the clone this fails. The only workaround seems to be to copy the examples folder into the site-packages folder of the current .venv (e.g. `cp -R examples /.venv/lib/python3.8/site-packages/`)
**To Reproduce**
Steps to reproduce the behavior:
1. create a fresh venv & install liualgotrader's
```
rm -rf .venv
python3.8 -m virtualenv -p python3.8 .venv
python setup.py install
```
2 call `market_miner`
```
$ market_miner
Traceback (most recent call last):
File "/.venv/bin/market_miner", line 4, in <module>
__import__('pkg_resources').run_script('liualgotrader==0.0.86', 'market_miner')
File "/.venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 667, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/.venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1471, in run_script
exec(script_code, namespace, namespace)
File "/.venv/lib/python3.8/site-packages/liualgotrader-0.0.86-py3.8.egg/EGG-INFO/scripts/market_miner", line 16, in <module>
ModuleNotFoundError: No module named 'examples'
```
**Expected behavior**
No Exception :-)
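One possible fix, sketched under the assumption that the console scripts genuinely need the repository's `examples` package at runtime, is to ship it with the distribution; the script path below is hypothetical, and `examples/` would need an `__init__.py` for `find_packages` to pick it up.
```python
from setuptools import setup, find_packages

setup(
    name="liualgotrader",
    version="0.0.86",  # version taken from the traceback above
    # assumption: bundling examples* makes `import examples ...` resolvable
    # for installed console scripts such as market_miner
    packages=find_packages(include=["liualgotrader*", "examples*"]),
    scripts=["scripts/market_miner"],  # hypothetical script location
)
```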
|
1.0
|
market_miner attempt to import from 'examples' which is not part of liualgotrader's python package - **Describe the bug**
I finally had some time again to come back to this project :-)
Unfortunately it seems that the current `market_miner` script attempts to import from 'examples', which is not part of liualgotrader's python package. Even running from the root of the clone this fails. The only workaround seems to be to copy the examples folder into the site-packages folder of the current .venv (e.g. `cp -R examples /.venv/lib/python3.8/site-packages/`)
**To Reproduce**
Steps to reproduce the behavior:
1. create a fresh venv & install liualgotrader's
```
rm -rf .venv
python3.8 -m virtualenv -p python3.8 .venv
python setup.py install
```
2 call `market_miner`
```
$ market_miner
Traceback (most recent call last):
File "/.venv/bin/market_miner", line 4, in <module>
__import__('pkg_resources').run_script('liualgotrader==0.0.86', 'market_miner')
File "/.venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 667, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/.venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1471, in run_script
exec(script_code, namespace, namespace)
File "/.venv/lib/python3.8/site-packages/liualgotrader-0.0.86-py3.8.egg/EGG-INFO/scripts/market_miner", line 16, in <module>
ModuleNotFoundError: No module named 'examples'
```
**Expected behavior**
No Exception :-)
|
process
|
market miner attempt to import from examples which is not part of liualgotrader s python package describe the bug i finally had some time again to come back to this project unfortunately it seems that the current market miner script attempts to import from examples which is not part of liualgotrader s python package even running from the root of the clone this fails the only workaround seems to copy the examples folder into the examples folder into the site packages folder of the current venv e g cp r examples venv lib site packages to reproduce steps to reproduce the behavior create a fresh venv install liualgotrader s rm rf venv m virtualenv p venv python setup py install call market miner market miner traceback most recent call last file venv bin market miner line in import pkg resources run script liualgotrader market miner file venv lib site packages pkg resources init py line in run script self require requires run script script name ns file venv lib site packages pkg resources init py line in run script exec script code namespace namespace file venv lib site packages liualgotrader egg egg info scripts market miner line in modulenotfounderror no module named examples expected behavior no exception
| 1
|
72,150
| 8,706,887,126
|
IssuesEvent
|
2018-12-06 05:15:04
|
ServiceInnovationLab/PresenceChecker
|
https://api.github.com/repos/ServiceInnovationLab/PresenceChecker
|
closed
|
Drafting Bruteforce Rules breakdown
|
design development review
|
As a Lab
We need to be able to understand the logic of the rules and how they need to be expressed in code, so we are producing correct information.
A / C
- Map 'If this, then that' statements of the rules
- Determine the parameters
- Rules to be able to be transferred to the developers to use as a framework
|
1.0
|
Drafting Bruteforce Rules breakdown - As a Lab
We need to be able to understand the logic of the rules and how they need to be expressed in code, so we are producing correct information.
A / C
- Map 'If this, then that' statements of the rules
- Determine the parameters
- Rules to be able to be transferred to the developers to use as a framework
|
non_process
|
drafting bruteforce rules breakdown as a lab we need to be able to understand the logic of the rules and how they need to be expressed in code so we are producing correct information a c map if this then that statements of the rules determine the parameters rules to be able to be transferred to the developers to use as a framework
| 0
|
2,514
| 5,286,550,117
|
IssuesEvent
|
2017-02-08 09:40:23
|
HackBrexit/MinistersUnderTheInfluence
|
https://api.github.com/repos/HackBrexit/MinistersUnderTheInfluence
|
closed
|
Take a processed row and push that data through to the API
|
File Processing
|
### Description
After having processed a row:
Use the minister data to generate/lookup the id of the minister (using type 'person')
Use the minister's job role data to generate/lookup the id of the government office (using type 'government-office')
Use the organisation/participant column to generate/lookup the ids of the organisations and their representatives (using type 'organisation'/'person')
Use the date range and purpose data to generate a meeting id (using type 'meeting')
Combine the minister, government office and meeting ids to create a 'influence-government-office-people' entry
For each other participant combine the participant, organisation, and meeting ids to create a 'influence-organisation-people' entry
#### Comments, Questions and Considerations
Using http://en.staging.meetings.vidhya.tv/api/v1/docs for reference on what needs to be sent
### Acceptance Criteria
This story can be considered done when:
Given I have a row of data with a minister who is not yet in the database
When I have pushed the data to the API
Then it uses the logic described above
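A sketch of the lookup-or-create flow described above; the endpoint paths, attribute names, and JSON:API payload shapes are assumptions for illustration and should be checked against the API docs linked earlier.
```python
import requests

BASE = "http://en.staging.meetings.vidhya.tv/api/v1"  # from the docs link above

def lookup_or_create(entity_type, attributes):
    """Return the id of a matching entity, creating it if none exists."""
    found = requests.get(f"{BASE}/{entity_type}", params=attributes).json().get("data", [])
    if found:
        return found[0]["id"]
    created = requests.post(
        f"{BASE}/{entity_type}",
        json={"data": {"type": entity_type, "attributes": attributes}},
    )
    return created.json()["data"]["id"]

# hypothetical row: minister, their office, the meeting, then a link record
minister_id = lookup_or_create("people", {"name": "Jane Example"})
office_id = lookup_or_create("government-offices", {"name": "Example Office"})
meeting_id = lookup_or_create("meetings", {"purpose": "Budget discussion"})
lookup_or_create(
    "influence-government-office-people",
    {"person-id": minister_id, "office-id": office_id, "meeting-id": meeting_id},
)
```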
|
1.0
|
Take a processed row and push that data through to the API - ### Description
After having processed a row:
Use the minister data to generate/lookup the id of the minister (using type 'person')
Use the minister's job role data to generate/lookup the id of the government office (using type 'government-office')
Use the organisation/participant column to generate/lookup the ids of the organisations and their representatives (using type 'organisation'/'person')
Use the date range and purpose data to generate a meeting id (using type 'meeting')
Combine the minister, government office and meeting ids to create a 'influence-government-office-people' entry
For each other participant combine the participant, organisation, and meeting ids to create a 'influence-organisation-people' entry
#### Comments, Questions and Considerations
Using http://en.staging.meetings.vidhya.tv/api/v1/docs for reference on what needs to be sent
### Acceptance Criteria
This story can be considered done when:
Given I have a row of data with a minister who is not yet in the database
When I have pushed the data to the API
Then it uses the logic described above
|
process
|
take a processed row and push that data through to the api description after having processed a row use the minister data to generate lookup the id of the minister using type person use the minsters job role data to generate lookup the id of the government office using type government office use the organisation participant column to generate lookup the ids of the organisations and their representatives using type organisation person use the date range and purpose data to generate a meeting id using type meeting combine the minister government office and meeting ids to create a influence government office people entry for each other participant combine the participant organisation and meeting ids to create a influence organisation people entry comments questions and considerations using for reference on what needs to be sent acceptance criteria this story can be considered done when given i have a row of data with a minister who is not yet in the database when i have pushed the data to the api then it uses the logic described above
| 1
|
17,250
| 23,033,183,078
|
IssuesEvent
|
2022-07-22 15:46:11
|
HausDAO/daohaus-monorepo
|
https://api.github.com/repos/HausDAO/daohaus-monorepo
|
closed
|
Define Review Process
|
process
|
Based on feedback from the [April 1, 2022 Retro](https://github.com/HausDAO/daohaus-monorepo/wiki/Retro:-April-1,-2022) we need to define what the Review process is in the v3 [workflow](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#workflow).
## Concerns
- Still unsure of what the review process is.
- Review is taking a decent amount of time.
- Defining our stages (ie Review -> Done)
We need to define what [Review](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#review) is and what steps need to be completed in that phase.
How do we define review for the different types of tasks?
|
1.0
|
Define Review Process - Based on feedback from the [April 1, 2022 Retro](https://github.com/HausDAO/daohaus-monorepo/wiki/Retro:-April-1,-2022) we need to define what the Review process is in the v3 [workflow](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#workflow).
## Concerns
- Still unsure of what the review process is.
- Review is taking a decent amount of time.
- Defining our stages (ie Review -> Done)
We need to define what [Review](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#review) is and what steps need to be completed in that phase.
How do we define review for the different types of tasks?
|
process
|
define review process based on feedback from the we need to define what the review process is in the concerns still unsure of what the review process is review is taking a decent amount of time defining our stages ie review done we need to define what is and what steps need to be completed in that phase how do we define review for the different types of tasks
| 1
|
21,970
| 30,464,246,868
|
IssuesEvent
|
2023-07-17 09:14:20
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Minor missing word
|
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
[Enter feedback here]
In this block (almost at the end), the word 'of' is missing, I guess:
When you specify both CI triggers and pipeline triggers in your pipeline, you can expect new runs to be started every time a push is made that matches the filters "(of)" the CI trigger, and a run of the source pipeline is completed that matches the filters of the pipeline completion trigger.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 86285f72-9e28-da97-59bb-c29eb60f627d
* Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54
* Content: [Configure pipeline triggers - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops)
* Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/pipeline-triggers.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
1.0
|
Minor missing word -
[Enter feedback here]
In this block (almost at the end), the word 'of' is missing, I guess:
When you specify both CI triggers and pipeline triggers in your pipeline, you can expect new runs to be started every time a push is made that matches the filters "(of)" the CI trigger, and a run of the source pipeline is completed that matches the filters of the pipeline completion trigger.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 86285f72-9e28-da97-59bb-c29eb60f627d
* Version Independent ID: 18d5a591-a7d3-c261-6bff-8808ae433f54
* Content: [Configure pipeline triggers - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops)
* Content Source: [docs/pipelines/process/pipeline-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/pipeline-triggers.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
process
|
minor missing word in this block almost at the end the word of is missing i guess when you specify both ci triggers and pipeline triggers in your pipeline you can expect new runs to be started every time a push is made that matches the filters of the ci trigger and a run of the source pipeline is completed that matches the filters of the pipeline completion trigger document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login microsoft alias sdanie
| 1
|
7,736
| 10,855,034,161
|
IssuesEvent
|
2019-11-13 17:31:43
|
codeuniversity/smag-mvp
|
https://api.github.com/repos/codeuniversity/smag-mvp
|
opened
|
[Interests/Pictures] Assign detected objects to groups
|
Backend Image Processing
|
When analysing the Instagram images for Interests/Preferences, we need to assign these objects to groups for categorisation. -> awaits splittered interests
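A minimal sketch of the assignment step; the object-to-group taxonomy below is an invented placeholder, pending the split-out interests mentioned above.
```python
# illustrative taxonomy only — the real groups await the interests list
OBJECT_TO_GROUP = {
    "surfboard": "sports",
    "guitar": "music",
    "pizza": "food",
}

def assign_groups(detected_objects):
    groups = {}
    for obj in detected_objects:
        group = OBJECT_TO_GROUP.get(obj, "uncategorised")
        groups.setdefault(group, []).append(obj)
    return groups

print(assign_groups(["guitar", "pizza", "drone"]))
# {'music': ['guitar'], 'food': ['pizza'], 'uncategorised': ['drone']}
```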
|
1.0
|
[Interests/Pictures] Assign detected objects to groups - When analysing the Instagram images for Interests/Preferences, we need to assign these objects to groups for categorisation. -> awaits splittered interests
|
process
|
assign detected objects to groups when analysing the instagram images for interests preferences we need to assign theses objects to groups for categorisation awaits splittered interests
| 1
|
13,153
| 15,573,132,561
|
IssuesEvent
|
2021-03-17 08:09:45
|
bitpal/bitpal_umbrella
|
https://api.github.com/repos/bitpal/bitpal_umbrella
|
opened
|
Generate a new address for each payment request
|
1.0 release Payment processor enhancement
|
Instead of reusing the same address, we should generate a new one from a public key.
We might still keep this behavior to simplify usage? But it shouldn't be recommended.
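A toy sketch of per-request derivation; a real wallet would use BIP32 xpub child derivation rather than this hash construction, which is shown only to illustrate "one fresh identifier per payment request from a single public key".
```python
import hashlib

def derive_request_id(public_key: bytes, request_index: int) -> str:
    """Toy stand-in for HD child derivation: one deterministic id per request."""
    payload = public_key + request_index.to_bytes(4, "big")
    return hashlib.sha256(payload).hexdigest()

pubkey = bytes.fromhex("02" + "11" * 32)  # placeholder compressed public key
print(derive_request_id(pubkey, 0))
print(derive_request_id(pubkey, 1))  # a different address id per request
```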
|
1.0
|
Generate a new address for each payment request - Instead of reusing the same address, we should generate a new one from a public key.
We might still keep this behavior to simplify usage? But it shouldn't be recommended.
|
process
|
generate a new address for each payment request instead of reusing the same address we should generate a new one from a public key we might still keep this behavior to simplify usage but it shouldn t be recommended
| 1
|
7,945
| 11,137,526,713
|
IssuesEvent
|
2019-12-20 19:36:11
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Update modal - update the text to provide more information
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Applicants
What: Update the update application modal
Why: in order to provide additional information
Acceptance Criteria:
Update the update-application modal to provide more information about the closing time (EST), and make clear that after updating, applicants must submit the application again
New content:
4. Review your application and click **Submit application**. You must submit changes before [insert closing date] at 11:59 p.m. EST.
Current Screen Shot:

|
1.0
|
Update modal - update the text to provide more information - Who: Applicants
What: Update the update application modal
Why: in order to provide additional information
Acceptance Criteria:
Update the update-application modal to provide more information about the closing time (EST), and make clear that after updating, applicants must submit the application again
New content:
4. Review your application and click **Submit application**. You must submit changes before [insert closing date] at 11:59 p.m. EST.
Current Screen Shot:

|
process
|
update modal update the text to provide more information who applicants what update the update application modal why in order to provide additional information acceptance criteria update the update application modal to provide more information related to closing time est and if you update make sure you submit new content review your application and click submit application you must submit changes before at p m est current screen shot
| 1
|
5,940
| 8,766,773,781
|
IssuesEvent
|
2018-12-17 17:44:07
|
matth3us/arisScripts
|
https://api.github.com/repos/matth3us/arisScripts
|
closed
|
Study the model printing scripts
|
imprimir processo
|
Study the current model printing scripts and try to keep only one of them working. Test in the publisher.
|
1.0
|
Study the model printing scripts - Study the current model printing scripts and try to keep only one of them working. Test in the publisher.
|
process
|
estudar scripts de impressão de modelos estudar scripts atuais de impressão de modelo e buscar manter apenas um funcionando testar no publisher
| 1
|
3,880
| 6,817,717,413
|
IssuesEvent
|
2017-11-07 00:54:07
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Chain reorgs
|
libs-etherlib status-inprocess type-question
|
I’m not sure where to ask this. I’m running a node, but I’m not mining. Periodically, I retrieve block and transaction data using RPC. I then save that data locally. I hear all this talk about re-orgs, so I would expect that once in a while, some of the data I pull would be “re-orged”.
I should save data for X blocks, and then spin and re-grab the data fresh to check for re-orgs. This will be a big deal if the blocks are being re-orged, but I am not reflecting that.
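A minimal sketch of that re-check loop over JSON-RPC, assuming a local node; the stored hashes are illustrative placeholders.
```python
import requests

RPC = "http://localhost:8545"  # assumption: local node with JSON-RPC enabled

def block_hash(number):
    resp = requests.post(RPC, json={
        "jsonrpc": "2.0", "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [hex(number), False],
    })
    return resp.json()["result"]["hash"]

# re-check the last X locally saved blocks against the node
saved = {7000000: "0xaaa...", 7000001: "0xbbb..."}  # illustrative stored hashes
for number, stored in saved.items():
    if block_hash(number) != stored:
        print(f"block {number} was re-orged; re-fetch and overwrite the local copy")
```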
From: https://github.com/Great-Hill-Corporation/ethslurp/issues/146
|
1.0
|
Chain reorgs - I’m not sure where to ask this. I’m running a node, but I’m not mining. Periodically, I retrieve block and transaction data using RPC. I then save that data locally. I hear all this talk about re-orgs, so I would expect that once in a while, some of the data I pull would be “re-orged”.
I should save data for X blocks, and then spin and re-grab the data fresh to check for re-orgs. This will be a big deal if the blocks are being re-orged, but I am not reflecting that.
From: https://github.com/Great-Hill-Corporation/ethslurp/issues/146
|
process
|
chain reorgs i’m not sure where to ask this i’m running a node but i’m not mining periodically i retreive block and transaction data using rpc it then save that data locally i hear all this talk about re orgs so i would expect that once in a while some of the data i pull would be “re orged i should save data for x blocks and then spin and re grab data fresh to check for re orgs this will be a big deal if the blocks are being re orged but i am not reflecting that from
| 1
|
416,561
| 28,088,519,252
|
IssuesEvent
|
2023-03-30 11:29:27
|
MLH-Fellowship/pyre-check
|
https://api.github.com/repos/MLH-Fellowship/pyre-check
|
closed
|
[Fall 2021] Step 3: Add Python and third-party library version information to our Pysa models
|
documentation Fall 2021 step 3
|
Pysa models are closely related to the Python code they're modeling. They look very similar to type stubs, and rely on annotating specific function names, parameters, or other values. Unfortunately, things like function names and parameters are subject to change across library versions, whether this is the standard Python library, or a third party library.
Right now there is no information about what Python version or what library version our models are for. This means it can be very confusing when someone installs Pysa and is running a different version of Python than the ones our models are for, as we have seen.
The goal of this project is to support annotating Pysa models with a special type of comment, i.e. shebang-style annotations. Similar to how Python and library versions are specified in requirements.txt files, we want to support the same syntax in Pysa model files. Please see https://www.python.org/dev/peps/pep-0508/ and https://www.python.org/dev/peps/pep-0440/#version-specifiers for how version information should be specified.
1. A global version annotation at the top of a .pysa file will apply to all models in that file.
```
#! python == 3.8
def foo.bar(x: TaintSink[Test]): ...
def foo.baz(y: TaintSink[Test]): ...
```
2. Inline annotations can override the global version annotation:
```
#! python == 3.8
def foo.bar(x: TaintSink[Test]): ... #! python >= 3.7, foo >= 1.1
def foo.baz(y: TaintSink[Test]): ...
```
3. Finally, we want to have some pre-processing step before we run Pysa that checks the user's current Python environment and installed modules, and skips or disables models if there is a version mismatch or the version requirement is not satisfied.
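A minimal sketch of that pre-processing check, using the `packaging` library for PEP 440 specifiers; parsing of the `#!` comments is elided and the requirement strings are illustrative.
```python
import platform
from importlib.metadata import PackageNotFoundError, version
from packaging.specifiers import SpecifierSet

def requirement_satisfied(package: str, spec: str) -> bool:
    """True when the installed version of `package` satisfies `spec`."""
    if package == "python":
        installed = platform.python_version()
    else:
        try:
            installed = version(package)
        except PackageNotFoundError:
            return False  # library absent: skip models that require it
    return installed in SpecifierSet(spec)

print(requirement_satisfied("python", ">= 3.7"))
print(requirement_satisfied("foo", ">= 1.1"))  # disable the model on mismatch
```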
#### Additional follow up tasks
1. Add version annotations for all of our current models.
2. Add documentation for the new version annotation features
|
1.0
|
[Fall 2021] Step 3: Add Python and third-party library version information to our Pysa models - Pysa models are closely related to the Python code they're modeling. They look very similar to type stubs, and rely on annotating specific function names, parameters, or other values. Unfortunately, things like function names and parameters are subject to change across library versions, whether this is the standard Python library, or a third party library.
Right now there is no information about what Python version or what library version our models are for. This means it can be very confusing when someone installs Pysa and is running a different version of Python than the ones our models are for, as we have seen.
The goal of this project is to support annotating Pysa models with a special type of comment, i.e. shebang-style annotations. Similar to how Python and library versions are specified in requirements.txt files, we want to support the same syntax in Pysa model files. Please see https://www.python.org/dev/peps/pep-0508/ and https://www.python.org/dev/peps/pep-0440/#version-specifiers for how version information should be specified.
1. A global version annotation at the top of a .pysa file will apply to all models in that file.
```
#! python == 3.8
def foo.bar(x: TaintSink[Test]): ...
def foo.baz(y: TaintSink[Test]): ...
```
2. Inline annotations can override the global version annotation:
```
#! python == 3.8
def foo.bar(x: TaintSink[Test]): ... #! python >= 3.7, foo >= 1.1
def foo.baz(y: TaintSink[Test]): ...
```
3. Finally, we want to have some pre-processing step before we run Pysa that checks the user's current Python environment and installed modules, and skips or disables models if there is a version mismatch or the version requirement is not satisfied.
#### Additional follow up tasks
1. Add version annotations for all of our current models.
2. Add documentation for the new version annotation features
|
non_process
|
step add python and third party library version information to our pysa models pysa models are closely related to the python code they re modeling they look very similar to type stubs and rely on annotating specific function names parameters or other values unfortunately things like function names and parameters are subject to change across library versions whether this is the standard python library or a third party library right now there is no information about what python version or what library version our models are for this means it can be very confusing when someone installs pysa and is running a different version of python to the ones our models are for since we can see the goal of this project is to support annotating pysa models with a special type of comment i e shebang style annotations similar to how python and library versions are specified in requirements txt files we want to support the same syntax in pysa model files please see and for how version information should be specified a global version annotation at the top of a pysa file will apply to all models in that file python def foo bar x taintsink def foo baz y taintsink inline annotations can override the global version annotation python def foo bar x taintsink python foo def foo baz y taintsink finally we want to have some pre processing step before we run pysa that checks the user s currently python environment and installed modules and skips or disables models if there is a version mismatch or the version requirement is not satisfied additional follow up tasks add version annotations for all of our current models add documentation for the new version annotation features
| 0
|
104,500
| 4,212,457,128
|
IssuesEvent
|
2016-06-29 16:19:43
|
jgne/bubbles
|
https://api.github.com/repos/jgne/bubbles
|
closed
|
Create Landing Page for Exhibitions
|
complete priority - 1
|
- [x] Dropdown Exhibition Centre
- [x] Dropdown of exhibitions and dates they will be present e.g. Healthcare Conf, Excel, 14th-15th June
- [x] Make clear as possible as needs to be editable
|
1.0
|
Create Landing Page for Exhibitions - - [x] Dropdown Exhibition Centre
- [x] Dropdown of exhibitions and dates they will be present e.g. Healthcare Conf, Excel, 14th-15th June
- [x] Make clear as possible as needs to be editable
|
non_process
|
create landing page for exhibitions dropdown exhibtion centre dropdown of exhibitions and dates they will be present e g healthcare conf excel june make clear as possible as needs to be editable
| 0
|
70,751
| 8,577,451,386
|
IssuesEvent
|
2018-11-13 00:03:21
|
jonfroehlich/makeabilitylabwebsite
|
https://api.github.com/repos/jonfroehlich/makeabilitylabwebsite
|
opened
|
Fix top of member.html
|
People Page UI Design
|
Lots of things wrong:
- [ ] Remove banner
- [ ] When at top of page in non-scroll position, make menu bar white, text black, and ml logo black
- [ ] Change all icons (e.g., twitter, mail icon) to be black
- [ ] Fix web link icon so that it's not showing the little text at the bottom or find a different icon to use
- [ ] I don't like how the bio information is shown. What to do with this? Just in general this whole headshot -> name + links -> bio feels weird in its layout and design

|
1.0
|
Fix top of member.html - Lots of things wrong:
- [ ] Remove banner
- [ ] When at top of page in non-scroll position, make menu bar white, text black, and ml logo black
- [ ] Change all icons (e.g., twitter, mail icon) to be black
- [ ] Fix web link icon so that it's not showing the little text at the bottom or find a different icon to use
- [ ] I don't like how the bio information is shown. What to do with this? Just in general this whole headshot -> name + links -> bio feels weird in its layout and design

|
non_process
|
fix top of member html lots of things wrong remove banner when at top of page in non scroll position make menu bar white text black and ml logo black change all icons e g twitter mail icon to be black fix web link icon so that it s not showing the little text at the bottom or find a different icon to use i don t like how the bio information is shown what to do with this just in general this whole headshot name links bio feels weird in its layout and design
| 0
|
49,982
| 13,495,790,797
|
IssuesEvent
|
2020-09-12 01:01:24
|
hammondjm/ksa
|
https://api.github.com/repos/hammondjm/ksa
|
opened
|
WS-2014-0034 (High) detected in commons-fileupload-1.2.2.jar
|
security vulnerability
|
## WS-2014-0034 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.2.2.jar</b></p></summary>
<p>The FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Path to dependency file: /tmp/ws-scm/ksa/ksa-web-root/ksa-bd-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/ksa/ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.31.jar (Root Library)
- :x: **commons-fileupload-1.2.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/hammondjm/ksa/commits/5a3799544bbdfbed38c2c8191a9866ba18bc9768">5a3799544bbdfbed38c2c8191a9866ba18bc9768</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The class FileUploadBase in Apache Commons Fileupload before 1.4 has potential resource leak - InputStream not closed on exception.
<p>Publish Date: 2014-02-17
<p>URL: <a href=https://commons.apache.org/proper/commons-fileupload/changes-report.html>WS-2014-0034</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/commons-fileupload/commit/5b4881d7f75f439326f54fa554a9ca7de6d60814">https://github.com/apache/commons-fileupload/commit/5b4881d7f75f439326f54fa554a9ca7de6d60814</a></p>
<p>Release Date: 2019-09-26</p>
<p>Fix Resolution: 1.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-fileupload","packageName":"commons-fileupload","packageVersion":"1.2.2","isTransitiveDependency":true,"dependencyTree":"org.apache.struts:struts2-core:2.3.31;commons-fileupload:commons-fileupload:1.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.4"}],"vulnerabilityIdentifier":"WS-2014-0034","vulnerabilityDetails":"The class FileUploadBase in Apache Commons Fileupload before 1.4 has potential resource leak - InputStream not closed on exception.","vulnerabilityUrl":"https://commons.apache.org/proper/commons-fileupload/changes-report.html","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2014-0034 (High) detected in commons-fileupload-1.2.2.jar - ## WS-2014-0034 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.2.2.jar</b></p></summary>
<p>The FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Path to dependency file: /tmp/ws-scm/ksa/ksa-web-root/ksa-bd-web/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/ksa/ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.31.jar (Root Library)
- :x: **commons-fileupload-1.2.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/hammondjm/ksa/commits/5a3799544bbdfbed38c2c8191a9866ba18bc9768">5a3799544bbdfbed38c2c8191a9866ba18bc9768</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The class FileUploadBase in Apache Commons Fileupload before 1.4 has potential resource leak - InputStream not closed on exception.
<p>Publish Date: 2014-02-17
<p>URL: <a href=https://commons.apache.org/proper/commons-fileupload/changes-report.html>WS-2014-0034</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/commons-fileupload/commit/5b4881d7f75f439326f54fa554a9ca7de6d60814">https://github.com/apache/commons-fileupload/commit/5b4881d7f75f439326f54fa554a9ca7de6d60814</a></p>
<p>Release Date: 2019-09-26</p>
<p>Fix Resolution: 1.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-fileupload","packageName":"commons-fileupload","packageVersion":"1.2.2","isTransitiveDependency":true,"dependencyTree":"org.apache.struts:struts2-core:2.3.31;commons-fileupload:commons-fileupload:1.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.4"}],"vulnerabilityIdentifier":"WS-2014-0034","vulnerabilityDetails":"The class FileUploadBase in Apache Commons Fileupload before 1.4 has potential resource leak - InputStream not closed on exception.","vulnerabilityUrl":"https://commons.apache.org/proper/commons-fileupload/changes-report.html","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in commons fileupload jar ws high severity vulnerability vulnerable library commons fileupload jar the fileupload component provides a simple yet flexible means of adding support for multipart file upload functionality to servlets and web applications path to dependency file tmp ws scm ksa ksa web root ksa bd web pom xml path to vulnerable library home wss scanner repository commons fileupload commons fileupload commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar ksa ksa web root ksa web target root web inf lib commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar home wss scanner repository commons fileupload commons fileupload commons fileupload jar dependency hierarchy core jar root library x commons fileupload jar vulnerable library found in head commit a href vulnerability details the class fileuploadbase in apache commons fileupload before has potential resource leak inputstream not closed on exception publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails the class fileuploadbase in apache commons fileupload before has potential resource leak inputstream not closed on exception vulnerabilityurl
| 0
|
144,697
| 22,490,040,590
|
IssuesEvent
|
2022-06-23 00:15:36
|
apache/druid
|
https://api.github.com/repos/apache/druid
|
opened
|
Druid nested data columns
|
Design Review Proposal
|
### Motivation
Apache Druid has quite a lot of tricks up its sleeve for providing extremely fast queries on very large datasets. However, one of the major limitations in the current system is that this only works on completely flattened data since that is all that Druid segments are currently able to natively store (and table to table join support is limited). To achieve this flattened table requires either external transformation or utilizing the built-in ['flattening'](https://druid.apache.org/docs/latest/ingestion/data-formats.html#flattenspec) that Druid ingestion supports, in order to pluck specific nested values and translate them into top level columns within a segment.
This, however, has a downside: the exact set of extractions to be performed must be completely known up front, prior to ingestion, which is especially hard, if not impossible, to deal with in the case of loosely structured data whose schema might vary row to row. Additionally, oftentimes this structure is in fact interesting, illustrating relations between values, and it is lost completely when the data is transformed into flattened Druid tables without careful naming.
In order to overcome this, this proposal focuses on building out the capabilities to store nested and structured data _directly_ as it is, and query nested fields within this structure without sacrificing the performance available to queries operating on traditional Druid flattened columns.
### Proposed changes
To achieve this, we will introduce a new type of column for storing structured data in Druid segments. The initial implementation centers on leaning heavily into what we already know Druid does very well, taking an approach I like to refer to as "a bunch of columns in a trench coat".

This column is built on top of Druid's ['complex' type system](https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/segment/serde/ComplexMetrics.java), which allows complete control over how columns are encoded and decoded, and on virtual columns, which allow building specialized value selectors for the nested fields through [`VirtualColumn`](https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/segment/VirtualColumn.java) implementations. At ingestion time, every 'path' in the structured data which contains a 'literal' field (Druid `STRING`, `LONG`, or `DOUBLE`) will be split out into an internal 'nested field literal' column, and stored in a manner similar to how we store normal literal columns, complete with dictionary encoding and bitmap value indexes.
To prove feasibility, I've actually been prototyping this functionality for a bit over 6 months now, making core improvements along the way as needed to improve the complex type system and indexes functionality, and testing with a variety of different workloads. This effort is a spiritual successor to the 'map-string-string' column of https://github.com/apache/druid/pull/10628, except instead of being one layer deep with only strings, this proposal allows any level of nesting and supports the complete set of Druid literal types.
#### Column format
Internally, the nested column is structured as a main column file in the smoosh, plus several associated "internal" files for every nested literal field in the structure. All literal fields are dictionary encoded, but unlike our dictionary-encoded `STRING` columns, they share a value dictionary that is 'global' to all of the nested columns. The global value dictionaries are split by type and stacked (strings are ids `0` through `m`, longs `m + 1` through `n`, doubles `n + 1` to the end). Locally, each nested column has a dictionary which maps local dictionary ids to these global dictionary ids (int -> int), so value lookup is a two-step operation: local id to global id, then global id to value.
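As a rough illustration, reading one row's value from a nested field literal column would follow this two-step flow (a minimal sketch; every identifier below is made up for illustration, not an actual Druid class):
```java
// Minimal sketch of the two-step dictionary lookup described above.
// All names here are hypothetical and only illustrate the flow of ids to values.
int localId = encodedValues.get(rowNumber);      // compressed int column -> local dictionary id
int globalId = localToGlobalDict.get(localId);   // local dictionary id -> global dictionary id
Object value = globalValueDict.lookup(globalId); // global id -> STRING / LONG / DOUBLE value
```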
The complex column is composed of:
* compressed, 'raw' representation of the structured data
* bitmap to indicate which rows are null values
* a list of all 'literal' nested columns contained in the structure
* type information for all 'literal' nested columns contained in the structure
* global value dictionaries for all 'literal' values that are shared between all nested columns
The nested field literal columns contain:
* local to global integer dictionary
* local dictionary encoded compressed integer value column
* bitmap value indexes
* for numeric columns, compressed numeric value columns
<img width="2360" alt="PNG image-956253FBC905-1" src="https://user-images.githubusercontent.com/1577461/175181028-7530170a-54b5-4068-b8b9-257fb1d6d228.png">
#### Querying
Querying will be done primarily through specialized `VirtualColumn` implementations, which will create optimized selectors to read the nested fields. These will look a lot like the standard Druid column selectors for other types, though with some subtle differences.
These `VirtualColumn` implementations will also be wired up to SQL functions to allow nested data to be queried with ease. The initial set of functions will be a standard-ish set of `JSON` based functions:
##### SQL functions
| function | notes |
|---|---|
| `JSON_VALUE(expr, path)` | Extract a Druid literal (`STRING`, `LONG`, `DOUBLE`) value from a `COMPLEX<json>` column or input `expr` using JSONPath syntax of `path` |
| `JSON_QUERY(expr, path)` | Extract a `COMPLEX<json>` value from a `COMPLEX<json>` column or input `expr` using JSONPath syntax of `path` |
| `JSON_OBJECT(KEY expr1 VALUE expr2[, KEY expr3 VALUE expr4 ...])` | Construct a `COMPLEX<json>` storing the results of `VALUE` expressions at `KEY` expressions |
| `PARSE_JSON(expr)` | Deserialize a JSON `STRING` into a `COMPLEX<json>` to be used with expressions which operate on `COMPLEX<json>` inputs. |
| `TO_JSON(expr)` | Convert any input type to `COMPLEX<json>` to be used with expressions which operate on `COMPLEX<json>` inputs, like a `CAST` operation (rather than deserializing `STRING` values like `PARSE_JSON`) |
| `TO_JSON_STRING(expr)` | Convert a `COMPLEX<json>` input into a JSON `STRING` value |
| `JSON_KEYS(expr, path)`| get array of field names in `expr` at the specified JSONPath `path`, or null if the data does not exist or have any fields |
| `JSON_PATHS(expr)` | get array of all JSONPath paths available in `expr` |
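For illustration, a hypothetical query against a datasource `events` with a nested column `attributes` (both names are made up for this example) could look like:
```sql
-- Hypothetical usage sketch; 'events' and 'attributes' are assumed names.
SELECT
  JSON_VALUE(attributes, '$.device.os') AS os,
  COUNT(*) AS cnt
FROM events
WHERE JSON_VALUE(attributes, '$.session.country') = 'CA'
GROUP BY 1
```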
###### JSONPath syntax
Initially we will support only a small simplified subset of the [JSONPath syntax](https://github.com/json-path/JsonPath/blob/master/README.md) operators, primarily limited to extracting individual values from nested data structures.
|operator|description|
|---|---|
|`$`|'Root' element, all JSONPath expressions will start with this operator|
|`.<name>`|'Child' element in 'dot' notation|
|`['<name>']`|'Child' element in 'bracket' notation|
|`[<number>]`|'Array' index|
We will likely expand on this in the future. For example, given the input `{"x": {"y": "hello"}, "z": ["a", "b"]}`, the path `$.x.y` selects `"hello"` and `$.z[1]` selects `"b"`.
#### Ingestion
During ingestion, a new nested column indexer will process nested data from input rows, traversing the structure and building a global dictionary of all literal values encountered. At persist time, this dictionary is sorted, and then the 'raw' data is serialized with SMILE encoding into a compressed column. As we serialize the rows, we traverse the nested structure again, this time with the sorted dictionary in hand, and write out the nested literal field columns into temporary files, building local value dictionaries in the process. Once the 'raw' column is complete, we iterate over the nested literal columns, sort their local dictionaries, and write out their finished columns: compressed dictionary-encoded value columns, compressed numeric columns for numeric types, and the local dictionaries and bitmap value indexes.
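Condensed into pseudocode, the persist sequence looks roughly like this (a sketch only; none of these method names are real Druid APIs):
```java
// Hypothetical outline of the nested column persist flow described above.
globalDict.sort();                                // sort the global value dictionary
rawColumn.writeSmileCompressed(rows);             // serialize the 'raw' structured data
for (NestedFieldWriter field : nestedFieldWriters) {
    field.sortLocalDictionary();                  // sort the local dictionary
    field.writeLocalToGlobalDictionary();         // local id -> global id mapping
    field.writeDictionaryEncodedValues();         // compressed int value column
    field.writeBitmapValueIndexes();              // bitmap index per value
    if (field.isNumeric()) {
        field.writeCompressedNumericColumn();     // compressed longs/doubles
    }
}
```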
The nested data column indexer will be specified via a new `DimensionSchema` type, initially using `json` as the type since the initial implementation will only support the JSON format; it will process whatever rows are pointed at it (even literals).
```json
{
  "type": "json",
  "name": "someNestedColumnName"
}
```
That's basically it. For convenience when working with text input formats like TSV, if all of the processed rows are string literals, the indexer will attempt to deserialize them as JSON when the data looks like JSON.
Additionally, we will add a handful of native Druid expressions (which will also handle composition at query time) that can perform many of the operations currently done via `flattenSpec`, but through [`transformSpec`](https://druid.apache.org/docs/latest/ingestion/ingestion-spec.html#transformspec) instead.
```json
"transformSpec": {
"transforms": [
{ "type": "expression", "name": "transformedJson", "expression": "json_value(someNestedColumnName, '$.x.y')" }
]
}
```
##### Native expressions
| function | notes |
|---|---|
| `json_value(expr, path)` | Extract a Druid literal (`STRING`, `LONG`, `DOUBLE`) value from a `COMPLEX<json>` column or input `expr` using JSONPath syntax of `path` |
| `json_query(expr, path)` | Extract a `COMPLEX<json>` value from a `COMPLEX<json>` column or input `expr` using JSONPath syntax of `path` |
| `json_object(expr1, expr2[, expr3, expr4 ...])` | Construct a `COMPLEX<json>` with alternating 'key' and 'value' arguments|
| `parse_json(expr)` | Deserialize a JSON `STRING` into a `COMPLEX<json>` to be used with expressions which operate on `COMPLEX<json>` inputs. |
| `to_json(expr)` | Convert any input type to `COMPLEX<json>` to be used with expressions which operate on `COMPLEX<json>` inputs, like a `CAST` operation (rather than deserializing `STRING` values like `PARSE_JSON`) |
| `to_json_string(expr)` | Convert a `COMPLEX<json>` input into a JSON `STRING` value |
| `json_keys(expr, path)`| get array of field names in `expr` at the specified JSONPath `path`, or null if the data does not exist or have any fields |
| `json_paths(expr)` | get array of all JSONPath paths available in `expr` |
### Rationale
I believe the utility of being able to store nested structure is obvious, not least because `flattenSpec` and up-front ETL are inflexible and complicated. As to why this implementation was chosen for the initial effort, it comes down to starting with what we know and mapping Druid's current capabilities onto a nested structure. There is _a lot_ of room for experimentation after this initial implementation is added, especially in the realm of storage format, as there are a wide variety of approaches to storing this type of data. The proposed implementation will have the same strengths and weaknesses as standard Druid queries, but with the initial implementation in place, we will have a point of comparison for further investigation.
### Operational impact
The expense of nested column ingestion is correlated with the complexity of the schema of the nested input data. The majority of the expense happens when serializing the segment (persist/merge), so these operations will take longer than normal for complex schemas, and could require additional heap and disk. Each nested literal field is roughly an additional column, and we're building them all at the end of the process on the fly while persisting the 'raw' data. Additionally, while I've gone through a few iterations so far, the current ingestion algorithm is still rather expensive and could use additional tuning, especially in regard to the number of temporary files involved.
Additionally, since this introduces a new column type, these columns will be unavailable when rolling back to older versions.
### Test plan
The surface area of this feature is quite large, since it is effectively allowing the full functionality of segments within a single column and several ways of interacting with this data. `JSON_VALUE` in particular can be utilized as any other Druid column type across all query types (grouping, filtering, aggregation, etc). Quite a lot of testing has been done so far, including a bit of stress testing, and I've internally gone through a handful of iterations on the code, but work will need to continue on hardening the feature. Because the column format is versioned, we should be able to iterate freely without impacting existing data. Unit test coverage in my prototype is currently pretty decent, so the main focus of testing now will be in 'production'-ish use cases to observe how well things are performing and looking for incremental improvements.
### Future work
#### automatic typing for schema-less ingestion
The nested columns could be improved to make Druid schema-less ingestion support automatic type discovery. All discovered columns could be created with a nested data indexer, and at serialization time we could improve the persistence code to recognize single-typed columns with only 'root' literal values, rewrite the type, and write out a standard Druid literal column. The primary work here would be making this work seamlessly with realtime queries, allowing the realtime engine to create a value selector on the root literal value instead of the 'raw' data selector.
#### literal arrays
While the current proposal can process and store array values, it does not include the ability to interact with them as native Druid `ARRAY` types or to utilize the associated functions. Arrays of literal values could instead be stored in specialized nested columns (rather than a nested column for each array element).
#### JSONPath wildcards
Interaction with arrays could also be improved by introducing support for wildcards in our JSONPath syntax, to allow selecting an array of values instead of being limited to selecting specific array elements. This would make arrays significantly more useful.
#### better general array handling
Druid support for `ARRAY` types is growing, but still could use some improvement. In particular, an `UNNEST` function to allow turning arrays of values into columns of values would unlock a lot of functionality when interacting with nested arrays.
#### better complex dimension handling, grouping, filtering, aggregation
Druid support for direct usage of `COMPLEX` types is still rather limited, and I want to work on improving this to make using nested data columns a more pleasant experience. This includes allowing direct grouping (the 'raw' values, like any variably sized type, could use a dictionary building strategy in the grouping engines). The filtering system could allow complex types to better participate in indexes and value matching. The current workaround is to use `TO_JSON_STRING` to stringify these values into a type that Druid can work with, but I think we can eliminate this need in the future.
#### formal Druid type instead of complex
It might be useful to consider switching from generic `COMPLEX` types and promoting the nested data type to a top-level Druid type, called something like `OBJECT` or `STRUCT` or ... something. This would allow various parts of the engine to take a more active stance on how nested types are handled, and allow tighter integration with various pieces. I'm not certain this is strictly necessary at this point; it's just something I've been thinking about.
#### support for ingesting from other nested formats (Parquet, Avro, ORC)
The nested column implementation is not specific to JSON, so supporting other data formats would give us near full feature parity with the `flattenSpec`, allowing it to be deprecated.
#### customized control over ingestion (which fields to extract, which fields to index, retain raw data, etc)
Fine-tuned control over how the nested data indexer produces columns would allow retaining a larger blob of data while extracting only a specific set of columns to be 'optimized' for use with `JSON_VALUE` and for filtering with indexes, allowing the other columns to fall back to the 'raw' data. We could also allow omitting the 'raw' data, and instead opt to reconstruct it on the fly from the nested columns. Additionally, indexes might not be that useful on _all_ nested columns, so control over which fields are indexed for fast filtering would be useful. All of these options would give operators a way to control the output size of nested columns.
#### bring technical enhancements to normal numeric columns
Nested numeric columns have both a numeric value column and a dictionary-encoded column with bitmap indexes. This allows for both fast aggregation and fast filtering in exchange for additional storage space. These improvements can be folded into Druid `LONG`, `DOUBLE`, and `FLOAT` columns to allow operators to optionally specify creating indexes for numeric values.
#### alternative storage formats
There is a lot of room for exploration on alternative storage formats to suit various nested data use cases. For example, in cases where the structure is interesting and it is likely that a collection of nested fields will be taking part in the same query often, it might make sense to explore formats that allow compressing the values of these columns together into a single column (a fixed width row oriented format), allowing lower overhead to read multiple values in the same query (whether or not this is actually better would need proving). That said, I don't really have anything specific in mind in this area, just throwing it out there as an area of interest.
|
1.0
|
|
non_process
|
druid nested data columns motivation apache druid has quite a lot of tricks up its sleeve for providing extremely fast queries on very large datasets however one of the major limitations in the current system is that this only works on completely flattened data since that is all that druid segments are currently able to natively store and table to table join support is limited to achieve this flattened table requires either external transformation or utilizing the built in that druid ingestion supports in order to pluck specific nested values and translate them into top level columns within a segment this however has a downside in that the exact set of extractions to be performed must be completely known up front prior to ingestion which is especially hard if not impossible to deal with in the case of loosely structured data whose schema might vary row to row additionally often times this structure is in fact interesting illustrating relations between values which is lost completely when transformed into flattened druid tables without careful naming in order to overcome this this proposal focuses on building out the capabilities to store nested and structured data directly as it is and query nested fields within this structure without sacrificing the performance available to queries operating on traditional druid flattened columns proposed changes to achieve this we will introduce a new type of column for storing structured data in druid segments the initial implementation centers on leaning heavily into what we already know druid does very well taking an approach i like to refer to as a bunch of columns in a trench coat this column is built on top of druids which allows complete control over how columns are encoded and decoded and virtual columns to allow building specialized value selectors for the nested columns through implementations at ingestion time all paths in the structured data which contain a literal field druid string long or double will be split out into internal nested field literal columns and stored in a manner similar to how we store normal literal columns complete with dictionary encoding and bitmap value indexes to prove feasibility i ve actually been prototyping this functionality for a bit over months now making core improvements along the way as needed to improve the complex type system and indexes functionality and testing with a variety of different workloads this effort is a spiritual successor to the map string string column of except instead of layer deep with only strings this proposal allows for any level of nesting and supporting the complete set of druid literal types column format internally the nested column is structured into a main column file in the smoosh and several associated internal files for every nested literal field in the structure all literal fields are dictionary encoded but unlike our dictionary encoded string columns will share a value dictionary that is global to all of the nested columns the global value dictionaries are split by type and stacked strings are ids through m longs m through n doubles n to the end locally the nested columns will have a dictionary which maps local dictionary ids to these global dictionary ids int int so value lookup is a step operation of local to global then global to value the complex column is composed of compressed raw representation of the structured data bitmap to indicate which rows are null values a list of all literal nested columns contained in the structure type information for all literal nested 
columns contained in the structure global value dictionaries for all literal values that are shared between all nested columns the nested field literal contain local to global integer dictionary local dictionary encoded compressed integer value column bitmap value indexes for numeric columns compressed numeric value columns img width alt png image src querying querying will be done primarily through specialized virtualcolumn and which will create optimized selectors to read the nested fields these will look a lot like the standard druid column selectors for other types though with some subtle differences these virtualcolumn implementations will also be wired up to sql functions to allow nested data to be queried with ease the initial set of functions will be a standard ish set of json based functions sql functions function notes json value expr path extract a druid literal string long double value from a complex column or input expr using jsonpath syntax of path json query expr path extract a complex value from a complex column or input expr using jsonpath syntax of path json object key value construct a complex storing the results of value expressions at key expressions parse json expr deserialize a json string into a complex to be used with expressions which operate on complex inputs to json expr convert any input type to complex to be used with expressions which operate on complex inputs like a cast operation rather than deserializing string values like parse json to json string expr convert a complex input into a json string value json keys expr path get array of field names in expr at the specified jsonpath path or null if the data does not exist or have any fields json paths expr get array of all jsonpath paths available in expr jsonpath syntax initially we will support only a small simplified subset of the operators primarily limited to extracting individual values from nested data structures operator description root element all jsonpath expressions will start with this operator child element in dot notation child element in bracket notation array index though in the future we will likely expand on this ingestion during ingestion a new nested column indexer will process nested data from input rows traversing the structure and building a global dictionary of all literal values encountered at persist time this dictionary is sorted and then the raw data is serialized with smile encoding into a compressed column as we serialize the rows we traverse the nested structure again this time with sorted dictionary in hand and write out columns for the nested literal field columns into temporary files building local value dictionaries in the process once the raw column is complete we iterate over the nested literal columns sort their local dictionaries and write out their finished column with compressed dictionary encoded value columns for numeric types compressed numeric columns and the local dictionaries and bitmap value indexes the nested data column indexer will be specified via a new dimensionschema type initially using json as the type as the initial implementation will only support json format which will process the rows that are pointed at it even literals json type json name somenestedcolumnname that s basically it for convenience when working with text input formats like tsv if all processed rows are string literals the indexer will try to deserialize as json if the data looks like json additionally we will add a handful of native druid expressions which will also handle composition 
uses at query time which will be able to perform many of the operations which are currently done via flattenspec but instead through json transformspec transforms type expression name transformedjson expression json value somenestedcolumnname x y native expressions function notes json value expr path extract a druid literal string long double value from a complex column or input expr using jsonpath syntax of path json query expr path extract a complex value from a complex column or input expr using jsonpath syntax of path json object construct a complex with alternating key and value arguments parse json expr deserialize a json string into a complex to be used with expressions which operate on complex inputs to json expr convert any input type to complex to be used with expressions which operate on complex inputs like a cast operation rather than deserializing string values like parse json to json string expr convert a complex input into a json string value json keys expr path get array of field names in expr at the specified jsonpath path or null if the data does not exist or have any fields json paths expr get array of all jsonpath paths available in expr rationale i believe the utility of being able to store nested structure is obvious besides flattenspec and up front etl being inflexible and complicated as to why this implementation was chosen for the initial effort it comes down to starting with what we know and mapping druids current capability onto a nested structure there is a lot of room for experimentation after this initial implementation is added especially in the realm of storage format as there are a wide variety of approaches to storing this type of data the proposed implementation will have the same strengths and weaknesses as standard druid queries but with the initial implementation in place we will have a point of comparison to conduct further investigation operational impact the expense of nested column ingestion is correlated with the complexity of the schema of the nested input data the majority of the expense happens when serializing the segment persist merge so these operations will take longer than normal for complex schemas and could require additional heap and disk each nested literal field is roughly an additional column and we re building them all at the end of the process on the fly while persisting the raw data additionally while i ve gone through a few iterations so far the current ingestion algorithm is still rather expensive and could use additional tuning especially in regard to the number of temporary files involved additionally since this introduces a new column type these columns will be unavailable when rolling back to older versions test plan the surface area of this feature is quite large since it is effectively allowing the full functionality of segments within a single column and several ways of interacting with this data json value in particular can be utilized as any other druid column type across all query types grouping filtering aggregation etc quite a lot of testing has been done so far including a bit of stress testing and i ve internally gone through a handful of iterations on the code but work will need to continue on hardening the feature because the column format is versioned we should be able to iterate freely without impacting existing data unit test coverage in my prototype is currently pretty decent so the main focus of testing now will be in production ish use cases to observe how well things are performing and looking for 
incremental improvements future work automatic typing for schema less ingestion the nested columns could be improved to make druid schema less ingestion support automatic type discovery all discovered columns could be created with a nested data indexer and at serialization time we could improve the persistence code to recognize single typed columns with only root literal values and allow rewriting the type and writing out a standard druid literal column this primary work here would be allow this to work seamlessly with realtime queries allowing the realtime selector to make instead a value selector on the root literal value instead of the raw data selector literal arrays while the current proposal can process and store array values it does not include the ability to interact with them as native druid array types and utilize the associated functions arrays of literal values could be stored with specialized nested columns instead of a nested column for each array element jsonpath wildcards interaction with arrays could also be improved by introducing support for wildcards in our jsonpath syntax to allow selecting an array of values instead of being limited to selecting specific array elements this would make arrays significantly more useful better general array handling druid support for array types is growing but still could use some improvement in particular an unnest function to allow turning arrays of values into columns of values would unlock a lot of functionality when interacting with nested arrays better complex dimension handling grouping filtering aggregation druid support for direct usage of complex types is still rather limited and i want to work on improving this to make using nested data columns a more pleasant experience this includes allowing direct grouping the raw values like any variably sized type could use a dictionary building strategy in the grouping engines the filtering system could allow complex types to better participate in indexes and value matching the current workaround is to use to json string to stringify these values into a type that druid can work with but i think we can eliminate this need in the future formal druid type instead of complex it might be useful to consider switching from using generic complex types and promote the nested data type into a top level druid type and call it something like object or struct or something this would allow various parts of the engine to take a more active stance on how nested types are handled and allow tighter integration with various pieces i m not certain if this is strictly necessary at this point just something i ve been thinking about support for ingesting from other nested formats parquet avro orc the nested column implementation is not specific to json so supporting other data formats would give us near full feature parity with the flattenspec allowing it to be deprecated customized control over ingestion which fields to extract which fields to index retain raw data etc fine tuned control over how the nested data indexer produces columns would allow for retaining a larger blob of data but only extracting a specific set of columns to be optimized to support use with json value and filtering with indexes allowing the other columns to fall back to the raw data we could also allow omitting the raw data and instead opt to reconstruct it on the fly from the nested columns additionally indexes might not be that useful on all nested columns so control over which fields are indexed for fast filtering would be useful 
all of these options would give operators a way to control the output size of nested columns bring technical enhancements to normal numeric columns nested numeric columns have both a numeric value column and a dictionary encoded column and bitmap indexes this allows for both fast aggregation and fast filtering in exchange for additional storage space these improvements can be folded into druid long double and float columns to allow operators to optionally specify creating indexes for numeric values alternative storage formats there is a lot of room for exploration on alternative storage formats to suit various nested data use cases for example in cases where the structure is interesting and it is likely that a collection of nested fields will be taking part in the same query often it might make sense to explore formats that allow compressing the values of these columns together into a single column a fixed width row oriented format allowing lower overhead to read multiple values in the same query whether or not this is actually better would need proving that said i don t really have anything specific in mind in this area just throwing it out there as an area of interest
| 0
|
6,773
| 9,913,009,549
|
IssuesEvent
|
2019-06-28 10:29:07
|
wso2/docs-ei
|
https://api.github.com/repos/wso2/docs-ei
|
closed
|
Update styles and templates
|
Priority/High Severity/Major ballerina micro-integrator stream-processor
|
Use the following PRs to update the styles and templates in docs-ei to reflect latest UX changes:
https://github.com/wso2/docs-is/pull/45
https://github.com/wso2/docs-is/pull/49
|
1.0
|
|
process
|
update styles and templates use the following prs to update the styles and templates in docs ei to reflect latest ux changes
| 1
|
5,884
| 8,705,568,408
|
IssuesEvent
|
2018-12-05 22:52:24
|
williamho123/PokemonGo
|
https://api.github.com/repos/williamho123/PokemonGo
|
closed
|
Remove unnecessary data fields and examples
|
preprocess
|
- PokemonId
- Latitude / Longitude (?)
- CellId (??)
- All other "appeared" variables except timeOfDay
- Continent
- Wind Bearing
- Pressure
- WeatherIcon
- SunriseX
- PopDensity
- Dummy values for GymDist & PokeStopDist
- Cooc (?)
|
1.0
|
|
process
|
remove unnecessary data fields and examples pokemonid latitude longitude cellid all other appeared variables except timeofday continent wind bearing pressure weathericon sunrisex popdensity dummy values for gymdist pokestopdist cooc
| 1
|
13,495
| 16,020,868,478
|
IssuesEvent
|
2021-04-20 23:00:10
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
On macOS, Process.Dispose() doesn't kill the process
|
area-System.Diagnostics.Process untriaged
|
### Description
In PowerShell, when there is a problem in a pipeline, we use Process.Dispose() to clean up any remaining processes that are part of the pipeline. We recently noticed that on macOS, those processes aren't actually killed. This doesn't repro on Linux, where the processes are properly killed.
```csharp
using System;
using System.Diagnostics;
using System.Threading;
namespace processdispose
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("yes running: {0}", Process.GetProcessesByName("yes").Length);
            var psi = new ProcessStartInfo();
            psi.FileName = "/usr/bin/yes";
            psi.RedirectStandardOutput = true;
            var y = Process.Start(psi);
            y.Dispose();
            Thread.Sleep(250);
            Console.WriteLine("yes running: {0}", Process.GetProcessesByName("yes").Length);
        }
    }
}
```
Expected: same number of `yes` processes before and after
Actual: the newly created `yes` process is still running after `Dispose()` is called and waiting some time for cleanup. If you check, the `yes` process is still running after this program finishes.
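For reference, explicitly killing the child before disposing does terminate it; this is a hedged workaround sketch, not a claim about what `Dispose()` should do (`Process.Kill(bool entireProcessTree)` has existed since .NET Core 3.0):
```csharp
// Workaround sketch: Dispose() only releases handles; Kill() terminates the child.
var y = Process.Start(psi);
y.Kill(entireProcessTree: true); // actually terminates the 'yes' process
y.Dispose();                     // then release the handles
```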
### Configuration
macOS 11.2.3
.NET 5.0 or .NET 6.0-preview3
### Regression?
Don't know
|
1.0
|
|
process
|
on macos process dispose doesn t kill the process description in powershell when there is a problem in a pipeline we use process dispose to clean up any remaining processes that is part of the pipeline we recently noticed that on macos those processes aren t actually killed this doesn t repro on linux where the processes are properly killed csharp using system using system diagnostics using system threading namespace processdispose class program static void main string args console writeline yes running process getprocessesbyname yes length var psi new processstartinfo psi filename usr bin yes psi redirectstandardoutput true var y process start psi y dispose thread sleep console writeline yes running process getprocessesbyname yes length expected same number of yes processes before and after actual the newly created yes process is still running after dispose is called and waiting some time for cleanup if you check the yes process is still running after this program finishes configuration macos net or net regression don t know
| 1
|
3,249
| 6,314,084,050
|
IssuesEvent
|
2017-07-24 09:52:35
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
ImportError: libSM.so.6: cannot open shared object file on Cedar (Compute Canada)
|
bug card:WORK_IN_PROCESS question
|
on Cedar (Compute Canada)
~~~
Running /home/jcohen/sct/scripts/sct_register_to_template.py -qc /home/jcohen/qc_data -c t2 -l /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/labels.nii.gz -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2.nii.gz -t /home/jcohen/sct/data/PAM50 -v 1 -s /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2_seg_manual.nii.gz -r 1 -ref template -ofolder sct_register_to_template_vanderbilt_sct-users_20150910-joshua_170623111911_118831/
Check folder existence...
Check folder existence...
Check folder existence...
Check template files...
OK: /home/jcohen/sct/data/PAM50/template/PAM50_t2.nii.gz
OK: /home/jcohen/sct/data/PAM50/template/PAM50_levels.nii.gz
OK: /home/jcohen/sct/data/PAM50/template/PAM50_cord.nii.gz
Check parameters:
Data: /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2.nii.gz
Landmarks: /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/labels.nii.gz
Segmentation: /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2_seg_manual.nii.gz
Path template: /home/jcohen/sct/data/PAM50/
Remove temp files: 1
Check input labels...
Create temporary folder...
Copying input data to tmp folder and convert to nii...
sct_convert -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2.nii.gz -o tmp.170623111911_903687/data.nii
sct_convert -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2_seg_manual.nii.gz -o tmp.170623111911_903687/seg.nii.gz
sct_convert -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/labels.nii.gz -o tmp.170623111911_903687/label.nii.gz
sct_convert -i /home/jcohen/sct/data/PAM50/template/PAM50_t2.nii.gz -o tmp.170623111911_903687/template.nii
sct_convert -i /home/jcohen/sct/data/PAM50/template/PAM50_cord.nii.gz -o tmp.170623111911_903687/template_seg.nii.gz
Generate labels from template vertebral labeling
sct_label_utils -i /home/jcohen/sct/data/PAM50/template/PAM50_levels.nii.gz -vert-body 0 -o template_label.nii.gz
Check if provided labels are available in the template
Binarize segmentation
sct_maths -i seg.nii.gz -bin 0.5 -o seg.nii.gz
Resample data to 1mm isotropic...
sct_resample -i data.nii -mm 1.0x1.0x1.0 -x linear -o data_1mm.nii
Running /home/jcohen/sct/scripts/sct_resample.py -i data.nii -mm 1.0x1.0x1.0 -x linear -o data_1mm.nii
Traceback (most recent call last):
File "/home/jcohen/sct/scripts/sct_resample.py", line 365, in <module>
main()
File "/home/jcohen/sct/scripts/sct_resample.py", line 362, in main
resample()
File "/home/jcohen/sct/scripts/sct_resample.py", line 59, in resample
from nipy.algorithms.registration import resample
File "/home/jcohen/sct/python/lib/python2.7/site-packages/nipy/algorithms/registration/__init__.py", line 18, in <module>
from .scripting import space_time_realign, aff2euler
File "/home/jcohen/sct/python/lib/python2.7/site-packages/nipy/algorithms/registration/scripting.py", line 22, in <module>
import matplotlib.pyplot as plt
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/backend_qt4agg.py", line 18, in <module>
from .backend_qt5agg import FigureCanvasQTAggBase as _FigureCanvasQTAggBase
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/backend_qt5agg.py", line 15, in <module>
from .backend_qt5 import QtCore
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/backend_qt5.py", line 31, in <module>
from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/qt_compat.py", line 124, in <module>
from PyQt4 import QtCore, QtGui
ImportError: libSM.so.6: cannot open shared object file: No such file or directory
/home/jcohen/sct/scripts/sct_utils.pyNone
/home/jcohen/sct/scripts/sct_utils.pyNone
~~~
~~~
sct_check_dependencies
--
Spinal Cord Toolbox (jca_issue1362/1795a89130c457176811ef8c7707ff3aac73fc08)
Running /home/jcohen/sct/scripts/sct_check_dependencies.py
OS: linux (Linux-3.10.0-514.21.1.el7.x86_64-x86_64-with-centos-7.3.1611-Core)
CPU cores: Available: 32, Used by SCT: 32
RAM: MemTotal: 264035648 kB
total used free shared buff/cache available
Mem: 257847 3095 226543 406 28207 250061
Swap: 4095 0 4095
SCT path: /home/jcohen/sct
Installation type: git
commit: 1795a89130c457176811ef8c7707ff3aac73fc08
branch: jca_issue1362
Check Python path...................................[OK]
Check if data are installed.........................[OK]
Check if xlwt (1.0.0) is installed..................[OK]
Check if xlutils (1.7.1) is installed...............[OK]
Check if cryptography (1.6) is installed............[OK]
Check if scikit-learn (0.17.1) is installed.........[OK]
Check if scikit-image (0.12.3) is installed.........[OK]
Check if pyqt (4.11.4) is installed.................[OK]
Check if psutil (5.2.2) is installed................[OK]
Check if matplotlib (1.5.1) is installed............[OK]
Check if pip (9.0.1) is installed...................[WARNING]
Detected version: 8.1.2. Required version: 9.0.1
Check if requests (2.12.4) is installed.............[OK]
Check if xlrd (0.9.4) is installed..................[OK]
Check if pandas (0.18.1) is installed...............[OK]
Check if mpi4py (2.0.0) is installed................[OK]
Check if dipy (0.11.0) is installed.................[OK]
Check if distribute2mpi (0.3.0) is installed........[OK]
Check if nibabel (2.1.0) is installed...............[OK]
Check if tqdm (4.11.2) is installed.................[OK]
Check if nipy (0.4.0) is installed..................[OK]
Check if numpy is installed.........................[OK]
Check if scipy is installed.........................[OK]
Check if spinalcordtoolbox is installed.............[OK]
Check ANTs compatibility with OS ...................[OK]
Check PropSeg compatibility with OS ................[OK]
Check if figure can be opened.......................[FAIL]
~~~
|
1.0
|
ImportError: libSM.so.6: cannot open shared object file on Cedar (Compute Canada) - on Cedar (Compute Canada)
~~~
Running /home/jcohen/sct/scripts/sct_register_to_template.py -qc /home/jcohen/qc_data -c t2 -l /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/labels.nii.gz -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2.nii.gz -t /home/jcohen/sct/data/PAM50 -v 1 -s /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2_seg_manual.nii.gz -r 1 -ref template -ofolder sct_register_to_template_vanderbilt_sct-users_20150910-joshua_170623111911_118831/
Check folder existence...
Check folder existence...
Check folder existence...
Check template files...
OK: /home/jcohen/sct/data/PAM50/template/PAM50_t2.nii.gz
OK: /home/jcohen/sct/data/PAM50/template/PAM50_levels.nii.gz
OK: /home/jcohen/sct/data/PAM50/template/PAM50_cord.nii.gz
Check parameters:
Data: /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2.nii.gz
Landmarks: /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/labels.nii.gz
Segmentation: /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2_seg_manual.nii.gz
Path template: /home/jcohen/sct/data/PAM50/
Remove temp files: 1
Check input labels...
Create temporary folder...
Copying input data to tmp folder and convert to nii...
sct_convert -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2.nii.gz -o tmp.170623111911_903687/data.nii
sct_convert -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/t2_seg_manual.nii.gz -o tmp.170623111911_903687/seg.nii.gz
sct_convert -i /home/jcohen/data/sct_testing/large/vanderbilt_sct-users_20150910-joshua/t2/labels.nii.gz -o tmp.170623111911_903687/label.nii.gz
sct_convert -i /home/jcohen/sct/data/PAM50/template/PAM50_t2.nii.gz -o tmp.170623111911_903687/template.nii
sct_convert -i /home/jcohen/sct/data/PAM50/template/PAM50_cord.nii.gz -o tmp.170623111911_903687/template_seg.nii.gz
Generate labels from template vertebral labeling
sct_label_utils -i /home/jcohen/sct/data/PAM50/template/PAM50_levels.nii.gz -vert-body 0 -o template_label.nii.gz
Check if provided labels are available in the template
Binarize segmentation
sct_maths -i seg.nii.gz -bin 0.5 -o seg.nii.gz
Resample data to 1mm isotropic...
sct_resample -i data.nii -mm 1.0x1.0x1.0 -x linear -o data_1mm.nii
Running /home/jcohen/sct/scripts/sct_resample.py -i data.nii -mm 1.0x1.0x1.0 -x linear -o data_1mm.nii
Traceback (most recent call last):
File "/home/jcohen/sct/scripts/sct_resample.py", line 365, in <module>
main()
File "/home/jcohen/sct/scripts/sct_resample.py", line 362, in main
resample()
File "/home/jcohen/sct/scripts/sct_resample.py", line 59, in resample
from nipy.algorithms.registration import resample
File "/home/jcohen/sct/python/lib/python2.7/site-packages/nipy/algorithms/registration/__init__.py", line 18, in <module>
from .scripting import space_time_realign, aff2euler
File "/home/jcohen/sct/python/lib/python2.7/site-packages/nipy/algorithms/registration/scripting.py", line 22, in <module>
import matplotlib.pyplot as plt
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/pyplot.py", line 114, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/backend_qt4agg.py", line 18, in <module>
from .backend_qt5agg import FigureCanvasQTAggBase as _FigureCanvasQTAggBase
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/backend_qt5agg.py", line 15, in <module>
from .backend_qt5 import QtCore
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/backend_qt5.py", line 31, in <module>
from .qt_compat import QtCore, QtGui, QtWidgets, _getSaveFileName, __version__
File "/home/jcohen/sct/python/lib/python2.7/site-packages/matplotlib/backends/qt_compat.py", line 124, in <module>
from PyQt4 import QtCore, QtGui
ImportError: libSM.so.6: cannot open shared object file: No such file or directory
/home/jcohen/sct/scripts/sct_utils.pyNone
/home/jcohen/sct/scripts/sct_utils.pyNone
~~~
~~~
sct_check_dependencies
--
Spinal Cord Toolbox (jca_issue1362/1795a89130c457176811ef8c7707ff3aac73fc08)
Running /home/jcohen/sct/scripts/sct_check_dependencies.py
OS: linux (Linux-3.10.0-514.21.1.el7.x86_64-x86_64-with-centos-7.3.1611-Core)
CPU cores: Available: 32, Used by SCT: 32
RAM: MemTotal: 264035648 kB
total used free shared buff/cache available
Mem: 257847 3095 226543 406 28207 250061
Swap: 4095 0 4095
SCT path: /home/jcohen/sct
Installation type: git
commit: 1795a89130c457176811ef8c7707ff3aac73fc08
branch: jca_issue1362
Check Python path...................................[OK]
Check if data are installed.........................[OK]
Check if xlwt (1.0.0) is installed..................[OK]
Check if xlutils (1.7.1) is installed...............[OK]
Check if cryptography (1.6) is installed............[OK]
Check if scikit-learn (0.17.1) is installed.........[OK]
Check if scikit-image (0.12.3) is installed.........[OK]
Check if pyqt (4.11.4) is installed.................[OK]
Check if psutil (5.2.2) is installed................[OK]
Check if matplotlib (1.5.1) is installed............[OK]
Check if pip (9.0.1) is installed...................[WARNING]
Detected version: 8.1.2. Required version: 9.0.1
Check if requests (2.12.4) is installed.............[OK]
Check if xlrd (0.9.4) is installed..................[OK]
Check if pandas (0.18.1) is installed...............[OK]
Check if mpi4py (2.0.0) is installed................[OK]
Check if dipy (0.11.0) is installed.................[OK]
Check if distribute2mpi (0.3.0) is installed........[OK]
Check if nibabel (2.1.0) is installed...............[OK]
Check if tqdm (4.11.2) is installed.................[OK]
Check if nipy (0.4.0) is installed..................[OK]
Check if numpy is installed.........................[OK]
Check if scipy is installed.........................[OK]
Check if spinalcordtoolbox is installed.............[OK]
Check ANTs compatibility with OS ...................[OK]
Check PropSeg compatibility with OS ................[OK]
Check if figure can be opened.......................[FAIL]
~~~
|
process
|
importerror libsm so cannot open shared object file on cedar compute canada on cedar compute canada running home jcohen sct scripts sct register to template py qc home jcohen qc data c l home jcohen data sct testing large vand erbilt sct users joshua labels nii gz i home jcohen data sct testing large vanderbilt sct users joshua nii gz t home jcohen sct data v s home jcohen data sct testing large vanderbilt sct users joshua seg manual ni i gz r ref template ofolder sct register to template vanderbilt sct users joshua check folder existence check folder existence check folder existence check template files ok home jcohen sct data template nii gz ok home jcohen sct data template levels nii gz ok home jcohen sct data template cord nii gz check parameters data home jcohen data sct testing large vanderbilt sct users joshua nii gz landmarks home jcohen data sct testing large vanderbilt sct users joshua labels nii gz segmentation home jcohen data sct testing large vanderbilt sct users joshua seg manual nii gz path template home jcohen sct data remove temp files check input labels create temporary folder copying input data to tmp folder and convert to nii sct convert i home jcohen data sct testing large vanderbilt sct users joshua nii gz o tmp data ni i sct convert i home jcohen data sct testing large vanderbilt sct users joshua seg manual nii gz o tmp seg nii gz sct convert i home jcohen data sct testing large vanderbilt sct users joshua labels nii gz o tmp lab el nii gz sct convert i home jcohen sct data template nii gz o tmp template nii sct convert i home jcohen sct data template cord nii gz o tmp template seg nii gz generate labels from template vertebral labeling sct label utils i home jcohen sct data template levels nii gz vert body o template label nii gz check if provided labels are available in the template binarize segmentation sct maths i seg nii gz bin o seg nii gz resample data to isotropic sct resample i data nii mm x linear o data nii running home jcohen sct scripts sct resample py i data nii mm x linear o data nii traceback most recent call last file home jcohen sct scripts sct resample py line in main file home jcohen sct scripts sct resample py line in main resample file home jcohen sct scripts sct resample py line in resample from nipy algorithms registration import resample file home jcohen sct python lib site packages nipy algorithms registration init py line in from scripting import space time realign file home jcohen sct python lib site packages nipy algorithms registration scripting py line in import matplotlib pyplot as plt file home jcohen sct python lib site packages matplotlib pyplot py line in backend mod new figure manager draw if interactive show pylab setup file home jcohen sct python lib site packages matplotlib backends init py line in pylab setup globals locals file home jcohen sct python lib site packages matplotlib backends backend py line in from backend import figurecanvasqtaggbase as figurecanvasqtaggbase file home jcohen sct python lib site packages matplotlib backends backend py line in from backend import qtcore file home jcohen sct python lib site packages matplotlib backends backend py line in from qt compat import qtcore qtgui qtwidgets getsavefilename version file home jcohen sct python lib site packages matplotlib backends qt compat py line in from import qtcore qtgui importerror libsm so cannot open shared object file no such file or directory home jcohen sct scripts sct utils pynone home jcohen sct scripts sct utils pynone sct check dependencies 
spinal cord toolbox jca running home jcohen sct scripts sct check dependencies py os linux linux with centos core cpu cores available used by sct ram memtotal kb total used free shared buff cache available mem swap sct path home jcohen sct installation type git commit branch jca check python path check if data are installed check if xlwt is installed check if xlutils is installed check if cryptography is installed check if scikit learn is installed check if scikit image is installed check if pyqt is installed check if psutil is installed check if matplotlib is installed check if pip is installed detected version required version check if requests is installed check if xlrd is installed check if pandas is installed check if is installed check if dipy is installed check if is installed check if nibabel is installed check if tqdm is installed check if nipy is installed check if numpy is installed check if scipy is installed check if spinalcordtoolbox is installed check ants compatibility with os check propseg compatibility with os check if figure can be opened
| 1
|
4,746
| 7,603,940,041
|
IssuesEvent
|
2018-04-29 19:40:12
|
P0cL4bs/WiFi-Pumpkin
|
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
|
closed
|
TCP Proxy error
|
enhancement in process priority solved
|
Hello, I cannot resolve this issue; can you tell me what's wrong with the TCP proxy?
OS: Ubuntu 17
Python: 2.7, 3*
```
Traceback (most recent call last):
File "/usr/share/WiFi-Pumpkin/core/servers/proxy/tcp/intercept.py", line 43, in run
self.main()
File "/usr/share/WiFi-Pumpkin/core/servers/proxy/tcp/intercept.py", line 86, in main
q = Queue.Queue()
AttributeError: class Queue has no attribute 'Queue'
```
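The traceback suggests the name `Queue` is bound to a class rather than the stdlib module, which is the classic Python 2/3 rename (the `Queue` module became `queue` in Python 3). A minimal version-compatible import, offered as a sketch rather than the project's actual fix:
```python
try:
    import queue              # Python 3 module name
except ImportError:
    import Queue as queue     # Python 2 module name

q = queue.Queue()             # works under either interpreter
```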
|
1.0
|
TCP Proxy error - Hello, I cannot resolve this issue; can you tell me what's wrong with the TCP proxy?
OS: Ubuntu 17
Python: 2.7, 3*
```
Traceback (most recent call last):
File "/usr/share/WiFi-Pumpkin/core/servers/proxy/tcp/intercept.py", line 43, in run
self.main()
File "/usr/share/WiFi-Pumpkin/core/servers/proxy/tcp/intercept.py", line 86, in main
q = Queue.Queue()
AttributeError: class Queue has no attribute 'Queue'
```
|
process
|
tcp proxy error hello cannot resolve this issue can you tell me wha s wrong with tcp proxy os ubuntu python traceback most recent call last file usr share wifi pumpkin core servers proxy tcp intercept py line in run self main file usr share wifi pumpkin core servers proxy tcp intercept py line in main q queue queue attributeerror class queue has no attribute queue
| 1
|
252,740
| 21,628,565,900
|
IssuesEvent
|
2022-05-05 07:10:53
|
ijioio/object-formatter
|
https://api.github.com/repos/ijioio/object-formatter
|
closed
|
Formatters self tests
|
test
|
Update full-coverage tests for usage of `com.ijioio.object.format.formatter.Formatter` to include self pattern.
|
1.0
|
Formatters self tests - Update full-coverage tests for usage of `com.ijioio.object.format.formatter.Formatter` to include self pattern.
|
non_process
|
formatters self tests update full coverage tests for usage of com ijioio object format formatter formatter to include self pattern
| 0
|
167,256
| 14,106,422,238
|
IssuesEvent
|
2020-11-06 14:54:43
|
rl-institut/open_plan
|
https://api.github.com/repos/rl-institut/open_plan
|
closed
|
View-component load_input_dataseries is not described yet
|
documentation
|
This is a view-component where the user can choose to upload arrays, .xls, .csv files as value for a parameter, this is linked to the view-component [input_parameter_field](https://open-plan.readthedocs.io/en/latest/tool_interface.html#input-parameter-field)
File exists
|
1.0
|
View-component load_input_dataseries is not described yet - This is a view-component where the user can choose to upload arrays, .xls, .csv files as value for a parameter, this is linked to the view-component [input_parameter_field](https://open-plan.readthedocs.io/en/latest/tool_interface.html#input-parameter-field)
File exists
|
non_process
|
view component load input dataseries is not described yet this is a view component where the user can choose to upload arrays xls csv files as value for a parameter this is linked to the view component file exists
| 0
|
63,035
| 8,652,786,004
|
IssuesEvent
|
2018-11-27 09:08:09
|
ericmorand/twing
|
https://api.github.com/repos/ericmorand/twing
|
closed
|
Quickstart guide?
|
documentation enhancement
|
I'm new to twing so sorry if this is obvious, but is there a quickstart guide? Something like
1. `npm i twing --save`
2. **app.js**
```
const {TwingEnvironment, TwingLoaderFilesystem} = require('twing');
let loader = new TwingLoaderFilesystem('/pages');
let twing = new TwingEnvironment(loader);
let output = twing.render('index.html', {'name': 'World'});
```
3. **pages/index.html**
```
<!doctype html>
<html>
<head>
<title>Quickstart</title>
</head>
<body>
<h2>Hello{{ ' ' + name|trim }}!</h2>
</body>
</html>
```
4. `node app.js`
|
1.0
|
Quickstart guide? - I'm new to twing so sorry if this is obvious, but is there a quickstart guide? Something like
1. `npm i twing --save`
2. **app.js**
```
const {TwingEnvironment, TwingLoaderFilesystem} = require('twing');
let loader = new TwingLoaderFilesystem('/pages');
let twing = new TwingEnvironment(loader);
let output = twing.render('index.html', {'name': 'World'});
```
3. **pages/index.html**
```
<!doctype html>
<html>
<head>
<title>Quickstart</title>
</head>
<body>
<h2>Hello{{ ' ' + name|trim }}!</h2>
</body>
</html>
```
4. `node app.js`
|
non_process
|
quickstart guide i m new to twing so sorry if this is obvious but is there a quickstart guide something like npm i twing save app js const twingenvironment twingloaderfilesystem require twing let loader new twingloaderfilesystem pages let twing new twingenvironment loader let ouput twing render index html name world pages index html quickstart hello name trim node app js
| 0
|
1,292
| 3,828,731,501
|
IssuesEvent
|
2016-03-31 07:36:36
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Component\Process\Process reads stream input only once on Process::start/run
|
Feature Process
|
I want to implement a Command which starts a child Process and interacts with it at certain points by writing into its STDIN. I did something similar with proc_open a while ago, and I know it is possible with the process API that PHP provides, using the nonblocking option and a pseudo terminal. The Process component provides an "async" API to run a process, but no means to write to the STDIN during the execution. Only once at start time.
The entire testcase only checks the stream feature by writing to the stream prior to the execution:
https://github.com/romainneutron/symfony/blob/e19ce573bc26f64fec610181cd96425af54389fa/src/Symfony/Component/Process/Tests/AbstractProcessTest.php#L164-L181
Not during.
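For illustration only, here is the interaction pattern being requested, sketched with Python's subprocess API (the `cat` child is a stand-in; this is not the Symfony API, which later gained a streaming-input abstraction for this use case):
```python
import subprocess

# Start a child and keep its STDIN open for later writes.
p = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

p.stdin.write(b"first message\n")   # interact at an arbitrary point...
p.stdin.flush()
p.stdin.write(b"second message\n")  # ...and again later, not only at start
p.stdin.close()                     # signal end of input

print(p.stdout.read().decode())
p.wait()
```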
|
1.0
|
Component\Process\Process reads stream input only once on Process::start/run - I want to implement a Command which starts a child Process and interacts with it at certain points by writing into its STDIN. I did something similar with proc_open a while ago, and I know it is possible with the process API that PHP provides, using the nonblocking option and a pseudo terminal. The Process component provides an "async" API to run a process, but no means to write to the STDIN during the execution. Only once at start time.
The entire testcase only checks the stream feature by writing to the stream prior to the execution:
https://github.com/romainneutron/symfony/blob/e19ce573bc26f64fec610181cd96425af54389fa/src/Symfony/Component/Process/Tests/AbstractProcessTest.php#L164-L181
Not during.
|
process
|
component process process reads stream input only once on process start run i want to implement a command which starts a child process and interacts with it at certain points by writing into its stdin i did similar with proc open a while ago and i know it is possible with the process api which php provides using the nonblocking option and pseudo terminal the process component provides an async api to run a process but no means to write to the stdin during the execution only once at start time the entire testcase only checks the stream feature writing to the stream prior the execution not during
| 1
|
13,953
| 16,737,100,050
|
IssuesEvent
|
2021-06-11 04:08:31
|
GoogleCloudPlatform/cloud-ops-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
|
closed
|
Error 400: One or more TimeSeries could not be written
|
external priority: p3 type: bug type: process
|
Upgrading to workload identity caused the following errors to start spamming logs.
It's a known issue in [k8s-stackdriver](https://github.com/GoogleCloudPlatform/k8s-stackdriver/issues/308), and it exists in some GKE versions but not others.
```
Error while sending request to Stackdriver googleapi:
Error 400: One or more TimeSeries could not be written:
Unknown metric: container.googleapis.com/internal/addons/workload_identity/loading_cache_fetch_count: timeSeries[51-53];
Unknown metric: container.googleapis.com/internal/addons/workload_identity/loading_cache_latency: timeSeries[29-31];
Value type for metric container.googleapis.com/internal/addons/workload_identity/metadata_server_build_info must be DOUBLE, but is INT64.: timeSeries[41],
badRequest
```
|
1.0
|
Error 400: One or more TimeSeries could not be written - Upgrading to workload identity caused the following errors to start spamming logs.
It's a known issue in [k8s-stackdriver](https://github.com/GoogleCloudPlatform/k8s-stackdriver/issues/308), and it exists in some GKE versions but not others.
```
Error while sending request to Stackdriver googleapi:
Error 400: One or more TimeSeries could not be written:
Unknown metric: container.googleapis.com/internal/addons/workload_identity/loading_cache_fetch_count: timeSeries[51-53];
Unknown metric: container.googleapis.com/internal/addons/workload_identity/loading_cache_latency: timeSeries[29-31];
Value type for metric container.googleapis.com/internal/addons/workload_identity/metadata_server_build_info must be DOUBLE, but is INT64.: timeSeries[41],
badRequest
```
|
process
|
error one or more timeseries could not be written upgrading to workload identity caused the following errors to start spamming logs it s a known issue in and it exists in some gke versions but not others error while sending request to stackdriver googleapi error one or more timeseries could not be written unknown metric container googleapis com internal addons workload identity loading cache fetch count timeseries unknown metric container googleapis com internal addons workload identity loading cache latency timeseries value type for metric container googleapis com internal addons workload identity metadata server build info must be double but is timeseries badrequest
| 1
|
15,954
| 20,172,183,434
|
IssuesEvent
|
2022-02-10 11:24:32
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
x86: Incorrect decompilation output when FST instruction is used
|
Feature: Processor/x86
|
**Describe the bug**
The decompiler produces erroneous results when the FST instruction is used.
The following function is decompiled incorrectly:
```double __stdcall FUN_00401800(double * num)
double ST0:10 <RETURN>
double * Stack[0x4]:4 num
FUN_00401800
00401800 8b 44 24 04 MOV EAX,dword ptr [ESP + num]
00401804 d9 ee FLDZ
00401806 dd 00 FLD qword ptr [EAX]
00401808 dd d1 FST ST1
0040180a d8 c9 FMUL ST1
0040180c dd d9 FSTP ST1
0040180e c2 04 00 RET 0x4
```
this is the current decompilation output:
```
double FUN_00401800(double *num)
{
return 0.0;
}
```
**Expected behavior**
This function should be decompiled as
```
double FUN_00401800(double *num)
{
return *num * *num;
}
```
**Environment**
- OS: Win10 19044.1469
- Java Version: 14
- Ghidra Version: 10.1.1
- Ghidra Origin: official ghidra-sre.org distro
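Tracing the x87 register stack by hand confirms the expected result; the following small Python simulation of the listing is offered as a sketch, with `st[0]` standing in for ST0:
```python
def fun_00401800(num: float) -> float:
    st = []                        # x87 register stack; st[0] models ST0
    st.insert(0, 0.0)              # FLDZ: push 0.0
    st.insert(0, num)              # FLD qword ptr [EAX]: push *num
    st[1] = st[0]                  # FST ST1: copy ST0 into ST1
    st[0] = st[0] * st[1]          # FMUL ST1: ST0 = ST0 * ST1
    st[1] = st[0]                  # FSTP ST1: store ST0 into ST1...
    st.pop(0)                      # ...then pop the stack
    return st[0]                   # leaves *num * *num in ST0, not 0.0

assert fun_00401800(3.0) == 9.0
```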
|
1.0
|
x86: Incorrect decompilation output when FST instruction is used - **Describe the bug**
The decompiler produces erroneous results when the FST instruction is used.
The following function is decompiled incorrectly:
```double __stdcall FUN_00401800(double * num)
double ST0:10 <RETURN>
double * Stack[0x4]:4 num
FUN_00401800
00401800 8b 44 24 04 MOV EAX,dword ptr [ESP + num]
00401804 d9 ee FLDZ
00401806 dd 00 FLD qword ptr [EAX]
00401808 dd d1 FST ST1
0040180a d8 c9 FMUL ST1
0040180c dd d9 FSTP ST1
0040180e c2 04 00 RET 0x4
```
this is the current decompilation output:
```
double FUN_00401800(double *num)
{
return 0.0;
}
```
**Expected behavior**
This function should be decompiled as
```
double FUN_00401800(double *num)
{
return *num * *num;
}
```
**Environment**
- OS: Win10 19044.1469
- Java Version: 14
- Ghidra Version: 10.1.1
- Ghidra Origin: official ghidra-sre.org distro
|
process
|
incorrect decompilation output when fst instruction is used describe the bug decompiler produces erroneous results when fst instruction is used the following function is decompiled incorrectly double stdcall fun double num double double stack num fun mov eax dword ptr ee fldz dd fld qword ptr dd fst fmul dd fstp ret this is the current decompilation output double fun double num return expected behavior this function should be decompiled as double fun double num return num num environment os java version ghidra version ghidra origin official ghidra sre org distro
| 1
|
294,744
| 22,161,985,730
|
IssuesEvent
|
2022-06-04 16:31:08
|
carmls/snacs-guidelines
|
https://api.github.com/repos/carmls/snacs-guidelines
|
closed
|
Impelled agents
|
documentation ko en force-dynamics hi
|
Hindi can use the accusative case marker _ko_ or the instrumental case marker _se_ to mark someone forced to do something in a causative construction. Basically, this object is involved in two scenes: (1) the impeller making them do the action; (2) them performing the action (ambiguous whether the impeller is aiding or not). The action itself may have a Theme (and in this case the instrumental is used on the impelled agent), further enforcing (2).
Currently I label this Agent~Instrument, but you'll see why this doesn't work...
> माँ ने बच्चे **को** सुलाया।
> mother ERG child **ACC** sleep.CAUS.PFV
> The mother made the child sleep.
> मालिक ने नौकर **से** प्याज़ कटाए।
> owner ERG servant **INS** onion cut.CAUS.PFV
> The landlord made the servant cut onions.
So not only is there force dynamics going on in scene (1), which we don't really know how to label, but there's also the conflict between the two scenes. And of course, this will be an issue in English too if we annotate non-marked core participants. Korean also has causatives, but I'm not sure how they're being treated.
To make matters more troubling, the impelled agent may have a scene role that isn't Agent:
> मैंने अपने भाई **से** किताब लिखवायी।
> I.ERG self.GEN brother **INS** book write.CAUS2.PFV
> I got my brother to write a book./I made my brother write a book.
So this is some combination of Originator (scene role), Agent (function as writer), ??? (function as someone being impelled), Instrument (the prototypical function of the instrumental case).
The best place to start, I think, is to figure out what to label someone who is being impelled to do something.
|
1.0
|
Impelled agents - Hindi can use the accusative case marker _ko_ or the instrumental case marker _se_ to mark someone forced to do something in a causative construction. Basically, this object is involved in two scenes: (1) the impeller making them do the action; (2) them performing the action (ambiguous whether the impeller is aiding or not). The action itself may have a Theme (and in this case the instrumental is used on the impelled agent), further enforcing (2).
Currently I label this Agent~Instrument, but you'll see why this doesn't work...
> माँ ने बच्चे **को** सुलाया।
> mother ERG child **ACC** sleep.CAUS.PFV
> The mother made the child sleep.
> मालिक ने नौकर **से** प्याज़ कटाए।
> owner ERG servant **INS** onion cut.CAUS.PFV
> The landlord made the servant cut onions.
So not only is there force dynamics going on in scene (1), which we don't really know how to label, but there's also the conflict between the two scenes. And of course, this will be an issue in English too if we annotate non-marked core participants. Korean also has causatives, but I'm not sure how they're being treated.
To make matters more troubling, the impelled agent may have a scene role that isn't Agent:
> मैंने अपने भाई **से** किताब लिखवायी।
> I.ERG self.GEN brother **INS** book write.CAUS2.PFV
> I got my brother to write a book./I made my brother write a book.
So this is some combination of Originator (scene role), Agent (function as writer), ??? (function as someone being impelled), Instrument (the prototypical function of the instrumental case).
The best place to start, I think, is to figure out what to label someone who is being impelled to do something.
|
non_process
|
impelled agents hindi can use the accusative case marker ko or the instrumental case marker se to mark someone forced to do something in a causative construction basically this object is involved in two scenes the impeller making them do the action them performing the action ambiguous whether the impeller is aiding or not the action itself may have a theme and in this case the instrumental is used on the impelled agent further enforcing currently i label this agent instrument but you ll see why this doesn t work माँ ने बच्चे को सुलाया। mother erg child acc sleep caus pfv the mother made the child sleep मालिक ने नौकर से प्याज़ कटाए। owner erg servant ins onion cut caus pfv the landlord made the servant cut onions so not only is there force dynamics going on in scene which we don t really know how to label but there s also the conflict between the two scenes and of course this will be an issue in english too if we annotate non marked core participants korean also has causatives but i m not sure how they re being treated to make matters more troubling the impelled agent may have a scene role that isn t agent मैंने अपने भाई से किताब लिखवायी। i erg self gen brother ins book write pfv i got my brother to write a book i made my brother write a book so this is some combination of originator scene role agent function as writer function as someone being impelled instrument the prototypical function of the instrumental case the best place to start i think is to figure out what to label someone who is being impelled to do something
| 0
|
772,209
| 27,111,704,289
|
IssuesEvent
|
2023-02-15 15:44:07
|
leepeuker/movary
|
https://api.github.com/repos/leepeuker/movary
|
closed
|
Rating is editable when not having clicked on the edit button in the movie page
|
bug priority: middle
|
When visiting a movie, I can change the rating without having clicked on the edit button.
I believe you should only be able to edit the rating after clicking on the edit button, right?
|
1.0
|
Rating is editable when not having clicked on the edit button in the movie page - When visiting a movie, I can change the rating without having clicked on the edit button.
I believe you should only be able to edit the rating after clicking on the edit button, right?
|
non_process
|
rating is editable when not having clicked on the edit button in the movie page when visiting a movie i can change the rating without having changed on the edit button i believe you should only be able to edit the rating when clicking on the rating button right
| 0
|
253,243
| 8,053,352,088
|
IssuesEvent
|
2018-08-01 22:36:01
|
facebook/prepack
|
https://api.github.com/repos/facebook/prepack
|
opened
|
Sourcemaps are broken with more than one source file
|
Instant Render bug help wanted priority: medium
|
For every source file, we read in (another copy (!) of) _the same_ source map file:
https://github.com/facebook/prepack/blob/bfe8cd26afcdcf412c776086c9ef1641c2aa8475/src/prepack-node.js#L123
The API later used to map locations isn't even trying to match source file names either:
https://github.com/facebook/prepack/blob/88d9495226efca8fee135b158e219d0eba5fd6a0/src/environment.js#L1305
This affects both error messages given by Prepack (they will be bogus!) as well as the rewritten source maps file.
To fix this, the Prepack API needs to get a mapping of source files to source map files, not just a single source map file.
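A hedged sketch of the shape of the fix follows; the names and the ".map" naming convention are assumptions, not Prepack's API. The point is one parsed map per source file, selected by file name:
```python
# Hedged sketch (names and the ".map" convention are assumptions, not
# Prepack's API): keep one parsed source map per input file and select
# it by file name at lookup time, instead of reusing a single map.
maps = {
    "a.js": {"mappings": "..."},   # stand-in for a parsed a.js.map
    "b.js": {"mappings": "..."},   # stand-in for a parsed b.js.map
}

def original_location(src: str, line: int, col: int):
    source_map = maps[src]          # match on the source file name first
    return source_map, (line, col)  # placeholder for the real decode step
```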
|
1.0
|
Sourcemaps are broken with more than one source file - For every source file, we read in (another copy (!) of) _the same_ source map file:
https://github.com/facebook/prepack/blob/bfe8cd26afcdcf412c776086c9ef1641c2aa8475/src/prepack-node.js#L123
The API later used to map locations isn't even trying to match source file names either:
https://github.com/facebook/prepack/blob/88d9495226efca8fee135b158e219d0eba5fd6a0/src/environment.js#L1305
This affects both error messages given by Prepack (they will be bogus!) as well as the rewritten source maps file.
To fix this, the Prepack API needs to get a mapping of source files to source map files, not just a single source map file.
|
non_process
|
sourcemaps are broken with more than one source file for every source file we read in another copy of the same source map file the api later used to map locations isn t even trying to match source file names either this affects both error messages given my prepack will be bogus as well as the rewritten source maps file to fix this the prepack api needs to get a mapping of source files to source map files not just a single source map file
| 0
|
671,147
| 22,744,542,504
|
IssuesEvent
|
2022-07-07 08:00:50
|
telstra/open-kilda
|
https://api.github.com/repos/telstra/open-kilda
|
closed
|
Some fields are missed in History dump response
|
priority/4-low improvement
|
API `/v1/flows/{flow_id}/history` returns json with dumps of 2 types `stateBefore` and `stateAfter`
```
"type": "stateAfter",
"bandwidth": 500,
"ignoreBandwidth": false,
"forwardCookie": 4611686018427496000,
"reverseCookie": 2305843009213802500,
"sourceSwitch": "00:00:00:00:00:00:00:00",
"destinationSwitch": "00:00:00:00:00:00:00:00",
"sourcePort": 64,
"destinationPort": 67,
"sourceVlan": 3442,
"destinationVlan": 677,
"sourceInnerVlan": 0,
"destinationInnerVlan": 0,
"forwardMeterId": 2141,
"reverseMeterId": 841,
"forwardPath": [],
"reversePath": [],
"forwardStatus": "IN_PROGRESS",
"reverseStatus": "IN_PROGRESS",
"diverseGroupId": null,
"affinityGroupId": null,
"allocateProtectedPath": false,
"pinned": false,
"periodicPings": false,
"encapsulationType": "TRANSIT_VLAN",
"pathComputationStrategy": "COST_AND_AVAILABLE_BANDWIDTH",
"maxLatency": 0,
"loopSwitchId": null
```
Some fields are missing from this dump, at least the field `max_latency_tier2`.
We need to check which fields are missing (using the response of `/v2/flows/{flow_id}` as a reference) and add the missing fields.
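One way to enumerate the gap, sketched in Python with abbreviated stand-in payloads: diff the key sets of the `/v2/flows/{flow_id}` response against a history dump entry.
```python
# Abbreviated stand-in payloads; the real responses are much larger.
flow_v2 = {"maxLatency": 0, "maxLatencyTier2": 0, "pinned": False}
state_after_dump = {"maxLatency": 0, "pinned": False}

missing = flow_v2.keys() - state_after_dump.keys()
print(sorted(missing))   # -> ['maxLatencyTier2']
```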
|
1.0
|
Some fields are missed in History dump response - API `/v1/flows/{flow_id}/history` returns json with dumps of 2 types `stateBefore` and `stateAfter`
```
"type": "stateAfter",
"bandwidth": 500,
"ignoreBandwidth": false,
"forwardCookie": 4611686018427496000,
"reverseCookie": 2305843009213802500,
"sourceSwitch": "00:00:00:00:00:00:00:00",
"destinationSwitch": "00:00:00:00:00:00:00:00",
"sourcePort": 64,
"destinationPort": 67,
"sourceVlan": 3442,
"destinationVlan": 677,
"sourceInnerVlan": 0,
"destinationInnerVlan": 0,
"forwardMeterId": 2141,
"reverseMeterId": 841,
"forwardPath": [],
"reversePath": [],
"forwardStatus": "IN_PROGRESS",
"reverseStatus": "IN_PROGRESS",
"diverseGroupId": null,
"affinityGroupId": null,
"allocateProtectedPath": false,
"pinned": false,
"periodicPings": false,
"encapsulationType": "TRANSIT_VLAN",
"pathComputationStrategy": "COST_AND_AVAILABLE_BANDWIDTH",
"maxLatency": 0,
"loopSwitchId": null
```
Some fields are missing from this dump, at least the field `max_latency_tier2`.
We need to check which fields are missing (using the response of `/v2/flows/{flow_id}` as a reference) and add the missing fields.
|
non_process
|
some fields are missed in history dump response api flows flow id history returns json with dumps of types statebefore and stateafter type stateafter bandwidth ignorebandwidth false forwardcookie reversecookie sourceswitch destinationswitch sourceport destinationport sourcevlan destinationvlan sourceinnervlan destinationinnervlan forwardmeterid reversemeterid forwardpath reversepath forwardstatus in progress reversestatus in progress diversegroupid null affinitygroupid null allocateprotectedpath false pinned false periodicpings false encapsulationtype transit vlan pathcomputationstrategy cost and available bandwidth maxlatency loopswitchid null there some fields are missed in this dump at least filed max latency need to check what fields are missed use response of flows flow id as reference and add missed fields
| 0
|
466,407
| 13,401,362,608
|
IssuesEvent
|
2020-09-03 17:11:31
|
OpenNebula/one
|
https://api.github.com/repos/OpenNebula/one
|
closed
|
Force IP from Sunstone stop working
|
Category: Sunstone Priority: High Sponsored Status: Accepted Type: Bug
|
**Description**
Sunstone allows forcing the IP for a NIC in the instantiation dialog.
This is not working anymore.
**To Reproduce**
Force the IP for a NIC at the instantiate dialog.
**Expected behavior**
The specified IP should be set.
**Details**
- Affected Component: Sunstone
- Version: development
**Additional context**
Sunstone is not sending the information to `oned`.
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
|
1.0
|
Force IP from Sunstone stop working - **Description**
Sunstone allows forcing the IP for a NIC in the instantiation dialog.
This is not working anymore.
**To Reproduce**
Force the IP for a NIC at the instantiate dialog.
**Expected behavior**
The specified IP should be set.
**Details**
- Affected Component: Sunstone
- Version: development
**Additional context**
Sunstone is not sending the information to `oned`.
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
|
non_process
|
force ip from sunstone stop working description susntone allows to force the ip for a nic in the instantiation dialog this is not working anymore to reproduce force the ip for a nic at the instantiate dialog expected behavior the specified ip should be set details affected component sunstone version development additional context sunstone is not sending the information to oned progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches
| 0
|
10,081
| 13,044,161,976
|
IssuesEvent
|
2020-07-29 03:47:28
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDateAndDuration` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDateAndDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
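A hedged reading of the intended semantics (this is not the TiKV signature): subtract a duration from a datetime, as in MySQL's `SUBDATE(date, INTERVAL ...)`:
```python
from datetime import datetime, timedelta

def sub_date_and_duration(d: datetime, dur: timedelta) -> datetime:
    # Subtract a duration from a datetime, as in SUBDATE(date, INTERVAL ...).
    return d - dur

assert sub_date_and_duration(datetime(2020, 1, 2), timedelta(days=1)) == datetime(2020, 1, 1)
```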
|
2.0
|
UCP: Migrate scalar function `SubDateAndDuration` from TiDB -
## Description
Port the scalar function `SubDateAndDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function subdateandduration from tidb description port the scalar function subdateandduration from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
10,143
| 13,044,162,524
|
IssuesEvent
|
2020-07-29 03:47:33
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `ReleaseLock` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `ReleaseLock` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `ReleaseLock` from TiDB -
## Description
Port the scalar function `ReleaseLock` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function releaselock from tidb description port the scalar function releaselock from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
967
| 3,422,953,194
|
IssuesEvent
|
2015-12-09 02:15:21
|
MaretEngineering/MROV
|
https://api.github.com/repos/MaretEngineering/MROV
|
closed
|
Support arbitrary number of servos
|
Arduino Necessary Addition Processing
|
Right now the code only supports 3 servos (2 camera + 1 claw). We need to add support for an unspecified number of servos because we don't yet know how many we are going to need.
It should probably be based around a constant NUMBER_OF_SERVOS at the top of the file which we can change depending on what we do. We obviously don't know the pin values of any of the new servos but we can add those in later.
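The constant-driven layout described above, sketched in Python for brevity (the Arduino version would use a compile-time constant and a fixed array; the pin values below are placeholders):
```python
NUMBER_OF_SERVOS = 3          # change this once the final servo count is known

# Pin assignments; None marks servos whose pins are not yet decided.
servo_pins = [9, 10, None][:NUMBER_OF_SERVOS]

def attach_all():
    for i, pin in enumerate(servo_pins):
        if pin is None:
            continue                             # pin value to be added later
        print(f"attach servo {i} to pin {pin}")  # stand-in for servo.attach(pin)

attach_all()
```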
|
1.0
|
Support arbitrary number of servos - Right now the code only supports 3 servos (2 camera + 1 claw). We need to add support for an unspecified number of servos because we don't yet know how many we are going to need.
It should probably be based around a constant NUMBER_OF_SERVOS at the top of the file which we can change depending on what we do. We obviously don't know the pin values of any of the new servos but we can add those in later.
|
process
|
support arbitrary number of servos right now the code only supports servos camera claw we need to add support for an unspecified number of servos because we don t yet know how many we are going to need it should probably be based around a constant number of servos at the top of the file which we can change depending on what we do we obviously don t know the pin values of any of the new servos but we can add those in later
| 1
|
19,419
| 25,565,523,456
|
IssuesEvent
|
2022-11-30 14:02:52
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Tail-sampling processor: drop spans matching an attribute
|
question processor/tailsampling
|
It is currently possible to prescribe that spans matching a certain condition are always sampled.
As far as I can see, there is no mechanism to do the opposite (even in other processors): how do I instruct the collector to always drop spans matching a certain condition?
E.g. drop all spans where the HTTP path is `/health_check`.
Looking at the source, it would be fairly simple to add this functionality to the tail-sampling processor and I'd be happy to open a PR for it, but I am also aware of #1797 - what is the best way forward here?
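Reduced to a predicate, the inverse policy being asked about looks like the following Python sketch (the attribute key and span representation are assumptions, not the collector's API):
```python
def keep_trace(spans) -> bool:
    # Drop the whole trace if any span matches the drop condition.
    return not any(s.get("http.target") == "/health_check" for s in spans)

assert keep_trace([{"http.target": "/api"}]) is True
assert keep_trace([{"http.target": "/health_check"}]) is False
```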
|
1.0
|
Tail-sampling processor: drop spans matching an attribute - It is currently possible to prescribe that spans matching a certain condition are always sampled.
As far as I can see, there is no mechanism to do the opposite (even in other processors): how do I instruct the collector to always drop spans matching a certain condition?
E.g. drop all spans where the HTTP path is `/health_check`.
Looking at the source, it would be fairly simple to add this functionality to the tail-sampling processor and I'd be happy to open a PR for it, but I am also aware of #1797 - what is the best way forward here?
|
process
|
tail sampling processor drop spans matching an attribute it is currently possible to prescribe that spans matching a certain condition are always sampled as far as i can see there is no mechanism to do the opposite even in other processors how do i instruct the collector to always drop spans matching a certain condition e g drop all spans where the http path is health check looking at the source it would be fairly simple to add this functionality to the tail sampling processor and i d be happy to open a pr for it but i am also aware of what is the best way forward here
| 1
|
16,777
| 21,958,772,206
|
IssuesEvent
|
2022-05-24 14:13:25
|
ethereum/EIPs
|
https://api.github.com/repos/ethereum/EIPs
|
closed
|
Jekyll theme only supports headings up to `###`
|
type: EIP1 (Process) stale
|
This leads to EIPs with deeper subsections not looking very good.
|
1.0
|
Jekyll theme only supports headings up to `###` - This leads to EIPs with deeper subsections not looking very good.
|
process
|
jekyll theme only supports headings up to this leads to eips with deeper subsections not looking very good
| 1
|
2,376
| 5,176,425,450
|
IssuesEvent
|
2017-01-19 00:39:19
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Time filter not correct.
|
Correctness Priority/P1 Query Processor
|
If you add an absolute date/time filter with a time, the time is ignored entirely and the entire day is used.
This happens on at least Postgres and H2.
|
1.0
|
Time filter not correct. - If you add an absolute date/time filter with a time, the time is ignored entirely and the entire day is used.
This happens on at least Postgres and H2.
|
process
|
time filter not correct if you add an absolute date time filter with a time the time is ignored entirely and the entire day is used this happens on at least postgres and
| 1
|
19,863
| 26,275,853,236
|
IssuesEvent
|
2023-01-06 21:55:24
|
GoogleCloudPlatform/php-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/php-docs-samples
|
closed
|
Upgrade all samples to new sample format
|
type: process samples
|
We have a new samples format that greatly reduces the complexity and overhead of maintaining samples by using reflection to invoke the sample method. We need to upgrade all samples to use this format.
**Background**
We have two existing formats, the `symfony/console` format and the "functionless" format.
1. The `symfony/console`format: This format uses the `symfony/console` component to create an executable command file (e.g. [`vision.php`](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/vision/vision.php)) which executes the samples. This requires the overhead of adding every sample to the command file, and adds a layer of abstraction for the users.
2. The "functionless" format: This format uses [inline PHP](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/bigtable/src/create_cluster.php#L32) to execute the sample. This has the advantage of making the sample files executable, but we lose the wrapping function, which is useful for consistency and documentation.
**The New Format**
The new format uses a method in [`testing/sample_helpers.php`](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/testing/sample_helpers.php#L7) to execute the function automatically from the CLI, and also to print usage instructions based on the PHP doc for the function. All that is required is [these two lines](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/asset/src/export_assets.php#L53) at the bottom of each sample file.
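A rough Python analogue of that helper (hypothetical names; the real implementation is the PHP in `testing/sample_helpers.php`): inspect the function's signature, print usage when the argument count is off, and otherwise invoke it with the CLI arguments:
```python
import inspect
import sys

def execute_sample(fn, argv):
    params = list(inspect.signature(fn).parameters)
    if len(argv) != len(params):
        # Usage text derived from the signature, much as the PHP helper
        # derives it from the PHP doc block.
        print(f"Usage: {fn.__name__} " + " ".join(f"<{p}>" for p in params))
        return
    fn(*argv)

def export_assets(bucket: str):        # hypothetical sample function
    print(f"exporting assets to {bucket}")

if __name__ == "__main__":
    execute_sample(export_assets, sys.argv[1:])
```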
**Samples already upgraded**
- [x] Asset (#1311)
- [x] Compute
- [x] Firestore (#1363, #1381)
- [x] Pubsub (#1356)
- [x] IAP (#1378)
- [x] IOT (#1379)
- [x] Spanner (#1234)
- [x] Auth (#1375)
- [x] Storage (#1384)
- [x] Bigtable (#1496)
- [x] DLP (#1615)
- [x] [TextToSpeech](texttospeech) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1641)
- [x] [Video](video) (9 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1639)
- [x] [SecretManager](secretmanager) (15 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1647)
- [x] [Language](language) (13 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1650)
- [x] [Speech](speech) (14 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1642)
- [x] [KMS](kms) (36 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1649)
- [x] [logging](logging) (4 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1643)
- [x] [translate](translate) (21 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1719)
- [x] [BigQuery](bigquery/api) (28 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1720)
- [x] [error_reporting](error_reporting) (1 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1726)
- [x] [vision](vision) (4 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1638)
- [x] [datastore/tutorial](datastore/tutorial) (6 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1648)
- [x] [monitoring](monitoring) (27 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1640)
- [x] [securitycenter](https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1761)
- [x] [servicedirectory](https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1762)
**Samples in progress**
- [ ] [endpoints/getting-started](endpoints/getting-started) (1 sample app)
**Samples still using `symfony/console` format**
- [x] ~~[appengine/flexible/tasks](appengine/flexible/tasks)~~ (not maintained)
- [x] ~~[appengine/flexible/wordpress](appengine/flexible/wordpress)~~ (not maintained)
- [x] ~~[appengine/flexible/drupal8](appengine/flexible/drupal8)~~ (not maintained)
- [x] ~~[appengine/flexible/symfony](appengine/flexible/symfony)~~ (not maintained)
- [x] ~~[dialogflow](dialogflow) (19 samples)~~ (not maintained)
|
1.0
|
Upgrade all samples to new sample format - We have a new samples format that greatly reduces the complexity and overhead of maintaining samples by using reflection to invoke the sample method. We need to upgrade all samples to use this format.
**Background**
We have two existing formats, the `symfony/console` format and the "functionless" format.
1. The `symfony/console`format: This format uses the `symfony/console` component to create an executable command file (e.g. [`vision.php`](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/vision/vision.php)) which executes the samples. This requires the overhead of adding every sample to the command file, and adds a layer of abstraction for the users.
2. The "functionless" format: This format uses [inline PHP](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/bigtable/src/create_cluster.php#L32) to execute the sample. This has the advantage of making the sample files executable, but we lose the wrapping function, which is useful for consistency and documentation.
**The New Format**
The new format uses a method in [`testing/sample_helpers.php`](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/testing/sample_helpers.php#L7) to execute the function automatically from the CLI, and also to print usage instructions based on the PHP doc for the function. All that is required is [these two lines](https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/asset/src/export_assets.php#L53) at the bottom of each sample file.
**Samples already upgraded**
- [x] Asset (#1311)
- [x] Compute
- [x] Firestore (#1363, #1381)
- [x] Pubsub (#1356)
- [x] IAP (#1378)
- [x] IOT (#1379)
- [x] Spanner (#1234)
- [x] Auth (#1375)
- [x] Storage (#1384)
- [x] Bigtable (#1496)
- [x] DLP (#1615)
- [x] [TextToSpeech](texttospeech) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1641)
- [x] [Video](video) (9 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1639)
- [x] [SecretManager](secretmanager) (15 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1647)
- [x] [Language](language) (13 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1650)
- [x] [Speech](speech) (14 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1642)
- [x] [KMS](kms) (36 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1649)
- [x] [logging](logging) (4 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1643)
- [x] [translate](translate) (21 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1719)
- [x] [BigQuery](bigquery/api) (28 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1720)
- [x] [error_reporting](error_reporting) (1 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1726)
- [x] [vision](vision) (4 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1638)
- [x] [datastore/tutorial](datastore/tutorial) (6 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1648)
- [x] [monitoring](monitoring) (27 samples) (https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1640)
- [x] [securitycenter](https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1761)
- [x] [servicedirectory](https://github.com/GoogleCloudPlatform/php-docs-samples/pull/1762)
**Samples in progress**
- [ ] [endpoints/getting-started](endpoints/getting-started) (1 sample app)
**Samples still using `symfony/console` format**
- [x] ~~[appengine/flexible/tasks](appengine/flexible/tasks)~~ (not maintained)
- [x] ~~[appengine/flexible/wordpress](appengine/flexible/wordpress)~~ (not maintained)
- [x] ~~[appengine/flexible/drupal8](appengine/flexible/drupal8)~~ (not maintained)
- [x] ~~[appengine/flexible/symfony](appengine/flexible/symfony)~~ (not maintained)
- [x] ~~[dialogflow](dialogflow) (19 samples)~~ (not maintained)
|
process
|
upgrade all samples to new sample format we have a new samples format that greatly reduces the complexity and overhead of maintaining samples by using reflection to invoke the sample method we need to upgrade all samples to use this format background we have two existing formats the symfony console format and the functionless format the symfony console format this format uses the symfony console component to create an executable command file e g which executes the samples this requires the overhead of adding every sample to the command file and adds a layer of abstraction for the users the functionless format this format uses to execute the sample this has the advantage of making the sample files executable but we lose the wrapping function which is useful for consistency and documentation the new format the new format uses a method in to execute the function automatically from the cli and also to print usage instructions based on the php doc for the function all that is required is at the bottom of each sample file samples already upgraded asset compute firestore pubsub iap iot spanner auth storage bigtable dlp texttospeech video samples secretmanager samples language samples speech samples kms samples logging samples translate samples bigquery api samples error reporting samples vision samples datastore tutorial samples monitoring samples samples in progress endpoints getting started sample app samples still using symfony console format appengine flexible tasks not maintained appengine flexible wordpress not maintained appengine flexible not maintained appengine flexible symfony not maintained dialogflow samples not maintained
| 1
|
17,019
| 22,390,155,347
|
IssuesEvent
|
2022-06-17 06:45:37
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Add operation tracking to VmStateIterator
|
processor
|
Currently, the [VmState](https://github.com/maticnetwork/miden/blob/next/processor/src/debug.rs#L7) struct which is returned from the `VmStateIterator` does not contain the operation which was executed to put the VM into this state. We should add another field to this struct so that it looks something like this:
```Rust
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct VmState {
    pub op: Operation,
    pub clk: usize,
    pub fmp: Felt,
    pub stack: Vec<Felt>,
    pub memory: Vec<(u64, Word)>,
}
```
The `op` field would need to be populated from the information in the [decoder](https://github.com/maticnetwork/miden/blob/next/processor/src/decoder/mod.rs#L209). The decoder doesn't explicitly track the operations yet. It is possible to infer the operations from the trace, but I think a better approach would be to have a vector of operations in the decoder struct. Then, as the VM executes operations, they would be pushed into this vector.
Since tracking operations would result in some overhead, the above should happen only when we are executing programs via the [execute_iter()](https://github.com/maticnetwork/miden/blob/next/processor/src/lib.rs#L80) function.
Implementing this functionality will enable counting operations executed by the VM which would be useful for things like #198.
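For concreteness, here is a minimal sketch of the vector-of-operations idea. This is a hedged illustration rather than Miden's actual code: the `Decoder`'s real fields are elided, and the `Operation` enum is stubbed with two variants so the snippet stands alone.
```Rust
// Stand-in for the processor's real `Operation` enum.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Operation {
    Add,
    Mul,
}

/// Decoder with the proposed `ops` vector; the real trace columns are elided.
struct Decoder {
    ops: Vec<Operation>,
}

impl Decoder {
    fn new() -> Self {
        Self { ops: Vec::new() }
    }

    /// Called once per executed operation, only when running via
    /// `execute_iter()`, so normal execution pays no tracking overhead.
    fn record_op(&mut self, op: Operation) {
        self.ops.push(op);
    }

    /// The operation executed at clock cycle `clk`, if one was recorded.
    fn op_at(&self, clk: usize) -> Option<Operation> {
        self.ops.get(clk).copied()
    }
}

fn main() {
    let mut decoder = Decoder::new();
    decoder.record_op(Operation::Add);
    decoder.record_op(Operation::Mul);
    assert_eq!(decoder.op_at(1), Some(Operation::Mul));
    assert_eq!(decoder.op_at(5), None); // nothing recorded at clk 5
}
```
Populating `VmState.op` would then reduce to an `op_at(clk)` lookup while iterating.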
|
1.0
|
Add operation tracking to VmStateIterator - Currently, the [VmState](https://github.com/maticnetwork/miden/blob/next/processor/src/debug.rs#L7) struct which is returned from the `VmStateIterator` does not contain the operation which was executed to put the VM into this state. We should add another field to this struct so that it looks something like this:
```Rust
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct VmState {
    pub op: Operation,
    pub clk: usize,
    pub fmp: Felt,
    pub stack: Vec<Felt>,
    pub memory: Vec<(u64, Word)>,
}
```
The `op` field would need to be populated from the information in the [decoder](https://github.com/maticnetwork/miden/blob/next/processor/src/decoder/mod.rs#L209). The decoder doesn't explicitly track the operations yet. It is possible to infer the operations from the trace, but I think a better approach would be to have a vector of operations in the decoder struct. Then, as the VM executes operations, they would be pushed into this vector.
Since tracking operations would result in some overhead, the above should happen only when we are executing programs via the [execute_iter()](https://github.com/maticnetwork/miden/blob/next/processor/src/lib.rs#L80) function.
Implementing this functionality will enable counting operations executed by the VM which would be useful for things like #198.
|
process
|
add operation tracking to vmstateiterator currently the struct which is returned from the vmstateiterator does not contain the operation which was executed to put the vm into this state we should add another field to this struct so that it looks something like this rust pub struct vmstate pub op operation pub clk usize pub fmp felt pub stack vec pub memory vec the op field would need to be populated from the information in the the decoder doesn t explicitly track the operations yet it is possible to infer the operations from the trace but i think a better approach would be to have a vector of operations in the decoder struct then as the vm executes operations they would be pushed into this vector since tracking operations would result in some overhead the above should happen only when we are executing programs via function implementing this functionality will enable counting operations executed by the vm which would be useful for things like
| 1
|
18,677
| 24,594,197,761
|
IssuesEvent
|
2022-10-14 06:47:37
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[FHIR] All the created activities in the study builder are not mapping into the FHIR datastore
|
Bug Blocker P0 Response datastore Process: Fixed Process: Tested dev
|
Steps:
1. Create multiple activities in SB and launch the study [Created around 15 to 20 activities in SB]
2. Go to the google cloud console
3. Search for FHIR viewer
4. Click on the particular dataset and click on the FHIR datastore
5. Search for the Questionnaire and click on it
6. Observe
AR: Not all the activities created in the Study Builder are getting mapped into the Questionnaire of the FHIR datastore.
ER: All the activities created in the SB should get mapped into the FHIR store (as per the above scenario, only 6 activities are mapped into the FHIR store).
|
2.0
|
[FHIR] All the created activities in the study builder are not mapping into the FHIR datastore - Steps:
1. Create multiple activities in SB and launch the study [Created around 15 to 20 activities in SB]
2. Go to the google cloud console
3. Search for FHIR viewer
4. Click on the particular dataset and click on the FHIR datastore
5. Search for the Questionnaire and click on it
6. Observe
AR: Not all the activities created in the Study Builder are getting mapped into the Questionnaire of the FHIR datastore.
ER: All the activities created in the SB should get mapped into the FHIR store (as per the above scenario, only 6 activities are mapped into the FHIR store).
|
process
|
all the created activities in the study builder are not mapping into the fhir datastore steps create multiple activities in sb and launch the study go to the google cloud console search for fhir viewer click on the particular dataset and click on the fhir datastore search for the questionnaire and click on it observe ar all the activities created in the study builder are not getting mapped in to the questionnaire of the fhir datastore er all the activities created in the sb should get mapped into fhir store as per above scenario only activities are mapped into fhir store
| 1
|
8,422
| 11,589,400,838
|
IssuesEvent
|
2020-02-24 02:01:23
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
GitHub ref file path returns 404
|
area/deployment/standalone kind/process
|
https://www.kubeflow.org/docs/pipelines/installation/standalone-deployment/
The document mentions the path
github.com/kubeflow/pipelines//manifests/kustomize/base/crds?ref=$PIPELINE_VERSION
It returns 404. Not sure whether GitHub changed behavior.
Workaround: directly clone the code and run kubectl on it.
Fix: find the new GitHub file path pattern and update the document.
|
1.0
|
GitHub ref file path returns 404 - https://www.kubeflow.org/docs/pipelines/installation/standalone-deployment/
The document mentions the path
github.com/kubeflow/pipelines//manifests/kustomize/base/crds?ref=$PIPELINE_VERSION
It returns 404. Not sure whether GitHub changed behavior.
Workaround: directly clone the code and run kubectl on it.
Fix: find the new GitHub file path pattern and update the document.
|
process
|
github ref file path returns document mentioned the path github com kubeflow pipelines manifests kustomize base crds ref pipeline version it returns not sure whether github changed behavior workaround directly clone codes and kubectl on it fix find the new github file path pattern and update document
| 1
|
12,789
| 15,169,136,475
|
IssuesEvent
|
2021-02-12 20:36:10
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Unable to save results to PostgreSQL database when iterating features in processing algorithms
|
Bug Processing
|
# Problem

# Results


*NB* I haven't tested this in other versions to check whether it works
|
1.0
|
Unable to save results to PostgreSQL database when iterating features in processing algorithms -
# Problem

# Results


*NB* I haven't tested this in other versions to check whether it works
|
process
|
unable to save results to postgresql database when iterating features in processing algorithms problem results nb i haven t tested this in other versions before to test if it it works
| 1
|
285,201
| 31,080,069,921
|
IssuesEvent
|
2023-08-13 00:57:36
|
vital-ws/java-goof
|
https://api.github.com/repos/vital-ws/java-goof
|
opened
|
jquery-1.10.2.min.js: 4 vulnerabilities (highest severity is: 6.1)
|
Mend: dependency security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.10.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jquery version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2020-11023](https://www.mend.io/vulnerability-database/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jquery - 3.5.0;jquery-rails - 4.4.0 | ❌ |
| [CVE-2020-11022](https://www.mend.io/vulnerability-database/CVE-2020-11022) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jQuery - 3.5.0 | ❌ |
| [CVE-2015-9251](https://www.mend.io/vulnerability-database/CVE-2015-9251) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jQuery - 3.0.0 | ❌ |
| [CVE-2019-11358](https://www.mend.io/vulnerability-database/CVE-2019-11358) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jquery - 3.4.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2020-11022</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2015-9251</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2019-11358</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: jquery - 3.4.0</p>
</p>
<p></p>
</details>
|
True
|
jquery-1.10.2.min.js: 4 vulnerabilities (highest severity is: 6.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.10.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jquery version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2020-11023](https://www.mend.io/vulnerability-database/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jquery - 3.5.0;jquery-rails - 4.4.0 | ❌ |
| [CVE-2020-11022](https://www.mend.io/vulnerability-database/CVE-2020-11022) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jQuery - 3.5.0 | ❌ |
| [CVE-2015-9251](https://www.mend.io/vulnerability-database/CVE-2015-9251) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jQuery - 3.0.0 | ❌ |
| [CVE-2019-11358](https://www.mend.io/vulnerability-database/CVE-2019-11358) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 6.1 | jquery-1.10.2.min.js | Direct | jquery - 3.4.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2020-11022</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2015-9251</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2019-11358</summary>
### Vulnerable Library - <b>jquery-1.10.2.min.js</b></p>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.10.2/jquery.min.js</a></p>
<p>Path to dependency file: /todolist-goof/src/site/template/createTodo.html</p>
<p>Path to vulnerable library: /todolist-goof/src/site/template/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/src/main/webapp/static/js/jquery-1.10.2.min.js,/todolist-goof/todolist-web-struts/target/todolist/static/js/jquery-1.10.2.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.10.2.min.js** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: jquery - 3.4.0</p>
</p>
<p></p>
</details>
|
non_process
|
jquery min js vulnerabilities highest severity is vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file todolist goof src site template createtodo html path to vulnerable library todolist goof src site template static js jquery min js todolist goof todolist web struts src main webapp static js jquery min js todolist goof todolist web struts target todolist static js jquery min js vulnerabilities cve severity cvss dependency type fixed in jquery version remediation available medium jquery min js direct jquery jquery rails medium jquery min js direct jquery medium jquery min js direct jquery medium jquery min js direct jquery details cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file todolist goof src site template createtodo html path to vulnerable library todolist goof src site template static js jquery min js todolist goof todolist web struts src main webapp static js jquery min js todolist goof todolist web struts target todolist static js jquery min js dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file todolist goof src site template createtodo html path to vulnerable library todolist goof src site template static js jquery min js todolist goof todolist web struts src main webapp static js jquery min js todolist goof todolist web struts target todolist static js jquery min js dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file todolist goof src site template createtodo html path to vulnerable library todolist goof src site template static js jquery min js todolist goof todolist web struts src main webapp static js jquery min js todolist goof todolist web struts target todolist static js jquery min js dependency hierarchy x jquery min js vulnerable library found in base branch main 
vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery cve vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file todolist goof src site template createtodo html path to vulnerable library todolist goof src site template static js jquery min js todolist goof todolist web struts src main webapp static js jquery min js todolist goof todolist web struts target todolist static js jquery min js dependency hierarchy x jquery min js vulnerable library found in base branch main vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery
| 0
|
523
| 2,994,484,606
|
IssuesEvent
|
2015-07-22 12:06:33
|
tomchristie/django-rest-framework
|
https://api.github.com/repos/tomchristie/django-rest-framework
|
closed
|
Update contribution guidelines.
|
Process
|
I think we should probably slim the [CONTRIBUTING.md](https://github.com/tomchristie/django-rest-framework/blob/master/CONTRIBUTING.md) text right down to 2-5 key points.
Something like:
* Usage questions should be directed to the discussion group.
* Narrow issues down to the most minimal possible reproducing case before raising them.
There's other possible stuff, but those are probably the two most critical points. The less we say, the more weight what we do say should have.
We can also link to the longer form contribution guidelines.
/cc @kevin-brown @jpadilla @carltongibson @xordoquy
|
1.0
|
Update contribution guidelines. - I think we should probably slim the [CONTRIBUTING.md](https://github.com/tomchristie/django-rest-framework/blob/master/CONTRIBUTING.md) text right down to 2-5 key points.
Something like:
* Usage questions should be directed to the discussion group.
* Narrow issues down to the most minimal possible reproducing case before raising them.
There's other possible stuff, but those are probably the two most critical points. The less we say, the more weight what we do say should have.
We can also link to the longer form contribution guidelines.
/cc @kevin-brown @jpadilla @carltongibson @xordoquy
|
process
|
update contribution guidelines i think we should probably slim the text right down to key points something like usage questions should be directed to the discussion group narrow issues down to the the most minimal possible reproducing case before raising them there s other possible stuff but those are probably the two most critical points the less we say the more weight what we do say should have we can also link to the longer form contribution guidelines cc kevin brown jpadilla carltongibson xordoquy
| 1
|
88,838
| 15,823,083,432
|
IssuesEvent
|
2021-04-05 23:49:42
|
ZcashFoundation/zebra
|
https://api.github.com/repos/ZcashFoundation/zebra
|
closed
|
Safe Preallocation for (most) `Vec<T>`
|
A-consensus A-rust C-enhancement C-security I-heavy I-slow P-Medium
|
# Overview - Safe Preallocation
Currently, Zebra [doesn't preallocate Vectors](https://github.com/ZcashFoundation/zebra/blob/main/zebra-chain/src/serialization/zcash_deserialize.rs) when deserializing, since blind preallocations present a DoS vector. Instead, it relies on maximum block and transaction size to limit allocations. This is inefficient.
We can mitigate the DoS potential and allow for preallocation as follows:
1. Define a new trait `SafeAllocate`, defining the maximum length a Vec<T: SafeAllocate> can sensibly reach for each implementing type
1. Create a specialized `ZcashDeserialize` implementation for `SafeAllocate`rs.
```
pub trait SafeAllocate {
    /// Largest number of `Self` values a single message can sensibly contain.
    fn max_allocation() -> u64;
}

impl<T: ZcashDeserialize + SafeAllocate> ZcashDeserialize for Vec<T> {
    fn zcash_deserialize<R: io::Read>(mut reader: R) -> Result<Self, SerializationError> {
        let len = reader.read_compactsize()?;
        if len > T::max_allocation() {
            return Err(SerializationError::Parse("Vector longer than max_allocation"));
        }
        let mut vec = Vec::with_capacity(len as usize);
        for _ in 0..len {
            vec.push(T::zcash_deserialize(&mut reader)?);
        }
        Ok(vec)
    }
}
```
This allows us to pre-allocate for certain types without presenting a DoS vector, while retaining the flexibility to use the current (lazy) allocation strategy for types that defy safe blind allocation.
Note that deserialization using this method is guaranteed to fail during the deserialization of the first malicious vector. However, Block messages contain nested `Vec<T>` types, and it is possible for a Byzantine message to cause improper allocations at each level of nesting before being discovered. This is why the potential spike for a Block message is much higher than for other message types.
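For concreteness, a hedged sketch of how a type would opt in to the trait above; `BlockHash` and the local `MAX_MESSAGE_SIZE` constant are stand-ins for Zebra's actual items, used so the snippet compiles on its own:
```
/// Stand-in for Zebra's 2MB network message cap.
const MAX_MESSAGE_SIZE: u64 = 2 * 1024 * 1024;

/// Stand-in for `block::Hash`: a 32-byte hash.
struct BlockHash([u8; 32]);

trait SafeAllocate {
    fn max_allocation() -> u64;
}

impl SafeAllocate for BlockHash {
    fn max_allocation() -> u64 {
        // At 32 serialized bytes per hash, a single message can never
        // carry more than MAX_MESSAGE_SIZE / 32 of them.
        MAX_MESSAGE_SIZE / 32
    }
}

fn main() {
    assert_eq!(BlockHash::max_allocation(), 65_536);
}
```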
# Potential SafeAllocation Implementors
- `Arc<Transaction>` - Suggested allocation limit: 49_000. A Zcash block is capped at 2MB. Since transparent Txs are the smallest allowable transactions, and each transparent Tx requires at least one Input with a minimum serialized size of 41 bytes, an upper bound on the length of a `Vec<Arc<Transaction>>` is 2MB / 41 Bytes < 49_000. Note that `std::mem::size_of::<Arc<T>>()` is 8, so the maximum memory wasted by a Byzantine `Vec<Arc<Transaction>>` is 49_000 \* 8 Bytes < 400KB.
- `transparent::Input` - Suggested allocation limit: 49_000. A Zcash block is capped at 2MB. Each Input has a minimum serialized size of 41 bytes, so an upper bound on the length of a `Vec<transparent::Input>` is 2MB / 41 Bytes < 49_000. With Inputs taking less than 80 bytes on the stack (36 for the Outpoint, 26 for the Script, and 4 for the sequence), the maximum memory wasted by a Byzantine `Vec<transparent::Input>` is 49_000 \* 80 Bytes < 4MB.
- `transparent::Output` - Suggested allocation limit: 225_000. A Zcash block is capped at 2MB. Outputs have a minimum serialized size of 9 bytes, so an upper bound on the length of a `Vec<transparent::Output>` is 2MB / 9 Bytes < 225_000. With Outputs taking less than 40 bytes on the stack (call it 10 for the value, 26 for the Script, for a total of 36), the maximum memory wasted by a Byzantine `Vec<transparent::Output>` is 225_000 \* 40 Bytes = 9MB.
- `MetaAddr` - Suggested allocation limit: 1000. No fancy math required, since this limit is in the Addr message [specification](https://developer.bitcoin.org/reference/p2p_networking.html#addr). Estimate a MetaAddr at a generous 100 bytes of stack space and you get max memory waste = 1000 \* 100B = 100KB.
- `block::Hash` - Suggested allocation limit: 65536. We derive this limit as MAX_MESSAGE_SIZE / 32 = 2 \* 1024 \* 1024 / 32. Since a block hash takes 32 bytes on the stack, the max waste here is MAX_MESSAGE_SIZE.
- `InventoryHash` - Suggested allocation limit: 50,000. This limit is listed in the [Inv message spec](https://developer.bitcoin.org/reference/p2p_networking.html#inv). Maximum waste: 50,000 \* 32 B = 1.6 MB.
- `block::CountedHeader` - Suggested allocation limit: 2000. This limit is in the [Headers message spec](https://developer.bitcoin.org/reference/p2p_networking.html#headers). Per the most recent spec, each [Zcash header is less than 2kb](https://zips.z.cash/protocol/protocol.pdf#page=90&zoom=100,72,73). Maximum waste: 2000 \* 2000 Bytes < 4MB
- `u8` - Suggested allocation limit: MAX_MESSAGE_SIZE. Since a u8 takes 1 byte on the stack, the maximum waste here is MAX_MESSAGE_SIZE = 2MB. (The sketch after this list replays these calculations.)
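As a quick sanity check, the quoted figures can be recomputed mechanically; this sketch just replays the arithmetic above (2MB block cap, per-item minimum serialized sizes, assumed stack sizes) and is not part of the proposal:
```
fn main() {
    let max_block: u64 = 2_000_000; // the "2MB" block cap above
    let max_message: u64 = 2 * 1024 * 1024; // MAX_MESSAGE_SIZE

    // Arc<Transaction>: smallest transparent tx serializes to >= 41 bytes.
    assert!(max_block / 41 < 49_000); // 48_780
    assert!(49_000 * 8 < 400_000); // 8 stack bytes per Arc -> < 400KB wasted

    // transparent::Output: >= 9 serialized bytes, <= 40 stack bytes.
    assert!(max_block / 9 < 225_000); // 222_222
    assert_eq!(225_000u64 * 40, 9_000_000); // 9MB worst-case waste

    // block::Hash: 32 serialized bytes each.
    assert_eq!(max_message / 32, 65_536);

    println!("all quoted bounds check out");
}
```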
## Example attacks
Using all of these numbers, we calculate the total improper allocation caused by the worst-case instance of each Zcash Message as follows:
- Version: N.A.
- Verack: N.A.
- Ping: N.A.
- Pong: N.A.
- Reject: N.A.
- GetAddr: N.A.
- Addr: contains 1 `Vec<MetaAddr>` => **100KB** (see above)
- GetBlocks: contains 1 `Vec<block::Hash>` => **MAX_MESSAGE_SIZE** (2MB)
- Inv: contains 1 `Vec<InventoryHash>` => **1.6 MB** (see above)
- GetHeaders: contains 1 `Vec<block::Hash>` => **MAX_MESSAGE_SIZE** (2MB)
- Headers: contains 1 `Vec<block::CountedHeader>` => **4MB**
- GetData: contains 1 `Vec<InventoryHash>` => **1.6 MB** (see above)
- Block: The worst-case block contains 1 `Vec<Arc<Transaction>>`, 1 `Vec<transparent::Output>`, and 1 malicious `Script(Vec<u8>)` => 400KB + 9MB + 2MB = **11.4 MB**.
  - Note that a dishonest vector is discovered during its own deserialization, so a malicious `Vec<transparent::Input>` would be discovered before the malicious `Vec<transparent::Output>` was allocated for. Since Outputs can waste more memory than Inputs, a smart attacker will choose to make only his `Vec<transparent::Output>` malicious.
- Tx: N.A.
- NotFound: contains 1 `Vec<InventoryHash>` => **1.6 MB** (see above)
- Mempool: N.A.
- FilterLoad: contains 1 `Filter(Vec<u8>)` => **2MB**
- FilterAdd: contains 1 `Vec<u8>` => **2MB**
- FilterClear: N.A.
# Alternatives
Do nothing. This is a fine option, since Zebra's networking stack is already fast!
# Summary and Recommendations
With the `SafeAllocate` trait, we can allow preallocation for many Vector types without risking DoS attacks.
In the worst case, a malicious message can cause a short-lived spike in memory usage. The size of this spike depends on the max_allocation defined in `SafeAllocate` and the depth of nested `Vec<T: SafeAllocate>` types. Calculations of the maximum spike caused by each message type are included above. Based on these calculations, I recommend implementing SafeAllocate for all types listed in the "Potential SafeAllocate Implementors" section, with the possible exception of `transparent::Input` and `transparent::Output`.
If this recommendation is adopted, the worst-case memory spike that a malicious peer can cause will be roughly 4MB, or roughly double that peer connection's usual memory consumption.
If `transparent::Input`s and `transparent::Output`s are included, the worst-case spike rises to roughly 11.4MB, or about six times a peer connection's usual memory consumption.
|
True
|
Safe Preallocation for (most) `Vec<T>` - # Overview - Safe Preallocation
Currently, Zebra [doesn't preallocate Vectors](https://github.com/ZcashFoundation/zebra/blob/main/zebra-chain/src/serialization/zcash_deserialize.rs) when deserializing, since blind preallocations present a DoS vector. Instead, it relies on maximum block and transaction size to limit allocations. This is inefficient.
We can mitigate the DoS potential and allow for preallocation as follows:
1. Define a new trait `SafeAllocate`, defining the maximum length a Vec<T: SafeAllocate> can sensibly reach for each implementing type
1. Create a specialized `ZcashDeserialize` implementation for `SafeAllocate`rs.
```
pub trait SafeAllocate {
    /// Largest number of `Self` values a single message can sensibly contain.
    fn max_allocation() -> u64;
}

impl<T: ZcashDeserialize + SafeAllocate> ZcashDeserialize for Vec<T> {
    fn zcash_deserialize<R: io::Read>(mut reader: R) -> Result<Self, SerializationError> {
        let len = reader.read_compactsize()?;
        if len > T::max_allocation() {
            return Err(SerializationError::Parse("Vector longer than max_allocation"));
        }
        let mut vec = Vec::with_capacity(len as usize);
        for _ in 0..len {
            vec.push(T::zcash_deserialize(&mut reader)?);
        }
        Ok(vec)
    }
}
```
This allows us to pre-allocate for certain types without presenting a DoS vector, while retaining the flexibility to use the current (lazy) allocation strategy for types that defy safe blind allocation.
Note that deserialization using this method is guaranteed to fail during the deserialization of the first malicious vector. However, Block messages contain nested `Vec<T>` types, and it is possible for a Byzantine message to cause improper allocations at each level of nesting before being discovered. This is why the potential spike for a Block message is much higher than for other message types.
# Potential SafeAllocation Implementors
- `Arc<Transaction>` - Suggested allocation limit: 49_000. A Zcash block is capped at 2MB. Since transparent Txs are the smallest allowable transactions, and each transparent Tx requires at least one Input with a minimum serialized size of 41 bytes, an upper bound on the length of a `Vec<Arc<Transaction>>` is 2MB / 41 Bytes < 49_000. Note that `std::mem::size_of::<Arc<T>>()` is 8, so the maximum memory wasted by a Byzantine `Vec<Arc<Transaction>>` is 49_000 \* 8 Bytes < 400KB.
- `transparent::Input` - Suggested allocation limit: 49_000. A Zcash block is capped at 2MB. Each Input has a minimum serialized size of 41 bytes, so an upper bound on the length of a `Vec<transparent::Input>` is 2MB / 41 Bytes < 49_000. With Inputs taking less than 80 bytes on the stack (36 for the Outpoint, 26 for the Script, and 4 for the sequence), the maximum memory wasted by a Byzantine `Vec<transparent::Input>` is 49_000 \* 80 Bytes < 4MB.
- `transparent::Output` - Suggested allocation limit: 225_000. A Zcash block is capped at 2MB. Outputs have a minimum serialized size of 9 bytes, so an upper bound on the length of a `Vec<transparent::Output>` is 2MB / 9 Bytes < 225_000. With Outputs taking less than 40 bytes on the stack (call it 10 for the value, 26 for the Script, for a total of 36), the maximum memory wasted by a Byzantine `Vec<transparent::Output>` is 225_000 \* 40 Bytes = 9MB.
- `MetaAddr` - Suggested allocation limit: 1000. No fancy math required, since this limit is in the Addr message [specification](https://developer.bitcoin.org/reference/p2p_networking.html#addr). Estimate a MetaAddr at a generous 100 bytes of stack space and you get max memory waste = 1000 \* 100B = 100KB.
- `block::Hash` - Suggested allocation limit: 65536. We derive this limit as MAX_MESSAGE_SIZE / 32 = 2 \* 1024 \* 1024 / 32. Since a block hash takes 32 bytes on the stack, the max waste here is MAX_MESSAGE_SIZE.
- `InventoryHash` - Suggested allocation limit: 50,000. This limit is listed in the [Inv message spec](https://developer.bitcoin.org/reference/p2p_networking.html#inv). Maximum waste: 50,000 \* 32 B = 1.6 MB.
- `block::CountedHeader` - Suggested allocation limit: 2000. This limit is in the [Headers message spec](https://developer.bitcoin.org/reference/p2p_networking.html#headers). Per the most recent spec, each [Zcash header is less than 2kb](https://zips.z.cash/protocol/protocol.pdf#page=90&zoom=100,72,73). Maximum waste: 2000 \* 2000 Bytes < 4MB
- `u8` - Suggested allocation limit: MAX_MESSAGE_SIZE. Since a u8 takes 1 byte on the stack, the maximum waste here is MAX_MESSAGE_SIZE = 2MB.
## Example attacks
Using all of these numbers, we calculate the total improper allocation caused by the worst-case instance of each Zcash Message as follows:
- Version: N.A.
- Verack: N.A.
- Ping: N.A.
- Pong: N.A.
- Reject: N.A.
- GetAddr: N.A.
- Addr: contains 1 `Vec<MetaAddr>` => **100KB** (see above)
- GetBlocks: contains 1 `Vec<block::Hash>` => **MAX_MESSAGE_SIZE** (2MB)
- Inv: contains 1 `Vec<InventoryHash>` => **1.6 MB** (see above)
- GetHeaders: contains 1 `Vec<block::Hash>` => **MAX_MESSAGE_SIZE** (2MB)
- Headers: contains 1 `Vec<block::CountedHeader>` => **4MB**
- GetData: contains 1 `Vec<InventoryHash>` => **1.6 MB** (see above)
- Block: The worst-case block contains 1 `Vec<Arc<Transaction>>`, 1 `Vec<transparent::Output>`, and 1 malicious `Script(Vec<u8>)` => 400KB + 9MB + 2MB = **11.4 MB**.
  - Note that a dishonest vector is discovered during its own deserialization, so a malicious `Vec<transparent::Input>` would be discovered before the malicious `Vec<transparent::Output>` was allocated for. Since Outputs can waste more memory than Inputs, a smart attacker will choose to make only his `Vec<transparent::Output>` malicious.
- Tx: N.A.
- NotFound: contains 1 `Vec<InventoryHash>` => **1.6 MB** (see above)
- Mempool: N.A.
- FilterLoad: contains 1 `Filter(Vec<u8>)` => **2MB**
- FilterAdd: contains 1 `Vec<u8>` => **2MB**
- FilterClear: N.A.
# Alternatives
Do nothing. This is a fine option, since Zebra's networking stack is already fast!
# Summary and Recommendations
With the `SafeAllocate` trait, we can allow preallocation for many Vector types without risking DoS attacks.
In the worst case, a malicious message can cause a short-lived spike in memory usage. The size of this spike depends on the max_allocation defined in `SafeAllocate` and the depth of nested `Vec<T: SafeAllocate>` types. Calculations of the maximum spike caused by each message type are included above. Based on these calculations, I recommend implementing SafeAllocate for all types listed in the "Potential SafeAllocate Implementors" section, with the possible exception of `transparent::Input` and `transparent::Output`.
If this recommendation is adopted, the worst-case memory spike that a malicious peer can cause will be roughly 4MB, or roughly double that peer connection's usual memory consumption.
If `transparent::Input`s and `transparent::Output`s are included, the worst-case spike rises to roughly 11.4MB, or about six times a peer connection's usual memory consumption.
|
non_process
|
safe preallocation for most vec overview safe preallocation currently zebra when deserializing since blind preallocations present a dos vector instead it relies on maximum block and transaction size to limit allocations this is inefficient we can mitigate the dos potential and allow for preallocation as follows define a new trait safeallocate defining the maximum length a vec can sensibly reach for each implementing type create a specialized zcashdeserialize implementation for safeallocate rs pub trait safeallocate const fn max allocation usize impl zcashdeserialize for vec fn zcash deserialize mut reader r result let len reader read compactsize if len t max allocation return err serializationerror parse vector longer than max allocation let mut vec vec with capacity len for in len vec push t zcash deserialize mut reader ok vec this allows us to pre allocate for certain types without presenting a dos vector while retaining the flexibilty to use the current lazy allocation strategy for types that defy safe blind allocation note that deserialization using this method is guaranteed to fail during the deserialization of the first malicious vector however block messages contain nested vec types and it is possible for a byzantine message to cause improper allocations at each level of nesting before being discovered this is why the potential spike for a block message is much higher than for other message types potential safeallocation implementors arc suggested allocation limit a zcash block is capped at since transparent txs are the smallest allowable transactions and each transparent tx requires at least one input with a minimum serialized size of bytes an upper bound the length of a vec is bytes is so the maximum memory wasted by a byzantine vec is bytes transparent input suggested allocation limit a zcash block is capped at each input has a minimum serialized size of bytes so an upper bound the length of a vec is bytes is bytes transparent output suggested allocation limit a zcash block is capped at outputs have a minimum serialized size of bytes so an upper bound the length of a vec is bytes is bytes metaaddr suggested allocation limit no fancy math required since this limit is in the addr message estimate a metaaddr at a generous bytes of stack space and you get max memory waste block hash suggested allocation limit we derive this limit as max message size since a block hash takes bytes on the stack the max waste here is max message size inventoryhash suggested allocation limit this limit is the listed in the maximum waste b mb block countedheader suggested allocation limit this limit is in the per the most recent spec each maximum waste bytes suggested allocation limit max message size since a takes byte on the stack the maximum waste here is max message size example attacks using all of these numbers we calculate the total improper allocation caused by the worst case instance of each zcash message as follows version n a verack n a ping n a pong n a reject n a getaddr n a addr contains vec see above get locks contains vec max message size inv contains vec mb see above getheaders contains vec max message size headers contains vec getdata contains vec mb see above block the worst case block contains vec vec and malicious script vec mb note that a dishonest vector is discovered during its own deserialization so a malicious vec would be discovered before the malicious vec was allocated for since outputs can waste more memory than inputs a smart attacker will choose to make only his vec 
malicious tx n a notfound contains vec mb see above mempool n a filterload contains filter vec filteradd contains vec filterclear n a alternatives do nothing this is a fine option since zebra s networking stack is already fast summary and recommendations with the safeallocate trait we can allow preallocation for many vector types without risking dos attacks in the worst case a malicious message can cause a short lived spike in memory usage the size of this spike depends on the max allocation defined in safeallocate and the depth of nested vec types calculations of the maximum spike caused by each message type are included above based on these calculations i recommend implementing safeallocate for all types listed in the potential safeallocate implementors section with the possible exception of transparent input and transparent output if this recommendation is adopted the worst case memory spike that a malicious peer can cause will be roughly or roughly double that peer connection s usual memory consumption if transparent input s and transparent output s are included the worst case spike rises to or about six times a peer connection s usual memory consumption
| 0
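The proposal above elides its concrete limits, so the following is only a minimal sketch of the `SafeAllocate` pattern it describes: the 2 MB message cap, the 32-byte hash size, and the `BlockHash`/`checked_len` names are illustrative assumptions, not values or identifiers taken from zebra itself.
```rust
// Sketch of the SafeAllocate bound check. All constants and names here are
// assumed for illustration; the proposal's real figures are elided above.
const MAX_PROTOCOL_MESSAGE_LEN: usize = 2 * 1024 * 1024; // assumed 2 MB cap

// Stable-Rust variant of the proposal's trait (plain fn instead of const fn).
trait SafeAllocate {
    /// Largest number of elements a Vec<Self> can sensibly hold.
    fn max_allocation() -> usize;
}

/// Hypothetical 32-byte block-hash type standing in for zebra's.
struct BlockHash([u8; 32]);

impl SafeAllocate for BlockHash {
    fn max_allocation() -> usize {
        // A message of MAX_PROTOCOL_MESSAGE_LEN bytes can carry at most this
        // many 32-byte hashes, so any larger claimed length is certainly bogus.
        MAX_PROTOCOL_MESSAGE_LEN / 32
    }
}

/// Validate a claimed vector length before reserving memory for it.
fn checked_len<T: SafeAllocate>(claimed: usize) -> Result<usize, &'static str> {
    if claimed > T::max_allocation() {
        Err("Vector longer than max_allocation")
    } else {
        Ok(claimed)
    }
}

fn main() {
    assert!(checked_len::<BlockHash>(1_000).is_ok());
    assert!(checked_len::<BlockHash>(usize::MAX).is_err());
    // Worst-case preallocation for this type is bounded by one message:
    // max_allocation() elements of 32 bytes each = MAX_PROTOCOL_MESSAGE_LEN.
    println!("worst-case waste: {} bytes", BlockHash::max_allocation() * 32);
}
```
Calling `Vec::with_capacity(checked_len::<T>(len)?)` instead of a blind `Vec::with_capacity(len)` is what bounds the worst-case spike at roughly one message length per level of `Vec` nesting.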
|
49,717
| 13,187,255,676
|
IssuesEvent
|
2020-08-13 02:50:23
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
[dataclasses] Rounding error in I3Time pybindings (Trac #1920)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1920">https://code.icecube.wisc.edu/ticket/1920</a>, reported by thomas.kintscher and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:52",
"description": "The conversion from an I3Time object to a Python datetime doesn't round correctly and can create invalid datetime objects.\n\nExample:\n{{{\nIn[1]: from icecube import dataclasses\n\nIn[2]: mjd = 57727.83222222222\n\nIn [3]: dataclasses.I3Time(mjd).date_time\nOut[3]: datetime.datetime(2016, 12, 5, 19, 58, 23, 1000000)\n}}}\n\nHere, the datetime object has one million milliseconds, when instead, the seconds should have been incremented by one, and the milliseconds should be zero. When creating the datetime object directly in python, bounds checking would have prevented this.\n\nWhy does it matter?\nThe subsequent conversion to a UTC string will be invalid because the fractional seconds representation is wrong. These strings are passed around in the I3Live API and trigger further errors there.\nSuch behaviour is easily encountered if the times have been rounded to seconds at some point and the precision of float turns x.0 into (x-1).9999999...\n\nThe error originates here: [http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/dataclasses/trunk/private/pybindings/I3Time.cxx#L46 I3Time.cxx]\nThe nanoseconds from I3Time are rounded to milliseconds, but the overflow is not transferred to seconds, etc. A possible solution would be to round the timestamp first, and then derive the year/month/day/hours/minutes/seconds/milliseconds.",
"reporter": "thomas.kintscher",
"cc": "dxu",
"resolution": "fixed",
"_ts": "1550067232264679",
"component": "combo core",
"summary": "[dataclasses] Rounding error in I3Time pybindings",
"priority": "normal",
"keywords": "",
"time": "2016-12-06T10:34:31",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[dataclasses] Rounding error in I3Time pybindings (Trac #1920) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1920">https://code.icecube.wisc.edu/ticket/1920</a>, reported by thomas.kintscher and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:52",
"description": "The conversion from an I3Time object to a Python datetime doesn't round correctly and can create invalid datetime objects.\n\nExample:\n{{{\nIn[1]: from icecube import dataclasses\n\nIn[2]: mjd = 57727.83222222222\n\nIn [3]: dataclasses.I3Time(mjd).date_time\nOut[3]: datetime.datetime(2016, 12, 5, 19, 58, 23, 1000000)\n}}}\n\nHere, the datetime object has one million milliseconds, when instead, the seconds should have been incremented by one, and the milliseconds should be zero. When creating the datetime object directly in python, bounds checking would have prevented this.\n\nWhy does it matter?\nThe subsequent conversion to a UTC string will be invalid because the fractional seconds representation is wrong. These strings are passed around in the I3Live API and trigger further errors there.\nSuch behaviour is easily encountered if the times have been rounded to seconds at some point and the precision of float turns x.0 into (x-1).9999999...\n\nThe error originates here: [http://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/dataclasses/trunk/private/pybindings/I3Time.cxx#L46 I3Time.cxx]\nThe nanoseconds from I3Time are rounded to milliseconds, but the overflow is not transferred to seconds, etc. A possible solution would be to round the timestamp first, and then derive the year/month/day/hours/minutes/seconds/milliseconds.",
"reporter": "thomas.kintscher",
"cc": "dxu",
"resolution": "fixed",
"_ts": "1550067232264679",
"component": "combo core",
"summary": "[dataclasses] Rounding error in I3Time pybindings",
"priority": "normal",
"keywords": "",
"time": "2016-12-06T10:34:31",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
rounding error in pybindings trac migrated from json status closed changetime description the conversion from an object to a python datetime doesn t round correctly and can create invalid datetime objects n nexample n nin from icecube import dataclasses n nin mjd n nin dataclasses mjd date time nout datetime datetime n n nhere the datetime object has one million milliseconds when instead the seconds should have been incremented by one and the milliseconds should be zero when creating the datetime object directly in python bounds checking would have prevented this n nwhy does it matter nthe subsequent conversion to a utc string will be invalid because the fractional seconds representation is wrong these strings are passed around in the api and trigger further errors there nsuch behaviour is easily encountered if the times have been rounded to seconds at some point and the precision of float turns x into x n nthe error originates here nthe nanoseconds from are rounded to milliseconds but the overflow is not transferred to seconds etc a possible solution would be to round the timestamp first and then derive the year month day hours minutes seconds milliseconds reporter thomas kintscher cc dxu resolution fixed ts component combo core summary rounding error in pybindings priority normal keywords time milestone owner olivas type defect
| 0
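The fix suggested in the ticket above (round the timestamp first, then propagate any overflow) is language-agnostic. The sketch below is not the actual I3Time pybindings code, which is C++; it is a hypothetical Rust illustration of the carry step the binding omits, the omission that turned 1000 rounded milliseconds into the invalid 1000000-microsecond field.
```rust
/// Round a (seconds, nanoseconds) timestamp to millisecond precision,
/// carrying a full second of overflow into the seconds field instead of
/// emitting an out-of-range 1000 ms value.
fn round_to_millis(secs: i64, nanos: u32) -> (i64, u32) {
    // Round nanoseconds to the nearest millisecond.
    let millis = (u64::from(nanos) + 500_000) / 1_000_000;
    if millis >= 1_000 {
        // 999.5 ms or more rounds up to a whole second: carry it over.
        (secs + 1, 0)
    } else {
        (secs, millis as u32)
    }
}

fn main() {
    // 59.9999997 s rounds to exactly 60 s, not "59 s + 1000 ms".
    assert_eq!(round_to_millis(59, 999_999_700), (60, 0));
    // A mid-second value keeps its seconds and gets rounded milliseconds.
    assert_eq!(round_to_millis(23, 456_100_000), (23, 456));
    println!("rounding with carry: ok");
}
```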
|
9,260
| 12,294,714,908
|
IssuesEvent
|
2020-05-11 01:15:54
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
goaccess crashing when using --keep-db-files --load-from-disk
|
bug duplicate html report log-processing on-disk
|
here is the error:
```
==10202== GoAccess 1.0.2 crashed by Sig 11
==10202==
==10202== VALUES AT CRASH POINT
==10202==
==10202== Line number: 1864853
==10202== Offset: 0
==10202== Invalid data: 1653
==10202== Piping: 0
==10202== Response size: 120276307679 bytes
==10202==
==10202== STACK TRACE:
==10202==
==10202== 0 goaccess(sigsegv_handler+0x188) [0x40b8b8]
==10202== 1 /lib64/libc.so.6(+0x35250) [0x7f662682d250]
==10202== 2 /lib64/libc.so.6(+0x14ac86) [0x7f6626942c86]
==10202== 3 /lib64/libtokyocabinet.so.9(+0x3fd94) [0x7f6626e00d94]
==10202== 4 /lib64/libtokyocabinet.so.9(tcbdbput+0x8f) [0x7f6626e0198f]
==10202== 5 /lib64/libtokyocabinet.so.9(tcadbput+0x1a9) [0x7f6626e22409]
==10202== 6 goaccess() [0x424d89]
==10202== 7 goaccess() [0x418325]
==10202== 8 goaccess(parse_log+0x129) [0x418aa9]
==10202== 9 goaccess(main+0x1fe) [0x40813e]
==10202== 10 /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6626819b35]
==10202== 11 goaccess() [0x4090cd]
==10202==
==10202== Please report it by opening an issue on GitHub:
==10202== https://github.com/allinurl/goaccess/issues
```
Here is how I am running goaccess:
goaccess -f /var/log/jigsaw/jigsaw.log -o /var/www/goaccess/report.html --real-time-html --ws-url=hostname.com --keep-db-files --load-from-disk
It works perfectly without: --keep-db-files --load-from-disk
Proof that Tokyo Cabinet is installed:
goaccess -s
Built using Tokyo Cabinet On-Disk B+ Tree.
Please let me know how I should tune the memory in the configuration file to make this work properly.
|
1.0
|
goaccess crashing when using --keep-db-files --load-from-disk - here is the error:
```
==10202== GoAccess 1.0.2 crashed by Sig 11
==10202==
==10202== VALUES AT CRASH POINT
==10202==
==10202== Line number: 1864853
==10202== Offset: 0
==10202== Invalid data: 1653
==10202== Piping: 0
==10202== Response size: 120276307679 bytes
==10202==
==10202== STACK TRACE:
==10202==
==10202== 0 goaccess(sigsegv_handler+0x188) [0x40b8b8]
==10202== 1 /lib64/libc.so.6(+0x35250) [0x7f662682d250]
==10202== 2 /lib64/libc.so.6(+0x14ac86) [0x7f6626942c86]
==10202== 3 /lib64/libtokyocabinet.so.9(+0x3fd94) [0x7f6626e00d94]
==10202== 4 /lib64/libtokyocabinet.so.9(tcbdbput+0x8f) [0x7f6626e0198f]
==10202== 5 /lib64/libtokyocabinet.so.9(tcadbput+0x1a9) [0x7f6626e22409]
==10202== 6 goaccess() [0x424d89]
==10202== 7 goaccess() [0x418325]
==10202== 8 goaccess(parse_log+0x129) [0x418aa9]
==10202== 9 goaccess(main+0x1fe) [0x40813e]
==10202== 10 /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6626819b35]
==10202== 11 goaccess() [0x4090cd]
==10202==
==10202== Please report it by opening an issue on GitHub:
==10202== https://github.com/allinurl/goaccess/issues
```
Here is how I am running goaccess:
goaccess -f /var/log/jigsaw/jigsaw.log -o /var/www/goaccess/report.html --real-time-html --ws-url=hostname.com --keep-db-files --load-from-disk
It works perfectly without: --keep-db-files --load-from-disk
Proof that Tokyo Cabinet is installed:
goaccess -s
Built using Tokyo Cabinet On-Disk B+ Tree.
Please let me know how I should tune the memory in the configuration file to make this work properly.
|
process
|
goaccess crashing when using keep db files load from disk here is the error goaccess crashed by sig values at crash point line number offset invalid data piping response size bytes stack trace goaccess sigsegv handler libc so libc so libtokyocabinet so libtokyocabinet so tcbdbput libtokyocabinet so tcadbput goaccess goaccess goaccess parse log goaccess main libc so libc start main goaccess please report it by opening an issue on github here is how i am running goaccess goaccess f var log jigsaw jigsaw log o var www goaccess report html real time html ws url hostname com keep db files load from disk it works perfectly without keep db files load from disk proof that tokyo cabinet is installed goaccess s built using tokyo cabinet on disk b tree please let me know how i should tune the memory in the configuration file to make this work properly
| 1
|
2,725
| 5,612,438,725
|
IssuesEvent
|
2017-04-03 05:10:04
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Keyboard navigation does not work if hideSelectColumn is true
|
bug inprocess
|
(Using 3.1.1:) If my table has the property hideSelectColumn, then the cell navigation with keyboard does not work any more, in that the tab key will sometimes jump columns (usually from the first column to the third one).
I have included (bad quality) animated gifs that show the behaviour. The only change between the two examples is the "hideSelectColumn" being commented out.
```
render() {
const { data, columns, showFilter, selectionMode } = this.props;
const selectRow = (selectionMode === 'none') ? {}
: {
mode: (selectionMode === 'single') ? 'radio' : 'checkbox',
clickToSelect: true,
hideSelectColumn: (selectionMode === 'single'),
onSelect: this.onRowSelect,
bgColor: 'orange'
};
const tableHeaderColumns = columns ? columns.map((col) => {
return (
<TableHeaderColumn
id={col.key}
key={col.key}
dataField={col.key}
dataSort
dataFormat={col.dataFormat}
isKey={col.isKey}
hidden={col.hidden}
filter={showFilter ? { type: 'TextFilter', placeholder: 'Please enter a value', delay: 500 } : null}
>
{col.name}
</TableHeaderColumn>
);
}) : [];
const cellEdit = {
mode: 'click', // click cell to edit
blurToSave: true,
};
return (
<BootstrapTable
ref={(c) => { this.table = c; }}
data={data}
striped
condensed
hover
insertRow
deleteRow
search
exportCSV
bordered={Boolean(false)}
keyBoardNav={Boolean(true)}
selectRow={selectRow}
cellEdit={cellEdit}
>
{tableHeaderColumns}
</BootstrapTable>
);
}
```
Example without hideSelectColumn:

Example with hideSelectColumn:

|
1.0
|
Keyboard navigation does not work if hideSelectColumn is true - (Using 3.1.1:) If my table has the property hideSelectColumn, then the cell navigation with keyboard does not work any more, in that the tab key will sometimes jump columns (usually from the first column to the third one).
I have included (bad quality) animated gifs that show the behaviour. The only change between the two examples is the "hideSelectColumn" being commented out.
```
render() {
const { data, columns, showFilter, selectionMode } = this.props;
const selectRow = (selectionMode === 'none') ? {}
: {
mode: (selectionMode === 'single') ? 'radio' : 'checkbox',
clickToSelect: true,
hideSelectColumn: (selectionMode === 'single'),
onSelect: this.onRowSelect,
bgColor: 'orange'
};
const tableHeaderColumns = columns ? columns.map((col) => {
return (
<TableHeaderColumn
id={col.key}
key={col.key}
dataField={col.key}
dataSort
dataFormat={col.dataFormat}
isKey={col.isKey}
hidden={col.hidden}
filter={showFilter ? { type: 'TextFilter', placeholder: 'Please enter a value', delay: 500 } : null}
>
{col.name}
</TableHeaderColumn>
);
}) : [];
const cellEdit = {
mode: 'click', // click cell to edit
blurToSave: true,
};
return (
<BootstrapTable
ref={(c) => { this.table = c; }}
data={data}
striped
condensed
hover
insertRow
deleteRow
search
exportCSV
bordered={Boolean(false)}
keyBoardNav={Boolean(true)}
selectRow={selectRow}
cellEdit={cellEdit}
>
{tableHeaderColumns}
</BootstrapTable>
);
}
```
Example without hideSelectColumn:

Example with hideSelectColumn:

|
process
|
keyboard navigation does not work if hideselectcolumn is true using if my table has the property hideselectcolumn then the cell navigation with keyboard does not work any more in that the tab key will sometime jump columns usually from the first column to the third one i have included bad quality animated gifs that shows the behaviour the only change between the two examples is the hideselectcolumn being commented out render const data columns showfilter selectionmode this props const selectrow selectionmode none mode selectionmode single radio checkbox clicktoselect true hideselectcolumn selectionmode single onselect this onrowselect bgcolor orange const tableheadercolumns columns columns map col return tableheadercolumn id col key key col key datafield col key datasort dataformat col dataformat iskey col iskey hidden col hidden filter showfilter type textfilter placeholder please enter a value delay null col name const celledit mode click click cell to edit blurtosave true return bootstraptable ref c this table c data data striped condensed hover insertrow deleterow search exportcsv bordered boolean false keyboardnav boolean true selectrow selectrow celledit celledit tableheadercolumns example without hideselectcolumn example with hideselectcolumn
| 1
|
2,963
| 5,960,315,087
|
IssuesEvent
|
2017-05-29 13:45:46
|
CERNDocumentServer/cds
|
https://api.github.com/repos/CERNDocumentServer/cds
|
closed
|
deposit: investigate metadata-extraction errors
|
avc_processing bug in progress
|
When uploading a video, the `ff_probe` command executed by the `extract_metadata` task gives an error.
|
1.0
|
deposit: investigate metadata-extraction errors - When uploading a video, the `ff_probe` command executed by the `extract_metadata` task gives an error.
|
process
|
deposit investigate metadata extraction errors when uploading a video the ff probe command executed by the extract metadata task gives an error
| 1
|
15,748
| 19,911,572,976
|
IssuesEvent
|
2022-01-25 17:41:06
|
input-output-hk/high-assurance-legacy
|
https://api.github.com/repos/input-output-hk/high-assurance-legacy
|
closed
|
Add missing spaces between transition operators and residuals
|
language: isabelle topic: process calculus topic: ouroboros topic: examples type: improvement
|
When writing a transition where the residual explicitly mentions a label and a target process, we don’t put a space between the transition operator and the residual; so we write, for example, `p →⇩♭⦃α⦄ q`. In the past, we also didn’t put a space when the residual was a variable; so we wrote, for example, `p →⇩♭c`. However, we now consider the latter bad practice.
Our goal is to insert spaces between the transition operator and the residual where there are currently none.
|
1.0
|
Add missing spaces between transition operators and residuals - When writing a transition where the residual explicitly mentions a label and a target process, we don’t put a space between the transition operator and the residual; so we write, for example, `p →⇩♭⦃α⦄ q`. In the past, we also didn’t put a space when the residual was a variable; so we wrote, for example, `p →⇩♭c`. However, we now consider the latter bad practice.
Our goal is to insert spaces between the transition operator and the residual where there are currently none.
|
process
|
add missing spaces between transition operators and residuals when writing a transitions where the residual explicitly mentions a label and a target process we don’t put a space between the transition operator and the residual so we write for example p →⇩♭⦃α⦄ q in the past we also didn’t put a space when the residual was a variable so we wrote for example p →⇩♭c however we consider the latter bad practice meanwhile our goal is to insert spaces between the transition operator and the residual where there are currently none
| 1
|
509,698
| 14,741,910,210
|
IssuesEvent
|
2021-01-07 11:22:23
|
gammapy/gammapy
|
https://api.github.com/repos/gammapy/gammapy
|
closed
|
Gammapy validation : LightCurve
|
effort-medium package-novice priority-high
|
Light Curve extraction and validation are not sufficiently tested so far for v1.0. We should add at least one test of light curve extraction in 1D and 3D.
Ideally this could use PKS 2155-304 flare data from the HESS DL3-DR1 and some lightcurves from CTA DC1.
See:
- https://github.com/gammapy/gammapy-benchmarks/tree/master/validation/lightcurve
|
1.0
|
Gammapy validation : LightCurve - Light Curve extraction and validation are not sufficiently tested so far for v1.0. We should add at least one test of light curve extraction in 1D and 3D.
Ideally this could use PKS 2155-304 flare data from the HESS DL3-DR1 and some lightcurves from CTA DC1.
See:
- https://github.com/gammapy/gammapy-benchmarks/tree/master/validation/lightcurve
|
non_process
|
gammapy validation lightcurve light curve extraction and validation is not sufficiently tested so far for we should add at least one test of light curve extraction in and ideally this could use pks flare data from the hess and some lightcurves from cta see
| 0
|
10,543
| 13,324,817,650
|
IssuesEvent
|
2020-08-27 09:01:26
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
opened
|
Can't install precommit toolchain on OSX
|
process + tools
|
I don't think this is exactly a sgkit issue, but @daletovar and I both encountered errors when trying to install the precommit toolchain on a mac.
For a workaround I used this Dockerfile.
```
FROM python:3.8.5-buster
RUN mkdir /project
WORKDIR /project
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
apt-get install -y wget bzip2 ca-certificates curl git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements-dev.txt /project/requirements-dev.txt
COPY requirements.txt /project/requirements.txt
RUN python -m pip install --upgrade pip && \
pip install -r requirements.txt -r requirements-dev.txt
CMD bash -c "cd /project/sgkit && \
python -m pip install --upgrade pip && \
pip install -r requirements.txt -r requirements-dev.txt && \
cd docs; \
make html SPHINXOPTS='-W --keep-going' && \
cd .. && \
pytest -v --cov=sgkit --cov-report=term-missing && \
pre-commit run --all-files"
```
**Run this once to build**
```
docker build -t sgkit-dev .
docker volume create sgkit-precommit-cache
```
**Run this to test, lint, and run precommit hooks**
```
docker run -it -v sgkit-precommit-cache:/root/.cache -v $HOME/.gitconfig:/root/.gitconfig -v $(pwd):/project/sgkit sgkit-dev
```
**Run this to commit**
```
docker run -it -v sgkit-precommit-cache:/root/.cache -v $HOME/.gitconfig:/root/.gitconfig -v $(pwd):/project/sgkit \
sgkit-dev bash -c 'cd /project/sgkit; git commit -m "My commit msg"'
```
|
1.0
|
Can't install precommit toolchain on OSX - I don't think this is exactly a sgkit issue, but @daletovar and I both encountered errors when trying to install the precommit toolchain on a mac.
For a workaround I used this Dockerfile.
```
FROM python:3.8.5-buster
RUN mkdir /project
WORKDIR /project
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
apt-get install -y wget bzip2 ca-certificates curl git && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements-dev.txt /project/requirements-dev.txt
COPY requirements.txt /project/requirements.txt
RUN python -m pip install --upgrade pip && \
pip install -r requirements.txt -r requirements-dev.txt
CMD bash -c "cd /project/sgkit && \
python -m pip install --upgrade pip && \
pip install -r requirements.txt -r requirements-dev.txt && \
cd docs; \
make html SPHINXOPTS='-W --keep-going' && \
cd .. && \
pytest -v --cov=sgkit --cov-report=term-missing && \
pre-commit run --all-files"
```
**Run this once to build**
```
docker build -t sgkit-dev .
docker volume create sgkit-precommit-cache
```
**Run this to test, lint, and run precommit hooks**
```
docker run -it -v sgkit-precommit-cache:/root/.cache -v $HOME/.gitconfig:/root/.gitconfig -v $(pwd):/project/sgkit sgkit-dev
```
**Run this to commit**
```
docker run -it -v sgkit-precommit-cache:/root/.cache -v $HOME/.gitconfig:/root/.gitconfig -v $(pwd):/project/sgkit \
sgkit-dev bash -c 'cd /project/sgkit; git commit -m "My commit msg"'
```
|
process
|
can t install precommit toolchain on osx i don t think this is exactly a sgkit issue but daletovar and i both encountered errors when trying to install the precommit toolchain on a mac for a work around i used this dockerfile from python buster run mkdir project workdir project env lang c utf lc all c utf run apt get update fix missing apt get install y wget ca certificates curl git apt get clean rm rf var lib apt lists copy requirements dev txt project requirements dev txt copy requirements txt project requirements txt run python m pip install upgrade pip pip install r requirements txt r requirements dev txt cmd bash c cd project sgkit python m pip install upgrade pip pip install r requirements txt r requirements dev txt cd docs make html sphinxopts w keep going cd pytest v cov sgkit cov report term missin g pre commit run all files run this once to build docker build t sgkit dev docker volume create sgkit precommit cache run this to test lint and run precommit hooks docker run it v sgkit precommit cache root cache v home gitconfig root gitconfig v pwd project sgkit sgkit dev run this to commit docker run it v sgkit precommit cache root cache v home gitconfig root gitconfig v pwd project sgkit bash c cd project sgkit git commit m my commit msg
| 1
|
3,913
| 6,827,729,403
|
IssuesEvent
|
2017-11-08 17:59:03
|
trilinos/Trilinos
|
https://api.github.com/repos/trilinos/Trilinos
|
closed
|
Fill out CONTRIBUTING.md
|
Framework process improvement ready
|
At the moment, `CONTRIBUTING.md` just has some placeholder text in it. We need to fill it out such that potential contributors understand what is expected when interacting with Trilinos. I can take a crack at this, but I won't be able to start on it till late next week.
@jwillenbring, @bmpersc, any requests as to content?
@trilinos/framework
|
1.0
|
Fill out CONTRIBUTING.md - At the moment, `CONTRIBUTING.md` just has some placeholder text in it. We need to fill it out such that potential contributors understand what is expected when interacting with Trilinos. I can take a crack at this, but I won't be able to start on it till late next week.
@jwillenbring, @bmpersc, any requests as to content?
@trilinos/framework
|
process
|
fill out contributing md a the moment contributing md just has some placeholder text in it we need to fill it out such that potential contributors understand what is expected when interacting with trilinos i can take a crack at this but i won t be able to start on it till late next week jwillenbring bmpersc any requests as to content trilinos framework
| 1
|
9,579
| 12,533,039,740
|
IssuesEvent
|
2020-06-04 16:54:40
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
[iap] Debug why iap test doesn't pass locally.
|
api: iap testing type: process
|
If you run `iap/iap_test.py` locally, it fails with the following logs:
I'm not sure why. Maybe there are some firewall rules on the project?
I don't have access to the project `gcp-devrel-iap-reflect` right now, so it's hard to debug further. I'm trying to obtain the ownership of that project.
```
iap_test.py::test_main FAILED [100%]
============================================================================ FAILURES =============================================================================
____________________________________________________________________________ test_main ____________________________________________________________________________
Traceback (most recent call last):
File "/usr/local/google/home/tmatsuo/work/python-docs-samples/iap/iap_test.py", line 51, in test_main
IAP_CLIENT_ID)
File "/usr/local/google/home/tmatsuo/work/python-docs-samples/iap/make_iap_request.py", line 54, in make_iap_request
raise Exception('Service account does not have permission to '
Exception: Service account does not have permission to access the IAP-protected application.
--------------------------------- generated xml file: /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/sponge_log.xml ----------------------------------
===Flaky Test Report===
test_main failed (1 runs remaining out of 2).
<class 'Exception'>
Service account does not have permission to access the IAP-protected application.
[<TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/iap_test.py:51>, <TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/make_iap_request.py:54>]
test_main failed; it passed 0 out of the required 1 times.
<class 'Exception'>
Service account does not have permission to access the IAP-protected application.
[<TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/iap_test.py:51>, <TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/make_iap_request.py:54>]
===End Flaky Test Report===
======================================================================== 1 failed in 0.60s ========================================================================
nox > Command pytest --junitxml=sponge_log.xml failed with exit code 1
nox > Session py-3.7 failed.
```
|
1.0
|
[iap] Debug why iap test doesn't pass locally. - If you run `iap/iap_test.py` locally, it fails with the following logs:
I'm not sure why. Maybe there are some firewall rules on the project?
I don't have access to the project `gcp-devrel-iap-reflect` right now, so it's hard to debug further. I'm trying to obtain the ownership of that project.
```
iap_test.py::test_main FAILED [100%]
============================================================================ FAILURES =============================================================================
____________________________________________________________________________ test_main ____________________________________________________________________________
Traceback (most recent call last):
File "/usr/local/google/home/tmatsuo/work/python-docs-samples/iap/iap_test.py", line 51, in test_main
IAP_CLIENT_ID)
File "/usr/local/google/home/tmatsuo/work/python-docs-samples/iap/make_iap_request.py", line 54, in make_iap_request
raise Exception('Service account does not have permission to '
Exception: Service account does not have permission to access the IAP-protected application.
--------------------------------- generated xml file: /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/sponge_log.xml ----------------------------------
===Flaky Test Report===
test_main failed (1 runs remaining out of 2).
<class 'Exception'>
Service account does not have permission to access the IAP-protected application.
[<TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/iap_test.py:51>, <TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/make_iap_request.py:54>]
test_main failed; it passed 0 out of the required 1 times.
<class 'Exception'>
Service account does not have permission to access the IAP-protected application.
[<TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/iap_test.py:51>, <TracebackEntry /usr/local/google/home/tmatsuo/work/python-docs-samples/iap/make_iap_request.py:54>]
===End Flaky Test Report===
======================================================================== 1 failed in 0.60s ========================================================================
nox > Command pytest --junitxml=sponge_log.xml failed with exit code 1
nox > Session py-3.7 failed.
```
|
process
|
debug why iap test doesn t pass locally if you run iap iap test py locally it fails with the following logs i m not sure why maybe there re some firewall rules on the project i don t have access to the project gcp devrel iap reflect right now so it s hard to debug further i m trying to obtain the ownership of that project iap test py test main failed failures test main traceback most recent call last file usr local google home tmatsuo work python docs samples iap iap test py line in test main iap client id file usr local google home tmatsuo work python docs samples iap make iap request py line in make iap request raise exception service account does not have permission to exception service account does not have permission to access the iap protected application generated xml file usr local google home tmatsuo work python docs samples iap sponge log xml flaky test report test main failed runs remaining out of service account does not have permission to access the iap protected application test main failed it passed out of the required times service account does not have permission to access the iap protected application end flaky test report failed in nox command pytest junitxml sponge log xml failed with exit code nox session py failed
| 1
|
133,407
| 18,297,459,644
|
IssuesEvent
|
2021-10-05 21:59:46
|
ghc-dev/Jennifer-Poole
|
https://api.github.com/repos/ghc-dev/Jennifer-Poole
|
opened
|
CVE-2020-8203 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-3.10.1.tgz</b>, <b>lodash-3.7.0.tgz</b>, <b>lodash-0.10.0.tgz</b>, <b>lodash-0.9.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/grunt-usemin/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-usemin-3.1.1.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.7.0.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/htmlhint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-htmlhint-0.9.13.tgz (Root Library)
- htmlhint-0.9.13.tgz
- jshint-2.8.0.tgz
- :x: **lodash-3.7.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.10.0.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz">https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/grunt-bower-task/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-bower-task-0.5.0.tgz (Root Library)
- :x: **lodash-0.10.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.9.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/grunt-connect-proxy-updated/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-connect-proxy-updated-0.2.1.tgz (Root Library)
- :x: **lodash-0.9.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Jennifer-Poole/commit/a42e1f276b13ba57ab48e60289e7f00c2858fab6">a42e1f276b13ba57ab48e60289e7f00c2858fab6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-10-21</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;lodash:3.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-bower-task:0.5.0;lodash:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-connect-proxy-updated:0.2.1;lodash:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-8203 (High) detected in multiple libraries - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-3.10.1.tgz</b>, <b>lodash-3.7.0.tgz</b>, <b>lodash-0.10.0.tgz</b>, <b>lodash-0.9.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/grunt-usemin/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-usemin-3.1.1.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.7.0.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.7.0.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/htmlhint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-htmlhint-0.9.13.tgz (Root Library)
- htmlhint-0.9.13.tgz
- jshint-2.8.0.tgz
- :x: **lodash-3.7.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.10.0.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz">https://registry.npmjs.org/lodash/-/lodash-0.10.0.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/grunt-bower-task/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-bower-task-0.5.0.tgz (Root Library)
- :x: **lodash-0.10.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-0.9.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz">https://registry.npmjs.org/lodash/-/lodash-0.9.2.tgz</a></p>
<p>Path to dependency file: Jennifer-Poole/package.json</p>
<p>Path to vulnerable library: Jennifer-Poole/node_modules/grunt-connect-proxy-updated/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-connect-proxy-updated-0.2.1.tgz (Root Library)
- :x: **lodash-0.9.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Jennifer-Poole/commit/a42e1f276b13ba57ab48e60289e7f00c2858fab6">a42e1f276b13ba57ab48e60289e7f00c2858fab6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-10-21</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.10.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-usemin:3.1.1;lodash:3.10.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"3.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-htmlhint:0.9.13;htmlhint:0.9.13;jshint:2.8.0;lodash:3.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-bower-task:0.5.0;lodash:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"0.9.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-connect-proxy-updated:0.2.1;lodash:0.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file jennifer poole package json path to vulnerable library jennifer poole node modules grunt usemin node modules lodash package json dependency hierarchy grunt usemin tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file jennifer poole package json path to vulnerable library jennifer poole node modules htmlhint node modules lodash package json dependency hierarchy grunt htmlhint tgz root library htmlhint tgz jshint tgz x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file jennifer poole package json path to vulnerable library jennifer poole node modules grunt bower task node modules lodash package json dependency hierarchy grunt bower task tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file jennifer poole package json path to vulnerable library jennifer poole node modules grunt connect proxy updated node modules lodash package json dependency hierarchy grunt connect proxy updated tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt usemin lodash isminimumfixversionavailable true minimumfixversion lodash packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt htmlhint htmlhint jshint lodash isminimumfixversionavailable true minimumfixversion lodash packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt bower task lodash isminimumfixversionavailable true minimumfixversion lodash packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree grunt connect proxy updated lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails prototype pollution attack when using zipobjectdeep in lodash before vulnerabilityurl
| 0
|
1,684
| 4,326,789,426
|
IssuesEvent
|
2016-07-26 08:04:57
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Decision on amending or extending the validity of a city council decision - Pervomaiskyi - Kharkiv oblast
|
In process of testing in work test
|
Kosinova Liudmyla Mykolaivna 050 975 17 37
Executive Committee of the Pervomaiskyi City Council of Kharkiv Oblast
E-mail: tsnap.pervom@ukr.net
info card:
[Amendments to city council decisions on the regulation of land relations.docx](https://github.com/e-government-ua/iBP/files/366241/default.docx)
Information about the administrative services centre
Name of the administrative services centre where the applicant is served
Administrative Services Centre in Pervomaiskyi
1 Location of the administrative services centre: 64102, Kharkiv oblast, Pervomaiskyi, 40 Rokiv Peremohy Ave. 1, office No. 8
Information on the opening hours of the administrative services centre: RECEPTION OF CITIZENS
Monday – 9:00 – 18:00
Tuesday – 9:00 – 18:00
Wednesday – 9:00 – 18:00
Thursday – 9:00 – 20:00
Friday – 9:00 – 20:00
Saturday – 8:00 – 15:00
Telephone/fax (enquiries), e-mail address and website of the administrative services centre: tel/fax 3-41-03, 050 975 17 37
Website address: http://www.pervom-rada.gov.ua
E-mail: tsnap.pervom@ukr.net
skype: tsnap.pervomayskiy
|
1.0
|
Decision on amending or extending the validity of a city council decision - Pervomaiskyi - Kharkiv oblast - Kosinova Liudmyla Mykolaivna 050 975 17 37
Executive Committee of the Pervomaiskyi City Council of Kharkiv Oblast
E-mail: tsnap.pervom@ukr.net
info card:
[Amendments to city council decisions on the regulation of land relations.docx](https://github.com/e-government-ua/iBP/files/366241/default.docx)
Information about the administrative services centre
Name of the administrative services centre where the applicant is served
Administrative Services Centre in Pervomaiskyi
1 Location of the administrative services centre: 64102, Kharkiv oblast, Pervomaiskyi, 40 Rokiv Peremohy Ave. 1, office No. 8
Information on the opening hours of the administrative services centre: RECEPTION OF CITIZENS
Monday – 9:00 – 18:00
Tuesday – 9:00 – 18:00
Wednesday – 9:00 – 18:00
Thursday – 9:00 – 20:00
Friday – 9:00 – 20:00
Saturday – 8:00 – 15:00
Telephone/fax (enquiries), e-mail address and website of the administrative services centre: tel/fax 3-41-03, 050 975 17 37
Website address: http://www.pervom-rada.gov.ua
E-mail: tsnap.pervom@ukr.net
skype: tsnap.pervomayskiy
|
process
|
decision on amending or extending the validity of a city council decision pervomaiskyi kharkiv oblast kosinova liudmyla mykolaivna executive committee of the pervomaiskyi city council of kharkiv oblast e mail tsnap pervom ukr net info card information about the administrative services centre name of the administrative services centre where the applicant is served administrative services centre in pervomaiskyi location of the administrative services centre kharkiv oblast pervomaiskyi rokiv peremohy ave office no information on the opening hours of the administrative services centre reception of citizens monday – – tuesday – – wednesday – – thursday – – friday – – saturday – – telephone fax enquiries e mail address and website of the administrative services centre tel fax website address e mail tsnap pervom ukr net skype tsnap pervomayskiy
| 1
|
36,859
| 2,813,102,332
|
IssuesEvent
|
2015-05-18 13:04:35
|
FLEXIcontent/flexicontent-cck
|
https://api.github.com/repos/FLEXIcontent/flexicontent-cck
|
opened
|
Modules that load flexicontent.css have a parameter to disable loading; the default value for this should be 'Use component setting'
|
enhancement Priority Normal
|
This will make it simpler for users to disable the default flexicontent CSS
|
1.0
|
Modules that load flexicontent.css have a parameter to disable loading; the default value for this should be 'Use component setting' - This will make it simpler for users to disable the default flexicontent CSS
|
non_process
|
modules that load flexicontent css have parameter to disabled loading default value for this should be use component setting this will make simpler for users to disable default flexicontent css
| 0
|
71,299
| 15,193,517,049
|
IssuesEvent
|
2021-02-16 00:56:09
|
olivialancaster/thimble.mozilla.org
|
https://api.github.com/repos/olivialancaster/thimble.mozilla.org
|
opened
|
WS-2020-0217 (Medium) detected in bunyan-1.3.5.tgz
|
security vulnerability
|
## WS-2020-0217 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bunyan-1.3.5.tgz</b></p></summary>
<p>a JSON logging library for node.js services</p>
<p>Library home page: <a href="https://registry.npmjs.org/bunyan/-/bunyan-1.3.5.tgz">https://registry.npmjs.org/bunyan/-/bunyan-1.3.5.tgz</a></p>
<p>Path to dependency file: thimble.mozilla.org/services/id.webmaker.org/package.json</p>
<p>Path to vulnerable library: thimble.mozilla.org/services/id.webmaker.org/node_modules/bunyan/package.json</p>
<p>
Dependency Hierarchy:
- :x: **bunyan-1.3.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/olivialancaster/thimble.mozilla.org/commit/efd99d6cbfbb39bd515621896ca4d268a4081395">efd99d6cbfbb39bd515621896ca4d268a4081395</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Remote Command Execution (RCE) vulnerability was found in bunyan before 1.8.13 and 2.x before 2.0.3. The issue occurs because a user input is formatted inside a command that will be executed without any check.
<p>Publish Date: 2020-06-27
<p>URL: <a href=https://github.com/trentm/node-bunyan/commit/ea21d75f548373f29bb772b15faeb83e87089746>WS-2020-0217</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/trentm/node-bunyan/blob/master/CHANGES.md">https://github.com/trentm/node-bunyan/blob/master/CHANGES.md</a></p>
<p>Release Date: 2020-06-27</p>
<p>Fix Resolution: bunyan - 1.8.13,2.0.3</p>
</p>
</details>
<p></p>
|
True
|
WS-2020-0217 (Medium) detected in bunyan-1.3.5.tgz - ## WS-2020-0217 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bunyan-1.3.5.tgz</b></p></summary>
<p>a JSON logging library for node.js services</p>
<p>Library home page: <a href="https://registry.npmjs.org/bunyan/-/bunyan-1.3.5.tgz">https://registry.npmjs.org/bunyan/-/bunyan-1.3.5.tgz</a></p>
<p>Path to dependency file: thimble.mozilla.org/services/id.webmaker.org/package.json</p>
<p>Path to vulnerable library: thimble.mozilla.org/services/id.webmaker.org/node_modules/bunyan/package.json</p>
<p>
Dependency Hierarchy:
- :x: **bunyan-1.3.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/olivialancaster/thimble.mozilla.org/commit/efd99d6cbfbb39bd515621896ca4d268a4081395">efd99d6cbfbb39bd515621896ca4d268a4081395</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Remote Command Execution (RCE) vulnerability was found in bunyan before 1.8.13 and 2.x before 2.0.3. The issue occurs because a user input is formatted inside a command that will be executed without any check.
<p>Publish Date: 2020-06-27
<p>URL: <a href=https://github.com/trentm/node-bunyan/commit/ea21d75f548373f29bb772b15faeb83e87089746>WS-2020-0217</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/trentm/node-bunyan/blob/master/CHANGES.md">https://github.com/trentm/node-bunyan/blob/master/CHANGES.md</a></p>
<p>Release Date: 2020-06-27</p>
<p>Fix Resolution: bunyan - 1.8.13,2.0.3</p>
</p>
</details>
<p></p>
|
non_process
|
ws medium detected in bunyan tgz ws medium severity vulnerability vulnerable library bunyan tgz a json logging library for node js services library home page a href path to dependency file thimble mozilla org services id webmaker org package json path to vulnerable library thimble mozilla org services id webmaker org node modules bunyan package json dependency hierarchy x bunyan tgz vulnerable library found in head commit a href vulnerability details a remote command execution rce vulnerability was found in bunyan before and x before the issue occurs because a user input is formatted inside a command that will be executed without any check publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bunyan
| 0
|
402,459
| 27,370,002,571
|
IssuesEvent
|
2023-02-27 22:30:30
|
chicago-cdac/nm-exp-active-netrics
|
https://api.github.com/repos/chicago-cdac/nm-exp-active-netrics
|
opened
|
Measurement development roadmap
|
documentation Measurements
|
I will compile a list of measurement enhancements as well as new measurements that we want to prioritize for the coming quarters. The technical team will meet to discuss these.
|
1.0
|
Measurement development roadmap - I will compile a list of measurement enhancements as well as new measurements that we want to prioritize for the coming quarters. The technical team will meet to discuss these.
|
non_process
|
measurement development roadmap i will compile a list of measurement enhancements as well as new measurements that we want to prioritize for the coming quarters the technical team will meet to discuss these
| 0
|
189,949
| 14,529,249,383
|
IssuesEvent
|
2020-12-14 17:33:19
|
pysat/pysat
|
https://api.github.com/repos/pysat/pysat
|
closed
|
TST: InstTestClass support for module-level specification of `user` and `password`
|
testing
|
**Is your feature request related to a problem? Please describe.**
Currently, pysatMadrigal cannot raise an error if the user forgets to supply a username and password. Instead, the loading routine defaults to supplying the test information. This is not helpful for Madrigal, which likely uses this information in its reports to funding agencies.
**Describe the solution you'd like**
I would like the InstTestClass to have `user` and `password` attributes.
**Describe alternatives you've considered**
Leaving this as is or exploring other alternatives.
|
1.0
|
TST: InstTestClass support for module-level specification of `user` and `password` - **Is your feature request related to a problem? Please describe.**
Currently, pysatMadrigal cannot raise an error if the user forgets to supply a username and password. Instead, the loading routine defaults to supplying the test information. This is not helpful for Madrigal, which likely uses this information in its reports to funding agencies.
**Describe the solution you'd like**
I would like the InstTestClass to have `user` and `password` attributes.
**Describe alternatives you've considered**
Leaving this as is or exploring other alternatives.
|
non_process
|
tst insttestclass support for module level specification of user and password is your feature request related to a problem please describe currently pysatmadrigal cannot raise an error if the user forgets to supply a username and password instead the loading routine defaults to supplying the test information this is not helpful for madrigal which likely uses this information in their reports to funding agencies describe the solution you d like i would like the instestclass to have user and password attributes describe alternatives you ve considered leaving this as is or exploring other alternatives
| 0
|
30,076
| 8,475,319,158
|
IssuesEvent
|
2018-10-24 18:34:52
|
omnisci/mapd-core
|
https://api.github.com/repos/omnisci/mapd-core
|
closed
|
CMake minimal version is inadequate
|
build
|
According to CMakeLists.txt the minimum required version is 2.8, yet the VERSION_GREATER_EQUAL boolean operator, which was only introduced in CMake 3.7 (https://cmake.org/cmake/help/v3.7/release/3.7.html), is used later in the file.
Ubuntu 16.04 provides CMake version 3.5.1.
Options:
1. Require cmake 3.7 as the minimal requirement.
2. Find a workaround for the usage of VERSION_GREATER_EQUAL without changing the cmake 2.8 requirement condition (e.g., rewrite `a VERSION_GREATER_EQUAL b` as `NOT a VERSION_LESS b`, which pre-3.7 CMake already understands).
3. Anything else?
|
1.0
|
CMake minimal version is inadequate - According to CMakeLists.txt the minimum required version is 2.8, yet the VERSION_GREATER_EQUAL boolean operator, which was only introduced in CMake 3.7 (https://cmake.org/cmake/help/v3.7/release/3.7.html), is used later in the file.
Ubuntu 16.04 provides CMake version 3.5.1.
Options:
1. Require cmake 3.7 as the minimal requirement.
2. Find a workaround for the usage of VERSION_GREATER_EQUAL without changing the cmake 2.8 requirement condition (e.g., rewrite `a VERSION_GREATER_EQUAL b` as `NOT a VERSION_LESS b`, which pre-3.7 CMake already understands).
3. Anything else?
|
non_process
|
cmake minimal version is inadequate according to cmakelists txt the minimum required version is while subsequently version greater equal boolean operator is used that was introduced in cmake release ubuntu provides cmake version options require cmake as the minimal requirement find a workaround to the usage of version greater equal without changing cmake requirement condition anything else
| 0
|
2,081
| 2,587,455,094
|
IssuesEvent
|
2015-02-17 18:34:43
|
niloufars/CSCW2016
|
https://api.github.com/repos/niloufars/CSCW2016
|
closed
|
Fix the Facebook box layout
|
design
|
It isn't properly laid out in some browsers. Needs some CSS hacking.
|
1.0
|
Fix the Facebook box layout - It isn't properly laid out in some browsers. Needs some CSS hacking.
|
non_process
|
fix the facebook box layout it isn t properly laid out in some browsers needs some css hacking
| 0
|
7,770
| 10,895,722,450
|
IssuesEvent
|
2019-11-19 11:13:42
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
hyphal growth ?
|
multi-species process
|
hyphal growth
Growth of fungi as threadlike, tubular structures that may contain multiple nuclei and may or may not be divided internally by septa, or cross-walls.
formation of symbiont invasive hypha in host
Definition: The assembly by the symbiont of a threadlike, tubular structure, which may contain multiple nuclei and may or may not be divided internally by septa or cross-walls, for the purpose of invasive growth within its host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
"formation of symbiont invasive hypha in host" seems to be describing "invasive hyphal growth"?
Should this be a synonym? Should this be a child of "hyphal growth"?
Formation of hyphae is "growth of hyphae" ("assembly" seems an odd wording?)
|
1.0
|
hyphal growth ? -
hyphal growth
Growth of fungi as threadlike, tubular structures that may contain multiple nuclei and may or may not be divided internally by septa, or cross-walls.
formation of symbiont invasive hypha in host
Definition: The assembly by the symbiont of a threadlike, tubular structure, which may contain multiple nuclei and may or may not be divided internally by septa or cross-walls, for the purpose of invasive growth within its host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
"formation of symbiont invasive hypha in host" seems to be describing "invasive hyphal growth"?
Should this be a synonym? Should this be a child of "hyphal growth"?
Formation of hyphae is "growth of hyphae" ("assembly" seems an odd wording?)
|
process
|
hyphal growth hyphal growth growth of fungi as threadlike tubular structures that may contain multiple nuclei and may or may not be divided internally by septa or cross walls formation of symbiont invasive hypha in host definition the assembly by the symbiont of a threadlike tubular structure which may contain multiple nuclei and may or may not be divided internally by septa or cross walls for the purpose of invasive growth within its host organism the host is defined as the larger of the organisms involved in a symbiotic interaction formation of symbiont invasive hypha in host seems to be describing invasive hyphal growth should this be a synonym should this be a child of hyphal growth formation of hyphae is growth of hyphae assembly seems an odd wording
| 1
|
36,139
| 5,036,957,600
|
IssuesEvent
|
2016-12-17 11:15:01
|
cclib/cclib
|
https://api.github.com/repos/cclib/cclib
|
closed
|
Add a testall script
|
maintenance tests
|
Our INSTALL file references such a file, so we should probably have one.
|
1.0
|
Add a testall script - Our INSTALL file references such a file, so we should probably have one.
|
non_process
|
add a testall script our install file references such a file so we should probably have one
| 0
|
9,553
| 6,926,121,684
|
IssuesEvent
|
2017-11-30 18:03:07
|
calmPress/calmPress
|
https://api.github.com/repos/calmPress/calmPress
|
opened
|
Make sure 'theme_switched' option is always set
|
Performance
|
Seems like the install does not set it to false, which will be a problem for people who use the bundled theme.
|
True
|
Make sure 'theme_switched' option is always set - Seems like the install does not set it to false, which will be a problem for people who use the bundled theme.
|
non_process
|
make sure theme switched option is always set seems like the install do not set it to false which will be a problem for people that use the bundled theme
| 0
|
55,802
| 6,923,145,735
|
IssuesEvent
|
2017-11-30 07:43:34
|
ppy/osu-web
|
https://api.github.com/repos/ppy/osu-web
|
closed
|
Beatmap Description goes right up against the scrollbar
|
beatmap design
|
On the current website, descriptions shift slightly to the left when the description is long enough for there to be a scrollbar.
The new website's beatmap descriptions do this too, but not by enough for the design, so things such as centered text or banners look uncentered and pretty funky.
This is especially unbalanced/bad-looking when description banners are added, such as in this example, where the banner goes right up to the right border but has lots of space to the left:
https://osu.ppy.sh/beatmapsets/501719#osu/1083103

It happens with text as well if it is long enough to reach the right border

|
1.0
|
Beatmap Description goes right up against the scrollbar - On the current website, descriptions shift slightly to the left when the description is long enough for there to be a scrollbar.
The new website's beatmap descriptions do this too, but not by enough for the design, so things such as centered text or banners look uncentered and pretty funky.
This is especially unbalanced/bad-looking when description banners are added, such as in this example, where the banner goes right up to the right border but has lots of space to the left:
https://osu.ppy.sh/beatmapsets/501719#osu/1083103

It happens with text as well if it is long enough to reach the right border

|
non_process
|
beatmap description goes right up against the scrollbar on the current website descriptions shift slightly to the left overall when the description is long enough for there to be a scrollbar the new website s beatmap descriptions do this too but not enough for its design resulting in things such as centered text or banners looking uncentered and pretty funky this is especially unbalanced bad looking when description banners are added such as on this example because it goes right up to the right border but has lots of space to the left it happens with text as well if it is long enough to reach the right border
| 0
|
817,664
| 30,648,648,616
|
IssuesEvent
|
2023-07-25 07:20:29
|
googleapis/python-spanner
|
https://api.github.com/repos/googleapis/python-spanner
|
closed
|
tests.system.test_session_api: test_transaction_batch_update_w_parent_span failed
|
api: spanner type: bug priority: p1 flakybot: issue flakybot: flaky
|
Note: #951 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 78b73f02103e7eba8286112e20aff592a81b7934
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/5c1a3929-594d-409d-9f16-5b75e75e1a23), [Sponge](http://sponge2/5c1a3929-594d-409d-9f16-5b75e75e1a23)
status: failed
<details><summary>Test output</summary><br><pre>sessions_database = <google.cloud.spanner_v1.database.Database object at 0x7fd89852d8b0>
sessions_to_delete = [<google.cloud.spanner_v1.session.Session object at 0x7fd898669d30>]
ot_exporter = <opentelemetry.sdk.trace.export.in_memory_span_exporter.InMemorySpanExporter object at 0x7fd8986793a0>
database_dialect = <DatabaseDialect.POSTGRESQL: 2>
@pytest.mark.skipif(
    not ot_helpers.HAS_OPENTELEMETRY_INSTALLED,
    reason="trace requires OpenTelemetry",
)
def test_transaction_batch_update_w_parent_span(
    sessions_database, sessions_to_delete, ot_exporter, database_dialect
):
    from opentelemetry import trace

    sd = _sample_data
    param_types = spanner_v1.param_types
    tracer = trace.get_tracer(__name__)
    session = sessions_database.session()
    session.create()
    sessions_to_delete.append(session)

    with session.batch() as batch:
        batch.delete(sd.TABLE, sd.ALL)

    keys = (
        ["p1", "p2"]
        if database_dialect == DatabaseDialect.POSTGRESQL
        else ["contact_id", "email"]
    )
    placeholders = (
        ["$1", "$2"]
        if database_dialect == DatabaseDialect.POSTGRESQL
        else [f"@{key}" for key in keys]
    )
    insert_statement = list(_generate_insert_statements())[0]
    update_statement = (
        f"UPDATE contacts SET email = {placeholders[1]} WHERE contact_id = {placeholders[0]};",
        {keys[0]: 1, keys[1]: "phreddy@example.com"},
        {keys[0]: param_types.INT64, keys[1]: param_types.STRING},
    )
    delete_statement = (
        f"DELETE FROM contacts WHERE contact_id = {placeholders[0]};",
        {keys[0]: 1},
        {keys[0]: param_types.INT64},
    )

    def unit_of_work(transaction):
        status, row_counts = transaction.batch_update(
            [insert_statement, update_statement, delete_statement]
        )
        _check_batch_status(status.code)
        assert len(row_counts) == 3
        for row_count in row_counts:
            assert row_count == 1

    with tracer.start_as_current_span("Test Span"):
        session.run_in_transaction(unit_of_work)

    span_list = ot_exporter.get_finished_spans()
>   assert len(span_list) == 5
E   assert 6 == 5
E   + where 6 = len((<opentelemetry.sdk.trace.ReadableSpan object at 0x7fd8984eba60>, <opentelemetry.sdk.trace.ReadableSpan object at 0x7f...etry.sdk.trace.ReadableSpan object at 0x7fd8984969a0>, <opentelemetry.sdk.trace.ReadableSpan object at 0x7fd8984ebd30>))

tests/system/test_session_api.py:1115: AssertionError</pre></details>
|
1.0
|
tests.system.test_session_api: test_transaction_batch_update_w_parent_span failed - Note: #951 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 78b73f02103e7eba8286112e20aff592a81b7934
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/5c1a3929-594d-409d-9f16-5b75e75e1a23), [Sponge](http://sponge2/5c1a3929-594d-409d-9f16-5b75e75e1a23)
status: failed
<details><summary>Test output</summary><br><pre>sessions_database = <google.cloud.spanner_v1.database.Database object at 0x7fd89852d8b0>
sessions_to_delete = [<google.cloud.spanner_v1.session.Session object at 0x7fd898669d30>]
ot_exporter = <opentelemetry.sdk.trace.export.in_memory_span_exporter.InMemorySpanExporter object at 0x7fd8986793a0>
database_dialect = <DatabaseDialect.POSTGRESQL: 2>
@pytest.mark.skipif(
    not ot_helpers.HAS_OPENTELEMETRY_INSTALLED,
    reason="trace requires OpenTelemetry",
)
def test_transaction_batch_update_w_parent_span(
    sessions_database, sessions_to_delete, ot_exporter, database_dialect
):
    from opentelemetry import trace

    sd = _sample_data
    param_types = spanner_v1.param_types
    tracer = trace.get_tracer(__name__)
    session = sessions_database.session()
    session.create()
    sessions_to_delete.append(session)

    with session.batch() as batch:
        batch.delete(sd.TABLE, sd.ALL)

    keys = (
        ["p1", "p2"]
        if database_dialect == DatabaseDialect.POSTGRESQL
        else ["contact_id", "email"]
    )
    placeholders = (
        ["$1", "$2"]
        if database_dialect == DatabaseDialect.POSTGRESQL
        else [f"@{key}" for key in keys]
    )
    insert_statement = list(_generate_insert_statements())[0]
    update_statement = (
        f"UPDATE contacts SET email = {placeholders[1]} WHERE contact_id = {placeholders[0]};",
        {keys[0]: 1, keys[1]: "phreddy@example.com"},
        {keys[0]: param_types.INT64, keys[1]: param_types.STRING},
    )
    delete_statement = (
        f"DELETE FROM contacts WHERE contact_id = {placeholders[0]};",
        {keys[0]: 1},
        {keys[0]: param_types.INT64},
    )

    def unit_of_work(transaction):
        status, row_counts = transaction.batch_update(
            [insert_statement, update_statement, delete_statement]
        )
        _check_batch_status(status.code)
        assert len(row_counts) == 3
        for row_count in row_counts:
            assert row_count == 1

    with tracer.start_as_current_span("Test Span"):
        session.run_in_transaction(unit_of_work)

    span_list = ot_exporter.get_finished_spans()
>   assert len(span_list) == 5
E   assert 6 == 5
E   + where 6 = len((<opentelemetry.sdk.trace.ReadableSpan object at 0x7fd8984eba60>, <opentelemetry.sdk.trace.ReadableSpan object at 0x7f...etry.sdk.trace.ReadableSpan object at 0x7fd8984969a0>, <opentelemetry.sdk.trace.ReadableSpan object at 0x7fd8984ebd30>))

tests/system/test_session_api.py:1115: AssertionError</pre></details>
|
non_process
|
tests system test session api test transaction batch update w parent span failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output sessions database sessions to delete ot exporter database dialect pytest mark skipif not ot helpers has opentelemetry installed reason trace requires opentelemetry def test transaction batch update w parent span sessions database sessions to delete ot exporter database dialect from opentelemetry import trace sd sample data param types spanner param types tracer trace get tracer name session sessions database session session create sessions to delete append session with session batch as batch batch delete sd table sd all keys if database dialect databasedialect postgresql else placeholders if database dialect databasedialect postgresql else insert statement list generate insert statements update statement f update contacts set email placeholders where contact id placeholders keys keys phreddy example com keys param types keys param types string delete statement f delete from contacts where contact id placeholders keys keys param types def unit of work transaction status row counts transaction batch update check batch status status code assert len row counts for row count in row counts assert row count with tracer start as current span test span session run in transaction unit of work span list ot exporter get finished spans assert len span list e assert e where len tests system test session api py assertionerror
| 0
|
75,886
| 9,905,452,935
|
IssuesEvent
|
2019-06-27 11:38:22
|
sequelize/sequelize
|
https://api.github.com/repos/sequelize/sequelize
|
opened
|
[Question] How to `findAll` through a many-to-many association
|
documentation
|
Model associations:
```js
Picture.belongsToMany(Tag, { through: 'fee_picture_tag' });
Tag.belongsToMany(Picture, { through: 'fee_picture_tag' });
```
query:
```js
const pictures = Picture.findAll({
  distinct: true,
  include: [
    {
      model: Tag,
      attributes: ['id', 'name'],
    },
  ],
});
```
result:
```json
[
  {
    "id": 1,
    "tags": [
      {
        "id": 1,
        "name": "烹饪"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  },
  {
    "id": 2,
    "tags": [
      {
        "id": 3,
        "name": "烘焙"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  },
  {
    "id": 3,
    "tags": [
      {
        "id": 2,
        "name": "花艺"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  }
]
```
I need to keep only the pictures that have the tag with id = 3; the query is:
```js
const pictures = Picture.findAll({
  include: [
    {
      model: Tag,
      attributes: ['id', 'name'],
      through: {
        where: {
          tagId: 3,
        },
      },
    },
  ],
});
```
## actual
```json
[
  {
    "id": 1,
    "tags": []
  },
  {
    "id": 2,
    "tags": [
      {
        "id": 3,
        "name": "烘焙"
      }
    ]
  },
  {
    "id": 3,
    "tags": []
  }
]
```
## expect
```json
[
  {
    "id": 2,
    "tags": [
      {
        "id": 3,
        "name": "烘焙"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  }
]
```
How do I implement this query? Thank you!
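One possible answer (an editor's sketch, not from the original thread): filter the pictures in a subquery against the join table, and leave the include unfiltered so every matching picture still returns its full tag list. The join-table column names (`PictureId`, `TagId`) and the `sequelize` instance are assumptions here; adjust them to the actual schema and dialect.

```js
const { Op } = require('sequelize');

const pictures = await Picture.findAll({
  where: {
    id: {
      // Keep only pictures linked to tag 3 in the join table; the
      // include below then loads ALL tags of the surviving pictures.
      [Op.in]: sequelize.literal(
        '(SELECT PictureId FROM fee_picture_tag WHERE TagId = 3)'
      ),
    },
  },
  include: [{ model: Tag, attributes: ['id', 'name'] }],
});
```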
|
1.0
|
[Question] How to `findAll` through a many-to-many association - Model associations:
```js
Picture.belongsToMany(Tag, { through: 'fee_picture_tag' });
Tag.belongsToMany(Picture, { through: 'fee_picture_tag' });
```
query:
```js
const pictures = Picture.findAll({
  distinct: true,
  include: [
    {
      model: Tag,
      attributes: ['id', 'name'],
    },
  ],
});
```
result:
```json
[
  {
    "id": 1,
    "tags": [
      {
        "id": 1,
        "name": "烹饪"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  },
  {
    "id": 2,
    "tags": [
      {
        "id": 3,
        "name": "烘焙"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  },
  {
    "id": 3,
    "tags": [
      {
        "id": 2,
        "name": "花艺"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  }
]
```
I need to keep only the pictures that have the tag with id = 3; the query is:
```js
const pictures = Picture.findAll({
  include: [
    {
      model: Tag,
      attributes: ['id', 'name'],
      through: {
        where: {
          tagId: 3,
        },
      },
    },
  ],
});
```
## actual
```json
[
  {
    "id": 1,
    "tags": []
  },
  {
    "id": 2,
    "tags": [
      {
        "id": 3,
        "name": "烘焙"
      }
    ]
  },
  {
    "id": 3,
    "tags": []
  }
]
```
## expect
```json
[
  {
    "id": 2,
    "tags": [
      {
        "id": 3,
        "name": "烘焙"
      },
      {
        "id": 6,
        "name": "书籍"
      }
    ]
  }
]
```
How do I implement this query? Thank you!
|
non_process
|
how to findall through many association models associations js picture belongstomany tag through fee picture tag tag belongstomany picture through fee picture tag query js const pictures picture findall distinct true include model tag attributes result json id tags id name 烹饪 id name 书籍 id tags id name 烘焙 id name 书籍 id tags id name 花艺 id name 书籍 i need to filter out the pictures with the tag by id the query is js const pictures picture findall include model tag attributes through where tagid actual json id tags id tags id name 烘焙 id tags expect json id tags id name 烘焙 id name 书籍 how to implement this query thank you
| 0
|
7,325
| 10,467,345,632
|
IssuesEvent
|
2019-09-22 04:10:19
|
yodaos-project/ShadowNode
|
https://api.github.com/repos/yodaos-project/ShadowNode
|
closed
|
process: port signals to global process object
|
process
|
<!--
Thank you for suggesting an idea to make ShadowNode better.
Please fill in as much of the template below as you're able.
-->
**Describe the solution you'd like**
Please describe the desired behavior.
Signals like `SIGTERM` and `SIGINT` shall be passed through to the JS engine and emitted on the global process event emitter.
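For illustration, here is a minimal sketch of the requested Node.js-style behaviour once signals are forwarded; this is the desired API, not the current ShadowNode implementation.

```js
// Handlers registered on the global process emitter should fire when
// the runtime receives the corresponding POSIX signal.
process.on('SIGINT', () => {
  console.log('received SIGINT');
  process.exit(0);
});

process.on('SIGTERM', () => {
  console.log('received SIGTERM, shutting down gracefully');
  process.exit(0);
});
```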
|
1.0
|
process: port signals to global process object - <!--
Thank you for suggesting an idea to make ShadowNode better.
Please fill in as much of the template below as you're able.
-->
**Describe the solution you'd like**
Please describe the desired behavior.
Signals like `SIGTERM` and `SIGINT` shall be passed through to the JS engine and emitted on the global process event emitter.
|
process
|
process port signals to global process object thank you for suggesting an idea to make shadownode better please fill in as much of the template below as you re able describe the solution you d like please describe the desired behavior signals like sigterm and sigint shall be passed through to js engine and emitted on global process event emitter
| 1
|