Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
16,177
| 20,622,625,120
|
IssuesEvent
|
2022-03-07 18:58:56
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
PH not EAF exhaust gas
|
Process Heating
|
Instead of using an abbreviated AH% calculation that assumes NG
Have a field that is "Fuel Fired Efficiency" that users can enter.
Have the "blue calculate" button that links to Flue Gas calculator like in steam assessment for Boiler Efficiency
|
1.0
|
PH not EAF exhaust gas - Instead of using an abbreviated AH% calculation that assumes NG
Have a field that is "Fuel Fired Efficiency" that users can enter.
Have the "blue calculate" button that links to Flue Gas calculator like in steam assessment for Boiler Efficiency
|
process
|
ph not eaf exhaust gas instead of using an abbreviated ah calculation that assumes ng have a field that is fuel fired efficiency that users can enter have the blue calculate button that links to flue gas calculator like in steam assessment for boiler efficiency
| 1
|
4,027
| 6,961,579,202
|
IssuesEvent
|
2017-12-08 10:01:41
|
nlbdev/pipeline
|
https://api.github.com/repos/nlbdev/pipeline
|
closed
|
Move original colophon to end of rearmatter
|
enhancement pre-processing Priority:2 - Medium
|
For DTBook, extract `<level1 class="colophon">`, for EPUB, extract `<body epub:type="colophon">`, and move to the end of the book (end of `<rearmatter>` for DTBook, and equivalent for EPUB).
If there's no headline, insert `<h1>Kolofon</h1>` for norwegian books and `<h1>Colophon</h1>` for english books (at the top of the `<level>`/`<body>`).
Do this as part of pre-processing. The colophon here refers to the one in the input book. A simpler colophon is generated for the braille version and inserted at the start of the book.
|
1.0
|
Move original colophon to end of rearmatter - For DTBook, extract `<level1 class="colophon">`, for EPUB, extract `<body epub:type="colophon">`, and move to the end of the book (end of `<rearmatter>` for DTBook, and equivalent for EPUB).
If there's no headline, insert `<h1>Kolofon</h1>` for norwegian books and `<h1>Colophon</h1>` for english books (at the top of the `<level>`/`<body>`).
Do this as part of pre-processing. The colophon here refers to the one in the input book. A simpler colophon is generated for the braille version and inserted at the start of the book.
|
process
|
move original colophon to end of rearmatter for dtbook extract for epub extract and move to the end of the book end of for dtbook and equivalent for epub if there s no headline insert kolofon for norwegian books and colophon for english books at the top of the do this as part of pre processing the colophon here refers to the one in the input book a simpler colophon is generated for the braille version and inserted at the start of the book
| 1
|
17,719
| 23,623,384,147
|
IssuesEvent
|
2022-08-24 23:36:24
|
googleapis/gapic-generator-python
|
https://api.github.com/repos/googleapis/gapic-generator-python
|
closed
|
ensure unit tests work for GAPICs using transport=rest
|
type: process priority: p1
|
I can locally run the generated unit tests for the texttospeech GAPIC generated with `transport=grpc`, and they pass. When `transport=rest`, those tests fail. We should ensure the tests generated for the clients can run and pass.
Having the tests running is a blocker for releasing the `transport=rest` functionality.
|
1.0
|
ensure unit tests work for GAPICs using transport=rest - I can locally run the generated unit tests for the texttospeech GAPIC generated with `transport=grpc`, and they pass. When `transport=rest`, those tests fail. We should ensure the tests generated for the clients can run and pass.
Having the tests running is a blocker for releasing the `transport=rest` functionality.
|
process
|
ensure unit tests work for gapics using transport rest i can locally run the generated unit tests for the texttospeech gapic generated with transport grpc and they pass when transport rest those tests fail we should ensure the tests generated for the clients can run and pass having the tests running is a blocker for releasing the transport rest functionality
| 1
|
16,649
| 21,712,379,982
|
IssuesEvent
|
2022-05-10 14:50:44
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
Add explanation to all tests
|
Priority-Low Good First Issue! Test process
|
Most of the tests have a message stating what was expected that did not occur. This ensure that a test failure is accompanied with at least a first hint of what is wrong.
Since most tests are not flaky, the absence of such message is not a big deal, but still, it would be nice for code beauty to ensure that most tests have a message explaining what was expected. I believe it is a good first issue because most assertions are straightforward when you read the class name, the test method name and even what is actually tested.
On a bonus, that will allow to remove a lot of `equalTo(true)` and `is(true)`, which I personally find ugly. Given that "I assert foo is true" is the same thing as "I assert foo" but more verbose. (Right now `assertThat("cause", property)` and `assertThat(value, matcher)` exists, but not `assertThat(property)`.)
Once it's done, it would be nice to add a linter that forbids to use `assertThat` without message.
|
1.0
|
Add explanation to all tests - Most of the tests have a message stating what was expected that did not occur. This ensure that a test failure is accompanied with at least a first hint of what is wrong.
Since most tests are not flaky, the absence of such message is not a big deal, but still, it would be nice for code beauty to ensure that most tests have a message explaining what was expected. I believe it is a good first issue because most assertions are straightforward when you read the class name, the test method name and even what is actually tested.
On a bonus, that will allow to remove a lot of `equalTo(true)` and `is(true)`, which I personally find ugly. Given that "I assert foo is true" is the same thing as "I assert foo" but more verbose. (Right now `assertThat("cause", property)` and `assertThat(value, matcher)` exists, but not `assertThat(property)`.)
Once it's done, it would be nice to add a linter that forbids to use `assertThat` without message.
|
process
|
add explanation to all tests most of the tests have a message stating what was expected that did not occur this ensure that a test failure is accompanied with at least a first hint of what is wrong since most tests are not flaky the absence of such message is not a big deal but still it would be nice for code beauty to ensure that most tests have a message explaining what was expected i believe it is a good first issue because most assertions are straightforward when you read the class name the test method name and even what is actually tested on a bonus that will allow to remove a lot of equalto true and is true which i personally find ugly given that i assert foo is true is the same thing as i assert foo but more verbose right now assertthat cause property and assertthat value matcher exists but not assertthat property once it s done it would be nice to add a linter that forbids to use assertthat without message
| 1
|
6,952
| 10,113,584,537
|
IssuesEvent
|
2019-07-30 17:04:54
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[MaskedTransition] Test all use cases
|
Accessibility [MaskedTransition] type:Process
|
Definition of done:
- [ ] All of this component's use cases have been evaluated in the accessibility audit spreadsheet located at [go/mdc-ios-a11y-matrix](http://go/mdc-ios-a11y-matrix).
- [ ] Issues have been filed for bugs that are blocking our accessibility rating.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117179007](http://b/117179007)
|
1.0
|
[MaskedTransition] Test all use cases - Definition of done:
- [ ] All of this component's use cases have been evaluated in the accessibility audit spreadsheet located at [go/mdc-ios-a11y-matrix](http://go/mdc-ios-a11y-matrix).
- [ ] Issues have been filed for bugs that are blocking our accessibility rating.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117179007](http://b/117179007)
|
process
|
test all use cases definition of done all of this component s use cases have been evaluated in the accessibility audit spreadsheet located at issues have been filed for bugs that are blocking our accessibility rating internal data associated internal bug
| 1
|
19,941
| 26,410,056,131
|
IssuesEvent
|
2023-01-13 11:25:00
|
hermes-hmc/workflow
|
https://api.github.com/repos/hermes-hmc/workflow
|
closed
|
Split authors and contributors
|
enhancement 2️ process/validate
|
- **Requires:** #52
Authors and contributors of software are different roles and should end up in different CodeMeta fields.
E.g.,
- person1 is in CFF authors AND git contributors: person is author only
- person1 is in CFF authors AND NOT git contributors: person is author only
- person1 is NOT in CFF authors BUT in git contributors: person is contributor only
This must be implemented as processing step.
|
1.0
|
Split authors and contributors - - **Requires:** #52
Authors and contributors of software are different roles and should end up in different CodeMeta fields.
E.g.,
- person1 is in CFF authors AND git contributors: person is author only
- person1 is in CFF authors AND NOT git contributors: person is author only
- person1 is NOT in CFF authors BUT in git contributors: person is contributor only
This must be implemented as processing step.
|
process
|
split authors and contributors requires authors and contributors of software are different roles and should end up in different codemeta fields e g is in cff authors and git contributors person is author only is in cff authors and not git contributors person is author only is not in cff authors but in git contributors person is contributor only this must be implemented as processing step
| 1
|
150,958
| 19,635,725,131
|
IssuesEvent
|
2022-01-08 08:26:38
|
storybookjs/storybook
|
https://api.github.com/repos/storybookjs/storybook
|
closed
|
Vulnerability fix on prismjs package
|
dependencies security
|
Regular Expression Denial of Service
https://www.npmjs.com/advisories/1762
pls update the prismjs@1.24.0
|
True
|
Vulnerability fix on prismjs package - Regular Expression Denial of Service
https://www.npmjs.com/advisories/1762
pls update the prismjs@1.24.0
|
non_process
|
vulnerability fix on prismjs package regular expression denial of service pls update the prismjs
| 0
|
3,037
| 4,120,356,092
|
IssuesEvent
|
2016-06-08 17:39:14
|
certbot/certbot
|
https://api.github.com/repos/certbot/certbot
|
opened
|
Redirect vhost not enabled
|
apache security enhancements
|
On a fresh install of Apache on Debian, I used `certbot` to create the following vhost:
```
<IfModule mod_ssl.c>
<VirtualHost *:443>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
SSLCertificateFile /etc/letsencrypt/live/pickupapp.me/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/pickupapp.me/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
ServerName example.com
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
</IfModule>
```
I then deleted all vhosts listening on port 80 and made the vhost above the only enabled vhost. After that I ran `certbot -d example.com --apache --redirect --reinstall`.
When doing this, I saw the following message in the logs:
```
2016-06-08 10:31:19,919:INFO:certbot_apache.configurator:Created redirect file: le-redirect-example.com.conf
```
and although the vhost was created, it was not enabled.
cc @sagi
|
True
|
Redirect vhost not enabled - On a fresh install of Apache on Debian, I used `certbot` to create the following vhost:
```
<IfModule mod_ssl.c>
<VirtualHost *:443>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
SSLCertificateFile /etc/letsencrypt/live/pickupapp.me/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/pickupapp.me/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
ServerName example.com
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
</IfModule>
```
I then deleted all vhosts listening on port 80 and made the vhost above the only enabled vhost. After that I ran `certbot -d example.com --apache --redirect --reinstall`.
When doing this, I saw the following message in the logs:
```
2016-06-08 10:31:19,919:INFO:certbot_apache.configurator:Created redirect file: le-redirect-example.com.conf
```
and although the vhost was created, it was not enabled.
cc @sagi
|
non_process
|
redirect vhost not enabled on a fresh install of apache on debian i used certbot to create the following vhost the servername directive sets the request scheme hostname and port that the server uses to identify itself this is used when creating redirection urls in the context of virtual hosts the servername specifies what hostname must appear in the request s host header to match this virtual host for the default virtual host this file this value is not decisive as it is used as a last resort host regardless however you must set it for any further virtual host explicitly servername serveradmin webmaster localhost documentroot var www html available loglevels debug info notice warn error crit alert emerg it is also possible to configure the loglevel for particular modules e g loglevel info ssl warn errorlog apache log dir error log customlog apache log dir access log combined for most configuration files from conf available which are enabled or disabled at a global level it is possible to include a line for only one particular virtual host for example the following line enables the cgi configuration for this host only after it has been globally disabled with include conf available serve cgi bin conf sslcertificatefile etc letsencrypt live pickupapp me fullchain pem sslcertificatekeyfile etc letsencrypt live pickupapp me privkey pem include etc letsencrypt options ssl apache conf servername example com vim syntax apache ts sw sts sr noet i then deleted all vhosts listening on port and made the vhost above the only enabled vhost after that i ran certbot d example com apache redirect reinstall when doing this i saw the following message in the logs info certbot apache configurator created redirect file le redirect example com conf and although the vhost was created it was not enabled cc sagi
| 0
|
18,375
| 24,503,454,174
|
IssuesEvent
|
2022-10-10 14:30:19
|
fadeoutsoftware/WASDI
|
https://api.github.com/repos/fadeoutsoftware/WASDI
|
opened
|
Docker > Build: switch to the new log system
|
bug enhancement P3 app / processor
|
Actually when WASDI deploys an application, WASDI produces/writes a Bash script like this:
```
#!/bin/bash
echo Deploy Docker Started >> /usr/lib/wasdi/launcher/logs/dockers.log
docker build [...] $1 >> /usr/lib/wasdi/launcher/logs/dockers.log 2>&1
echo Deploy Docker Done >> /usr/lib/wasdi/launcher/logs/dockers.log
```
We can see many times this instruction:
```
>> /usr/lib/wasdi/launcher/logs/dockers.log
```
Which is used to produce the log.
This approach has a problem that's why I developed a new approach with Jupyter Notebook.
To switch on my new log system, please change the WASDI code to produce this script instead:
```
#!/bin/bash
## LOG MANAGEMENT ##
iCurrentPid=${$}
exec 1> >(logger --id=${iCurrentPid} --stderr --tag docker-build) 2>&1
## /LOG MANAGEMENT ##
echo "Deploy Docker Started"
docker build [...] $1
echo "Deploy Docker Done"
```
|
1.0
|
Docker > Build: switch to the new log system - Actually when WASDI deploys an application, WASDI produces/writes a Bash script like this:
```
#!/bin/bash
echo Deploy Docker Started >> /usr/lib/wasdi/launcher/logs/dockers.log
docker build [...] $1 >> /usr/lib/wasdi/launcher/logs/dockers.log 2>&1
echo Deploy Docker Done >> /usr/lib/wasdi/launcher/logs/dockers.log
```
We can see many times this instruction:
```
>> /usr/lib/wasdi/launcher/logs/dockers.log
```
Which is used to produce the log.
This approach has a problem that's why I developed a new approach with Jupyter Notebook.
To switch on my new log system, please change the WASDI code to produce this script instead:
```
#!/bin/bash
## LOG MANAGEMENT ##
iCurrentPid=${$}
exec 1> >(logger --id=${iCurrentPid} --stderr --tag docker-build) 2>&1
## /LOG MANAGEMENT ##
echo "Deploy Docker Started"
docker build [...] $1
echo "Deploy Docker Done"
```
|
process
|
docker build switch to the new log system actually when wasdi deploys an application wasdi produces writes a bash script like this bin bash echo deploy docker started usr lib wasdi launcher logs dockers log docker build usr lib wasdi launcher logs dockers log echo deploy docker done usr lib wasdi launcher logs dockers log we can see many times this instruction usr lib wasdi launcher logs dockers log which is used to produce the log this approach has a problem that s why i developed a new approach with jupyter notebook to switch on my new log system please change the wasdi code to produce this script instead bin bash log management icurrentpid exec logger id icurrentpid stderr tag docker build log management echo deploy docker started docker build echo deploy docker done
| 1
|
15,053
| 18,762,906,759
|
IssuesEvent
|
2021-11-05 18:47:13
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Cascading Heat updates
|
Process Heating
|
- Pull in inputs from AH/FG calculate modal, update other inputs
- Update outputs, add left side results for excess air, AH
|
1.0
|
Cascading Heat updates - - Pull in inputs from AH/FG calculate modal, update other inputs
- Update outputs, add left side results for excess air, AH
|
process
|
cascading heat updates pull in inputs from ah fg calculate modal update other inputs update outputs add left side results for excess air ah
| 1
|
818,453
| 30,689,685,560
|
IssuesEvent
|
2023-07-26 14:27:21
|
episphere/biospecimen
|
https://api.github.com/repos/episphere/biospecimen
|
opened
|
Rename Full Specimen ID to BSI ID in the In Transit File
|
Priority 2
|
In the column headers of the In Transit file, please rename the column labels "Full Specimen IDs" to "BSI ID"

|
1.0
|
Rename Full Specimen ID to BSI ID in the In Transit File - In the column headers of the In Transit file, please rename the column labels "Full Specimen IDs" to "BSI ID"

|
non_process
|
rename full specimen id to bsi id in the in transit file in the column headers of the in transit file please rename the column labels full specimen ids to bsi id
| 0
|
113,847
| 17,157,316,022
|
IssuesEvent
|
2021-07-14 08:38:36
|
gdcorp-action-public-forks/action-hosting-deploy
|
https://api.github.com/repos/gdcorp-action-public-forks/action-hosting-deploy
|
closed
|
CVE-2021-33587 (High) detected in css-what-3.4.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: action-hosting-deploy/package.json</p>
<p>Path to vulnerable library: action-hosting-deploy/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- microbundle-0.13.0.tgz (Root Library)
- rollup-plugin-postcss-4.0.0.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"css-what","packageVersion":"3.4.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"microbundle:0.13.0;rollup-plugin-postcss:4.0.0;cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-svgo:4.0.2;svgo:1.3.2;css-select:2.1.0;css-what:3.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"css-what - 5.0.1"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-33587","vulnerabilityDetails":"The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-33587 (High) detected in css-what-3.4.2.tgz - autoclosed - ## CVE-2021-33587 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: action-hosting-deploy/package.json</p>
<p>Path to vulnerable library: action-hosting-deploy/node_modules/css-what/package.json</p>
<p>
Dependency Hierarchy:
- microbundle-0.13.0.tgz (Root Library)
- rollup-plugin-postcss-4.0.0.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-svgo-4.0.2.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"css-what","packageVersion":"3.4.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"microbundle:0.13.0;rollup-plugin-postcss:4.0.0;cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-svgo:4.0.2;svgo:1.3.2;css-select:2.1.0;css-what:3.4.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"css-what - 5.0.1"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-33587","vulnerabilityDetails":"The css-what package 4.0.0 through 5.0.0 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in css what tgz autoclosed cve high severity vulnerability vulnerable library css what tgz a css selector parser library home page a href path to dependency file action hosting deploy package json path to vulnerable library action hosting deploy node modules css what package json dependency hierarchy microbundle tgz root library rollup plugin postcss tgz cssnano tgz cssnano preset default tgz postcss svgo tgz svgo tgz css select tgz x css what tgz vulnerable library found in base branch main vulnerability details the css what package through for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution css what isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree microbundle rollup plugin postcss cssnano cssnano preset default postcss svgo svgo css select css what isminimumfixversionavailable true minimumfixversion css what basebranches vulnerabilityidentifier cve vulnerabilitydetails the css what package through for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input vulnerabilityurl
| 0
|
434,700
| 12,521,983,754
|
IssuesEvent
|
2020-06-03 18:21:11
|
opt-out-tools/start-here
|
https://api.github.com/repos/opt-out-tools/start-here
|
closed
|
Build Brand Guidelines
|
high priority medium task
|
**Objective**
A strict set of guidelines to help communicate our image
**Description**
We need to be able to communicate what we are doing effectively. For this we need a place of reference.
**Skills**
**Tools**
**Time Estimation**
**Tasks**
- [ ] Collect images, examples, try out tone of voice
- [ ] Find users to test on
- [ ] Iterate
|
1.0
|
Build Brand Guidelines - **Objective**
A strict set of guidelines to help communicate our image
**Description**
We need to be able to communicate what we are doing effectively. For this we need a place of reference.
**Skills**
**Tools**
**Time Estimation**
**Tasks**
- [ ] Collect images, examples, try out tone of voice
- [ ] Find users to test on
- [ ] Iterate
|
non_process
|
build brand guidelines objective a strict set of guidelines to help communicate our image description we need to be able to communicate what we are doing effectively for this we need a place of reference skills tools time estimation tasks collect images examples try out tone of voice find users to test on iterate
| 0
|
3,725
| 6,732,918,374
|
IssuesEvent
|
2017-10-18 13:17:44
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
incomplete def GO:0043039 and descendants
|
quick fix RNA processes
|
The definition of tRNA aminoacylation (GO:0043039) looks incomplete. It contains
"... by the formation of an ester bond between the 3'-hydroxyl group of the most 3' adenosine of the tRNA ..."
but does not mention the part of the amino acid involved in forming the bond.
Two of the is_a GO:0043039 terms, GO:0006418 and GO:0043040, have the same definition problem.
|
1.0
|
incomplete def GO:0043039 and descendants - The definition of tRNA aminoacylation (GO:0043039) looks incomplete. It contains
"... by the formation of an ester bond between the 3'-hydroxyl group of the most 3' adenosine of the tRNA ..."
but does not mention the part of the amino acid involved in forming the bond.
Two of the is_a GO:0043039 terms, GO:0006418 and GO:0043040, have the same definition problem.
|
process
|
incomplete def go and descendants the definition of trna aminoacylation go looks incomplete it contains by the formation of an ester bond between the hydroxyl group of the most adenosine of the trna but does not mention the part of the amino acid involved in forming the bond two of the is a go terms go and go have the same definition problem
| 1
|
11,149
| 13,957,693,131
|
IssuesEvent
|
2020-10-24 08:10:54
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
LU: Harvesting request
|
Geoportal Harvesting process LU - Luxembourg
|
From: Jeff Konnen [jeff.konnen@act.etat.lu]
Sent: 31 January 2019 08:26
To: inspire-geoportal@jrc.ec.europa.eu
Cc: JRC SDI NOTIFY
Subject: FW: INSPIRE Registry
Hi,
I just sent this email to Angelo, but the mail bounced.
Is there someone else I could contact for this request regarding the registry?
Best regards
Jeff Konnen
Chargé d’études / Responsable du géoportail
LE GOUVERNEMENT DU GRAND-DUCHÉ DE Luxembourg
Ministère des Finances
Administration du cadastre et de la topographie
Service géoportail
|
1.0
|
LU: Harvesting request - From: Jeff Konnen [jeff.konnen@act.etat.lu]
Sent: 31 January 2019 08:26
To: inspire-geoportal@jrc.ec.europa.eu
Cc: JRC SDI NOTIFY
Subject: FW: INSPIRE Registry
Hi,
I just sent this email to Angelo, but the mail bounced.
Is there someone else I could contact for this request regarding the registry?
Best regards
Jeff Konnen
Chargé d’études / Responsable du géoportail
LE GOUVERNEMENT DU GRAND-DUCHÉ DE Luxembourg
Ministère des Finances
Administration du cadastre et de la topographie
Service géoportail
|
process
|
lu harvesting request from jeff konnen sent january to inspire geoportal jrc ec europa eu cc jrc sdi notify subject fw inspire registry hi i just sent this email to angelo but the mail bounced is there someone else i could contact for this request regarding the registry best regards jeff konnen charg eacute d rsquo eacute tudes responsable du g eacute oportail le gouvernement du grand duch eacute de luxembourg minist egrave re des finances administration du cadastre et de la topographie service g eacute oportail
| 1
|
20,373
| 27,028,522,542
|
IssuesEvent
|
2023-02-11 22:44:38
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.73
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.73.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Manual Submission for GCP Marketplace verification by google
- [x] Publish marketplace release
- [x] Publish release
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
## Previewnet
- [x] Deploy
## Staging
- [x] Deploy
## Testnet
- [x] Deploy
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
1.0
|
Release checklist 0.73 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.73.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Manual Submission for GCP Marketplace verification by google
- [x] Publish marketplace release
- [x] Publish release
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
## Previewnet
- [x] Deploy
## Staging
- [x] Deploy
## Testnet
- [x] Deploy
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts manual submission for gcp marketplace verification by google publish marketplace release publish release performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests previewnet deploy staging deploy testnet deploy mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
| 1
|
11,040
| 7,037,282,620
|
IssuesEvent
|
2017-12-28 13:58:57
|
apinf/platform
|
https://api.github.com/repos/apinf/platform
|
closed
|
Path dropmenu in API Analytics View breaks if value includes path with long lengths
|
Dashboard 2.0 in progress Usability Issue
|
# Steps to Reproduce
1. login to apinf.io (production) as site admin
2. go to Dashboard and expand row for APInf Management Rest API
3. Click on "Show Details..." in expanded view
3. In analytics page, browse to API Request Timeline
# Outcome
Dropmenu listing paths breaks the GUI as it is not confined within its boundary

Same happens with the API Response Time Chart.
# Expected Behavior
The dropmenu shouldn't break the GUI even if the path lengths are longer.
# Environment
MAC OS
Firefox (most recent version)
https://apinf.io/
|
True
|
Path dropmenu in API Analytics View breaks if value includes path with long lengths - # Steps to Reproduce
1. login to apinf.io (production) as site admin
2. go to Dashboard and expand row for APInf Management Rest API
3. Click on "Show Details..." in expanded view
3. In analytics page, browse to API Request Timeline
# Outcome
Dropmenu listing paths breaks the GUI as it is not confined within its boundary

Same happens with the API Response Time Chart.
# Expected Behavior
The dropmenu shouldn't break the GUI even if the path lengths are longer.
# Environment
MAC OS
Firefox (most recent version)
https://apinf.io/
|
non_process
|
path dropmenu in api analytics view breaks if value includes path with long lengths steps to reproduce login to apinf io production as site admin go to dashboard and expand row for apinf management rest api click on show details in expanded view in analytics page browse to api request timeline outcome dropmenu listing paths breaks the gui as it is not confined within its boundary same happens with the api response time chart expected behavior the dropmenu shouldn t break the gui even if the path lengths are longer environment mac os firefox most recent version
| 0
|
4,297
| 7,192,463,371
|
IssuesEvent
|
2018-02-03 03:53:56
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Checksumming Addresses
|
libs-etherlib status-inprocess type-enhancement
|
Here's some javascript code to generate checksummed addresses:
/**
* Returns a checksummed address
* @param {String} address
* @return {String}
*/
exports.toChecksumAddress = function (address) {
address = exports.stripHexPrefix(address).toLowerCase()
const hash = exports.sha3(address).toString('hex')
let ret = '0x'
for (let i = 0; i < address.length; i++) {
if (parseInt(hash[i], 16) >= 8) {
ret += address[i].toUpperCase()
} else {
ret += address[i]
}
}
return ret
}
|
1.0
|
Checksumming Addresses - Here's some javascript code to generate checksummed addresses:
/**
* Returns a checksummed address
* @param {String} address
* @return {String}
*/
exports.toChecksumAddress = function (address) {
address = exports.stripHexPrefix(address).toLowerCase()
const hash = exports.sha3(address).toString('hex')
let ret = '0x'
for (let i = 0; i < address.length; i++) {
if (parseInt(hash[i], 16) >= 8) {
ret += address[i].toUpperCase()
} else {
ret += address[i]
}
}
return ret
}
|
process
|
checksumming addresses here s some javascript code to generate checksummed addresses returns a checksummed address param string address return string exports tochecksumaddress function address address exports striphexprefix address tolowercase const hash exports address tostring hex let ret for let i i address length i if parseint hash ret address touppercase else ret address return ret
| 1
|
14,529
| 3,274,681,668
|
IssuesEvent
|
2015-10-26 12:18:55
|
Mobicents/RestComm
|
https://api.github.com/repos/Mobicents/RestComm
|
closed
|
Detect Restcomm host/port/protocol by probing the container in RVD
|
Visual App Designer
|
Use Jboss/Tomcat container to find out the host/port/protocol the application is running on. That's what core restcomm does. Use these settings as default when accessing restcomm APIs.
|
1.0
|
Detect Restcomm host/port/protocol by probing the container in RVD - Use Jboss/Tomcat container to find out the host/port/protocol the application is running on. That's what core restcomm does. Use these settings as default when accessing restcomm APIs.
|
non_process
|
detect restcomm host port protocol by probing the container in rvd use jboss tomcat container to find out the host port protocol the application is running on that s what core restcomm does use these settings as default when accessing restcomm apis
| 0
|
1,135
| 2,507,996,717
|
IssuesEvent
|
2015-01-12 22:14:31
|
chessmasterhong/WaterEmblem
|
https://api.github.com/repos/chessmasterhong/WaterEmblem
|
opened
|
Discarding an item will cause the first item to be equipped
|
bug high priority
|
If the player decides to unequip their weapon, but then discard an item that is not in slot 0, then the item in slot 0 will be re-equipped.
|
1.0
|
Discarding an item will cause the first item to be equipped - If the player decides to unequip their weapon, but then discard an item that is not in slot 0, then the item in slot 0 will be re-equipped.
|
non_process
|
discarding an item will cause the first item to be equipped if the player decides to unequip their weapon but then discard an item that is not in slot then the item in slot will be re equipped
| 0
|
328,807
| 10,000,265,139
|
IssuesEvent
|
2019-07-12 12:59:09
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
accounts.google.com - site is not usable
|
browser-firefox engine-gecko priority-critical
|
<!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin
**Browser / Version**: Firefox 69.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: I can not log in to my account Google.
**Steps to Reproduce**:
I downloaded an android emulator after that I could no longer access my account Google.
[](https://webcompat.com/uploads/2019/7/a77edd66-b06a-4caf-85c5-c2e69b152402.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190708182549</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring x-frame-options because of frame-ancestors directive."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy99,sy9a,m5Z1Eb,sy5i,sy5j,sy5k,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "MouseEvent.mozPressure is deprecated. Use PointerEvent.pressure instead." {file: "https://accounts.google.com/ServiceLogin?service=accountsettings&hl=pt-BR&continue=https://myaccount.google.com/intro&csig=AF-SEna_h50xKfsb5Xuc:1562728673" line: 1636}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy99,sy9a,m5Z1Eb,sy5i,sy5j,sy5k,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy99,sy9a,m5Z1Eb,sy5i,sy5j,sy5k,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=wI7Sfc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=wI7Sfc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=wI7Sfc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=pB6Zqd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=pB6Zqd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=pB6Zqd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=rHjpXd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=rHjpXd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=rHjpXd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=o02Jie." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=o02Jie." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=o02Jie." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=lCVo3d." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=lCVo3d." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=lCVo3d." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=MB66Qc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=MB66Qc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=MB66Qc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy99,sy9a,m5Z1Eb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy99,sy9a,m5Z1Eb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy99,sy9a,m5Z1Eb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4,QLpTOd,sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4,QLpTOd,sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4,QLpTOd,sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=QLpTOd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=QLpTOd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=QLpTOd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,sy9x,identifier_view." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,sy9x,identifier_view." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,sy9x,identifier_view." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
accounts.google.com - site is not usable - <!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin
**Browser / Version**: Firefox 69.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: I can not log in to my account Google.
**Steps to Reproduce**:
I downloaded an android emulator after that I could no longer access my account Google.
[](https://webcompat.com/uploads/2019/7/a77edd66-b06a-4caf-85c5-c2e69b152402.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190708182549</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring x-frame-options because of frame-ancestors directive."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy99,sy9a,m5Z1Eb,sy5i,sy5j,sy5k,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "MouseEvent.mozPressure is deprecated. Use PointerEvent.pressure instead." {file: "https://accounts.google.com/ServiceLogin?service=accountsettings&hl=pt-BR&continue=https://myaccount.google.com/intro&csig=AF-SEna_h50xKfsb5Xuc:1562728673" line: 1636}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy99,sy9a,m5Z1Eb,sy5i,sy5j,sy5k,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy99,sy9a,m5Z1Eb,sy5i,sy5j,sy5k,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=SF3gsd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=wI7Sfc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=wI7Sfc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=wI7Sfc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=pB6Zqd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=pB6Zqd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=pB6Zqd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=rHjpXd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=rHjpXd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=rHjpXd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=o02Jie." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=o02Jie." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=o02Jie." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=lCVo3d." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=lCVo3d." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=lCVo3d." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=MB66Qc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=MB66Qc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=MB66Qc." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy99,sy9a,m5Z1Eb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy99,sy9a,m5Z1Eb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy99,sy9a,m5Z1Eb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4,QLpTOd,sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4,QLpTOd,sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4,QLpTOd,sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=YTxL4." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=QLpTOd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=QLpTOd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=QLpTOd." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,uhxrz." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,otPmVb." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy6x,sygp,rlNAl." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,sy9x,identifier_view." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,sy9x,identifier_view." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.pt.JONTrXJuKwo.O/am=BAUAAAiAAWB6AEUAIYAADhmaswMAEAgIBre3sVA/d=0/rs=ABkqax0OH9kyARlBWzrfzdCZOFRVO6PwAg/m=sy5i,sy5j,sy99,sy5k,sy9a,sy9g,sy9h,sy9q,sy9i,em3i,sy88,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3k,em3j,em3s,sy9x,identifier_view." {file: "https://accounts.google.com/signin/v2/identifier?service=accountsettings&hl=pt-BR&continue=https%3A%2F%2Fmyaccount.google.com%2Fintro&csig=AF-SEna_h50xKfsb5Xuc%3A1562728673&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
accounts google com site is not usable url browser version firefox operating system windows tested another browser yes problem type site is not usable description i can not log in to my account google steps to reproduce i downloaded an android emulator after that i could no longer access my account google browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u from with ❤️
| 0
|
1,774
| 4,489,058,041
|
IssuesEvent
|
2016-08-30 09:37:13
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Edits to term: suppression by virus of host protein binding ; GO:0036472
|
multiorganism processes Other term-related request PARL-UCL viruses
|
This term name is confusing:
suppression by virus of host protein binding ; GO:0036472
Exact synonym: suppression by virus of host protein interaction
Any process in which a virus stops, prevents, or reduces the frequency, rate or extent of interaction between host proteins.
It has my initials on it, and the only annotation is by me to PMID:17297443 (also cited in Definition DBXref). This paper shows that viral protein vIRF1 inhibits interaction between the two host proteins, HtrA2 and GRIM-19. The term name though sounds like it could be a virus suppressing a viral protein binding to a host protein.
A better name would be:
suppression by virus of host protein-protein interaction
synonym: suppression by virus of host protein:protein binding
synonym: suppression by virus of host protein:protein interaction
synonym: suppression by virus of host protein binding
Any better name suggestions before I rename?
Thanks!
|
1.0
|
Edits to term: suppression by virus of host protein binding ; GO:0036472 - This term name is confusing:
suppression by virus of host protein binding ; GO:0036472
Exact synonym: suppression by virus of host protein interaction
Any process in which a virus stops, prevents, or reduces the frequency, rate or extent of interaction between host proteins.
It has my initials on it, and the only annotation is by me to PMID:17297443 (also cited in Definition DBXref). This paper shows that viral protein vIRF1 inhibits interaction between the two host proteins, HtrA2 and GRIM-19. The term name though sounds like it could be a virus suppressing a viral protein binding to a host protein.
A better name would be:
suppression by virus of host protein-protein interaction
synonym: suppression by virus of host protein:protein binding
synonym: suppression by virus of host protein:protein interaction
synonym: suppression by virus of host protein binding
Any better name suggestions before I rename?
Thanks!
|
process
|
edits to term suppression by virus of host protein binding go this term name is confusing suppression by virus of host protein binding go exact synonym suppression by virus of host protein interaction any process in which a virus stops prevents or reduces the frequency rate or extent of interaction between host proteins it has my initials on it and the only annotation is by me to pmid also cited in definition dbxref this paper shows that viral protein inhibits interaction between the two host proteins and grim the term name though sounds like it could be a virus suppressing a viral protein binding to a host protein a better name would be suppression by virus of host protein protein interaction synonym suppression by virus of host protein protein binding synonym suppression by virus of host protein protein interaction synonym suppression by virus of host protein binding any better name suggestions before i rename thanks
| 1
|
319,693
| 27,394,508,039
|
IssuesEvent
|
2023-02-28 18:35:55
|
bloom-housing/bloom
|
https://api.github.com/repos/bloom-housing/bloom
|
closed
|
Core - first unit test on public
|
good first ticket testing
|
**Goal**
Now that we have unit tests in addition to Cypress tests on the public site, there is so much to test on this new smaller scale that we could start anywhere, but we could consider some of the below since they are unit testable and have been bugs in the past. It will probably require mock listing data.
Let's start in the listing detail sidebar in the get application section. It will probably be a test on ListingView.tsx.
**Example test**
The Apply Online button
* If an application is open and it is using the digital application, the Apply Now button should be present
* If it has the digital method but isn't open, the button should not show
* If it has a paper application method and not digital, it should not show
The whole sidebar has a lot of opportunity for something like this (e.g. if pick up address, then show the address, if paper methods, show the download button, etc) but we can start here.
|
1.0
|
Core - first unit test on public - **Goal**
Now that we have unit tests in addition to Cypress tests on the public site, there is so much to test on this new smaller scale that we could start anywhere, but we could consider some of the below since they are unit testable and have been bugs in the past. It will probably require mock listing data.
Let's start in the listing detail sidebar in the get application section. It will probably be a test on ListingView.tsx.
**Example test**
The Apply Online button
* If an application is open and it is using the digital application, the Apply Now button should be present
* If it has the digital method but isn't open, the button should not show
* If it has a paper application method and not digital, it should not show
The whole sidebar has a lot of opportunity for something like this (e.g. if pick up address, then show the address, if paper methods, show the download button, etc) but we can start here.
|
non_process
|
core first unit test on public goal now that we have unit tests in addition to cypress tests on the public site there is so much to test on this new smaller scale that we could start anywhere but we could consider some of the below since they are unit testable and have been bugs in the past it will probably require mock listing data let s start in the listing detail sidebar in the get application section it will probably be a test on listingview tsx example test the apply online button if an application is open and it is using the digital application the apply now button should be present if it has the digital method but isn t open the button should not show if it has a paper application method and not digital it should not show the whole sidebar has a lot of opportunity for something like this e g if pick up address then show the address if paper methods show the download button etc but we can start here
| 0
|
20,208
| 26,790,888,647
|
IssuesEvent
|
2023-02-01 08:23:29
|
Altinn/app-frontend-react
|
https://api.github.com/repos/Altinn/app-frontend-react
|
closed
|
As a developer I want to edit the receipt text in Altinn Studio
|
kind/user-story org/digdir org/skd org/ssb status/draft org/okokrim area/language area/receipt org/svv org/nbib area/process org/dibk org/dsb
|
## Description
As a developer I would like to change the receipt text in Altinn Studio.
The receipt text is the text the end-user is presented with after submitting the service. It is visible on the receipt page and on messages in the inbox.
Standard text defined in Altinn/altinn-studio#1677
## Suggested solution
We start by defining that the _app_ receipt can contain custom, possibly more dynamic, data, while the _inbox_ receipt view is strictly static.
### Step 1
- We support defining the _app_ receipt view via layouts, in the same way as f.eks. the confirm page. This will allow app developers to set up exact texts and other components they choose.
- We could potentially make it even simpler, and just support a markdown template. This would probably be easier to persist to other parts of the solution if we want to show the receipt other places like inbox.
- Either way we should support some standard components for displaying metadata and any file attachments incl. receipt pdf.
- In the _inbox_ receipt, user is just shown a static page which lists out metadata + attachments + pdf receipt.
- if we do go for a markdown receipt definition, we could persist the _static_ version with the instance and potentially display it together in the _inbox_ receipt view.
### Step 2
- We look into how we can make the messagebox receipt richer:
- can we show the receipt view directly in the message box view instead of linking to it?
- Could we leverage the existing correspondence functionality?
- f.ex. provide the A3 inbox data on correnspondence format, so that we have the details directly available in the inbox item?
- automatically create a corresponcence for the user with the receipt, and not show the instance itself? This would mean that the user cannot f.ex. create a copy of an archived instance.
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
Where should this functionality be presented to the user?
Should we have any restrictions? (text length?)
## Acceptance criteria
- What is allowed/not allowed (negative tesing)
- Validations
- Error messages and warnings
- ...
## Tasks
- [ ] Verify that this issue meets DoP (remove unused text, add missing text/parameters/labels, verify tasks).
## Specification tasks
- [ ] Look into if there should be any restrictions
- [ ] Test design / decide test need
## Development tasks
- [ ] Documentation (if relevant)
- [ ] Manual test (if needed)
- [ ] Automated test (if needed)
- [ ] Design review
|
1.0
|
As a developer I want to edit the receipt text in Altinn Studio - ## Description
As a developer I would like to change the receipt text in Altinn Studio.
The receipt text is the text the end-user is presented with after submitting the service. It is visible on the receipt page and on messages in the inbox.
Standard text defined in Altinn/altinn-studio#1677
## Suggested solution
We start by defining that the _app_ receipt can contain custom, possibly more dynamic, data, while the _inbox_ receipt view is strictly static.
### Step 1
- We support defining the _app_ receipt view via layouts, in the same way as f.eks. the confirm page. This will allow app developers to set up exact texts and other components they choose.
- We could potentially make it even simpler, and just support a markdown template. This would probably be easier to persist to other parts of the solution if we want to show the receipt other places like inbox.
- Either way we should support some standard components for displaying metadata and any file attachments incl. receipt pdf.
- In the _inbox_ receipt, user is just shown a static page which lists out metadata + attachments + pdf receipt.
- if we do go for a markdown receipt definition, we could persist the _static_ version with the instance and potentially display it together in the _inbox_ receipt view.
### Step 2
- We look into how we can make the messagebox receipt richer:
- can we show the receipt view directly in the message box view instead of linking to it?
- Could we leverage the existing correspondence functionality?
- f.ex. provide the A3 inbox data on correnspondence format, so that we have the details directly available in the inbox item?
- automatically create a corresponcence for the user with the receipt, and not show the instance itself? This would mean that the user cannot f.ex. create a copy of an archived instance.
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
Where should this functionality be presented to the user?
Should we have any restrictions? (text length?)
## Acceptance criteria
- What is allowed/not allowed (negative tesing)
- Validations
- Error messages and warnings
- ...
## Tasks
- [ ] Verify that this issue meets DoP (remove unused text, add missing text/parameters/labels, verify tasks).
## Specification tasks
- [ ] Look into if there should be any restrictions
- [ ] Test design / decide test need
## Development tasks
- [ ] Documentation (if relevant)
- [ ] Manual test (if needed)
- [ ] Automated test (if needed)
- [ ] Design review
|
process
|
as a developer i want to edit the receipt text in altinn studio description as a developer i would like to change the receipt text in altinn studio the receipt text is the text the end user is presented with after submitting the service it is visible on the receipt page and on messages in the inbox standard text defined in altinn altinn studio suggested solution we start by defining that the app receipt can contain custom possibly more dynamic data while the inbox receipt view is strictly static step we support defining the app receipt view via layouts in the same way as f eks the confirm page this will allow app developers to set up exact texts and other components they choose we could potentially make it even simpler and just support a markdown template this would probably be easier to persist to other parts of the solution if we want to show the receipt other places like inbox either way we should support some standard components for displaying metadata and any file attachments incl receipt pdf in the inbox receipt user is just shown a static page which lists out metadata attachments pdf receipt if we do go for a markdown receipt definition we could persist the static version with the instance and potentially display it together in the inbox receipt view step we look into how we can make the messagebox receipt richer can we show the receipt view directly in the message box view instead of linking to it could we leverage the existing correspondence functionality f ex provide the inbox data on correnspondence format so that we have the details directly available in the inbox item automatically create a corresponcence for the user with the receipt and not show the instance itself this would mean that the user cannot f ex create a copy of an archived instance screenshots screenshots or links to figma make sure your sketch is public considerations where should this functionality be presented to the user should we have any restrictions text length acceptance criteria what is allowed not allowed negative tesing validations error messages and warnings tasks verify that this issue meets dop remove unused text add missing text parameters labels verify tasks specification tasks look into if there should be any restrictions test design decide test need development tasks documentation if relevant manual test if needed automated test if needed design review
| 1
|
254,121
| 21,729,878,903
|
IssuesEvent
|
2022-05-11 11:00:22
|
hyperledger-labs/orion-server
|
https://api.github.com/repos/hyperledger-labs/orion-server
|
closed
|
Integration testing: GetClusterConfig accessibility
|
testing
|
Check that GetClusterConfig is only available to admin users. Check that a regular user is denied access.
|
1.0
|
Integration testing: GetClusterConfig accessibility - Check that GetClusterConfig is only available to admin users. Check that a regular user is denied access.
|
non_process
|
integration testing getclusterconfig accessibility check that getclusterconfig is only available to admin users check that a regular user is denied access
| 0
|
20,302
| 26,940,847,742
|
IssuesEvent
|
2023-02-08 02:00:06
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Wed, 8 Feb 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### From CAD models to soft point cloud labels: An automatic annotation pipeline for cheaply supervised 3D semantic segmentation
- **Authors:** Galadrielle Humblot-Renaux, Simon Buus Jensen, Andreas Møgelmose
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.03114
- **Pdf link:** https://arxiv.org/pdf/2302.03114
- **Abstract**
We propose a fully automatic annotation scheme which takes a raw 3D point cloud with a set of fitted CAD models as input, and outputs convincing point-wise labels which can be used as cheap training data for point cloud segmentation. Compared to manual annotations, we show that our automatic labels are accurate while drastically reducing the annotation time, and eliminating the need for manual intervention or dataset-specific parameters. Our labeling pipeline outputs semantic classes and soft point-wise object scores which can either be binarized into standard one-hot-encoded labels, thresholded into weak labels with ambiguous points left unlabeled, or used directly as soft labels during training. We evaluate the label quality and segmentation performance of PointNet++ on a dataset of real industrial point clouds and Scan2CAD, a public dataset of indoor scenes. Our results indicate that reducing supervision in areas which are more difficult to label automatically is beneficial, compared to the conventional approach of naively assigning a hard "best guess" label to every point.
### Ethical Considerations for Collecting Human-Centric Image Datasets
- **Authors:** Jerone T. A. Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Shruti Nagpal, Alice Xiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Databases (cs.DB); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.03629
- **Pdf link:** https://arxiv.org/pdf/2302.03629
- **Abstract**
Human-centric image datasets are critical to the development of computer vision technologies. However, recent investigations have foregrounded significant ethical issues related to privacy and bias, which have resulted in the complete retraction, or modification, of several prominent datasets. Recent works have tried to reverse this trend, for example, by proposing analytical frameworks for ethically evaluating datasets, the standardization of dataset documentation and curation practices, privacy preservation methodologies, as well as tools for surfacing and mitigating representational biases. Little attention, however, has been paid to the realities of operationalizing ethical data collection. To fill this gap, we present a set of key ethical considerations and practical recommendations for collecting more ethically-minded human-centric image data. Our research directly addresses issues of privacy and bias by contributing to the research community best practices for ethical data collection, covering purpose, privacy and consent, as well as diversity. We motivate each consideration by drawing on lessons from current practices, dataset withdrawals and audits, and analytical ethical frameworks. Our research is intended to augment recent scholarship, representing an important step toward more responsible data curation practices.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Wed, 8 Feb 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### From CAD models to soft point cloud labels: An automatic annotation pipeline for cheaply supervised 3D semantic segmentation
- **Authors:** Galadrielle Humblot-Renaux, Simon Buus Jensen, Andreas Møgelmose
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.03114
- **Pdf link:** https://arxiv.org/pdf/2302.03114
- **Abstract**
We propose a fully automatic annotation scheme which takes a raw 3D point cloud with a set of fitted CAD models as input, and outputs convincing point-wise labels which can be used as cheap training data for point cloud segmentation. Compared to manual annotations, we show that our automatic labels are accurate while drastically reducing the annotation time, and eliminating the need for manual intervention or dataset-specific parameters. Our labeling pipeline outputs semantic classes and soft point-wise object scores which can either be binarized into standard one-hot-encoded labels, thresholded into weak labels with ambiguous points left unlabeled, or used directly as soft labels during training. We evaluate the label quality and segmentation performance of PointNet++ on a dataset of real industrial point clouds and Scan2CAD, a public dataset of indoor scenes. Our results indicate that reducing supervision in areas which are more difficult to label automatically is beneficial, compared to the conventional approach of naively assigning a hard "best guess" label to every point.
### Ethical Considerations for Collecting Human-Centric Image Datasets
- **Authors:** Jerone T. A. Andrews, Dora Zhao, William Thong, Apostolos Modas, Orestis Papakyriakopoulos, Shruti Nagpal, Alice Xiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Databases (cs.DB); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.03629
- **Pdf link:** https://arxiv.org/pdf/2302.03629
- **Abstract**
Human-centric image datasets are critical to the development of computer vision technologies. However, recent investigations have foregrounded significant ethical issues related to privacy and bias, which have resulted in the complete retraction, or modification, of several prominent datasets. Recent works have tried to reverse this trend, for example, by proposing analytical frameworks for ethically evaluating datasets, the standardization of dataset documentation and curation practices, privacy preservation methodologies, as well as tools for surfacing and mitigating representational biases. Little attention, however, has been paid to the realities of operationalizing ethical data collection. To fill this gap, we present a set of key ethical considerations and practical recommendations for collecting more ethically-minded human-centric image data. Our research directly addresses issues of privacy and bias by contributing to the research community best practices for ethical data collection, covering purpose, privacy and consent, as well as diversity. We motivate each consideration by drawing on lessons from current practices, dataset withdrawals and audits, and analytical ethical frameworks. Our research is intended to augment recent scholarship, representing an important step toward more responsible data curation practices.
## Keyword: raw image
There is no result
|
process
|
new submissions for wed feb keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw from cad models to soft point cloud labels an automatic annotation pipeline for cheaply supervised semantic segmentation authors galadrielle humblot renaux simon buus jensen andreas møgelmose subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we propose a fully automatic annotation scheme which takes a raw point cloud with a set of fitted cad models as input and outputs convincing point wise labels which can be used as cheap training data for point cloud segmentation compared to manual annotations we show that our automatic labels are accurate while drastically reducing the annotation time and eliminating the need for manual intervention or dataset specific parameters our labeling pipeline outputs semantic classes and soft point wise object scores which can either be binarized into standard one hot encoded labels thresholded into weak labels with ambiguous points left unlabeled or used directly as soft labels during training we evaluate the label quality and segmentation performance of pointnet on a dataset of real industrial point clouds and a public dataset of indoor scenes our results indicate that reducing supervision in areas which are more difficult to label automatically is beneficial compared to the conventional approach of naively assigning a hard best guess label to every point ethical considerations for collecting human centric image datasets authors jerone t a andrews dora zhao william thong apostolos modas orestis papakyriakopoulos shruti nagpal alice xiang subjects computer vision and pattern recognition cs cv artificial intelligence cs ai databases cs db machine learning cs lg arxiv link pdf link abstract human centric image datasets are critical to the development of computer vision technologies however recent investigations have foregrounded significant ethical issues related to privacy and bias which have resulted in the complete retraction or modification of several prominent datasets recent works have tried to reverse this trend for example by proposing analytical frameworks for ethically evaluating datasets the standardization of dataset documentation and curation practices privacy preservation methodologies as well as tools for surfacing and mitigating representational biases little attention however has been paid to the realities of operationalizing ethical data collection to fill this gap we present a set of key ethical considerations and practical recommendations for collecting more ethically minded human centric image data our research directly addresses issues of privacy and bias by contributing to the research community best practices for ethical data collection covering purpose privacy and consent as well as diversity we motivate each consideration by drawing on lessons from current practices dataset withdrawals and audits and analytical ethical frameworks our research is intended to augment recent scholarship representing an important step toward more responsible data curation practices keyword raw image there is no result
| 1
|
117,300
| 15,086,503,692
|
IssuesEvent
|
2021-02-05 20:26:28
|
carbon-design-system/carbon-for-ibm-dotcom
|
https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom
|
closed
|
Web component: Card - Pictogram component variations Design PR review template
|
design package: web components
|
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
DDS Designer and Developer
> I need to:
test the `Card - Pictogram component variations`
> so that I can:
be sure the developer's code correctly works as designed
#### Additional Information
- **See the Epic for the Design and Functional specs information**
- [Creating a QA bug](https://ibm.ent.box.com/notes/603242247385)
- Dev Issue (#5087)
- [Storybook link](TBD)
#### Tasks
Ensure the `Card - Pictogram component variations` component has been tested and reviewed by design + dev.
#### Acceptance criteria
- [ ] Pattern is reviewed by designer and design defects are identified
- [ ] New issues are opened to log individual defects
- [ ] Screenshots of as-is and to-be design should be added to the issue when needed
- [ ] Developer associated with building the pattern is assigned to the new issues
- [ ] Appropriate labels are added to the issue eg. bug, dev
|
1.0
|
Web component: Card - Pictogram component variations Design PR review template - #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
DDS Designer and Developer
> I need to:
test the `Card - Pictogram component variations`
> so that I can:
be sure the developer's code correctly works as designed
#### Additional Information
- **See the Epic for the Design and Functional specs information**
- [Creating a QA bug](https://ibm.ent.box.com/notes/603242247385)
- Dev Issue (#5087)
- [Storybook link](TBD)
#### Tasks
Ensure the `Card - Pictogram component variations` component has been tested and reviewed by design + dev.
#### Acceptance criteria
- [ ] Pattern is reviewed by designer and design defects are identified
- [ ] New issues are opened to log individual defects
- [ ] Screenshots of as-is and to-be design should be added to the issue when needed
- [ ] Developer associated with building the pattern is assigned to the new issues
- [ ] Appropriate labels are added to the issue eg. bug, dev
|
non_process
|
web component card pictogram component variations design pr review template user story as a dds designer and developer i need to test the card pictogram component variations so that i can be sure the developer s code correctly works as designed additional information see the epic for the design and functional specs information dev issue tbd tasks ensure the card pictogram component variations component has been tested and reviewed by design dev acceptance criteria pattern is reviewed by designer and design defects are identified new issues are opened to log individual defects screenshots of as is and to be design should be added to the issue when needed developer associated with building the pattern is assigned to the new issues appropriate labels are added to the issue eg bug dev
| 0
|
16,255
| 20,815,918,600
|
IssuesEvent
|
2022-03-18 10:15:26
|
googleapis/synthtool
|
https://api.github.com/repos/googleapis/synthtool
|
opened
|
process(python): Explore the possibility of using a single linter
|
type: process lang: python
|
In the templated `nox_file.py` for python, there is a `lint` `nox` session which runs both `black` and `flake8` linters.
https://github.com/googleapis/synthtool/blob/02193e4ca893eeb62c9641f4a0f88ef1418f6e17/synthtool/gcp/templates/python_library/noxfile.py.j2#L52-L64
https://github.com/googleapis/python-aiplatform/issues/1087, there is a conflict between the 2 lint tools that we use where a formatting change made by `black` causes the `flake8` check to fail.
In the current configuration, `black` and `flake8` check for different things. `black` tests/applies formatting while `flake` checks for syntax.
To check this, create an empty file `test_lint.py` with a single import statement and no new line at the end of the file. As an example, I used `import google.api` in an empty file without a blank line at the end.
If you run `black --check test_lint.py` which tests the file without changing it.
```
black --check test_lint.py
would reformat test_lint.py
All done! 💥 💔 💥
1 file would be reformatted.
```
Then run `black test_lint.py`, which adds an empty line at the end of the file.
```
black test_lint.py
reformatted test_lint.py
All done! ✨ 🍰✨
1 file reformatted.
```
On the same file run `flake8 test_lint.py` which tests for syntax.
```
flake8 test_lint.py
test_lint.py:1:1: F401 'google.api' imported but unused
```
I've opened this issue to understand the impact of removing `black`, and confirm that `flake8` can fill the gap.
|
1.0
|
process(python): Explore the possibility of using a single linter - In the templated `nox_file.py` for python, there is a `lint` `nox` session which runs both `black` and `flake8` linters.
https://github.com/googleapis/synthtool/blob/02193e4ca893eeb62c9641f4a0f88ef1418f6e17/synthtool/gcp/templates/python_library/noxfile.py.j2#L52-L64
https://github.com/googleapis/python-aiplatform/issues/1087, there is a conflict between the 2 lint tools that we use where a formatting change made by `black` causes the `flake8` check to fail.
In the current configuration, `black` and `flake8` check for different things. `black` tests/applies formatting while `flake` checks for syntax.
To check this, create an empty file `test_lint.py` with a single import statement and no new line at the end of the file. As an example, I used `import google.api` in an empty file without a blank line at the end.
If you run `black --check test_lint.py` which tests the file without changing it.
```
black --check test_lint.py
would reformat test_lint.py
All done! 💥 💔 💥
1 file would be reformatted.
```
Then run `black test_lint.py`, which adds an empty line at the end of the file.
```
black test_lint.py
reformatted test_lint.py
All done! ✨ 🍰✨
1 file reformatted.
```
On the same file run `flake8 test_lint.py` which tests for syntax.
```
flake8 test_lint.py
test_lint.py:1:1: F401 'google.api' imported but unused
```
I've opened this issue to understand the impact of removing `black`, and confirm that `flake8` can fill the gap.
|
process
|
process python explore the possibility of using a single linter in the templated nox file py for python there is a lint nox session which runs both black and linters there is a conflict between the lint tools that we use where a formatting change made by black causes the check to fail in the current configuration black and check for different things black tests applies formatting while flake checks for syntax to check this create an empty file test lint py with a single import statement and no new line at the end of the file as an example i used import google api in an empty file without a blank line at the end if you run black check test lint py which tests the file without changing it black check test lint py would reformat test lint py all done 💥 💔 💥 file would be reformatted then run black test lint py which adds an empty line at the end of the file black test lint py reformatted test lint py all done ✨ 🍰✨ file reformatted on the same file run test lint py which tests for syntax test lint py test lint py google api imported but unused i ve opened this issue to understand the impact of removing black and confirm that can fill the gap
| 1
|
78,191
| 10,053,145,961
|
IssuesEvent
|
2019-07-21 14:26:46
|
apoezzhaev/CIHP_PGN
|
https://api.github.com/repos/apoezzhaev/CIHP_PGN
|
reopened
|
Add docker file with runnable configuration
|
deployment documentation
|
Add dockerfile with runnable configs and proper hyperlinks to datasets, etc.
|
1.0
|
Add docker file with runnable configuration - Add dockerfile with runnable configs and proper hyperlinks to datasets, etc.
|
non_process
|
add docker file with runnable configuration add dockerfile with runnable configs and proper hyperlinks to datasets etc
| 0
|
440,439
| 12,699,987,471
|
IssuesEvent
|
2020-06-22 15:38:09
|
microsoft/terminal
|
https://api.github.com/repos/microsoft/terminal
|
closed
|
Terminal window has white border
|
Area-User Interface Impact-Visual In-PR Issue-Bug Priority-2 Product-Terminal
|
This issue has been reported several times but incorrectly marked as duplicate or just closed with no explanation.
The terminal window should have **under no circumstances a white border**. Not just in dark mode but NEVER. It is completely inconsistent with all other Windows apps including those from Microsoft.
### Terminal border

### Any other Windows app





Issues that were incorrectly closed:
- https://github.com/microsoft/terminal/issues/4980
- https://github.com/microsoft/terminal/issues/4837
|
1.0
|
Terminal window has white border - This issue has been reported several times but incorrectly marked as duplicate or just closed with no explanation.
The terminal window should have **under no circumstances a white border**. Not just in dark mode but NEVER. It is completely inconsistent with all other Windows apps including those from Microsoft.
### Terminal border

### Any other Windows app





Issues that were incorrectly closed:
- https://github.com/microsoft/terminal/issues/4980
- https://github.com/microsoft/terminal/issues/4837
|
non_process
|
terminal window has white border this issue has been reported several times but incorrectly marked as duplicate or just closed with no explanation the terminal window should have under no circumstances a white border not just in dark mode but never it is completely inconsistent with all other windows apps including those from microsoft terminal border any other windows app issues that were incorrectly closed
| 0
|
219,391
| 24,481,086,569
|
IssuesEvent
|
2022-10-08 21:03:44
|
Vonage-Community/sample-core-laravel_php-sandbox
|
https://api.github.com/repos/Vonage-Community/sample-core-laravel_php-sandbox
|
closed
|
laravel/framework-v9.0.2: 2 vulnerabilities (highest severity is: 9.8) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>laravel/framework-v9.0.2</b></p></summary>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9">https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Vonage-Community/sample-core-laravel_php-sandbox/commit/28a6c7cfa5160073d4146b827a79a6fcd2be3429">28a6c7cfa5160073d4146b827a79a6fcd2be3429</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-30778](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30778) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | laravel/framework-v9.0.2 | Direct | N/A | ❌ |
| [CVE-2021-43503](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43503) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | laravel/framework-v9.0.2 | Direct | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-30778</summary>
### Vulnerable Library - <b>laravel/framework-v9.0.2</b></p>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9">https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9</a></p>
<p>
Dependency Hierarchy:
- :x: **laravel/framework-v9.0.2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage-Community/sample-core-laravel_php-sandbox/commit/28a6c7cfa5160073d4146b827a79a6fcd2be3429">28a6c7cfa5160073d4146b827a79a6fcd2be3429</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was withdrawn by its CNA. Further investigation showed that it was not a security issue. Notes: none.
<p>Publish Date: 2022-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30778>CVE-2022-30778</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-43503</summary>
### Vulnerable Library - <b>laravel/framework-v9.0.2</b></p>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9">https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9</a></p>
<p>
Dependency Hierarchy:
- :x: **laravel/framework-v9.0.2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage-Community/sample-core-laravel_php-sandbox/commit/28a6c7cfa5160073d4146b827a79a6fcd2be3429">28a6c7cfa5160073d4146b827a79a6fcd2be3429</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was withdrawn by its CNA. Further investigation showed that it was not a security issue. Notes: none.
<p>Publish Date: 2022-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43503>CVE-2021-43503</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
|
True
|
laravel/framework-v9.0.2: 2 vulnerabilities (highest severity is: 9.8) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>laravel/framework-v9.0.2</b></p></summary>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9">https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Vonage-Community/sample-core-laravel_php-sandbox/commit/28a6c7cfa5160073d4146b827a79a6fcd2be3429">28a6c7cfa5160073d4146b827a79a6fcd2be3429</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-30778](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30778) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | laravel/framework-v9.0.2 | Direct | N/A | ❌ |
| [CVE-2021-43503](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43503) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | laravel/framework-v9.0.2 | Direct | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-30778</summary>
### Vulnerable Library - <b>laravel/framework-v9.0.2</b></p>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9">https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9</a></p>
<p>
Dependency Hierarchy:
- :x: **laravel/framework-v9.0.2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage-Community/sample-core-laravel_php-sandbox/commit/28a6c7cfa5160073d4146b827a79a6fcd2be3429">28a6c7cfa5160073d4146b827a79a6fcd2be3429</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was withdrawn by its CNA. Further investigation showed that it was not a security issue. Notes: none.
<p>Publish Date: 2022-05-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-30778>CVE-2022-30778</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-43503</summary>
### Vulnerable Library - <b>laravel/framework-v9.0.2</b></p>
<p>The Laravel Framework.</p>
<p>Library home page: <a href="https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9">https://api.github.com/repos/laravel/framework/zipball/4c7cd8c4e95d161c0c6ada6efa0a5af4d0d487c9</a></p>
<p>
Dependency Hierarchy:
- :x: **laravel/framework-v9.0.2** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage-Community/sample-core-laravel_php-sandbox/commit/28a6c7cfa5160073d4146b827a79a6fcd2be3429">28a6c7cfa5160073d4146b827a79a6fcd2be3429</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was withdrawn by its CNA. Further investigation showed that it was not a security issue. Notes: none.
<p>Publish Date: 2022-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43503>CVE-2021-43503</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
|
non_process
|
laravel framework vulnerabilities highest severity is autoclosed vulnerable library laravel framework the laravel framework library home page a href found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high laravel framework direct n a high laravel framework direct n a details cve vulnerable library laravel framework the laravel framework library home page a href dependency hierarchy x laravel framework vulnerable library found in head commit a href found in base branch main vulnerability details reject do not use this candidate number consultids none reason this candidate was withdrawn by its cna further investigation showed that it was not a security issue notes none publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href cve vulnerable library laravel framework the laravel framework library home page a href dependency hierarchy x laravel framework vulnerable library found in head commit a href found in base branch main vulnerability details reject do not use this candidate number consultids none reason this candidate was withdrawn by its cna further investigation showed that it was not a security issue notes none publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href
| 0
|
101,269
| 4,111,889,168
|
IssuesEvent
|
2016-06-07 08:24:45
|
japanesemediamanager/jmmclient
|
https://api.github.com/repos/japanesemediamanager/jmmclient
|
closed
|
RSS Feed - Escaped Characters Issue
|
Bug - Low Priority
|
Escaped characters need to be converted so they display properly.

|
1.0
|
RSS Feed - Escaped Characters Issue - Escaped characters need to be converted so they display properly.

|
non_process
|
rss feed escaped characters issue escaped characters need to be converted so they display properly
| 0
|
4,763
| 7,625,730,257
|
IssuesEvent
|
2018-05-03 22:43:52
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
ONTOPARON_SOCIAL: loading data into the triplestore fails
|
in progress ontology processing problem
|
The [ONTOPARON_SOCIAL](http://bioportal.bioontology.org/ontologies/ONTOPARON_SOCIAL) ontology passes the OWL API parsing phase, but the Ruby code that loads the data into the triplestore fails. Full stack trace from the log:
```
E, [2017-12-01T11:10:15.997478 #32467] ERROR -- : ["RestClient::BadRequest: 400 Bad Request
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:809:in `process_result'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
/usr/local/rbenv/versions/2.3.3/lib/ruby/2.3.0/net/http.rb:853:in `start'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:102:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:128:in `put_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:1424:in `delete_and_append'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:449:in `generate_rdf'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:903:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
bin/ncbo_ontology_process:98:in `block in <main>'
bin/ncbo_ontology_process:81:in `each'
bin/ncbo_ontology_process:81:in `<main>'"]
E, [2017-12-01T11:10:16.101399 #32467] ERROR -- : Failed, exception: RestClient::BadRequest: 400 Bad Request
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:809:in `process_result'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
/usr/local/rbenv/versions/2.3.3/lib/ruby/2.3.0/net/http.rb:853:in `start'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:102:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:128:in `put_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:1424:in `delete_and_append'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:449:in `generate_rdf'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:903:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
bin/ncbo_ontology_process:98:in `block in <main>'
bin/ncbo_ontology_process:81:in `each'
bin/ncbo_ontology_process:81:in `<main>'
```
Historically, this error has turned up when the code tries to load malformed ntriples data into the triplestore. I manually ran some rapper commands on ncboprod-parser1, but the output didn't indicate any issues with the ntriples data:
```
[ncbo-deployer@ncbo-prd-app-25 1]$ pwd
/srv/ncbo/repository/ONTOPARON_SOCIAL/1
[ncbo-deployer@ncbo-prd-app-25 1]$ rapper -i rdfxml -o ntriples owlapi.xrdf > data.triples
rapper: Parsing URI file:///srv/ncbo/share/env/production/repository/ONTOPARON_SOCIAL/1/owlapi.xrdf with parser rdfxml
rapper: Serializing with serializer ntriples
rapper: Parsing returned 8691 triples
[ncbo-deployer@ncbo-prd-app-25 1]$ rapper -i ntriples -c data.triples
rapper: Parsing URI file:///srv/ncbo/share/env/production/repository/ONTOPARON_SOCIAL/1/data.triples with parser ntriples
rapper: Parsing returned 8691 triples
```
Protege loads this ontology without errors.
|
1.0
|
ONTOPARON_SOCIAL: loading data into the triplestore fails - The [ONTOPARON_SOCIAL](http://bioportal.bioontology.org/ontologies/ONTOPARON_SOCIAL) ontology passes the OWL API parsing phase, but the Ruby code that loads the data into the triplestore fails. Full stack trace from the log:
```
E, [2017-12-01T11:10:15.997478 #32467] ERROR -- : ["RestClient::BadRequest: 400 Bad Request
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:809:in `process_result'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
/usr/local/rbenv/versions/2.3.3/lib/ruby/2.3.0/net/http.rb:853:in `start'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:102:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:128:in `put_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:1424:in `delete_and_append'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:449:in `generate_rdf'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:903:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
bin/ncbo_ontology_process:98:in `block in <main>'
bin/ncbo_ontology_process:81:in `each'
bin/ncbo_ontology_process:81:in `<main>'"]
E, [2017-12-01T11:10:16.101399 #32467] ERROR -- : Failed, exception: RestClient::BadRequest: 400 Bad Request
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:809:in `process_result'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
/usr/local/rbenv/versions/2.3.3/lib/ruby/2.3.0/net/http.rb:853:in `start'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:102:in `append_triples_no_bnodes'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/goo-59f145cdae84/lib/goo/sparql/client.rb:128:in `put_triples'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:1424:in `delete_and_append'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:449:in `generate_rdf'
/srv/ncbo/ncbo_cron/vendor/bundle/ruby/2.3.0/bundler/gems/ontologies_linked_data-33015eba8641/lib/ontologies_linked_data/models/ontology_submission.rb:903:in `process_submission'
/srv/ncbo/ncbo_cron/lib/ncbo_cron/ontology_submission_parser.rb:177:in `process_submission'
bin/ncbo_ontology_process:98:in `block in <main>'
bin/ncbo_ontology_process:81:in `each'
bin/ncbo_ontology_process:81:in `<main>'
```
Historically, this error has turned up when the code tries to load malformed ntriples data into the triplestore. I manually ran some rapper commands on ncboprod-parser1, but the output didn't indicate any issues with the ntriples data:
```
[ncbo-deployer@ncbo-prd-app-25 1]$ pwd
/srv/ncbo/repository/ONTOPARON_SOCIAL/1
[ncbo-deployer@ncbo-prd-app-25 1]$ rapper -i rdfxml -o ntriples owlapi.xrdf > data.triples
rapper: Parsing URI file:///srv/ncbo/share/env/production/repository/ONTOPARON_SOCIAL/1/owlapi.xrdf with parser rdfxml
rapper: Serializing with serializer ntriples
rapper: Parsing returned 8691 triples
[ncbo-deployer@ncbo-prd-app-25 1]$ rapper -i ntriples -c data.triples
rapper: Parsing URI file:///srv/ncbo/share/env/production/repository/ONTOPARON_SOCIAL/1/data.triples with parser ntriples
rapper: Parsing returned 8691 triples
```
Protege loads this ontology without errors.
|
process
|
ontoparon social loading data into the triplestore fails the ontology passes the owl api parsing phase but the ruby code that loads the data into the triplestore fails full stack trace from the log e error restclient badrequest bad request srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient abstract response rb in exception with response srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient abstract response rb in return srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in process result srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in block in transmit usr local rbenv versions lib ruby net http rb in start srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in transmit srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in execute srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in execute srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append triples no bnodes srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in put triples srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in delete and append srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in generate rdf srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in process submission srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in process submission bin ncbo ontology process in block in bin ncbo ontology process in each bin ncbo ontology process in e error failed exception restclient badrequest bad request srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient abstract response rb in exception with response srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient abstract response rb in return srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in process result srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in block in transmit usr local rbenv versions lib ruby net http rb in start srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in transmit srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in execute srv ncbo ncbo cron vendor bundle ruby gems rest client lib restclient request rb in execute srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in append triples no bnodes srv ncbo ncbo cron vendor bundle ruby bundler gems goo lib goo sparql client rb in put triples srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in delete and append srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in generate rdf srv ncbo ncbo cron vendor bundle ruby bundler gems ontologies linked data lib ontologies linked data models ontology submission rb in process submission srv ncbo ncbo cron lib ncbo cron ontology submission parser rb in process submission bin ncbo ontology process in block in bin ncbo ontology process in each bin ncbo ontology process in historically this error has turned up when the code tries to load malformed ntriples data into the triplestore i manually ran some rapper commands on ncboprod but the output didn t indicate any issues with the ntriples data pwd srv ncbo repository ontoparon social rapper i rdfxml o ntriples owlapi xrdf data triples rapper parsing uri file srv ncbo share env production repository ontoparon social owlapi xrdf with parser rdfxml rapper serializing with serializer ntriples rapper parsing returned triples rapper i ntriples c data triples rapper parsing uri file srv ncbo share env production repository ontoparon social data triples with parser ntriples rapper parsing returned triples protege loads this ontology without errors
| 1
|
7,175
| 10,317,846,283
|
IssuesEvent
|
2019-08-30 13:40:51
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Dataloader crashes if num_worker>0
|
module: dataloader module: multiprocessing topic: crash triaged
|
If I define my dataloader as follows:
```
X_train = torch.tensor(X_train).to(device)
y_train = torch.tensor(y_train).to(device)
train = torch.utils.data.TensorDataset(X_train,y_train)
train_loader = torch.utils.data.DataLoader(
train, batch_size = 16, shuffle = True,pin_memory= True, num_workers= 4)
```
My training loop runs well.
If however I Implement my dataset class:
```
class ExampleDataset(torch.utils.data.Dataset):
def __init__(self, root):
self.root = root
# some code
def __getitem__(self, idx):
# some code
return x , label
def __len__(self):
return len(self.labels)
```
```
dataset = ExampleDataset(root)
dataset_train = torch.utils.data.Subset(dataset, indices[:-20])
data_loader = DataLoader(
dataset_train, batch_size=16, shuffle=True, pin_memory= True, num_workers=4)
```
the code crashes! I tried different choices:
- pin_memory = False
- non_blocking=True/False during fetching the dataset
- CUDA 10.0 with Pytorch 1.1.# or 1.2.0 and Python 3.6.9 or 3.7
here's the error that pops out:
```
BrokenPipeError Traceback (most recent call last)
<ipython-input-10-344640e27da1> in <module>
----> 1 final_model, hist = train_model(model, dataloaders_dict, criterion, optimizer)
<ipython-input-9-fdf91f815fa7> in train_model(model, dataloaders, criterion, optimizer, num_epochs)
23 # Iterate over data.
24 end = time.time()
---> 25 for i, (inputs, labels) in enumerate(dataloaders[phase]):
26 inputs = inputs.to(device, non_blocking=True)
27 labels = labels.to(device , non_blocking=True)
~\Anaconda3\envs\py_gpu\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
276 return _SingleProcessDataLoaderIter(self)
277 else:
--> 278 return _MultiProcessingDataLoaderIter(self)
279
280 @property
~\Anaconda3\envs\py_gpu\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
680 # before it starts, and __del__ tries to join but will get:
681 # AssertionError: can only join a started process.
--> 682 w.start()
683 self.index_queues.append(index_queue)
684 self.workers.append(w)
~\Anaconda3\envs\py_gpu\lib\multiprocessing\process.py in start(self)
110 'daemonic processes are not allowed to have children'
111 _cleanup()
--> 112 self._popen = self._Popen(self)
113 self._sentinel = self._popen.sentinel
114 # Avoid a refcycle if the target function holds an indirect
~\Anaconda3\envs\py_gpu\lib\multiprocessing\context.py in _Popen(process_obj)
221 @staticmethod
222 def _Popen(process_obj):
--> 223 return _default_context.get_context().Process._Popen(process_obj)
224
225 class DefaultContext(BaseContext):
~\Anaconda3\envs\py_gpu\lib\multiprocessing\context.py in _Popen(process_obj)
320 def _Popen(process_obj):
321 from .popen_spawn_win32 import Popen
--> 322 return Popen(process_obj)
323
324 class SpawnContext(BaseContext):
~\Anaconda3\envs\py_gpu\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
87 try:
88 reduction.dump(prep_data, to_child)
---> 89 reduction.dump(process_obj, to_child)
90 finally:
91 set_spawning_popen(None)
~\Anaconda3\envs\py_gpu\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)
61
62 #
BrokenPipeError: [Errno 32] Broken pipe
```
Thank you
cc @SsnL
|
1.0
|
Dataloader crashes if num_worker>0 - If I define my dataloader as follows:
```
X_train = torch.tensor(X_train).to(device)
y_train = torch.tensor(y_train).to(device)
train = torch.utils.data.TensorDataset(X_train,y_train)
train_loader = torch.utils.data.DataLoader(
train, batch_size = 16, shuffle = True,pin_memory= True, num_workers= 4)
```
My training loop runs well.
If however I Implement my dataset class:
```
class ExampleDataset(torch.utils.data.Dataset):
def __init__(self, root):
self.root = root
# some code
def __getitem__(self, idx):
# some code
return x , label
def __len__(self):
return len(self.labels)
```
```
dataset = ExampleDataset(root)
dataset_train = torch.utils.data.Subset(dataset, indices[:-20])
data_loader = DataLoader(
dataset_train, batch_size=16, shuffle=True, pin_memory= True, num_workers=4)
```
the code crashes! I tried different choices:
- pin_memory = False
- non_blocking=True/False during fetching the dataset
- CUDA 10.0 with Pytorch 1.1.# or 1.2.0 and Python 3.6.9 or 3.7
here's the error that pops out:
```
BrokenPipeError Traceback (most recent call last)
<ipython-input-10-344640e27da1> in <module>
----> 1 final_model, hist = train_model(model, dataloaders_dict, criterion, optimizer)
<ipython-input-9-fdf91f815fa7> in train_model(model, dataloaders, criterion, optimizer, num_epochs)
23 # Iterate over data.
24 end = time.time()
---> 25 for i, (inputs, labels) in enumerate(dataloaders[phase]):
26 inputs = inputs.to(device, non_blocking=True)
27 labels = labels.to(device , non_blocking=True)
~\Anaconda3\envs\py_gpu\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
276 return _SingleProcessDataLoaderIter(self)
277 else:
--> 278 return _MultiProcessingDataLoaderIter(self)
279
280 @property
~\Anaconda3\envs\py_gpu\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
680 # before it starts, and __del__ tries to join but will get:
681 # AssertionError: can only join a started process.
--> 682 w.start()
683 self.index_queues.append(index_queue)
684 self.workers.append(w)
~\Anaconda3\envs\py_gpu\lib\multiprocessing\process.py in start(self)
110 'daemonic processes are not allowed to have children'
111 _cleanup()
--> 112 self._popen = self._Popen(self)
113 self._sentinel = self._popen.sentinel
114 # Avoid a refcycle if the target function holds an indirect
~\Anaconda3\envs\py_gpu\lib\multiprocessing\context.py in _Popen(process_obj)
221 @staticmethod
222 def _Popen(process_obj):
--> 223 return _default_context.get_context().Process._Popen(process_obj)
224
225 class DefaultContext(BaseContext):
~\Anaconda3\envs\py_gpu\lib\multiprocessing\context.py in _Popen(process_obj)
320 def _Popen(process_obj):
321 from .popen_spawn_win32 import Popen
--> 322 return Popen(process_obj)
323
324 class SpawnContext(BaseContext):
~\Anaconda3\envs\py_gpu\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
87 try:
88 reduction.dump(prep_data, to_child)
---> 89 reduction.dump(process_obj, to_child)
90 finally:
91 set_spawning_popen(None)
~\Anaconda3\envs\py_gpu\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60 ForkingPickler(file, protocol).dump(obj)
61
62 #
BrokenPipeError: [Errno 32] Broken pipe
```
Thank you
cc @SsnL
|
process
|
dataloader crashes if num worker if i define my dataloader as follows x train torch tensor x train to device y train torch tensor y train to device train torch utils data tensordataset x train y train train loader torch utils data dataloader train batch size shuffle true pin memory true num workers my training loop runs well if however i implement my dataset class class exampledataset torch utils data dataset def init self root self root root some code def getitem self idx some code return x label def len self return len self labels dataset exampledataset root dataset train torch utils data subset dataset indices data loader dataloader dataset train batch size shuffle true pin memory true num workers the code crashes i tried different choices pin memory false non blocking true false during fetching the dataset cuda with pytorch or and python or here s the error that pops out brokenpipeerror traceback most recent call last in final model hist train model model dataloaders dict criterion optimizer in train model model dataloaders criterion optimizer num epochs iterate over data end time time for i inputs labels in enumerate dataloaders inputs inputs to device non blocking true labels labels to device non blocking true envs py gpu lib site packages torch utils data dataloader py in iter self return singleprocessdataloaderiter self else return multiprocessingdataloaderiter self property envs py gpu lib site packages torch utils data dataloader py in init self loader before it starts and del tries to join but will get assertionerror can only join a started process w start self index queues append index queue self workers append w envs py gpu lib multiprocessing process py in start self daemonic processes are not allowed to have children cleanup self popen self popen self self sentinel self popen sentinel avoid a refcycle if the target function holds an indirect envs py gpu lib multiprocessing context py in popen process obj staticmethod def popen process obj return default context get context process popen process obj class defaultcontext basecontext envs py gpu lib multiprocessing context py in popen process obj def popen process obj from popen spawn import popen return popen process obj class spawncontext basecontext envs py gpu lib multiprocessing popen spawn py in init self process obj try reduction dump prep data to child reduction dump process obj to child finally set spawning popen none envs py gpu lib multiprocessing reduction py in dump obj file protocol def dump obj file protocol none replacement for pickle dump using forkingpickler forkingpickler file protocol dump obj brokenpipeerror broken pipe thank you cc ssnl
| 1
|
126,117
| 17,868,847,894
|
IssuesEvent
|
2021-09-06 12:58:51
|
fasttrack-solutions/jQuery-QueryBuilder
|
https://api.github.com/repos/fasttrack-solutions/jQuery-QueryBuilder
|
opened
|
CVE-2020-28500 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.6.1.tgz</b>, <b>lodash-4.15.0.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary>
<p>
<details><summary><b>lodash-4.6.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/grunt-jscs/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- :x: **lodash-4.6.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.15.0.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.15.0.tgz">https://registry.npmjs.org/lodash/-/lodash-4.15.0.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/grunt-injector/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-injector-1.1.0.tgz (Root Library)
- :x: **lodash-4.15.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-1.1.0.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/fasttrack-solutions/jQuery-QueryBuilder/commit/9291825bff1e01bb64535f99d7badac198ddbca0">9291825bff1e01bb64535f99d7badac198ddbca0</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28500 (Medium) detected in multiple libraries - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.6.1.tgz</b>, <b>lodash-4.15.0.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary>
<p>
<details><summary><b>lodash-4.6.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.6.1.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/grunt-jscs/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-3.0.1.tgz (Root Library)
- :x: **lodash-4.6.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.15.0.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.15.0.tgz">https://registry.npmjs.org/lodash/-/lodash-4.15.0.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/grunt-injector/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-injector-1.1.0.tgz (Root Library)
- :x: **lodash-4.15.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: jQuery-QueryBuilder/package.json</p>
<p>Path to vulnerable library: jQuery-QueryBuilder/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-1.1.0.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/fasttrack-solutions/jQuery-QueryBuilder/commit/9291825bff1e01bb64535f99d7badac198ddbca0">9291825bff1e01bb64535f99d7badac198ddbca0</a></p>
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash-4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash modular utilities library home page a href path to dependency file jquery querybuilder package json path to vulnerable library jquery querybuilder node modules grunt jscs node modules lodash package json dependency hierarchy grunt jscs tgz root library x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file jquery querybuilder package json path to vulnerable library jquery querybuilder node modules grunt injector node modules lodash package json dependency hierarchy grunt injector tgz root library x lodash tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file jquery querybuilder package json path to vulnerable library jquery querybuilder node modules lodash package json dependency hierarchy grunt contrib watch tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch dev vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
17,265
| 23,046,129,758
|
IssuesEvent
|
2022-07-23 23:27:37
|
tokio-rs/tokio
|
https://api.github.com/repos/tokio-rs/tokio
|
closed
|
process: fix tokio process abort issues on windows
|
C-bug M-process
|
Unfortunately, as a result of https://github.com/rust-lang/rust/pull/95469, we can no longer use `tokio::process` to communicate with child processes which are themselves rust binaries using std or tokio for stdio. Doing so may result in the child process being aborted.
This was the cause of https://github.com/tokio-rs/tokio/issues/4801.
Unfortunately, this doesn't have an easy solution. What we probably need to do is either have the normal stdio stuff use mio, or have the child process stdio use the blocking pool. The latter is very much the better option since we don't want to risk causing issues by setting stdin/stdout to be nonblocking on windows.
|
1.0
|
process: fix tokio process abort issues on windows - Unfortunately, as a result of https://github.com/rust-lang/rust/pull/95469, we can no longer use `tokio::process` to communicate with child processes which are themselves rust binaries using std or tokio for stdio. Doing so may result in the child process being aborted.
This was the cause of https://github.com/tokio-rs/tokio/issues/4801.
Unfortunately, this doesn't have an easy solution. What we probably need to do is either have the normal stdio stuff use mio, or have the child process stdio use the blocking pool. The latter is very much the better option since we don't want to risk causing issues by setting stdin/stdout to be nonblocking on windows.
|
process
|
process fix tokio process abort issues on windows unfortunately as a result of we can no longer use tokio process to communicate with child processes which are themselves rust binaries using std or tokio for stdio doing so may result in the child process being aborted this was the cause of unfortunately this doesn t have an easy solution what we probably need to do is either have the normal stdio stuff use mio or have the child process stdio use the blocking pool the latter is very much the better option since we don t want to risk causing issues by setting stdin stdout to be nonblocking on windows
| 1
|
163,305
| 20,359,949,878
|
IssuesEvent
|
2022-02-20 14:41:51
|
shaimael/cset
|
https://api.github.com/repos/shaimael/cset
|
opened
|
CVE-2022-0639 (Medium) detected in url-parse-1.5.1.tgz
|
security vulnerability
|
## CVE-2022-0639 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.1.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz</a></p>
<p>Path to dependency file: /CSETWebNg/package.json</p>
<p>Path to vulnerable library: /CSETWebNg/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.2.tgz (Root Library)
- sockjs-client-1.5.0.tgz
- :x: **url-parse-1.5.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: url-parse - 1.5.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"url-parse","packageVersion":"1.5.1","packageFilePaths":["/CSETWebNg/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.11.2;sockjs-client:1.5.0;url-parse:1.5.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"url-parse - 1.5.7","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0639","vulnerabilityDetails":"Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2022-0639 (Medium) detected in url-parse-1.5.1.tgz - ## CVE-2022-0639 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.1.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz</a></p>
<p>Path to dependency file: /CSETWebNg/package.json</p>
<p>Path to vulnerable library: /CSETWebNg/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.2.tgz (Root Library)
- sockjs-client-1.5.0.tgz
- :x: **url-parse-1.5.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: url-parse - 1.5.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"url-parse","packageVersion":"1.5.1","packageFilePaths":["/CSETWebNg/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.11.2;sockjs-client:1.5.0;url-parse:1.5.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"url-parse - 1.5.7","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0639","vulnerabilityDetails":"Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file csetwebng package json path to vulnerable library csetwebng node modules url parse package json dependency hierarchy webpack dev server tgz root library sockjs client tgz x url parse tgz vulnerable library found in base branch master vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree webpack dev server sockjs client url parse isminimumfixversionavailable true minimumfixversion url parse isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails authorization bypass through user controlled key in npm url parse prior to vulnerabilityurl
| 0
|
11,359
| 14,175,103,589
|
IssuesEvent
|
2020-11-12 21:00:51
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Documentation is not up-to-date
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
The scheme in this page does not match the scheme listed in all up documentation (https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#deployment-job)
Mainly, I see the **workspace** property is missing but others might be wrong as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Documentation is not up-to-date -
The scheme in this page does not match the scheme listed in all up documentation (https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#deployment-job)
Mainly, I see the **workspace** property is missing but others might be wrong as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
documentation is not up to date the scheme in this page does not match the scheme listed in all up documentation mainly i see the workspace property is missing but others might be wrong as well document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
49,968
| 12,439,282,568
|
IssuesEvent
|
2020-05-26 09:50:33
|
docascod/DocsAsCode
|
https://api.github.com/repos/docascod/DocsAsCode
|
opened
|
default theme slides : add default images
|
enhancement fct_build
|
Add default (blank) images into slides default theme :
* title-page-background
* title-page-logo
* header-logo
* footer-logo
|
1.0
|
default theme slides : add default images - Add default (blank) images into slides default theme :
* title-page-background
* title-page-logo
* header-logo
* footer-logo
|
non_process
|
default theme slides add default images add default blank images into slides default theme title page background title page logo header logo footer logo
| 0
|
337,366
| 24,536,983,538
|
IssuesEvent
|
2022-10-11 21:50:22
|
awslabs/aws-lambda-powertools-python
|
https://api.github.com/repos/awslabs/aws-lambda-powertools-python
|
closed
|
Docs: typo for `core/logger`.
|
documentation triage
|
### What were you searching in the docs?
Hello, everyone in the community.
I found a minor typo in `docs/core/logger.md`. :)
### Is this related to an existing documentation section?
https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/docs/core/logger.md
### How can we improve?
```markdown
The typo is `atttributes`, so we can change it to `attributes` normally.
```
### Got a suggestion in mind?
I would like to propose this as a separate PR.
### Acknowledgment
- [X] I understand the final update might be different from my proposed suggestion, or refused.
|
1.0
|
Docs: typo for `core/logger`. - ### What were you searching in the docs?
Hello, everyone in the community.
I found a minor typo in `docs/core/logger.md`. :)
### Is this related to an existing documentation section?
https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/docs/core/logger.md
### How can we improve?
```markdown
The typo is `atttributes`, so we can change it to `attributes` normally.
```
### Got a suggestion in mind?
I would like to propose this as a separate PR.
### Acknowledgment
- [X] I understand the final update might be different from my proposed suggestion, or refused.
|
non_process
|
docs typo for core logger what were you searching in the docs hello everyone in the community i found a minor typo in docs core logger md is this related to an existing documentation section how can we improve markdown the typo is atttributes so we can change it to attributes normally got a suggestion in mind i would like to propose this as a separate pr acknowledgment i understand the final update might be different from my proposed suggestion or refused
| 0
|
21,254
| 11,615,104,561
|
IssuesEvent
|
2020-02-26 13:43:14
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
opened
|
[Metricbeat] Reintroduce the size flag to docker.container
|
Metricbeat Team:Services
|
As the "size" flag change (https://github.com/elastic/beats/issues/14979) has been reverted in https://github.com/elastic/beats/pull/16600 due to a timeout issue for some docker versions (around 1.13), it's time to reintroduce it again, but in a different way. Some ideas:
1. Consider creating a new metricset which enables call with "size" flag enabled.
2. Make "size" flag optional.
|
1.0
|
[Metricbeat] Reintroduce the size flag to docker.container - As the "size" flag change (https://github.com/elastic/beats/issues/14979) has been reverted in https://github.com/elastic/beats/pull/16600 due to a timeout issue for some docker versions (around 1.13), it's time to reintroduce it again, but in a different way. Some ideas:
1. Consider creating a new metricset which enables call with "size" flag enabled.
2. Make "size" flag optional.
|
non_process
|
reintroduce the size flag to docker container as the size flag change has been reverted in due to a timeout issue for some docker versions around it s time to reintroduce it again but in a different way some ideas consider creating a new metricset which enables call with size flag enabled make size flag optional
| 0
|
17,752
| 23,666,121,178
|
IssuesEvent
|
2022-08-26 21:09:27
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Process.StandardOutput.Peek() hangs when StandardOutput contains no data.
|
area-System.Diagnostics.Process untriaged
|
### Description
When a process is launched using `System.Diagnostics.Process.Start` which has `RedirectStandardOutput` set to `true` and `Process.StandardOutput.Peek()` is called before any output is returned (or because there is no output), then the method will hang instead of returning `-1`.
### Reproduction Steps
Launch and redirect standard output using an executable which does not immediately return standard output as shown below:
```csharp
var process = Process.Start(new ProcessStartInfo("path\to\executable.exe", arguments)
{
CreateNoWindow = true,
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
StandardOutputEncoding = Encoding.UTF8,
StandardErrorEncoding = Encoding.UTF8
});
var standardOutput = new StringBuilder();
while (process.StandardOutput.Peek() > -1)
standardOutput.Append((char)process.StandardOutput.Read());
var output = standardOutput.ToString();
```
### Expected behavior
`Process.StandardOutput.Peek()` should return `-1`.
### Actual behavior
`Process.StandardOutput.Peek()` hangs.
### Regression?
Not sure, the issue has been floating around the internet since at least 2014.
### Known Workarounds
People have developed work arounds by using a combination of `Process.StandardOutput.ReadAsync` and `Task.Wait` as seen here: https://social.msdn.microsoft.com/Forums/vstudio/en-US/057f1444-1dea-4c2f-bb48-612478de0f5c/processstandardoutputread-amp-peek-hang-on-empty-buffer?forum=csharpgeneral
### Configuration
Still present in .NET 6.0 on Windows 11 x64.
### Other information
A proposed solution was described on stackoverflow: https://stackoverflow.com/questions/4557591/alternative-to-streamreader-peek-and-thread-interrupt/6689406#6689406. I have included it here verbatim in order to preserve this information in case the post disappears in the future.
> I can tell this whole mess stems from the fact that Peek blocks. You're really trying to fix something that is fundamentally broken in the framework and that is never easy (i.e. not a dirty hack). Personally, I would fix the root of the problem, which is the blocking Peek. Mono would've followed Microsoft's implementation and thus ends up with the same problem.
>
> While I know exactly how to fix the problem should I be allowed to change the framework source code, the workaround is lengthy and time consuming.
>
> But here goes.
>
> Essentially, what Microsoft needs to do is change Process.StartWithCreateProcess such that standardOutput and standardError are both assigned a specialised type of StreamReader (e.g. PipeStreamReader).
>
> In this PipeStreamReader, they need to override both ReadBuffer overloads (i.e. need to change both overloads to virtual in StreamReader first) such that prior to a read, PeekNamedPipe is called to do the actual peek. As it is at the moment, FileStream.Read() (called by Peek()) will block on pipe reads when no data is available for read. While a FileStream.Read() with 0 bytes works well on files, it doesn't work all that well on pipes. In fact, the .NET team missed an important part of the pipe documentation - [PeekNamedPipe](http://msdn.microsoft.com/en-us/library/aa365779%28v=vs.85%29.aspx) WinAPI.
>
>> The PeekNamedPipe function is similar to the ReadFile function with the following exceptions:
>>
>> ...
>>
>> The function always returns immediately in a single-threaded application, **even if there is no data in the pipe**. The wait mode of a named pipe handle (blocking or nonblocking) has no effect on the function.
>
> The best thing at this moment without this issue solved in the framework would be to roll out your own Process class (a thin wrapper around WinAPI would suffice).
A possible alternative could be to include an overload to `Process.StandardOutput.Peek()` that would accept a timeout value in milliseconds (ie: `Process.StandardOutput.Peek(250)`).
|
1.0
|
Process.StandardOutput.Peek() hangs when StandardOutput contains no data. - ### Description
When a process is launched using `System.Diagnostics.Process.Start` which has `RedirectStandardOutput` set to `true` and `Process.StandardOutput.Peek()` is called before any output is returned (or because there is no output), then the method will hang instead of returning `-1`.
### Reproduction Steps
Launch and redirect standard output using an executable which does not immediately return standard output as shown below:
```csharp
var process = Process.Start(new ProcessStartInfo("path\to\executable.exe", arguments)
{
CreateNoWindow = true,
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
StandardOutputEncoding = Encoding.UTF8,
StandardErrorEncoding = Encoding.UTF8
});
var standardOutput = new StringBuilder();
while (process.StandardOutput.Peek() > -1)
standardOutput.Append((char)process.StandardOutput.Read());
var output = standardOutput.ToString();
```
### Expected behavior
`Process.StandardOutput.Peek()` should return `-1`.
### Actual behavior
`Process.StandardOutput.Peek()` hangs.
### Regression?
Not sure, the issue has been floating around the internet since at least 2014.
### Known Workarounds
People have developed work arounds by using a combination of `Process.StandardOutput.ReadAsync` and `Task.Wait` as seen here: https://social.msdn.microsoft.com/Forums/vstudio/en-US/057f1444-1dea-4c2f-bb48-612478de0f5c/processstandardoutputread-amp-peek-hang-on-empty-buffer?forum=csharpgeneral
### Configuration
Still present in .NET 6.0 on Windows 11 x64.
### Other information
A proposed solution was described on stackoverflow: https://stackoverflow.com/questions/4557591/alternative-to-streamreader-peek-and-thread-interrupt/6689406#6689406. I have included it here verbatim in order to preserve this information in case the post disappears in the future.
> I can tell this whole mess stems from the fact that Peek blocks. You're really trying to fix something that is fundamentally broken in the framework and that is never easy (i.e. not a dirty hack). Personally, I would fix the root of the problem, which is the blocking Peek. Mono would've followed Microsoft's implementation and thus ends up with the same problem.
>
> While I know exactly how to fix the problem should I be allowed to change the framework source code, the workaround is lengthy and time consuming.
>
> But here goes.
>
> Essentially, what Microsoft needs to do is change Process.StartWithCreateProcess such that standardOutput and standardError are both assigned a specialised type of StreamReader (e.g. PipeStreamReader).
>
> In this PipeStreamReader, they need to override both ReadBuffer overloads (i.e. need to change both overloads to virtual in StreamReader first) such that prior to a read, PeekNamedPipe is called to do the actual peek. As it is at the moment, FileStream.Read() (called by Peek()) will block on pipe reads when no data is available for read. While a FileStream.Read() with 0 bytes works well on files, it doesn't work all that well on pipes. In fact, the .NET team missed an important part of the pipe documentation - [PeekNamedPipe](http://msdn.microsoft.com/en-us/library/aa365779%28v=vs.85%29.aspx) WinAPI.
>
>> The PeekNamedPipe function is similar to the ReadFile function with the following exceptions:
>>
>> ...
>>
>> The function always returns immediately in a single-threaded application, **even if there is no data in the pipe**. The wait mode of a named pipe handle (blocking or nonblocking) has no effect on the function.
>
> The best thing at this moment without this issue solved in the framework would be to roll out your own Process class (a thin wrapper around WinAPI would suffice).
A possible alternative could be to include an overload to `Process.StandardOutput.Peek()` that would accept a timeout value in milliseconds (ie: `Process.StandardOutput.Peek(250)`).
|
process
|
process standardoutput peek hangs when standardoutput contains no data description when a process is launched using system diagnostics process start which has redirectstandardoutput set to true and process standardoutput peek is called before any output is returned or because there is no output then the method will hang instead of returning reproduction steps launch and redirect standard output using an executable which does not immediately return standard output as shown below csharp var process process start new processstartinfo path to executable exe arguments createnowindow true useshellexecute false redirectstandardinput true redirectstandardoutput true redirectstandarderror true standardoutputencoding encoding standarderrorencoding encoding var standardoutput new stringbuilder while process standardoutput peek standardoutput append char process standardoutput read var output standardoutput tostring expected behavior process standardoutput peek should return actual behavior process standardoutput peek hangs regression not sure the issue has been floating around the internet since at least known workarounds people have developed work arounds by using a combination of process standardoutput readasync and task wait as seen here configuration still present in net on windows other information a proposed solution was described on stackoverflow i have included it here verbatim in order to preserve this information in case the post disappears in the future i can tell this whole mess stems from the fact that peek blocks you re really trying to fix something that is fundamentally broken in the framework and that is never easy i e not a dirty hack personally i would fix the root of the problem which is the blocking peek mono would ve followed microsoft s implementation and thus ends up with the same problem while i know exactly how to fix the problem should i be allowed to change the framework source code the workaround is lengthy and time consuming but here goes essentially what microsoft needs to do is change process startwithcreateprocess such that standardoutput and standarderror are both assigned a specialised type of streamreader e g pipestreamreader in this pipestreamreader they need to override both readbuffer overloads i e need to change both overloads to virtual in streamreader first such that prior to a read peeknamedpipe is called to do the actual peek as it is at the moment filestream read called by peek will block on pipe reads when no data is available for read while a filestream read with bytes works well on files it doesn t work all that well on pipes in fact the net team missed an important part of the pipe documentation winapi the peeknamedpipe function is similar to the readfile function with the following exceptions the function always returns immediately in a single threaded application even if there is no data in the pipe the wait mode of a named pipe handle blocking or nonblocking has no effect on the function the best thing at this moment without this issue solved in the framework would be to roll out your own process class a thin wrapper around winapi would suffice a possible alternative could be to include an overload to process standardoutput peek that would accept a timeout value in milliseconds ie process standardoutput peek
| 1
|
247,147
| 26,688,694,919
|
IssuesEvent
|
2023-01-27 01:16:24
|
turkdevops/grafana
|
https://api.github.com/repos/turkdevops/grafana
|
closed
|
CVE-2019-0205 (High) detected in github.com/Uber/jaeger-client-go-v2.20.1+incompatible - autoclosed
|
security vulnerability
|
## CVE-2019-0205 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/Uber/jaeger-client-go-v2.20.1+incompatible</b></p></summary>
<p></p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/uber/jaeger-client-go/@v/v2.20.1+incompatible.zip">https://proxy.golang.org/github.com/uber/jaeger-client-go/@v/v2.20.1+incompatible.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **github.com/Uber/jaeger-client-go-v2.20.1+incompatible** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/a1c271764655c7e3ff81126d5929b8dda6170bf4">a1c271764655c7e3ff81126d5929b8dda6170bf4</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Thrift all versions up to and including 0.12.0, a server or client may run into an endless loop when feed with specific input data. Because the issue had already been partially fixed in version 0.11.0, depending on the installed version it affects only certain language bindings.
<p>Publish Date: 2019-10-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-0205>CVE-2019-0205</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0205">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0205</a></p>
<p>Release Date: 2019-10-29</p>
<p>Fix Resolution: org.apache.thrift:libthrift:0.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-0205 (High) detected in github.com/Uber/jaeger-client-go-v2.20.1+incompatible - autoclosed - ## CVE-2019-0205 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/Uber/jaeger-client-go-v2.20.1+incompatible</b></p></summary>
<p></p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/uber/jaeger-client-go/@v/v2.20.1+incompatible.zip">https://proxy.golang.org/github.com/uber/jaeger-client-go/@v/v2.20.1+incompatible.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **github.com/Uber/jaeger-client-go-v2.20.1+incompatible** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/a1c271764655c7e3ff81126d5929b8dda6170bf4">a1c271764655c7e3ff81126d5929b8dda6170bf4</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Thrift all versions up to and including 0.12.0, a server or client may run into an endless loop when feed with specific input data. Because the issue had already been partially fixed in version 0.11.0, depending on the installed version it affects only certain language bindings.
<p>Publish Date: 2019-10-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-0205>CVE-2019-0205</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0205">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0205</a></p>
<p>Release Date: 2019-10-29</p>
<p>Fix Resolution: org.apache.thrift:libthrift:0.13.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in github com uber jaeger client go incompatible autoclosed cve high severity vulnerability vulnerable library github com uber jaeger client go incompatible library home page a href dependency hierarchy x github com uber jaeger client go incompatible vulnerable library found in head commit a href found in base branch datasource meta vulnerability details in apache thrift all versions up to and including a server or client may run into an endless loop when feed with specific input data because the issue had already been partially fixed in version depending on the installed version it affects only certain language bindings publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache thrift libthrift step up your open source security game with mend
| 0
|
33,300
| 9,098,500,899
|
IssuesEvent
|
2019-02-20 00:12:37
|
gini/dexter
|
https://api.github.com/repos/gini/dexter
|
closed
|
ID and Secret-id is still required even when the binary has been built with the said parameters
|
build question
|
Hey Team,
I tried out Dexter on an Ubuntu 18 host and looks to be id & secret id are still required to get a successful login.
Built the binary with following parameters:
`CLIENT_ID=529839178945-toh9cj642someid7hvu7id9n5.apps.googleusercontent.com CLIENT_SECRET=VzK7vrksome_secret3ts6AfZa6Diz OS=linux make`
When a login attempt being made via dexter as follows getting a http 400.
`$dexter auth
`
However when all the parameters are specified as follows can log in just fine.
`dexter auth -i xxxxxx -s ffdfdff`
Appreciate your feedback guys, could be something that I might have missed.
|
1.0
|
ID and Secret-id is still required even when the binary has been built with the said parameters - Hey Team,
I tried out Dexter on an Ubuntu 18 host and looks to be id & secret id are still required to get a successful login.
Built the binary with following parameters:
`CLIENT_ID=529839178945-toh9cj642someid7hvu7id9n5.apps.googleusercontent.com CLIENT_SECRET=VzK7vrksome_secret3ts6AfZa6Diz OS=linux make`
When a login attempt being made via dexter as follows getting a http 400.
`$dexter auth
`
However when all the parameters are specified as follows can log in just fine.
`dexter auth -i xxxxxx -s ffdfdff`
Appreciate your feedback guys, could be something that I might have missed.
|
non_process
|
id and secret id is still required even when the binary has been built with the said parameters hey team i tried out dexter on an ubuntu host and looks to be id secret id are still required to get a successful login built the binary with following parameters client id apps googleusercontent com client secret os linux make when a login attempt being made via dexter as follows getting a http dexter auth however when all the parameters are specified as follows can log in just fine dexter auth i xxxxxx s ffdfdff appreciate your feedback guys could be something that i might have missed
| 0
|
2,474
| 5,246,673,907
|
IssuesEvent
|
2017-02-01 10:23:20
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Click On column text donot trigger onRowSelect
|
bug duplicate inprocess
|
I have column with dataFormat returning
`<span className="column-select">{this.props.field}</span>`
and the css as below to make it look like clickable
```
.column-select{
cursor:pointer;
}
.column-select:hover{
color:#9A144A;
}
```
When clicked on this text, onRowSelect is not getting triggered. Below is the row props
```
handleRowSelect(row, isSelected, e) {
if (e.target.cellIndex === 0)
alert(`row clicked`);
}
const selectRowProp = {
mode: 'checkbox',
hideSelectColumn: true,
clickToSelect: true,
bgColor: 'pink',
onSelect: this.handleRowSelect,
};
<BootstrapTable selectRow={ selectRowProp } pagination={ false }>
<TableHeaderColumn
isKey
width='350'
dataAlign='center'
dataField='id'
dataFormat={idFormatter}
>Payment ID</TableHeaderColumn>
```
but the bgColor: 'pink', gets applied. Am i missing something here?
|
1.0
|
Click On column text donot trigger onRowSelect - I have column with dataFormat returning
`<span className="column-select">{this.props.field}</span>`
and the css as below to make it look like clickable
```
.column-select{
cursor:pointer;
}
.column-select:hover{
color:#9A144A;
}
```
When clicked on this text, onRowSelect is not getting triggered. Below is the row props
```
handleRowSelect(row, isSelected, e) {
if (e.target.cellIndex === 0)
alert(`row clicked`);
}
const selectRowProp = {
mode: 'checkbox',
hideSelectColumn: true,
clickToSelect: true,
bgColor: 'pink',
onSelect: this.handleRowSelect,
};
<BootstrapTable selectRow={ selectRowProp } pagination={ false }>
<TableHeaderColumn
isKey
width='350'
dataAlign='center'
dataField='id'
dataFormat={idFormatter}
>Payment ID</TableHeaderColumn>
```
but the bgColor: 'pink', gets applied. Am i missing something here?
|
process
|
click on column text donot trigger onrowselect i have column with dataformat returning this props field and the css as below to make it look like clickable column select cursor pointer column select hover color when clicked on this text onrowselect is not getting triggered below is the row props handlerowselect row isselected e if e target cellindex alert row clicked const selectrowprop mode checkbox hideselectcolumn true clicktoselect true bgcolor pink onselect this handlerowselect tableheadercolumn iskey width dataalign center datafield id dataformat idformatter payment id but the bgcolor pink gets applied am i missing something here
| 1
|
93,375
| 19,186,173,604
|
IssuesEvent
|
2021-12-05 08:23:00
|
feenkcom/gtoolkit
|
https://api.github.com/repos/feenkcom/gtoolkit
|
closed
|
[Coder] Add abstract methods filter
|
coder
|
We need a filter to display methods defined as `subclass responsibility`
|
1.0
|
[Coder] Add abstract methods filter - We need a filter to display methods defined as `subclass responsibility`
|
non_process
|
add abstract methods filter we need a filter to display methods defined as subclass responsibility
| 0
|
12,667
| 15,037,099,358
|
IssuesEvent
|
2021-02-02 15:58:30
|
retaildevcrews/ngsa
|
https://api.github.com/repos/retaildevcrews/ngsa
|
closed
|
Engineering Fundamentals Checklist - November
|
EngPrac Process
|
# Tech Lead's Engineering Fundamentals Checklist
This checklist helps to ensure that our projects meet our Engineering Fundamentals.
## Source Control
- [x] The main branch is locked.
- [x] Merges are done through PRs.
- [x] PRs reference related work items.
- [x] Commit history is consistent and commit messages are informative (what, why).
- [x] Secrets are not part of the commit history or made public. (see [Credential scanning](continuous-integration/credential-scanning/readme.md))
- [x] Public repositories follow the [OSS guidelines](source-control/readme.md#creating-a-new-repository), see `Required files in default branch for public repositories`.
More details on [Source Control](source-control/readme.md)
## Work Item Tracking
- [x] All items are tracked in GitHub
- [x] The board is organized (swim lanes, feature tags, technology tags).
## Testing
- [x] Unit tests cover the majority of all components (>90% if possible).
- [x] Integration tests run to test the solution e2e.
More details on [Unit Testing](automated-testing/unit-testing/readme.md)
## CI/CD
- [x] Project runs CI with automated build and test on each PR.
- [x] Project uses CD to manage deployments to a replica environment before PRs are merged.
- [x] Main branch is always shippable.
## Security
- [x] Access control.
- [x] Separation of concerns.
- [x] Robust treatment of secrets.
- [x] Encryption for data in transit (and if necessary at rest) and password hashing.
## Observability
- [x] Significant business and functional events are tracked and related metrics collected.
- [x] Application faults and errors are logged.
- [x] Health of the system is monitored.
- [x] The client and server side observability data can be differentiated.
- [x] Logging configuration can be modified without code changes (eg: verbose mode).
- [x] [Incoming tracing context](observability/correlation-id.md) is propagated to allow for production issue debugging purposes.
- [x] GDPR compliance is ensured regarding PII (Personally Identifiable Information).
## Agile/Scrum
- [x] Process Lead (fixed) to run standup daily.
- [x] Agile process clearly defined within team.
- [x] Tech Lead + TPM have responsibility for backlog management and grooming.
- [x] Working agreement between members of team and customer.
## Design Reviews
- [x] Process for conducting design reviews is included in the [Working Agreement](/agile-development/team-agreements/working-agreements/readme.md)
- [x] Design reviews for each major component of the solution are carried out and documented, including alternatives.
- [x] Stories and/or PRs link to the design document.
- [x] Each user story includes a task for design review by default, which is assigned or removed during sprint planning.
- [x] Project advisors are invited to design reviews or asked to give feedback to the design decisions captured in documentation.
- [x] Discover all the reviews that the customer's processes require and plan for them.
## Code Reviews
- [x] Clear agreement in the team as to function of code reviews.
- [x] Code review checklist or established process.
- [x] A minimum number of reviewers (usually 2) for a PR merge is enforced by policy.
- [x] Linters/Code Analyzers, unit tests and successful builds for PR merges are set up.
- [x] Process to enforce a quick review turnaround.
More details on [Code Reviews](code-reviews/README.md)
## Retrospectives
- [x] Set time for retrospectives each week/at the end of each sprint.
- [x] 1-3 proposed experiments to be tried each week/sprint to improve the process.
- [x] Experiments have owners and are added to project backlog.
- [x] Longer retrospective for Milestones and project completion.
More details on [Retrospectives](agile-development/retrospectives/readme.md)
## Engineering Feedback
- [x] Submit business and technical blockers that prevent project success
- [x] Add suggestions for improvements to leveraged services and components
- [x] Ensure feedback is detailed and repeatable
More details on [Engineering Feedback](engineering-feedback/readme.md)
|
1.0
|
Engineering Fundamentals Checklist - November - # Tech Lead's Engineering Fundamentals Checklist
This checklist helps to ensure that our projects meet our Engineering Fundamentals.
## Source Control
- [x] The main branch is locked.
- [x] Merges are done through PRs.
- [x] PRs reference related work items.
- [x] Commit history is consistent and commit messages are informative (what, why).
- [x] Secrets are not part of the commit history or made public. (see [Credential scanning](continuous-integration/credential-scanning/readme.md))
- [x] Public repositories follow the [OSS guidelines](source-control/readme.md#creating-a-new-repository), see `Required files in default branch for public repositories`.
More details on [Source Control](source-control/readme.md)
## Work Item Tracking
- [x] All items are tracked in GitHub
- [x] The board is organized (swim lanes, feature tags, technology tags).
## Testing
- [x] Unit tests cover the majority of all components (>90% if possible).
- [x] Integration tests run to test the solution e2e.
More details on [Unit Testing](automated-testing/unit-testing/readme.md)
## CI/CD
- [x] Project runs CI with automated build and test on each PR.
- [x] Project uses CD to manage deployments to a replica environment before PRs are merged.
- [x] Main branch is always shippable.
## Security
- [x] Access control.
- [x] Separation of concerns.
- [x] Robust treatment of secrets.
- [x] Encryption for data in transit (and if necessary at rest) and password hashing.
## Observability
- [x] Significant business and functional events are tracked and related metrics collected.
- [x] Application faults and errors are logged.
- [x] Health of the system is monitored.
- [x] The client and server side observability data can be differentiated.
- [x] Logging configuration can be modified without code changes (eg: verbose mode).
- [x] [Incoming tracing context](observability/correlation-id.md) is propagated to allow for production issue debugging purposes.
- [x] GDPR compliance is ensured regarding PII (Personally Identifiable Information).
## Agile/Scrum
- [x] Process Lead (fixed) to run standup daily.
- [x] Agile process clearly defined within team.
- [x] Tech Lead + TPM have responsibility for backlog management and grooming.
- [x] Working agreement between members of team and customer.
## Design Reviews
- [x] Process for conducting design reviews is included in the [Working Agreement](/agile-development/team-agreements/working-agreements/readme.md)
- [x] Design reviews for each major component of the solution are carried out and documented, including alternatives.
- [x] Stories and/or PRs link to the design document.
- [x] Each user story includes a task for design review by default, which is assigned or removed during sprint planning.
- [x] Project advisors are invited to design reviews or asked to give feedback to the design decisions captured in documentation.
- [x] Discover all the reviews that the customer's processes require and plan for them.
## Code Reviews
- [x] Clear agreement in the team as to function of code reviews.
- [x] Code review checklist or established process.
- [x] A minimum number of reviewers (usually 2) for a PR merge is enforced by policy.
- [x] Linters/Code Analyzers, unit tests and successful builds for PR merges are set up.
- [x] Process to enforce a quick review turnaround.
More details on [Code Reviews](code-reviews/README.md)
## Retrospectives
- [x] Set time for retrospectives each week/at the end of each sprint.
- [x] 1-3 proposed experiments to be tried each week/sprint to improve the process.
- [x] Experiments have owners and are added to project backlog.
- [x] Longer retrospective for Milestones and project completion.
More details on [Retrospectives](agile-development/retrospectives/readme.md)
## Engineering Feedback
- [x] Submit business and technical blockers that prevent project success
- [x] Add suggestions for improvements to leveraged services and components
- [x] Ensure feedback is detailed and repeatable
More details on [Engineering Feedback](engineering-feedback/readme.md)
|
process
|
engineering fundamentals checklist november tech lead s engineering fundamentals checklist this checklist helps to ensure that our projects meet our engineering fundamentals source control the main branch is locked merges are done through prs prs reference related work items commit history is consistent and commit messages are informative what why secrets are not part of the commit history or made public see continuous integration credential scanning readme md public repositories follow the source control readme md creating a new repository see required files in default branch for public repositories more details on source control readme md work item tracking all items are tracked in github the board is organized swim lanes feature tags technology tags testing unit tests cover the majority of all components if possible integration tests run to test the solution more details on automated testing unit testing readme md ci cd project runs ci with automated build and test on each pr project uses cd to manage deployments to a replica environment before prs are merged main branch is always shippable security access control separation of concerns robust treatment of secrets encryption for data in transit and if necessary at rest and password hashing observability significant business and functional events are tracked and related metrics collected application faults and errors are logged health of the system is monitored the client and server side observability data can be differentiated logging configuration can be modified without code changes eg verbose mode observability correlation id md is propagated to allow for production issue debugging purposes gdpr compliance is ensured regarding pii personally identifiable information agile scrum process lead fixed to run standup daily agile process clearly defined within team tech lead tpm have responsibility for backlog management and grooming working agreement between members of team and customer design reviews process for conducting design reviews is included in the agile development team agreements working agreements readme md design reviews for each major component of the solution are carried out and documented including alternatives stories and or prs link to the design document each user story includes a task for design review by default which is assigned or removed during sprint planning project advisors are invited to design reviews or asked to give feedback to the design decisions captured in documentation discover all the reviews that the customer s processes require and plan for them code reviews clear agreement in the team as to function of code reviews code review checklist or established process a minimum number of reviewers usually for a pr merge is enforced by policy linters code analyzers unit tests and successful builds for pr merges are set up process to enforce a quick review turnaround more details on code reviews readme md retrospectives set time for retrospectives each week at the end of each sprint proposed experiments to be tried each week sprint to improve the process experiments have owners and are added to project backlog longer retrospective for milestones and project completion more details on agile development retrospectives readme md engineering feedback submit business and technical blockers that prevent project success add suggestions for improvements to leveraged services and components ensure feedback is detailed and repeatable more details on engineering feedback readme md
| 1
|
15,774
| 19,916,070,004
|
IssuesEvent
|
2022-01-25 22:52:19
|
googleapis/go-sql-spanner
|
https://api.github.com/repos/googleapis/go-sql-spanner
|
closed
|
Dependency Dashboard
|
type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/google.golang.org-genproto-digest -->[fix(deps): update google.golang.org/genproto commit hash](../pull/78)
- [ ] <!-- rebase-branch=renovate/github.com-google-go-cmp-0.x -->[fix(deps): update module github.com/google/go-cmp to v0.5.7](../pull/80)
- [ ] <!-- rebase-branch=renovate/google.golang.org-api-0.x -->[fix(deps): update module google.golang.org/api to v0.65.0](../pull/76)
- [ ] <!-- rebase-branch=renovate/google.golang.org-grpc-1.x -->[fix(deps): update module google.golang.org/grpc to v1.43.0](../pull/77)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/google.golang.org-genproto-digest -->[fix(deps): update google.golang.org/genproto commit hash](../pull/78)
- [ ] <!-- rebase-branch=renovate/github.com-google-go-cmp-0.x -->[fix(deps): update module github.com/google/go-cmp to v0.5.7](../pull/80)
- [ ] <!-- rebase-branch=renovate/google.golang.org-api-0.x -->[fix(deps): update module google.golang.org/api to v0.65.0](../pull/76)
- [ ] <!-- rebase-branch=renovate/google.golang.org-grpc-1.x -->[fix(deps): update module google.golang.org/grpc to v1.43.0](../pull/77)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any pull pull pull pull click on this checkbox to rebase all open prs at once check this box to trigger a request for renovate to run again on this repository
| 1
|
675,184
| 23,082,901,423
|
IssuesEvent
|
2022-07-26 08:50:56
|
goharbor/harbor
|
https://api.github.com/repos/goharbor/harbor
|
closed
|
CVE export does not generate log information under /var/log/jobs/ in jobservice container
|
priority/high target/2.6.0 area/cve-export
|
**Expected behavior and actual behavior:**
1. docker push x.x.x.x/library/nginx:1.13.12
2. get into the jobservice container shell: `docker exec -it <jobservice_container_id> /bin/bash`
3. within jobservice container shell: `ls /var/log/jobs/` there is no file
4. scan the nginx image, we can see there is a file generated with content under `/var/log/jobs/` in jobservice container: `ls /var/log/jobs/`
5. perform export cve against the `library` project, and cve data are exported and downloaded with cve information in it.
6. However, `ls /var/log/jobs/`, the new log file generated under `/var/log/jobs/` in jobservice container is empty (**actual behavior**).
This new file under `/var/log/jobs/` in jobservice container should contain some log information about the export cve log activity (**expected behavior**).
<img width="647" alt="Screen Shot 2022-07-19 at 14 56 20" src="https://user-images.githubusercontent.com/6709992/179687139-f6c3d25e-43d1-4241-9a4b-edfde1352d5e.png">
**Steps to reproduce the problem:**
Please provide the steps to reproduce this problem.
please check the above description
**Versions:**
Please specify the versions of following systems.
* Harbor Version: Version v2.6.0-f3edb03b
|
1.0
|
CVE export does not generate log information under /var/log/jobs/ in jobservice container - **Expected behavior and actual behavior:**
1. docker push x.x.x.x/library/nginx:1.13.12
2. get into the jobservice container shell: `docker exec -it <jobservice_container_id> /bin/bash`
3. within jobservice container shell: `ls /var/log/jobs/` there is no file
4. scan the nginx image, we can see there is a file generated with content under `/var/log/jobs/` in jobservice container: `ls /var/log/jobs/`
5. perform export cve against the `library` project, and cve data are exported and downloaded with cve information in it.
6. However, `ls /var/log/jobs/`, the new log file generated under `/var/log/jobs/` in jobservice container is empty (**actual behavior**).
This new file under `/var/log/jobs/` in jobservice container should contain some log information about the export cve log activity (**expected behavior**).
<img width="647" alt="Screen Shot 2022-07-19 at 14 56 20" src="https://user-images.githubusercontent.com/6709992/179687139-f6c3d25e-43d1-4241-9a4b-edfde1352d5e.png">
**Steps to reproduce the problem:**
Please provide the steps to reproduce this problem.
please check the above description
**Versions:**
Please specify the versions of following systems.
* Harbor Version: Version v2.6.0-f3edb03b
|
non_process
|
cve export does not generate log information under var log jobs in jobservice container expected behavior and actual behavior docker push x x x x library nginx get into the jobservice container shell docker exec it bin bash within jobservice container shell ls var log jobs there is no file scan the nginx image we can see there is a file generated with content under var log jobs in jobservice container ls var log jobs perform export cve against the library project and cve data are exported and downloaded with cve information in it however ls var log jobs the new log file generated under var log jobs in jobservice container is empty actual behavior this new file under var log jobs in jobservice container should contain some log information about the export cve log activity expected behavior img width alt screen shot at src steps to reproduce the problem please provide the steps to reproduce this problem please check the above description versions please specify the versions of following systems harbor version version
| 0
|
743
| 3,214,327,571
|
IssuesEvent
|
2015-10-07 00:52:05
|
broadinstitute/hellbender
|
https://api.github.com/repos/broadinstitute/hellbender
|
closed
|
Support writing large BAM files in dataflow
|
Dataflow DataflowPreprocessingPipeline enhancement
|
Currently tools like `MarkDuplicatesDataflow` use the `SmallBamWriter` to write their output, which makes them unable to handle large BAM outputs. We should switch to using a sharded/large BAM writer, when one materializes.
|
1.0
|
Support writing large BAM files in dataflow - Currently tools like `MarkDuplicatesDataflow` use the `SmallBamWriter` to write their output, which makes them unable to handle large BAM outputs. We should switch to using a sharded/large BAM writer, when one materializes.
|
process
|
support writing large bam files in dataflow currently tools like markduplicatesdataflow use the smallbamwriter to write their output which makes them unable to handle large bam outputs we should switch to using a sharded large bam writer when one materializes
| 1
|
29,874
| 13,180,900,845
|
IssuesEvent
|
2020-08-12 13:32:23
|
Ryujinx/Ryujinx-Games-List
|
https://api.github.com/repos/Ryujinx/Ryujinx-Games-List
|
opened
|
Psikyo Shooting Stars Alpha
|
crash services status-nothing
|
## Psikyo Shooting Stars Alpha
#### Game Update Version : 1.0.0
#### Current on `master` : 1.0.5155
Crashes on launch.
```
Application : Unhandled exception caught: Ryujinx.HLE.Exceptions.ServiceNotImplementedException: Ryujinx.HLE.HOS.Services.Account.Acc.IAccountServiceForApplication: 131
```
#### Hardware Specs :
##### CPU: i5-6600K
##### GPU: NVIDIA GTX 1080
##### RAM: 16GB
#### Outstanding Issues:
https://github.com/Ryujinx/Ryujinx/issues/1267
#### Log file :
[PSSA.log](https://github.com/Ryujinx/Ryujinx-Games-List/files/5062977/PSSA.log)
|
1.0
|
Psikyo Shooting Stars Alpha - ## Psikyo Shooting Stars Alpha
#### Game Update Version : 1.0.0
#### Current on `master` : 1.0.5155
Crashes on launch.
```
Application : Unhandled exception caught: Ryujinx.HLE.Exceptions.ServiceNotImplementedException: Ryujinx.HLE.HOS.Services.Account.Acc.IAccountServiceForApplication: 131
```
#### Hardware Specs :
##### CPU: i5-6600K
##### GPU: NVIDIA GTX 1080
##### RAM: 16GB
#### Outstanding Issues:
https://github.com/Ryujinx/Ryujinx/issues/1267
#### Log file :
[PSSA.log](https://github.com/Ryujinx/Ryujinx-Games-List/files/5062977/PSSA.log)
|
non_process
|
psikyo shooting stars alpha psikyo shooting stars alpha game update version current on master crashes on launch application unhandled exception caught ryujinx hle exceptions servicenotimplementedexception ryujinx hle hos services account acc iaccountserviceforapplication hardware specs cpu gpu nvidia gtx ram outstanding issues log file
| 0
|
407,913
| 27,636,675,626
|
IssuesEvent
|
2023-03-10 14:56:42
|
scaleway/packer-plugin-scaleway
|
https://api.github.com/repos/scaleway/packer-plugin-scaleway
|
opened
|
Add logs about what could be done once the image is created.
|
documentation enhancement
|
Having some messages popping up once the image is created could be helpful to hint at other tools such as terraform or the CLI
|
1.0
|
Add logs about what could be done once the image is created. - Having some messages popping up once the image is created could be helpful to hint at other tools such as terraform or the CLI
|
non_process
|
add logs about what could be done once the image is created having some messages popping up once the image is created could be helpful to hint at other tools such as terraform or the cli
| 0
|
15,757
| 5,173,604,413
|
IssuesEvent
|
2017-01-18 16:28:40
|
c2corg/v6_ui
|
https://api.github.com/repos/c2corg/v6_ui
|
closed
|
Use CSS only to style snow conditions for mobile
|
Association Code Optimization fixed and ready for testing
|
Instead of having 2 components, one being always hidden while the other is shown, CSS is enough to style the table for mobile.
|
1.0
|
Use CSS only to style snow conditions for mobile - Instead of having 2 components, one being always hidden while the other is shown, CSS is enough to style the table for mobile.
|
non_process
|
use css only to style snow conditions for mobile instead of having components one being always hidden while the other is shown css is enough to style the table for mobile
| 0
|
9,019
| 12,126,202,399
|
IssuesEvent
|
2020-04-22 16:38:53
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
[Feature thought] Allow to drag input layers into graphical modeler
|
Feature Request Processing
|
**Feature description.**
<!-- A clear and concise description of what you want to happen. Ex. QGIS would rock even more if [...] -->
Thanks very much for the excellent open-source project. The graphical modeler is helpful not only to execute as a complex process on a different set of inputs, but also to act as a workplace to record/test different parameters and inputs. For the latter reason, users often have to modify the model and then to execute it back and forth. However, when executing, it is tedious for users to point input layers and parameters by hand every time, especially if there are many. QGIS so for doesn't allow to drag input layers directly into the model and save, like model builder in ArcGIS. I'm wondering if it is worth consideration?
**Additional context**
<!-- Add any other context or screenshots about the feature request here. Open source is community driven, please consider a way to support this work either by hiring developers, supporting the QGIS project, find someone to submit a pull request.
If the change required is important, you should consider writing a [QGIS Enhancement Proposal](https://github.com/qgis/QGIS-Enhancement-Proposals/issues) (QEP) or hiring someone to, and announce your work on the lists. -->
I understand that the graphical modeler system in QGIS is more like returning a function. Users can only provide the type of input data rather than the pointer to the data itself. if my feature request is far from its original design, possibly an alternative to solve the tedious-point-model-inputs-every-time problem is to add a "model run history" selection in the model calling window. The selection can help automatically pop-up previous executed inputs layers and parameters into the right positions.
Thank you very much. Hope the thoughts can help.
Chen
|
1.0
|
[Feature thought] Allow to drag input layers into graphical modeler - **Feature description.**
<!-- A clear and concise description of what you want to happen. Ex. QGIS would rock even more if [...] -->
Thanks very much for the excellent open-source project. The graphical modeler is helpful not only to execute as a complex process on a different set of inputs, but also to act as a workplace to record/test different parameters and inputs. For the latter reason, users often have to modify the model and then to execute it back and forth. However, when executing, it is tedious for users to point input layers and parameters by hand every time, especially if there are many. QGIS so for doesn't allow to drag input layers directly into the model and save, like model builder in ArcGIS. I'm wondering if it is worth consideration?
**Additional context**
<!-- Add any other context or screenshots about the feature request here. Open source is community driven, please consider a way to support this work either by hiring developers, supporting the QGIS project, find someone to submit a pull request.
If the change required is important, you should consider writing a [QGIS Enhancement Proposal](https://github.com/qgis/QGIS-Enhancement-Proposals/issues) (QEP) or hiring someone to, and announce your work on the lists. -->
I understand that the graphical modeler system in QGIS is more like returning a function. Users can only provide the type of input data rather than the pointer to the data itself. if my feature request is far from its original design, possibly an alternative to solve the tedious-point-model-inputs-every-time problem is to add a "model run history" selection in the model calling window. The selection can help automatically pop-up previous executed inputs layers and parameters into the right positions.
Thank you very much. Hope the thoughts can help.
Chen
|
process
|
allow to drag input layers into graphical modeler feature description thanks very much for the excellent open source project the graphical modeler is helpful not only to execute as a complex process on a different set of inputs but also to act as a workplace to record test different parameters and inputs for the latter reason users often have to modify the model and then to execute it back and forth however when executing it is tedious for users to point input layers and parameters by hand every time especially if there are many qgis so for doesn t allow to drag input layers directly into the model and save like model builder in arcgis i m wondering if it is worth consideration additional context add any other context or screenshots about the feature request here open source is community driven please consider a way to support this work either by hiring developers supporting the qgis project find someone to submit a pull request if the change required is important you should consider writing a qep or hiring someone to and announce your work on the lists i understand that the graphical modeler system in qgis is more like returning a function users can only provide the type of input data rather than the pointer to the data itself if my feature request is far from its original design possibly an alternative to solve the tedious point model inputs every time problem is to add a model run history selection in the model calling window the selection can help automatically pop up previous executed inputs layers and parameters into the right positions thank you very much hope the thoughts can help chen
| 1
|
1,935
| 4,763,886,581
|
IssuesEvent
|
2016-10-25 15:32:42
|
openvstorage/alba
|
https://api.github.com/repos/openvstorage/alba
|
closed
|
Claim osd looping and high cpu usage
|
process_wontfix SRP type_bug
|
## Problem description
When I logged into my setup, I found that my root partition was completely full. The culprit was found in /var/log/upstart/alba-asd *.
One of the asd logging was full of:
```
2016-10-19 21:53:42 16108 +0200 - ovs-node1 - 9453/0 - alba/asd - 5366 - info - returning error (Asd_protocol.Protocol.Error.Assert_failed; "p\000\000\000\000n\000\000\000\000g\000\000\000\000\000\000\000\000 \000\000\000\022v\0119\127\220C\168\166\216\005 \228\210\175\242\021\217\143aG\232A\158\184.\146\164\240g=a\142\159\003\000\002\000\000\000\000\000\000\000")
2016-10-19 21:53:42 17439 +0200 - ovs-node1 - 9453/0 - alba/asd - 5367 - warning - Assertion failed, expected some but got None instead
2016-10-19 21:53:42 17473 +0200 - ovs-node1 - 9453/0 - alba/asd - 5368 - info - returning error (Asd_protocol.Protocol.Error.Assert_failed; "p\000\000\000\000n\000\000\000\000g\000\000\000\000\000\000\000\000 \000\000\000\022v\0119\127\220C\168\166\216\005 \228\210\175\242\021\217\143aG\232A\158\184.\146\164\240g=a\142\159\004\000\002\000\000\000\000\000\000\000")
2016-10-19 21:53:42 22216 +0200 - ovs-node1 - 9453/0 - alba/asd - 5369 - warning - Assertion failed, expected some but got None instead
2016-10-19 21:53:42 22241 +0200 - ovs-node1 - 9453/0 - alba/asd - 5370 - info - returning error (Asd_protocol.Protocol.Error.Assert_failed;
```
Every node within my cluster had an identical issue but the asd-id was different each time.
After closer inspection I spotted the following:
```
alba 9333 39.7 0.3 459100 52200 ? Rsl Oct19 380:40 /usr/bin/alba asd-start --config arakoon://config/ovs/alba/asds/0Z1uxwBY2djSHiAdG5xG855bwCHhZKfO/config?ini=%2Fopt%2Fasd-manager%2Fconfig%2Farakoon_cacc.ini --log-sink console:
```
The asd process is taking up alot of cpu.
The reason why is the claim-osd that is looping:
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 14619 ovs 20 0 7718872 6.096g 2652 S 47.5 38.9 325:26.80
/usr/bin/alba claim-osd --config=arakoon://config/ovs/arakoon/mybackend-global-abm/config?ini=%2Fopt%2FOpenvStorage%2Fconfig%2Farakoon_cacc.ini --long-id=0ff56d57-930c-4a38-96a8-71da45cd7c3c --to-json 9333 alba 20 0 455944 50164 4876 R 43.2 0.3 384:15.83 /usr/bin/alba asd-start --config arakoon://config/ovs/alba/asds/0Z1uxwBY2djSHiAdG5xG855bwCHhZKfO/config?ini=%2Fopt%2Fasd-manager%2Fconfig%2Farakoon_cacc.ini --log-sink console:
```
When the claim-osd was killed, my cpu usage was back to normal.
|
1.0
|
Claim osd looping and high cpu usage - ## Problem description
When I logged into my setup, I found that my root partition was completely full. The culprit was found in /var/log/upstart/alba-asd *.
One of the asd logging was full of:
```
2016-10-19 21:53:42 16108 +0200 - ovs-node1 - 9453/0 - alba/asd - 5366 - info - returning error (Asd_protocol.Protocol.Error.Assert_failed; "p\000\000\000\000n\000\000\000\000g\000\000\000\000\000\000\000\000 \000\000\000\022v\0119\127\220C\168\166\216\005 \228\210\175\242\021\217\143aG\232A\158\184.\146\164\240g=a\142\159\003\000\002\000\000\000\000\000\000\000")
2016-10-19 21:53:42 17439 +0200 - ovs-node1 - 9453/0 - alba/asd - 5367 - warning - Assertion failed, expected some but got None instead
2016-10-19 21:53:42 17473 +0200 - ovs-node1 - 9453/0 - alba/asd - 5368 - info - returning error (Asd_protocol.Protocol.Error.Assert_failed; "p\000\000\000\000n\000\000\000\000g\000\000\000\000\000\000\000\000 \000\000\000\022v\0119\127\220C\168\166\216\005 \228\210\175\242\021\217\143aG\232A\158\184.\146\164\240g=a\142\159\004\000\002\000\000\000\000\000\000\000")
2016-10-19 21:53:42 22216 +0200 - ovs-node1 - 9453/0 - alba/asd - 5369 - warning - Assertion failed, expected some but got None instead
2016-10-19 21:53:42 22241 +0200 - ovs-node1 - 9453/0 - alba/asd - 5370 - info - returning error (Asd_protocol.Protocol.Error.Assert_failed;
```
Every node within my cluster had an identical issue but the asd-id was different each time.
After closer inspection I spotted the following:
```
alba 9333 39.7 0.3 459100 52200 ? Rsl Oct19 380:40 /usr/bin/alba asd-start --config arakoon://config/ovs/alba/asds/0Z1uxwBY2djSHiAdG5xG855bwCHhZKfO/config?ini=%2Fopt%2Fasd-manager%2Fconfig%2Farakoon_cacc.ini --log-sink console:
```
The asd process is taking up alot of cpu.
The reason why is the claim-osd that is looping:
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 14619 ovs 20 0 7718872 6.096g 2652 S 47.5 38.9 325:26.80
/usr/bin/alba claim-osd --config=arakoon://config/ovs/arakoon/mybackend-global-abm/config?ini=%2Fopt%2FOpenvStorage%2Fconfig%2Farakoon_cacc.ini --long-id=0ff56d57-930c-4a38-96a8-71da45cd7c3c --to-json 9333 alba 20 0 455944 50164 4876 R 43.2 0.3 384:15.83 /usr/bin/alba asd-start --config arakoon://config/ovs/alba/asds/0Z1uxwBY2djSHiAdG5xG855bwCHhZKfO/config?ini=%2Fopt%2Fasd-manager%2Fconfig%2Farakoon_cacc.ini --log-sink console:
```
When the claim-osd was killed, my cpu usage was back to normal.
|
process
|
claim osd looping and high cpu usage problem description when i logged into my setup i found that my root partition was completely full the culprit was found in var log upstart alba asd one of the asd logging was full of ovs alba asd info returning error asd protocol protocol error assert failed p a ovs alba asd warning assertion failed expected some but got none instead ovs alba asd info returning error asd protocol protocol error assert failed p a ovs alba asd warning assertion failed expected some but got none instead ovs alba asd info returning error asd protocol protocol error assert failed every node within my cluster had an identical issue but the asd id was different each time after closer inspection i spotted the following alba rsl usr bin alba asd start config arakoon config ovs alba asds config ini manager cacc ini log sink console the asd process is taking up alot of cpu the reason why is the claim osd that is looping pid user pr ni virt res shr s cpu mem time command ovs s usr bin alba claim osd config arakoon config ovs arakoon mybackend global abm config ini cacc ini long id to json alba r usr bin alba asd start config arakoon config ovs alba asds config ini manager cacc ini log sink console when the claim osd was killed my cpu usage was back to normal
| 1
|
10,109
| 14,546,779,131
|
IssuesEvent
|
2020-12-15 21:47:55
|
NASA-PDS/harvest
|
https://api.github.com/repos/NASA-PDS/harvest
|
opened
|
Add default configuration
|
enhancement requirement-needed triage-needed
|
The package should contain a default configuration which comply with following contraints:
1) should work as-is on example data given in pds-registry-app package
2) should contain every possible configuration (especially those needed for demo as described on https://github.com/NASA-PDS/pds-registry-app/issues/107) **commented out**
|
1.0
|
Add default configuration - The package should contain a default configuration which comply with following contraints:
1) should work as-is on example data given in pds-registry-app package
2) should contain every possible configuration (especially those needed for demo as described on https://github.com/NASA-PDS/pds-registry-app/issues/107) **commented out**
|
non_process
|
add default configuration the package should contain a default configuration which comply with following contraints should work as is on example data given in pds registry app package should contain every possible configuration especially those needed for demo as described on commented out
| 0
|
20,943
| 27,803,583,682
|
IssuesEvent
|
2023-03-17 17:45:37
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
Warning: a recent release failed
|
api: bigquery type: process
|
The following release PRs may have failed:
* #351 - The release job failed -- check the build log.
* #238 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #351 - The release job failed -- check the build log.
* #238 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
|
process
|
warning a recent release failed the following release prs may have failed the release job failed check the build log the release job is autorelease tagged but expected autorelease published
| 1
|
266,662
| 20,160,684,086
|
IssuesEvent
|
2022-02-09 21:10:27
|
ethers-io/ethers.js
|
https://api.github.com/repos/ethers-io/ethers.js
|
opened
|
Filter address format needs more information
|
documentation
|
There's an example filter here: https://docs.ethers.io/v5/api/providers/provider/#:~:text=dai.tokens.ethers.eth
```
filter = {
address: "dai.tokens.ethers.eth",
topics: [
utils.id("Transfer(address,address,uint256)")
]
}
provider.on(filter, (log, event) => {
// Emitted whenever a DAI token transfer occurs
})
```
`dai.tokens.ethers.eth` gets resolved to `0x6B175474E89094C44Da98b954EedeAC495271d0F`, but I don't understand why, and I haven't found anywhere where this format is documented. I would like to use it so it would be very useful for me.
The address documentation (https://docs.ethers.io/v5/api/utils/address/#address) also does not mention this format, and `ethers.utils.getAddress("dai.tokens.ethers.eth")` returns an invalid address error.
|
1.0
|
Filter address format needs more information - There's an example filter here: https://docs.ethers.io/v5/api/providers/provider/#:~:text=dai.tokens.ethers.eth
```
filter = {
address: "dai.tokens.ethers.eth",
topics: [
utils.id("Transfer(address,address,uint256)")
]
}
provider.on(filter, (log, event) => {
// Emitted whenever a DAI token transfer occurs
})
```
`dai.tokens.ethers.eth` gets resolved to `0x6B175474E89094C44Da98b954EedeAC495271d0F`, but I don't understand why, and I haven't found anywhere where this format is documented. I would like to use it so it would be very useful for me.
The address documentation (https://docs.ethers.io/v5/api/utils/address/#address) also does not mention this format, and `ethers.utils.getAddress("dai.tokens.ethers.eth")` returns an invalid address error.
|
non_process
|
filter address format needs more information there s an example filter here filter address dai tokens ethers eth topics utils id transfer address address provider on filter log event emitted whenever a dai token transfer occurs dai tokens ethers eth gets resolved to but i don t understand why and i haven t found anywhere where this format is documented i would like to use it so it would be very useful for me the address documentation also does not mention this format and ethers utils getaddress dai tokens ethers eth returns an invalid address error
| 0
|
5,004
| 7,836,991,006
|
IssuesEvent
|
2018-06-18 02:22:39
|
vital-software/scala-redox
|
https://api.github.com/repos/vital-software/scala-redox
|
closed
|
Set up CI to perform releases
|
process: ci stage: in progress type: devops
|
Similar to what we've done with our other Scala libs, this library should support versioning the project during the CI build of master commits:
```yaml
- name: ":sbt: Release new version"
branches: master
command:
- git checkout -B ${BUILDKITE_BRANCH}
- git branch -u origin/${BUILDKITE_BRANCH}
- git config branch.${BUILDKITE_BRANCH}.remote origin
- git config branch.${BUILDKITE_BRANCH}.merge refs/heads/${BUILDKITE_BRANCH}
- sbt -batch "release with-defaults skip-tests"
```
The CI system should have Sonatype access, and use it to publish artifacts.
|
1.0
|
Set up CI to perform releases - Similar to what we've done with our other Scala libs, this library should support versioning the project during the CI build of master commits:
```yaml
- name: ":sbt: Release new version"
branches: master
command:
- git checkout -B ${BUILDKITE_BRANCH}
- git branch -u origin/${BUILDKITE_BRANCH}
- git config branch.${BUILDKITE_BRANCH}.remote origin
- git config branch.${BUILDKITE_BRANCH}.merge refs/heads/${BUILDKITE_BRANCH}
- sbt -batch "release with-defaults skip-tests"
```
The CI system should have Sonatype access, and use it to publish artifacts.
|
process
|
set up ci to perform releases similar to what we ve done with our other scala libs this library should support versioning the project during the ci build of master commits yaml name sbt release new version branches master command git checkout b buildkite branch git branch u origin buildkite branch git config branch buildkite branch remote origin git config branch buildkite branch merge refs heads buildkite branch sbt batch release with defaults skip tests the ci system should have sonatype access and use it to publish artifacts
| 1
|
18,058
| 24,067,803,293
|
IssuesEvent
|
2022-09-17 18:51:15
|
sancarn/stdVBA
|
https://api.github.com/repos/sancarn/stdVBA
|
closed
|
Fix `stdProcess` structure issue with ProcessEntry32
|
lib-stdProcess
|
See this issue for quotation:
https://github.com/twinbasic/twinbasic/issues/1031#issuecomment-1196277353
Ultimately `pcPriClassBase` should be a `Single` not a `Long` type
```
Private Type PROCESSENTRY32
dwSize As Long
cntUsage As Long
th32ProcessID As Long
'pack1 As Long
th32DefaultHeapID As LongPtr
th32ModuleID As Long
cntThreads As Long
th32ParentProcessID As Long
pcPriClassBase As Single 'Bugfix
dwFlags As Long
szExeFile As String * 260
'pack2 As Long
End Type
```
|
1.0
|
Fix `stdProcess` structure issue with ProcessEntry32 - See this issue for quotation:
https://github.com/twinbasic/twinbasic/issues/1031#issuecomment-1196277353
Ultimately `pcPriClassBase` should be a `Single` not a `Long` type
```
Private Type PROCESSENTRY32
dwSize As Long
cntUsage As Long
th32ProcessID As Long
'pack1 As Long
th32DefaultHeapID As LongPtr
th32ModuleID As Long
cntThreads As Long
th32ParentProcessID As Long
pcPriClassBase As Single 'Bugfix
dwFlags As Long
szExeFile As String * 260
'pack2 As Long
End Type
```
|
process
|
fix stdprocess structure issue with see this issue for quotation ultimately pcpriclassbase should be a single not a long type private type dwsize as long cntusage as long as long as long as longptr as long cntthreads as long as long pcpriclassbase as single bugfix dwflags as long szexefile as string as long end type
| 1
|
62,325
| 14,656,481,525
|
IssuesEvent
|
2020-12-28 13:31:20
|
fu1771695yongxie/AdminLTE
|
https://api.github.com/repos/fu1771695yongxie/AdminLTE
|
opened
|
CVE-2018-20822 (Medium) detected in opennmsopennms-source-26.0.0-1
|
security vulnerability
|
## CVE-2018-20822 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/AdminLTE/commit/9a9ffa7ce7496d0a23d23b40b9fa7d75474051e1">9a9ffa7ce7496d0a23d23b40b9fa7d75474051e1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>AdminLTE/node_modules/node-sass/src/libsass/src/ast.hpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.5.4 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Complex_Selector::perform in ast.hpp and Sass::Inspect::operator in inspect.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20822>CVE-2018-20822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0;node-sass - 4.13.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-20822 (Medium) detected in opennmsopennms-source-26.0.0-1 - ## CVE-2018-20822 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/AdminLTE/commit/9a9ffa7ce7496d0a23d23b40b9fa7d75474051e1">9a9ffa7ce7496d0a23d23b40b9fa7d75474051e1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>AdminLTE/node_modules/node-sass/src/libsass/src/ast.hpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.5.4 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Complex_Selector::perform in ast.hpp and Sass::Inspect::operator in inspect.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20822>CVE-2018-20822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0;node-sass - 4.13.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in opennmsopennms source cve medium severity vulnerability vulnerable library opennmsopennms source a java based fault and performance management system library home page a href found in head commit a href found in base branch master vulnerable source files adminlte node modules node sass src libsass src ast hpp vulnerability details libsass allows attackers to cause a denial of service uncontrolled recursion in sass complex selector perform in ast hpp and sass inspect operator in inspect cpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass node sass step up your open source security game with whitesource
| 0
|
334,284
| 29,829,775,583
|
IssuesEvent
|
2023-06-18 05:40:42
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix ndarray.test_numpy_instance_tolist__
|
NumPy Frontend Sub Task Failing Test
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix ndarray.test_numpy_instance_tolist__ - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5301791196/jobs/9596295835"><img src=https://img.shields.io/badge/-failure-red></a>
|
non_process
|
fix ndarray test numpy instance tolist numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
| 0
|
170,000
| 26,889,408,568
|
IssuesEvent
|
2023-02-06 07:36:12
|
starplanter93/The_Garden_of_Musicsheet
|
https://api.github.com/repos/starplanter93/The_Garden_of_Musicsheet
|
closed
|
Feat: MainSongSection organism 작성
|
Feat Design
|
## Description
MainSongSection organism 작성
## Todo
- [x] MainSongSection organism 컴포넌트 구현
- [x] MainSongSection organism 스토리북 등록
## ETC
.
|
1.0
|
Feat: MainSongSection organism 작성 - ## Description
MainSongSection organism 작성
## Todo
- [x] MainSongSection organism 컴포넌트 구현
- [x] MainSongSection organism 스토리북 등록
## ETC
.
|
non_process
|
feat mainsongsection organism 작성 description mainsongsection organism 작성 todo mainsongsection organism 컴포넌트 구현 mainsongsection organism 스토리북 등록 etc
| 0
|
22,502
| 31,495,988,290
|
IssuesEvent
|
2023-08-31 02:24:03
|
dart-lang/linter
|
https://api.github.com/repos/dart-lang/linter
|
closed
|
linter repo move readiness // planning
|
type-task process P1
|
🚧 _(This is a living document and a work in progress. Watch this space for emerging details. And do provide feedback!)_ 🚧
## Background
**Costs.** There are costs to having the linter source living externally and separate from the analyzer. In practice, the two are quite tightly coupled and not being able to make changes atomically across them (when an analyzer API changes or a new lint is added for example) creates extra work (e.g., otherwise needless analyzer package publications) and bookkeeping (e.g., TODOs or issues to track the implementation of quick fixes for new lints that cannot be landed until the new lint is available in the SDK). Lints are also very difficult to evolve as any downstream impact from changed semantics is only detected when a linter roll into the Dart SDK's `DEPS` is evaluated. Having the linter workflow external and unfamiliar to Dart team members has also (arguably) limited Dart team contributions.
**Benefits.** There are benefits too. Particularly in the early days of this project we've greatly benefited from the relative nimbleness of a workflow that favors external contributions, allows easy CI customization and enjoys the relative focus of a dedicated issue tracker.
**Today.** When the linter was growing rapidly and was a tool that was effectively adjacent to the core Dart SDK offerings, the benefits (IMO clearly) outweighed the costs. The landscape today is different though. The linter is effectively now _itself_ a core offering, with lints a central part of the developer experience and various subsets endorsed by the Dart and Flutter teams and promoted as part of standard best practice (see [package:lints](https://github.com/dart-lang/lints)).
**Making a move.** In light of this, we'd like to move the linter sources (and issue tracking) into the Dart SDK.
## Plan
_(WIP planning notes. Details to be filled in.)_
**Readiness.** There are a bunch of steps to make the linter ready for a move.
### Sources
**Build.** To make the source buildable in `sdk/pkg`, we'll need to:
- [x] #4412
To not lose the build check automation we enjoy from GitHub actions, we'll want to:
- [x] #4416
**Testing.** To make the source testable in the SDK build, we'll need to:
- [x] https://github.com/dart-lang/linter/issues/4404
_(Nice to have)_ To speed up test runes we might:
- [ ] https://github.com/dart-lang/linter/issues/3535
**Documentation.** We currently auto-generate lint docs here and host them on GitHub pages. We need:
- [x] #4460
We'll also need to:
- [x] update contributing guidelines and docs to integrate with an SDK development workflow
**Process.** We should tidy up stale process bits.
Notably, as we're no longer publishing the package, we should:
- [x] 🧹 https://github.com/dart-lang/linter/issues/4389
**Post move.** After moving a number of bits will need tidying up:
- [x] #4418
### Issues
To get the issue tracker ready for migration we'll need to:
- [ ] https://github.com/dart-lang/linter/issues/4729
- [ ] align our issue priorities with SDK analyzer issue priorities
- [x] #4461
We might also:
- [ ] consider adding workflows to the SDK repo that take over our custom auto-labeling
## Feedback and Thanks
Please provide feedback! A lot of thinking has gone into this and I'm happy to elaborate; just ask. 🙏
The quality of external contributions has been a bright light for this product (and for me personally). My sincere hope is that this move and the benefits of integration with our core source will not discourage contributions. On the contrary, I hope this will empower us to make the linter even better for all of us.
---
/fyi @a14n @athomas @asashour @bwilkerson @devoncarew @jacob314 @jcollins-g @parlough @srawlins @stereotype441 @whesse
|
1.0
|
linter repo move readiness // planning - 🚧 _(This is a living document and a work in progress. Watch this space for emerging details. And do provide feedback!)_ 🚧
## Background
**Costs.** There are costs to having the linter source living externally and separate from the analyzer. In practice, the two are quite tightly coupled and not being able to make changes atomically across them (when an analyzer API changes or a new lint is added for example) creates extra work (e.g., otherwise needless analyzer package publications) and bookkeeping (e.g., TODOs or issues to track the implementation of quick fixes for new lints that cannot be landed until the new lint is available in the SDK). Lints are also very difficult to evolve as any downstream impact from changed semantics is only detected when a linter roll into the Dart SDK's `DEPS` is evaluated. Having the linter workflow external and unfamiliar to Dart team members has also (arguably) limited Dart team contributions.
**Benefits.** There are benefits too. Particularly in the early days of this project we've greatly benefited from the relative nimbleness of a workflow that favors external contributions, allows easy CI customization and enjoys the relative focus of a dedicated issue tracker.
**Today.** When the linter was growing rapidly and was a tool that was effectively adjacent to the core Dart SDK offerings, the benefits (IMO clearly) outweighed the costs. The landscape today is different though. The linter is effectively now _itself_ a core offering, with lints a central part of the developer experience and various subsets endorsed by the Dart and Flutter teams and promoted as part of standard best practice (see [package:lints](https://github.com/dart-lang/lints)).
**Making a move.** In light of this, we'd like to move the linter sources (and issue tracking) into the Dart SDK.
## Plan
_(WIP planning notes. Details to be filled in.)_
**Readiness.** There are a bunch of steps to make the linter ready for a move.
### Sources
**Build.** To make the source buildable in `sdk/pkg`, we'll need to:
- [x] #4412
To not lose the build check automation we enjoy from GitHub actions, we'll want to:
- [x] #4416
**Testing.** To make the source testable in the SDK build, we'll need to:
- [x] https://github.com/dart-lang/linter/issues/4404
_(Nice to have)_ To speed up test runes we might:
- [ ] https://github.com/dart-lang/linter/issues/3535
**Documentation.** We currently auto-generate lint docs here and host them on GitHub pages. We need:
- [x] #4460
We'll also need to:
- [x] update contributing guidelines and docs to integrate with an SDK development workflow
**Process.** We should tidy up stale process bits.
Notably, as we're no longer publishing the package, we should:
- [x] 🧹 https://github.com/dart-lang/linter/issues/4389
**Post move.** After moving a number of bits will need tidying up:
- [x] #4418
### Issues
To get the issue tracker ready for migration we'll need to:
- [ ] https://github.com/dart-lang/linter/issues/4729
- [ ] align our issue priorities with SDK analyzer issue priorities
- [x] #4461
We might also:
- [ ] consider adding workflows to the SDK repo that take over our custom auto-labeling
## Feedback and Thanks
Please provide feedback! A lot of thinking has gone into this and I'm happy to elaborate; just ask. 🙏
The quality of external contributions has been a bright light for this product (and for me personally). My sincere hope is that this move and the benefits of integration with our core source will not discourage contributions. On the contrary, I hope this will empower us to make the linter even better for all of us.
---
/fyi @a14n @athomas @asashour @bwilkerson @devoncarew @jacob314 @jcollins-g @parlough @srawlins @stereotype441 @whesse
|
process
|
linter repo move readiness planning 🚧 this is a living document and a work in progress watch this space for emerging details and do provide feedback 🚧 background costs there are costs to having the linter source living externally and separate from the analyzer in practice the two are quite tightly coupled and not being able to make changes atomically across them when an analyzer api changes or a new lint is added for example creates extra work e g otherwise needless analyzer package publications and bookkeeping e g todos or issues to track the implementation of quick fixes for new lints that cannot be landed until the new lint is available in the sdk lints are also very difficult to evolve as any downstream impact from changed semantics is only detected when a linter roll into the dart sdk s deps is evaluated having the linter workflow external and unfamiliar to dart team members has also arguably limited dart team contributions benefits there are benefits too particularly in the early days of this project we ve greatly benefited from the relative nimbleness of a workflow that favors external contributions allows easy ci customization and enjoys the relative focus of a dedicated issue tracker today when the linter was growing rapidly and was a tool that was effectively adjacent to the core dart sdk offerings the benefits imo clearly outweighed the costs the landscape today is different though the linter is effectively now itself a core offering with lints a central part of the developer experience and various subsets endorsed by the dart and flutter teams and promoted as part of standard best practice see making a move in light of this we d like to move the linter sources and issue tracking into the dart sdk plan wip planning notes details to be filled in readiness there are a bunch of steps to make the linter ready for a move sources build to make the source buildable in sdk pkg we ll need to to not lose the build check automation we enjoy from github actions we ll want to testing to make the source testable in the sdk build we ll need to nice to have to speed up test runes we might documentation we currently auto generate lint docs here and host them on github pages we need we ll also need to update contributing guidelines and docs to integrate with an sdk development workflow process we should tidy up stale process bits notably as we re no longer publishing the package we should 🧹 post move after moving a number of bits will need tidying up issues to get the issue tracker ready for migration we ll need to align our issue priorities with sdk analyzer issue priorities we might also consider adding workflows to the sdk repo that take over our custom auto labeling feedback and thanks please provide feedback a lot of thinking has gone into this and i m happy to elaborate just ask 🙏 the quality of external contributions has been a bright light for this product and for me personally my sincere hope is that this move and the benefits of integration with our core source will not discourage contributions on the contrary i hope this will empower us to make the linter even better for all of us fyi athomas asashour bwilkerson devoncarew jcollins g parlough srawlins whesse
| 1
|
49,290
| 7,488,825,098
|
IssuesEvent
|
2018-04-06 03:59:03
|
CS2103JAN2018-F12-B2/main
|
https://api.github.com/repos/CS2103JAN2018-F12-B2/main
|
closed
|
v1.4 Added new addEvent command and parser method + Updated user and developer guide
|
type.documentation type.enhancement type.epic
|
Integrated Google Calendar API
Added addEvent command and parser method
Updated user and developer guide
|
1.0
|
v1.4 Added new addEvent command and parser method + Updated user and developer guide - Integrated Google Calendar API
Added addEvent command and parser method
Updated user and developer guide
|
non_process
|
added new addevent command and parser method updated user and developer guide integrated google calendar api added addevent command and parser method updated user and developer guide
| 0
|
21,500
| 29,667,693,988
|
IssuesEvent
|
2023-06-11 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 9 Jun 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Spain on Fire: A novel wildfire risk assessment model based on image satellite processing and atmospheric information
- **Authors:** Helena Liz-López, Javier Huertas-Tato, Jorge Pérez-Aracil, Carlos Casanova-Mateo, Julia Sanz-Justo, David Camacho
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.05045
- **Pdf link:** https://arxiv.org/pdf/2306.05045
- **Abstract**
Each year, wildfires destroy larger areas of Spain, threatening numerous ecosystems. Humans cause 90% of them (negligence or provoked) and the behaviour of individuals is unpredictable. However, atmospheric and environmental variables affect the spread of wildfires, and they can be analysed by using deep learning. In order to mitigate the damage of these events we proposed the novel Wildfire Assessment Model (WAM). Our aim is to anticipate the economic and ecological impact of a wildfire, assisting managers resource allocation and decision making for dangerous regions in Spain, Castilla y Le\'on and Andaluc\'ia. The WAM uses a residual-style convolutional network architecture to perform regression over atmospheric variables and the greenness index, computing necessary resources, the control and extinction time, and the expected burnt surface area. It is first pre-trained with self-supervision over 100,000 examples of unlabelled data with a masked patch prediction objective and fine-tuned using 311 samples of wildfires. The pretraining allows the model to understand situations, outclassing baselines with a 1,4%, 3,7% and 9% improvement estimating human, heavy and aerial resources; 21% and 10,2% in expected extinction and control time; and 18,8% in expected burnt area. Using the WAM we provide an example assessment map of Castilla y Le\'on, visualizing the expected resources over an entire region.
### Point-Voxel Absorbing Graph Representation Learning for Event Stream based Recognition
- **Authors:** Bo Jiang, Chengguo Yuan, Xiao Wang, Zhimin Bao, Lin Zhu, Bin Luo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2306.05239
- **Pdf link:** https://arxiv.org/pdf/2306.05239
- **Abstract**
Considering the balance of performance and efficiency, sampled point and voxel methods are usually employed to down-sample dense events into sparse ones. After that, one popular way is to leverage a graph model which treats the sparse points/voxels as nodes and adopts graph neural networks (GNNs) to learn the representation for event data. Although good performance can be obtained, however, their results are still limited mainly due to two issues. (1) Existing event GNNs generally adopt the additional max (or mean) pooling layer to summarize all node embeddings into a single graph-level representation for the whole event data representation. However, this approach fails to capture the importance of graph nodes and also fails to be fully aware of the node representations. (2) Existing methods generally employ either a sparse point or voxel graph representation model which thus lacks consideration of the complementary between these two types of representation models. To address these issues, in this paper, we propose a novel dual point-voxel absorbing graph representation learning for event stream data representation. To be specific, given the input event stream, we first transform it into the sparse event cloud and voxel grids and build dual absorbing graph models for them respectively. Then, we design a novel absorbing graph convolutional network (AGCN) for our dual absorbing graph representation and learning. The key aspect of the proposed AGCN is its ability to effectively capture the importance of nodes and thus be fully aware of node representations in summarizing all node representations through the introduced absorbing nodes. Finally, the event representations of dual learning branches are concatenated together to extract the complementary information of two cues. The output is then fed into a linear layer for event data classification.
### Anomaly Detection in Satellite Videos using Diffusion Models
- **Authors:** Akash Awasthi, Son Ly, Jaer Nizam, Samira Zare, Videet Mehta, Safwan Ahmed, Keshav Shah, Ramakrishna Nemani, Saurabh Prasad, Hien Van Nguyen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.05376
- **Pdf link:** https://arxiv.org/pdf/2306.05376
- **Abstract**
The definition of anomaly detection is the identification of an unexpected event. Real-time detection of extreme events such as wildfires, cyclones, or floods using satellite data has become crucial for disaster management. Although several earth-observing satellites provide information about disasters, satellites in the geostationary orbit provide data at intervals as frequent as every minute, effectively creating a video from space. There are many techniques that have been proposed to identify anomalies in surveillance videos; however, the available datasets do not have dynamic behavior, so we discuss an anomaly framework that can work on very high-frequency datasets to find very fast-moving anomalies. In this work, we present a diffusion model which does not need any motion component to capture the fast-moving anomalies and outperforms the other baseline methods.
### FollowNet: A Comprehensive Benchmark for Car-Following Behavior Modeling
- **Authors:** Xianda Chen, Meixin Zhu, Kehua Chen, Pengqin Wang, Hongliang Lu, Hui Zhong, Xu Han, Yinhai Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2306.05381
- **Pdf link:** https://arxiv.org/pdf/2306.05381
- **Abstract**
Car-following is a control process in which a following vehicle (FV) adjusts its acceleration to keep a safe distance from the lead vehicle (LV). Recently, there has been a booming of data-driven models that enable more accurate modeling of car-following through real-world driving datasets. Although there are several public datasets available, their formats are not always consistent, making it challenging to determine the state-of-the-art models and how well a new model performs compared to existing ones. In contrast, research fields such as image recognition and object detection have benchmark datasets like ImageNet, Microsoft COCO, and KITTI. To address this gap and promote the development of microscopic traffic flow modeling, we establish a public benchmark dataset for car-following behavior modeling. The benchmark consists of more than 80K car-following events extracted from five public driving datasets using the same criteria. These events cover diverse situations including different road types, various weather conditions, and mixed traffic flows with autonomous vehicles. Moreover, to give an overview of current progress in car-following modeling, we implemented and tested representative baseline models with the benchmark. Results show that the deep deterministic policy gradient (DDPG) based model performs competitively with a lower MSE for spacing compared to traditional intelligent driver model (IDM) and Gazis-Herman-Rothery (GHR) models, and a smaller collision rate compared to fully connected neural network (NN) and long short-term memory (LSTM) models in most datasets. The established benchmark will provide researchers with consistent data formats and metrics for cross-comparing different car-following models, promoting the development of more accurate models. We open-source our dataset and implementation code in https://github.com/HKUST-DRIVE-AI-LAB/FollowNet.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Grounded Text-to-Image Synthesis with Attention Refocusing
- **Authors:** Quynh Phung, Songwei Ge, Jia-Bin Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05427
- **Pdf link:** https://arxiv.org/pdf/2306.05427
- **Abstract**
Driven by scalable diffusion models trained on large-scale paired text-image datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow the text prompt when multiple objects, attributes, and spatial compositions are involved in the prompt. In this paper, we identify the potential reasons in both the cross-attention and self-attention layers of the diffusion model. We propose two novel losses to refocus the attention maps according to a given layout during the sampling process. We perform comprehensive experiments on the DrawBench and HRS benchmarks using layouts synthesized by Large Language Models, showing that our proposed losses can be integrated easily and effectively into existing text-to-image methods and consistently improve their alignment between the generated images and the text prompts.
## Keyword: ISP
### DiViNeT: 3D Reconstruction from Disparate Views via Neural Template Regularization
- **Authors:** Aditya Vora, Akshay Gadi Patil, Hao Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.04699
- **Pdf link:** https://arxiv.org/pdf/2306.04699
- **Abstract**
We present a volume rendering-based neural surface reconstruction method that takes as few as three disparate RGB images as input. Our key idea is to regularize the reconstruction, which is severely ill-posed and leaving Our method, coined DiViNet, operates in two stages. The first stage learns the templates, in the form of 3D Gaussian functions, across different scenes, without 3D supervision. In the reconstruction stage, our predicted templates serve as anchors to help ``stitch'' the surfaces over sparse regions. We demonstrate that our approach is not only able to complete the surface geometry but also reconstructs surface details to a reasonable extent from few disparate input views. On the DTU and BlendedMVS datasets, our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views, and performs on par, if not better, with competing methods when dense views are employed as inputs.
### Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields
- **Authors:** Qianqiu Tan, Tao Liu, Yinling Xie, Shuwan Yu, Baohua Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05303
- **Pdf link:** https://arxiv.org/pdf/2306.05303
- **Abstract**
The quality of three-dimensional reconstruction is a key factor affecting the effectiveness of its application in areas such as virtual reality (VR) and augmented reality (AR) technologies. Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint. It simultaneously reconstructs the shape, lighting, and materials of objects, and without surface defects, which breaks down the barrier between virtuality and reality. The potential spatial correspondences displayed by NeRF between reconstructed scenes and real-world scenes offer a wide range of practical applications possibilities. Despite significant progress in 3D reconstruction since NeRF were introduced, there remains considerable room for exploration and experimentation. NeRF-based models are susceptible to interference issues caused by colored "fog" noise. Additionally, they frequently encounter instabilities and failures while attempting to reconstruct unbounded scenes. Moreover, the model takes a significant amount of time to converge, making it even more challenging to use in such scenarios. Our approach, coined Enhance-NeRF, which adopts joint color to balance low and high reflectivity objects display, utilizes a decoding architecture with prior knowledge to improve recognition, and employs multi-layer performance evaluation mechanisms to enhance learning capacity. It achieves reconstruction of outdoor scenes within one hour under single-card condition. Based on experimental results, Enhance-NeRF partially enhances fitness capability and provides some support to outdoor scene reconstruction. The Enhance-NeRF method can be used as a plug-and-play component, making it easy to integrate with other NeRF-based models. The code is available at: https://github.com/TANQIanQ/Enhance-NeRF
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### HQ-50K: A Large-scale, High-quality Dataset for Image Restoration
- **Authors:** Qinhong Yang, Dongdong Chen, Zhentao Tan, Qiankun Liu, Qi Chu, Jianmin Bao, Lu Yuan, Gang Hua, Nenghai Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05390
- **Pdf link:** https://arxiv.org/pdf/2306.05390
- **Abstract**
This paper introduces a new large-scale image restoration dataset, called HQ-50K, which contains 50,000 high-quality images with rich texture details and semantic diversity. We analyze existing image restoration datasets from five different perspectives, including data scale, resolution, compression rates, texture details, and semantic coverage. However, we find that all of these datasets are deficient in some aspects. In contrast, HQ-50K considers all of these five aspects during the data curation process and meets all requirements. We also present a new Degradation-Aware Mixture of Expert (DAMoE) model, which enables a single model to handle multiple corruption types and unknown levels. Our extensive experiments demonstrate that HQ-50K consistently improves the performance on various image restoration tasks, such as super-resolution, denoising, dejpeg, and deraining. Furthermore, our proposed DAMoE, trained on our \dataset, outperforms existing state-of-the-art unified models designed for multiple restoration tasks and levels. The dataset and code are available at \url{https://github.com/littleYaang/HQ-50K}.
## Keyword: RAW
### StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views
- **Authors:** Jianfei Guo, Nianchen Deng, Xinyang Li, Yeqi Bai, Botian Shi, Chiyu Wang, Chenjing Ding, Dongliang Wang, Yikang Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2306.04988
- **Pdf link:** https://arxiv.org/pdf/2306.04988
- **Abstract**
We present a novel multi-view implicit surface reconstruction technique, termed StreetSurf, that is readily applicable to street view images in widely-used autonomous driving datasets, such as Waymo-perception sequences, without necessarily requiring LiDAR data. As neural rendering research expands rapidly, its integration into street views has started to draw interests. Existing approaches on street views either mainly focus on novel view synthesis with little exploration of the scene geometry, or rely heavily on dense LiDAR data when investigating reconstruction. Neither of them investigates multi-view implicit surface reconstruction, especially under settings without LiDAR data. Our method extends prior object-centric neural surface reconstruction techniques to address the unique challenges posed by the unbounded street views that are captured with non-object-centric, long and narrow camera trajectories. We delimit the unbounded space into three parts, close-range, distant-view and sky, with aligned cuboid boundaries, and adapt cuboid/hyper-cuboid hash-grids along with road-surface initialization scheme for finer and disentangled representation. To further address the geometric errors arising from textureless regions and insufficient viewing angles, we adopt geometric priors that are estimated using general purpose monocular models. Coupled with our implementation of efficient and fine-grained multi-stage ray marching strategy, we achieve state of the art reconstruction quality in both geometry and appearance within only one to two hours of training time with a single RTX3090 GPU for each street view sequence. Furthermore, we demonstrate that the reconstructed implicit surfaces have rich potential for various downstream tasks, including ray tracing and LiDAR simulation.
### SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic Understanding
- **Authors:** Paul-Edouard Sarlin, Eduard Trulls, Marc Pollefeys, Jan Hosang, Simon Lynen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05407
- **Pdf link:** https://arxiv.org/pdf/2306.05407
- **Abstract**
Semantic 2D maps are commonly used by humans and machines for navigation purposes, whether it's walking or driving. However, these maps have limitations: they lack detail, often contain inaccuracies, and are difficult to create and maintain, especially in an automated fashion. Can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines? We introduce SNAP, a deep network that learns rich neural 2D maps from ground-level and overhead images. We train our model to align neural maps estimated from different inputs, supervised only with camera poses over tens of millions of StreetView images. SNAP can resolve the location of challenging image queries beyond the reach of traditional methods, outperforming the state of the art in localization by a large margin. Moreover, our neural maps encode not only geometry and appearance but also high-level semantics, discovered without explicit supervision. This enables effective pre-training for data-efficient semantic scene understanding, with the potential to unlock cost-efficient creation of more detailed maps.
### Grounded Text-to-Image Synthesis with Attention Refocusing
- **Authors:** Quynh Phung, Songwei Ge, Jia-Bin Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05427
- **Pdf link:** https://arxiv.org/pdf/2306.05427
- **Abstract**
Driven by scalable diffusion models trained on large-scale paired text-image datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow the text prompt when multiple objects, attributes, and spatial compositions are involved in the prompt. In this paper, we identify the potential reasons in both the cross-attention and self-attention layers of the diffusion model. We propose two novel losses to refocus the attention maps according to a given layout during the sampling process. We perform comprehensive experiments on the DrawBench and HRS benchmarks using layouts synthesized by Large Language Models, showing that our proposed losses can be integrated easily and effectively into existing text-to-image methods and consistently improve their alignment between the generated images and the text prompts.
## Keyword: raw image
### SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic Understanding
- **Authors:** Paul-Edouard Sarlin, Eduard Trulls, Marc Pollefeys, Jan Hosang, Simon Lynen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05407
- **Pdf link:** https://arxiv.org/pdf/2306.05407
- **Abstract**
Semantic 2D maps are commonly used by humans and machines for navigation purposes, whether it's walking or driving. However, these maps have limitations: they lack detail, often contain inaccuracies, and are difficult to create and maintain, especially in an automated fashion. Can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines? We introduce SNAP, a deep network that learns rich neural 2D maps from ground-level and overhead images. We train our model to align neural maps estimated from different inputs, supervised only with camera poses over tens of millions of StreetView images. SNAP can resolve the location of challenging image queries beyond the reach of traditional methods, outperforming the state of the art in localization by a large margin. Moreover, our neural maps encode not only geometry and appearance but also high-level semantics, discovered without explicit supervision. This enables effective pre-training for data-efficient semantic scene understanding, with the potential to unlock cost-efficient creation of more detailed maps.
|
2.0
|
New submissions for Fri, 9 Jun 23 - ## Keyword: events
### Spain on Fire: A novel wildfire risk assessment model based on image satellite processing and atmospheric information
- **Authors:** Helena Liz-López, Javier Huertas-Tato, Jorge Pérez-Aracil, Carlos Casanova-Mateo, Julia Sanz-Justo, David Camacho
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.05045
- **Pdf link:** https://arxiv.org/pdf/2306.05045
- **Abstract**
Each year, wildfires destroy larger areas of Spain, threatening numerous ecosystems. Humans cause 90% of them (negligence or provoked) and the behaviour of individuals is unpredictable. However, atmospheric and environmental variables affect the spread of wildfires, and they can be analysed by using deep learning. In order to mitigate the damage of these events we proposed the novel Wildfire Assessment Model (WAM). Our aim is to anticipate the economic and ecological impact of a wildfire, assisting managers resource allocation and decision making for dangerous regions in Spain, Castilla y Le\'on and Andaluc\'ia. The WAM uses a residual-style convolutional network architecture to perform regression over atmospheric variables and the greenness index, computing necessary resources, the control and extinction time, and the expected burnt surface area. It is first pre-trained with self-supervision over 100,000 examples of unlabelled data with a masked patch prediction objective and fine-tuned using 311 samples of wildfires. The pretraining allows the model to understand situations, outclassing baselines with a 1,4%, 3,7% and 9% improvement estimating human, heavy and aerial resources; 21% and 10,2% in expected extinction and control time; and 18,8% in expected burnt area. Using the WAM we provide an example assessment map of Castilla y Le\'on, visualizing the expected resources over an entire region.
### Point-Voxel Absorbing Graph Representation Learning for Event Stream based Recognition
- **Authors:** Bo Jiang, Chengguo Yuan, Xiao Wang, Zhimin Bao, Lin Zhu, Bin Luo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2306.05239
- **Pdf link:** https://arxiv.org/pdf/2306.05239
- **Abstract**
Considering the balance of performance and efficiency, sampled point and voxel methods are usually employed to down-sample dense events into sparse ones. After that, one popular way is to leverage a graph model which treats the sparse points/voxels as nodes and adopts graph neural networks (GNNs) to learn the representation for event data. Although good performance can be obtained, however, their results are still limited mainly due to two issues. (1) Existing event GNNs generally adopt the additional max (or mean) pooling layer to summarize all node embeddings into a single graph-level representation for the whole event data representation. However, this approach fails to capture the importance of graph nodes and also fails to be fully aware of the node representations. (2) Existing methods generally employ either a sparse point or voxel graph representation model which thus lacks consideration of the complementary between these two types of representation models. To address these issues, in this paper, we propose a novel dual point-voxel absorbing graph representation learning for event stream data representation. To be specific, given the input event stream, we first transform it into the sparse event cloud and voxel grids and build dual absorbing graph models for them respectively. Then, we design a novel absorbing graph convolutional network (AGCN) for our dual absorbing graph representation and learning. The key aspect of the proposed AGCN is its ability to effectively capture the importance of nodes and thus be fully aware of node representations in summarizing all node representations through the introduced absorbing nodes. Finally, the event representations of dual learning branches are concatenated together to extract the complementary information of two cues. The output is then fed into a linear layer for event data classification.
### Anomaly Detection in Satellite Videos using Diffusion Models
- **Authors:** Akash Awasthi, Son Ly, Jaer Nizam, Samira Zare, Videet Mehta, Safwan Ahmed, Keshav Shah, Ramakrishna Nemani, Saurabh Prasad, Hien Van Nguyen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.05376
- **Pdf link:** https://arxiv.org/pdf/2306.05376
- **Abstract**
The definition of anomaly detection is the identification of an unexpected event. Real-time detection of extreme events such as wildfires, cyclones, or floods using satellite data has become crucial for disaster management. Although several earth-observing satellites provide information about disasters, satellites in the geostationary orbit provide data at intervals as frequent as every minute, effectively creating a video from space. There are many techniques that have been proposed to identify anomalies in surveillance videos; however, the available datasets do not have dynamic behavior, so we discuss an anomaly framework that can work on very high-frequency datasets to find very fast-moving anomalies. In this work, we present a diffusion model which does not need any motion component to capture the fast-moving anomalies and outperforms the other baseline methods.
### FollowNet: A Comprehensive Benchmark for Car-Following Behavior Modeling
- **Authors:** Xianda Chen, Meixin Zhu, Kehua Chen, Pengqin Wang, Hongliang Lu, Hui Zhong, Xu Han, Yinhai Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2306.05381
- **Pdf link:** https://arxiv.org/pdf/2306.05381
- **Abstract**
Car-following is a control process in which a following vehicle (FV) adjusts its acceleration to keep a safe distance from the lead vehicle (LV). Recently, there has been a booming of data-driven models that enable more accurate modeling of car-following through real-world driving datasets. Although there are several public datasets available, their formats are not always consistent, making it challenging to determine the state-of-the-art models and how well a new model performs compared to existing ones. In contrast, research fields such as image recognition and object detection have benchmark datasets like ImageNet, Microsoft COCO, and KITTI. To address this gap and promote the development of microscopic traffic flow modeling, we establish a public benchmark dataset for car-following behavior modeling. The benchmark consists of more than 80K car-following events extracted from five public driving datasets using the same criteria. These events cover diverse situations including different road types, various weather conditions, and mixed traffic flows with autonomous vehicles. Moreover, to give an overview of current progress in car-following modeling, we implemented and tested representative baseline models with the benchmark. Results show that the deep deterministic policy gradient (DDPG) based model performs competitively with a lower MSE for spacing compared to traditional intelligent driver model (IDM) and Gazis-Herman-Rothery (GHR) models, and a smaller collision rate compared to fully connected neural network (NN) and long short-term memory (LSTM) models in most datasets. The established benchmark will provide researchers with consistent data formats and metrics for cross-comparing different car-following models, promoting the development of more accurate models. We open-source our dataset and implementation code in https://github.com/HKUST-DRIVE-AI-LAB/FollowNet.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Grounded Text-to-Image Synthesis with Attention Refocusing
- **Authors:** Quynh Phung, Songwei Ge, Jia-Bin Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05427
- **Pdf link:** https://arxiv.org/pdf/2306.05427
- **Abstract**
Driven by scalable diffusion models trained on large-scale paired text-image datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow the text prompt when multiple objects, attributes, and spatial compositions are involved in the prompt. In this paper, we identify the potential reasons in both the cross-attention and self-attention layers of the diffusion model. We propose two novel losses to refocus the attention maps according to a given layout during the sampling process. We perform comprehensive experiments on the DrawBench and HRS benchmarks using layouts synthesized by Large Language Models, showing that our proposed losses can be integrated easily and effectively into existing text-to-image methods and consistently improve their alignment between the generated images and the text prompts.
## Keyword: ISP
### DiViNeT: 3D Reconstruction from Disparate Views via Neural Template Regularization
- **Authors:** Aditya Vora, Akshay Gadi Patil, Hao Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.04699
- **Pdf link:** https://arxiv.org/pdf/2306.04699
- **Abstract**
We present a volume rendering-based neural surface reconstruction method that takes as few as three disparate RGB images as input. Our key idea is to regularize the reconstruction, which is severely ill-posed and leaving Our method, coined DiViNet, operates in two stages. The first stage learns the templates, in the form of 3D Gaussian functions, across different scenes, without 3D supervision. In the reconstruction stage, our predicted templates serve as anchors to help ``stitch'' the surfaces over sparse regions. We demonstrate that our approach is not only able to complete the surface geometry but also reconstructs surface details to a reasonable extent from few disparate input views. On the DTU and BlendedMVS datasets, our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views, and performs on par, if not better, with competing methods when dense views are employed as inputs.
### Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields
- **Authors:** Qianqiu Tan, Tao Liu, Yinling Xie, Shuwan Yu, Baohua Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05303
- **Pdf link:** https://arxiv.org/pdf/2306.05303
- **Abstract**
The quality of three-dimensional reconstruction is a key factor affecting the effectiveness of its application in areas such as virtual reality (VR) and augmented reality (AR) technologies. Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint. It simultaneously reconstructs the shape, lighting, and materials of objects, and without surface defects, which breaks down the barrier between virtuality and reality. The potential spatial correspondences displayed by NeRF between reconstructed scenes and real-world scenes offer a wide range of practical applications possibilities. Despite significant progress in 3D reconstruction since NeRF were introduced, there remains considerable room for exploration and experimentation. NeRF-based models are susceptible to interference issues caused by colored "fog" noise. Additionally, they frequently encounter instabilities and failures while attempting to reconstruct unbounded scenes. Moreover, the model takes a significant amount of time to converge, making it even more challenging to use in such scenarios. Our approach, coined Enhance-NeRF, which adopts joint color to balance low and high reflectivity objects display, utilizes a decoding architecture with prior knowledge to improve recognition, and employs multi-layer performance evaluation mechanisms to enhance learning capacity. It achieves reconstruction of outdoor scenes within one hour under single-card condition. Based on experimental results, Enhance-NeRF partially enhances fitness capability and provides some support to outdoor scene reconstruction. The Enhance-NeRF method can be used as a plug-and-play component, making it easy to integrate with other NeRF-based models. The code is available at: https://github.com/TANQIanQ/Enhance-NeRF
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### HQ-50K: A Large-scale, High-quality Dataset for Image Restoration
- **Authors:** Qinhong Yang, Dongdong Chen, Zhentao Tan, Qiankun Liu, Qi Chu, Jianmin Bao, Lu Yuan, Gang Hua, Nenghai Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05390
- **Pdf link:** https://arxiv.org/pdf/2306.05390
- **Abstract**
This paper introduces a new large-scale image restoration dataset, called HQ-50K, which contains 50,000 high-quality images with rich texture details and semantic diversity. We analyze existing image restoration datasets from five different perspectives, including data scale, resolution, compression rates, texture details, and semantic coverage. However, we find that all of these datasets are deficient in some aspects. In contrast, HQ-50K considers all of these five aspects during the data curation process and meets all requirements. We also present a new Degradation-Aware Mixture of Expert (DAMoE) model, which enables a single model to handle multiple corruption types and unknown levels. Our extensive experiments demonstrate that HQ-50K consistently improves the performance on various image restoration tasks, such as super-resolution, denoising, dejpeg, and deraining. Furthermore, our proposed DAMoE, trained on our \dataset, outperforms existing state-of-the-art unified models designed for multiple restoration tasks and levels. The dataset and code are available at \url{https://github.com/littleYaang/HQ-50K}.
## Keyword: RAW
### StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views
- **Authors:** Jianfei Guo, Nianchen Deng, Xinyang Li, Yeqi Bai, Botian Shi, Chiyu Wang, Chenjing Ding, Dongliang Wang, Yikang Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2306.04988
- **Pdf link:** https://arxiv.org/pdf/2306.04988
- **Abstract**
We present a novel multi-view implicit surface reconstruction technique, termed StreetSurf, that is readily applicable to street view images in widely-used autonomous driving datasets, such as Waymo-perception sequences, without necessarily requiring LiDAR data. As neural rendering research expands rapidly, its integration into street views has started to draw interests. Existing approaches on street views either mainly focus on novel view synthesis with little exploration of the scene geometry, or rely heavily on dense LiDAR data when investigating reconstruction. Neither of them investigates multi-view implicit surface reconstruction, especially under settings without LiDAR data. Our method extends prior object-centric neural surface reconstruction techniques to address the unique challenges posed by the unbounded street views that are captured with non-object-centric, long and narrow camera trajectories. We delimit the unbounded space into three parts, close-range, distant-view and sky, with aligned cuboid boundaries, and adapt cuboid/hyper-cuboid hash-grids along with road-surface initialization scheme for finer and disentangled representation. To further address the geometric errors arising from textureless regions and insufficient viewing angles, we adopt geometric priors that are estimated using general purpose monocular models. Coupled with our implementation of efficient and fine-grained multi-stage ray marching strategy, we achieve state of the art reconstruction quality in both geometry and appearance within only one to two hours of training time with a single RTX3090 GPU for each street view sequence. Furthermore, we demonstrate that the reconstructed implicit surfaces have rich potential for various downstream tasks, including ray tracing and LiDAR simulation.
### SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic Understanding
- **Authors:** Paul-Edouard Sarlin, Eduard Trulls, Marc Pollefeys, Jan Hosang, Simon Lynen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05407
- **Pdf link:** https://arxiv.org/pdf/2306.05407
- **Abstract**
Semantic 2D maps are commonly used by humans and machines for navigation purposes, whether it's walking or driving. However, these maps have limitations: they lack detail, often contain inaccuracies, and are difficult to create and maintain, especially in an automated fashion. Can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines? We introduce SNAP, a deep network that learns rich neural 2D maps from ground-level and overhead images. We train our model to align neural maps estimated from different inputs, supervised only with camera poses over tens of millions of StreetView images. SNAP can resolve the location of challenging image queries beyond the reach of traditional methods, outperforming the state of the art in localization by a large margin. Moreover, our neural maps encode not only geometry and appearance but also high-level semantics, discovered without explicit supervision. This enables effective pre-training for data-efficient semantic scene understanding, with the potential to unlock cost-efficient creation of more detailed maps.
### Grounded Text-to-Image Synthesis with Attention Refocusing
- **Authors:** Quynh Phung, Songwei Ge, Jia-Bin Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05427
- **Pdf link:** https://arxiv.org/pdf/2306.05427
- **Abstract**
Driven by scalable diffusion models trained on large-scale paired text-image datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow the text prompt when multiple objects, attributes, and spatial compositions are involved in the prompt. In this paper, we identify the potential reasons in both the cross-attention and self-attention layers of the diffusion model. We propose two novel losses to refocus the attention maps according to a given layout during the sampling process. We perform comprehensive experiments on the DrawBench and HRS benchmarks using layouts synthesized by Large Language Models, showing that our proposed losses can be integrated easily and effectively into existing text-to-image methods and consistently improve their alignment between the generated images and the text prompts.
## Keyword: raw image
### SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic Understanding
- **Authors:** Paul-Edouard Sarlin, Eduard Trulls, Marc Pollefeys, Jan Hosang, Simon Lynen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.05407
- **Pdf link:** https://arxiv.org/pdf/2306.05407
- **Abstract**
Semantic 2D maps are commonly used by humans and machines for navigation purposes, whether it's walking or driving. However, these maps have limitations: they lack detail, often contain inaccuracies, and are difficult to create and maintain, especially in an automated fashion. Can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines? We introduce SNAP, a deep network that learns rich neural 2D maps from ground-level and overhead images. We train our model to align neural maps estimated from different inputs, supervised only with camera poses over tens of millions of StreetView images. SNAP can resolve the location of challenging image queries beyond the reach of traditional methods, outperforming the state of the art in localization by a large margin. Moreover, our neural maps encode not only geometry and appearance but also high-level semantics, discovered without explicit supervision. This enables effective pre-training for data-efficient semantic scene understanding, with the potential to unlock cost-efficient creation of more detailed maps.
|
process
|
new submissions for fri jun keyword events spain on fire a novel wildfire risk assessment model based on image satellite processing and atmospheric information authors helena liz lópez javier huertas tato jorge pérez aracil carlos casanova mateo julia sanz justo david camacho subjects computer vision and pattern recognition cs cv artificial intelligence cs ai image and video processing eess iv arxiv link pdf link abstract each year wildfires destroy larger areas of spain threatening numerous ecosystems humans cause of them negligence or provoked and the behaviour of individuals is unpredictable however atmospheric and environmental variables affect the spread of wildfires and they can be analysed by using deep learning in order to mitigate the damage of these events we proposed the novel wildfire assessment model wam our aim is to anticipate the economic and ecological impact of a wildfire assisting managers resource allocation and decision making for dangerous regions in spain castilla y le on and andaluc ia the wam uses a residual style convolutional network architecture to perform regression over atmospheric variables and the greenness index computing necessary resources the control and extinction time and the expected burnt surface area it is first pre trained with self supervision over examples of unlabelled data with a masked patch prediction objective and fine tuned using samples of wildfires the pretraining allows the model to understand situations outclassing baselines with a and improvement estimating human heavy and aerial resources and in expected extinction and control time and in expected burnt area using the wam we provide an example assessment map of castilla y le on visualizing the expected resources over an entire region point voxel absorbing graph representation learning for event stream based recognition authors bo jiang chengguo yuan xiao wang zhimin bao lin zhu bin luo subjects computer vision and pattern recognition cs cv neural and evolutionary computing cs ne arxiv link pdf link abstract considering the balance of performance and efficiency sampled point and voxel methods are usually employed to down sample dense events into sparse ones after that one popular way is to leverage a graph model which treats the sparse points voxels as nodes and adopts graph neural networks gnns to learn the representation for event data although good performance can be obtained however their results are still limited mainly due to two issues existing event gnns generally adopt the additional max or mean pooling layer to summarize all node embeddings into a single graph level representation for the whole event data representation however this approach fails to capture the importance of graph nodes and also fails to be fully aware of the node representations existing methods generally employ either a sparse point or voxel graph representation model which thus lacks consideration of the complementary between these two types of representation models to address these issues in this paper we propose a novel dual point voxel absorbing graph representation learning for event stream data representation to be specific given the input event stream we first transform it into the sparse event cloud and voxel grids and build dual absorbing graph models for them respectively then we design a novel absorbing graph convolutional network agcn for our dual absorbing graph representation and learning the key aspect of the proposed agcn is its ability to effectively capture the importance of nodes and thus be fully aware of node representations in summarizing all node representations through the introduced absorbing nodes finally the event representations of dual learning branches are concatenated together to extract the complementary information of two cues the output is then fed into a linear layer for event data classification anomaly detection in satellite videos using diffusion models authors akash awasthi son ly jaer nizam samira zare videet mehta safwan ahmed keshav shah ramakrishna nemani saurabh prasad hien van nguyen subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract the definition of anomaly detection is the identification of an unexpected event real time detection of extreme events such as wildfires cyclones or floods using satellite data has become crucial for disaster management although several earth observing satellites provide information about disasters satellites in the geostationary orbit provide data at intervals as frequent as every minute effectively creating a video from space there are many techniques that have been proposed to identify anomalies in surveillance videos however the available datasets do not have dynamic behavior so we discuss an anomaly framework that can work on very high frequency datasets to find very fast moving anomalies in this work we present a diffusion model which does not need any motion component to capture the fast moving anomalies and outperforms the other baseline methods follownet a comprehensive benchmark for car following behavior modeling authors xianda chen meixin zhu kehua chen pengqin wang hongliang lu hui zhong xu han yinhai wang subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract car following is a control process in which a following vehicle fv adjusts its acceleration to keep a safe distance from the lead vehicle lv recently there has been a booming of data driven models that enable more accurate modeling of car following through real world driving datasets although there are several public datasets available their formats are not always consistent making it challenging to determine the state of the art models and how well a new model performs compared to existing ones in contrast research fields such as image recognition and object detection have benchmark datasets like imagenet microsoft coco and kitti to address this gap and promote the development of microscopic traffic flow modeling we establish a public benchmark dataset for car following behavior modeling the benchmark consists of more than car following events extracted from five public driving datasets using the same criteria these events cover diverse situations including different road types various weather conditions and mixed traffic flows with autonomous vehicles moreover to give an overview of current progress in car following modeling we implemented and tested representative baseline models with the benchmark results show that the deep deterministic policy gradient ddpg based model performs competitively with a lower mse for spacing compared to traditional intelligent driver model idm and gazis herman rothery ghr models and a smaller collision rate compared to fully connected neural network nn and long short term memory lstm models in most datasets the established benchmark will provide researchers with consistent data formats and metrics for cross comparing different car following models promoting the development of more accurate models we open source our dataset and implementation code in keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb grounded text to image synthesis with attention refocusing authors quynh phung songwei ge jia bin huang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract driven by scalable diffusion models trained on large scale paired text image datasets text to image synthesis methods have shown compelling results however these models still fail to precisely follow the text prompt when multiple objects attributes and spatial compositions are involved in the prompt in this paper we identify the potential reasons in both the cross attention and self attention layers of the diffusion model we propose two novel losses to refocus the attention maps according to a given layout during the sampling process we perform comprehensive experiments on the drawbench and hrs benchmarks using layouts synthesized by large language models showing that our proposed losses can be integrated easily and effectively into existing text to image methods and consistently improve their alignment between the generated images and the text prompts keyword isp divinet reconstruction from disparate views via neural template regularization authors aditya vora akshay gadi patil hao zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we present a volume rendering based neural surface reconstruction method that takes as few as three disparate rgb images as input our key idea is to regularize the reconstruction which is severely ill posed and leaving our method coined divinet operates in two stages the first stage learns the templates in the form of gaussian functions across different scenes without supervision in the reconstruction stage our predicted templates serve as anchors to help stitch the surfaces over sparse regions we demonstrate that our approach is not only able to complete the surface geometry but also reconstructs surface details to a reasonable extent from few disparate input views on the dtu and blendedmvs datasets our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views and performs on par if not better with competing methods when dense views are employed as inputs enhance nerf multiple performance evaluation for neural radiance fields authors qianqiu tan tao liu yinling xie shuwan yu baohua zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the quality of three dimensional reconstruction is a key factor affecting the effectiveness of its application in areas such as virtual reality vr and augmented reality ar technologies neural radiance fields nerf can generate realistic images from any viewpoint it simultaneously reconstructs the shape lighting and materials of objects and without surface defects which breaks down the barrier between virtuality and reality the potential spatial correspondences displayed by nerf between reconstructed scenes and real world scenes offer a wide range of practical applications possibilities despite significant progress in reconstruction since nerf were introduced there remains considerable room for exploration and experimentation nerf based models are susceptible to interference issues caused by colored fog noise additionally they frequently encounter instabilities and failures while attempting to reconstruct unbounded scenes moreover the model takes a significant amount of time to converge making it even more challenging to use in such scenarios our approach coined enhance nerf which adopts joint color to balance low and high reflectivity objects display utilizes a decoding architecture with prior knowledge to improve recognition and employs multi layer performance evaluation mechanisms to enhance learning capacity it achieves reconstruction of outdoor scenes within one hour under single card condition based on experimental results enhance nerf partially enhances fitness capability and provides some support to outdoor scene reconstruction the enhance nerf method can be used as a plug and play component making it easy to integrate with other nerf based models the code is available at keyword image signal processing there is no result keyword image signal process there is no result keyword compression hq a large scale high quality dataset for image restoration authors qinhong yang dongdong chen zhentao tan qiankun liu qi chu jianmin bao lu yuan gang hua nenghai yu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper introduces a new large scale image restoration dataset called hq which contains high quality images with rich texture details and semantic diversity we analyze existing image restoration datasets from five different perspectives including data scale resolution compression rates texture details and semantic coverage however we find that all of these datasets are deficient in some aspects in contrast hq considers all of these five aspects during the data curation process and meets all requirements we also present a new degradation aware mixture of expert damoe model which enables a single model to handle multiple corruption types and unknown levels our extensive experiments demonstrate that hq consistently improves the performance on various image restoration tasks such as super resolution denoising dejpeg and deraining furthermore our proposed damoe trained on our dataset outperforms existing state of the art unified models designed for multiple restoration tasks and levels the dataset and code are available at url keyword raw streetsurf extending multi view implicit surface reconstruction to street views authors jianfei guo nianchen deng xinyang li yeqi bai botian shi chiyu wang chenjing ding dongliang wang yikang li subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract we present a novel multi view implicit surface reconstruction technique termed streetsurf that is readily applicable to street view images in widely used autonomous driving datasets such as waymo perception sequences without necessarily requiring lidar data as neural rendering research expands rapidly its integration into street views has started to draw interests existing approaches on street views either mainly focus on novel view synthesis with little exploration of the scene geometry or rely heavily on dense lidar data when investigating reconstruction neither of them investigates multi view implicit surface reconstruction especially under settings without lidar data our method extends prior object centric neural surface reconstruction techniques to address the unique challenges posed by the unbounded street views that are captured with non object centric long and narrow camera trajectories we delimit the unbounded space into three parts close range distant view and sky with aligned cuboid boundaries and adapt cuboid hyper cuboid hash grids along with road surface initialization scheme for finer and disentangled representation to further address the geometric errors arising from textureless regions and insufficient viewing angles we adopt geometric priors that are estimated using general purpose monocular models coupled with our implementation of efficient and fine grained multi stage ray marching strategy we achieve state of the art reconstruction quality in both geometry and appearance within only one to two hours of training time with a single gpu for each street view sequence furthermore we demonstrate that the reconstructed implicit surfaces have rich potential for various downstream tasks including ray tracing and lidar simulation snap self supervised neural maps for visual positioning and semantic understanding authors paul edouard sarlin eduard trulls marc pollefeys jan hosang simon lynen subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract semantic maps are commonly used by humans and machines for navigation purposes whether it s walking or driving however these maps have limitations they lack detail often contain inaccuracies and are difficult to create and maintain especially in an automated fashion can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines we introduce snap a deep network that learns rich neural maps from ground level and overhead images we train our model to align neural maps estimated from different inputs supervised only with camera poses over tens of millions of streetview images snap can resolve the location of challenging image queries beyond the reach of traditional methods outperforming the state of the art in localization by a large margin moreover our neural maps encode not only geometry and appearance but also high level semantics discovered without explicit supervision this enables effective pre training for data efficient semantic scene understanding with the potential to unlock cost efficient creation of more detailed maps grounded text to image synthesis with attention refocusing authors quynh phung songwei ge jia bin huang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract driven by scalable diffusion models trained on large scale paired text image datasets text to image synthesis methods have shown compelling results however these models still fail to precisely follow the text prompt when multiple objects attributes and spatial compositions are involved in the prompt in this paper we identify the potential reasons in both the cross attention and self attention layers of the diffusion model we propose two novel losses to refocus the attention maps according to a given layout during the sampling process we perform comprehensive experiments on the drawbench and hrs benchmarks using layouts synthesized by large language models showing that our proposed losses can be integrated easily and effectively into existing text to image methods and consistently improve their alignment between the generated images and the text prompts keyword raw image snap self supervised neural maps for visual positioning and semantic understanding authors paul edouard sarlin eduard trulls marc pollefeys jan hosang simon lynen subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract semantic maps are commonly used by humans and machines for navigation purposes whether it s walking or driving however these maps have limitations they lack detail often contain inaccuracies and are difficult to create and maintain especially in an automated fashion can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines we introduce snap a deep network that learns rich neural maps from ground level and overhead images we train our model to align neural maps estimated from different inputs supervised only with camera poses over tens of millions of streetview images snap can resolve the location of challenging image queries beyond the reach of traditional methods outperforming the state of the art in localization by a large margin moreover our neural maps encode not only geometry and appearance but also high level semantics discovered without explicit supervision this enables effective pre training for data efficient semantic scene understanding with the potential to unlock cost efficient creation of more detailed maps
| 1
|
180,802
| 21,625,826,689
|
IssuesEvent
|
2022-05-05 01:54:53
|
emilwareus/thimble.mozilla.org
|
https://api.github.com/repos/emilwareus/thimble.mozilla.org
|
closed
|
CVE-2021-21277 (High) detected in angular-expressions-0.2.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-21277 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-expressions-0.2.1.tgz</b></p></summary>
<p>Angular expressions as standalone module</p>
<p>Library home page: <a href="https://registry.npmjs.org/angular-expressions/-/angular-expressions-0.2.1.tgz">https://registry.npmjs.org/angular-expressions/-/angular-expressions-0.2.1.tgz</a></p>
<p>Path to dependency file: thimble.mozilla.org/services/login.webmaker.org/bower_components/webmaker-login-ux/package.json</p>
<p>Path to vulnerable library: thimble.mozilla.org/services/login.webmaker.org/bower_components/webmaker-login-ux/node_modules/angular-expressions/package.json</p>
<p>
Dependency Hierarchy:
- :x: **angular-expressions-0.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/thimble.mozilla.org/commit/af3d91f99628f029ddcc04f2c30b6bf019be57d7">af3d91f99628f029ddcc04f2c30b6bf019be57d7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
angular-expressions is "angular's nicest part extracted as a standalone module for the browser and node". In angular-expressions before version 1.1.2 there is a vulnerability which allows Remote Code Execution if you call "expressions.compile(userControlledInput)" where "userControlledInput" is text that comes from user input. The security of the package could be bypassed by using a more complex payload, using a ".constructor.constructor" technique. In terms of impact: If running angular-expressions in the browser, an attacker could run any browser script when the application code calls expressions.compile(userControlledInput). If running angular-expressions on the server, an attacker could run any Javascript expression, thus gaining Remote Code Execution. This is fixed in version 1.1.2 of angular-expressions A temporary workaround might be either to disable user-controlled input that will be fed into angular-expressions in your application or allow only following characters in the userControlledInput.
<p>Publish Date: 2021-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21277>CVE-2021-21277</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/peerigon/angular-expressions/security/advisories/GHSA-j6px-jwvv-vpwq">https://github.com/peerigon/angular-expressions/security/advisories/GHSA-j6px-jwvv-vpwq</a></p>
<p>Release Date: 2021-02-01</p>
<p>Fix Resolution: v1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-21277 (High) detected in angular-expressions-0.2.1.tgz - autoclosed - ## CVE-2021-21277 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-expressions-0.2.1.tgz</b></p></summary>
<p>Angular expressions as standalone module</p>
<p>Library home page: <a href="https://registry.npmjs.org/angular-expressions/-/angular-expressions-0.2.1.tgz">https://registry.npmjs.org/angular-expressions/-/angular-expressions-0.2.1.tgz</a></p>
<p>Path to dependency file: thimble.mozilla.org/services/login.webmaker.org/bower_components/webmaker-login-ux/package.json</p>
<p>Path to vulnerable library: thimble.mozilla.org/services/login.webmaker.org/bower_components/webmaker-login-ux/node_modules/angular-expressions/package.json</p>
<p>
Dependency Hierarchy:
- :x: **angular-expressions-0.2.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/thimble.mozilla.org/commit/af3d91f99628f029ddcc04f2c30b6bf019be57d7">af3d91f99628f029ddcc04f2c30b6bf019be57d7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
angular-expressions is "angular's nicest part extracted as a standalone module for the browser and node". In angular-expressions before version 1.1.2 there is a vulnerability which allows Remote Code Execution if you call "expressions.compile(userControlledInput)" where "userControlledInput" is text that comes from user input. The security of the package could be bypassed by using a more complex payload, using a ".constructor.constructor" technique. In terms of impact: If running angular-expressions in the browser, an attacker could run any browser script when the application code calls expressions.compile(userControlledInput). If running angular-expressions on the server, an attacker could run any Javascript expression, thus gaining Remote Code Execution. This is fixed in version 1.1.2 of angular-expressions A temporary workaround might be either to disable user-controlled input that will be fed into angular-expressions in your application or allow only following characters in the userControlledInput.
<p>Publish Date: 2021-02-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21277>CVE-2021-21277</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/peerigon/angular-expressions/security/advisories/GHSA-j6px-jwvv-vpwq">https://github.com/peerigon/angular-expressions/security/advisories/GHSA-j6px-jwvv-vpwq</a></p>
<p>Release Date: 2021-02-01</p>
<p>Fix Resolution: v1.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in angular expressions tgz autoclosed cve high severity vulnerability vulnerable library angular expressions tgz angular expressions as standalone module library home page a href path to dependency file thimble mozilla org services login webmaker org bower components webmaker login ux package json path to vulnerable library thimble mozilla org services login webmaker org bower components webmaker login ux node modules angular expressions package json dependency hierarchy x angular expressions tgz vulnerable library found in head commit a href vulnerability details angular expressions is angular s nicest part extracted as a standalone module for the browser and node in angular expressions before version there is a vulnerability which allows remote code execution if you call expressions compile usercontrolledinput where usercontrolledinput is text that comes from user input the security of the package could be bypassed by using a more complex payload using a constructor constructor technique in terms of impact if running angular expressions in the browser an attacker could run any browser script when the application code calls expressions compile usercontrolledinput if running angular expressions on the server an attacker could run any javascript expression thus gaining remote code execution this is fixed in version of angular expressions a temporary workaround might be either to disable user controlled input that will be fed into angular expressions in your application or allow only following characters in the usercontrolledinput publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
627
| 3,091,965,871
|
IssuesEvent
|
2015-08-26 15:34:27
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
opened
|
Киев МДА - Видача довідки про участь в приватизації житла державного житлового фонду
|
in process of creating
|
Описание процесса:
https://docs.google.com/document/d/1an1FVsgcxj6dAYNR-CRsq2gl04ALKEqbbImyWimCnmQ/edit?usp=sharing
https://drive.google.com/file/d/0B7KfbWa-Z0dedTNaaEJvRVVjTVE/view?usp=sharing
|
1.0
|
Киев МДА - Видача довідки про участь в приватизації житла державного житлового фонду - Описание процесса:
https://docs.google.com/document/d/1an1FVsgcxj6dAYNR-CRsq2gl04ALKEqbbImyWimCnmQ/edit?usp=sharing
https://drive.google.com/file/d/0B7KfbWa-Z0dedTNaaEJvRVVjTVE/view?usp=sharing
|
process
|
киев мда видача довідки про участь в приватизації житла державного житлового фонду описание процесса
| 1
|
312,447
| 9,547,017,684
|
IssuesEvent
|
2019-05-01 21:42:23
|
teambit/bit
|
https://api.github.com/repos/teambit/bit
|
closed
|
Create a component for each file in each sub-directory
|
area/add priority/medium type/feature
|
## Expected Behavior
This should create a component for each of the files in src and in its sub-directories, according to the documentation.
## Actual Behavior
error: one or more of the added components does not contain a main file.
please either use --id to group all added files as one component or use our DSL to define the main file dynamically.
## Steps to Reproduce the Problem
1. Create a file structure

2. bit add src/**/*
## Specifications
- Bit version: 13.0.4
- Node version: v9.9.0
- npm version: 6.5.0
- Platform: Macbook Pro
|
1.0
|
Create a component for each file in each sub-directory - ## Expected Behavior
This should create a component for each of the files in src and in its sub-directories, according to the documentation.
## Actual Behavior
error: one or more of the added components does not contain a main file.
please either use --id to group all added files as one component or use our DSL to define the main file dynamically.
## Steps to Reproduce the Problem
1. Create a file structure

2. bit add src/**/*
## Specifications
- Bit version: 13.0.4
- Node version: v9.9.0
- npm version: 6.5.0
- Platform: Macbook Pro
|
non_process
|
create a component for each file in each sub directory expected behavior this should create a component for each of the files in src and in its sub directories according to the documentation actual behavior error one or more of the added components does not contain a main file please either use id to group all added files as one component or use our dsl to define the main file dynamically steps to reproduce the problem create a file structure bit add src specifications bit version node version npm version platform macbook pro
| 0
|
1,210
| 3,715,400,505
|
IssuesEvent
|
2016-03-03 01:30:53
|
e107inc/e107
|
https://api.github.com/repos/e107inc/e107
|
closed
|
Possible issue in *_setup.php upgrade_required() method
|
bug testing required upgrade process 1.x to 2.x
|
After finishing the forum upgrade procedure, e107 still asks to update the forum.
This is because the upgrade_required() method is not working as intended.
If I change the forum_setup.php file to include this, it will still trigger an update request:
```php
function upgrade_required()
{
return false;
}
```
/cc @CaMer0n @SecretR
|
1.0
|
Possible issue in *_setup.php upgrade_required() method - After finishing the forum upgrade procedure, e107 still asks to update the forum.
This is because the upgrade_required() method is not working as intended.
If I change the forum_setup.php file to include this, it will still trigger an update request:
```php
function upgrade_required()
{
return false;
}
```
/cc @CaMer0n @SecretR
|
process
|
possible issue in setup php upgrade required method after finishing the forum upgrade procedure still asks to update the forum this is because the upgrade required method is not working as intended if i change the forum setup php file to include this it will still trigger an update request php function upgrade required return false cc secretr
| 1
|
21,810
| 30,316,465,397
|
IssuesEvent
|
2023-07-10 15:54:00
|
Parsl/parsl
|
https://api.github.com/repos/Parsl/parsl
|
closed
|
parsl auto weekly release does not update `stable` readthedocs
|
bug release_process
|
**Describe the bug**
Read the docs stable docs still point at 848e3a2bf3dc6eb8ed3efb7fd7e849b5c195d34c which is the last release in the non-automatic release series, 1.2.0, in January 2022.
**To Reproduce**
Look at readthedocs
**Expected behavior**
readthedocs versions should align with pypi release versions.
**Environment**
readthedocs
|
1.0
|
parsl auto weekly release does not update `stable` readthedocs - **Describe the bug**
Read the docs stable docs still point at 848e3a2bf3dc6eb8ed3efb7fd7e849b5c195d34c which is the last release in the non-automatic release series, 1.2.0, in January 2022.
**To Reproduce**
Look at readthedocs
**Expected behavior**
readthedocs versions should align with pypi release versions.
**Environment**
readthedocs
|
process
|
parsl auto weekly release does not update stable readthedocs describe the bug read the docs stable docs still point at which is the last release in the non automatic release series in january to reproduce look at readthedocs expected behavior readthedocs versions should align with pypi release versions environment readthedocs
| 1
|
154,319
| 19,712,255,467
|
IssuesEvent
|
2022-01-13 07:15:53
|
Shai-Demo-Org/JS-Demo
|
https://api.github.com/repos/Shai-Demo-Org/JS-Demo
|
closed
|
CVE-2017-16026 (Medium) detected in request-2.36.0.tgz, request-2.67.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2017-16026 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>request-2.36.0.tgz</b>, <b>request-2.67.0.tgz</b></p></summary>
<p>
<details><summary><b>request-2.36.0.tgz</b></p></summary>
<p>Simplified HTTP request client.</p>
<p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.36.0.tgz">https://registry.npmjs.org/request/-/request-2.36.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/zaproxy/node_modules/request/package.json</p>
<p>
Dependency Hierarchy:
- zaproxy-0.2.0.tgz (Root Library)
- :x: **request-2.36.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>request-2.67.0.tgz</b></p></summary>
<p>Simplified HTTP request client.</p>
<p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.67.0.tgz">https://registry.npmjs.org/request/-/request-2.67.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/grunt-retire/node_modules/request/package.json</p>
<p>
Dependency Hierarchy:
- grunt-retire-0.3.12.tgz (Root Library)
- :x: **request-2.67.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Shai-Demo-Org/JS-Demo/commit/d0b5974c3a448cfa882ca24599e23e40c6bac8c0">d0b5974c3a448cfa882ca24599e23e40c6bac8c0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Request is an http client. If a request is made using ```multipart```, and the body type is a ```number```, then the specified number of non-zero memory is passed in the body. This affects Request >=2.2.6 <2.47.0 || >2.51.0 <=2.67.0.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16026>CVE-2017-16026</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-16026">https://nvd.nist.gov/vuln/detail/CVE-2017-16026</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 2.47.1,2.67.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"request","packageVersion":"2.36.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"zaproxy:0.2.0;request:2.36.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.47.1,2.67.1","isBinary":false},{"packageType":"javascript/Node.js","packageName":"request","packageVersion":"2.67.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-retire:0.3.12;request:2.67.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.47.1,2.67.1","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2017-16026","vulnerabilityDetails":"Request is an http client. If a request is made using ```multipart```, and the body type is a ```number```, then the specified number of non-zero memory is passed in the body. This affects Request \u003e\u003d2.2.6 \u003c2.47.0 || \u003e2.51.0 \u003c\u003d2.67.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16026","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-16026 (Medium) detected in request-2.36.0.tgz, request-2.67.0.tgz - autoclosed - ## CVE-2017-16026 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>request-2.36.0.tgz</b>, <b>request-2.67.0.tgz</b></p></summary>
<p>
<details><summary><b>request-2.36.0.tgz</b></p></summary>
<p>Simplified HTTP request client.</p>
<p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.36.0.tgz">https://registry.npmjs.org/request/-/request-2.36.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/zaproxy/node_modules/request/package.json</p>
<p>
Dependency Hierarchy:
- zaproxy-0.2.0.tgz (Root Library)
- :x: **request-2.36.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>request-2.67.0.tgz</b></p></summary>
<p>Simplified HTTP request client.</p>
<p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.67.0.tgz">https://registry.npmjs.org/request/-/request-2.67.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/grunt-retire/node_modules/request/package.json</p>
<p>
Dependency Hierarchy:
- grunt-retire-0.3.12.tgz (Root Library)
- :x: **request-2.67.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Shai-Demo-Org/JS-Demo/commit/d0b5974c3a448cfa882ca24599e23e40c6bac8c0">d0b5974c3a448cfa882ca24599e23e40c6bac8c0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Request is an http client. If a request is made using ```multipart```, and the body type is a ```number```, then the specified number of non-zero memory is passed in the body. This affects Request >=2.2.6 <2.47.0 || >2.51.0 <=2.67.0.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16026>CVE-2017-16026</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-16026">https://nvd.nist.gov/vuln/detail/CVE-2017-16026</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 2.47.1,2.67.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"request","packageVersion":"2.36.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"zaproxy:0.2.0;request:2.36.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.47.1,2.67.1","isBinary":false},{"packageType":"javascript/Node.js","packageName":"request","packageVersion":"2.67.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-retire:0.3.12;request:2.67.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.47.1,2.67.1","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2017-16026","vulnerabilityDetails":"Request is an http client. If a request is made using ```multipart```, and the body type is a ```number```, then the specified number of non-zero memory is passed in the body. This affects Request \u003e\u003d2.2.6 \u003c2.47.0 || \u003e2.51.0 \u003c\u003d2.67.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16026","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in request tgz request tgz autoclosed cve medium severity vulnerability vulnerable libraries request tgz request tgz request tgz simplified http request client library home page a href path to dependency file package json path to vulnerable library node modules zaproxy node modules request package json dependency hierarchy zaproxy tgz root library x request tgz vulnerable library request tgz simplified http request client library home page a href path to dependency file package json path to vulnerable library node modules grunt retire node modules request package json dependency hierarchy grunt retire tgz root library x request tgz vulnerable library found in head commit a href found in base branch main vulnerability details request is an http client if a request is made using multipart and the body type is a number then the specified number of non zero memory is passed in the body this affects request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree zaproxy request isminimumfixversionavailable true minimumfixversion isbinary false packagetype javascript node js packagename request packageversion packagefilepaths istransitivedependency true dependencytree grunt retire request isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails request is an http client if a request is made using multipart and the body type is a number then the specified number of non zero memory is passed in the body this affects request vulnerabilityurl
| 0
|
83,715
| 16,358,352,237
|
IssuesEvent
|
2021-05-14 04:34:05
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
[tool_crash] ProcessException: Process exited abnormally:Command line invocation: /Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -listxcodebuild: error: Unable to read project 'Runner.xcodeproj'. Reason: Could not open workspace file at ~/projects/-Mobile/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata Command: /usr/bin/xcodebuild, OS error code: 74
|
P4 t: xcode tool
|
## Command
```
flutter build ios --release
```
## Steps to Reproduce
1. ...
2. ...
3. ...
## Logs
ProcessException: Process exited abnormally:
Command line invocation:
/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -list
xcodebuild: error: Unable to read project 'Runner.xcodeproj'.
Reason: Could not open workspace file at ~/projects/-Mobile/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata Command: /usr/bin/xcodebuild, OS error code: 74
```
#0 RunResult.throwException (package:flutter_tools/src/base/process.dart:172:5)
#1 _DefaultProcessUtils.run (package:flutter_tools/src/base/process.dart:323:19)
<asynchronous suspension>
#2 XcodeProjectInterpreter.getInfo (package:flutter_tools/src/ios/xcodeproj.dart:381:50)
#3 buildXcodeProject (package:flutter_tools/src/ios/mac.dart:122:78)
<asynchronous suspension>
#4 BuildIOSCommand.runCommand (package:flutter_tools/src/commands/build_ios.dart:93:43)
#5 _rootRunUnary (dart:async/zone.dart:1198:47)
#6 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
#7 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
#8 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
#9 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
#10 Future._completeWithValue (dart:async/future_impl.dart:529:5)
#11 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:40:15)
#12 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:311:13)
#13 ApplicationPackageStore.getPackageForPlatform (package:flutter_tools/src/application_package.dart)
#14 _rootRunUnary (dart:async/zone.dart:1198:47)
#15 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
#16 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
#17 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
#18 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
#19 Future._completeWithValue (dart:async/future_impl.dart:529:5)
#20 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:40:15)
#21 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:311:13)
#22 BuildableIOSApp.fromProject (package:flutter_tools/src/application_package.dart)
```
```
[✓] Flutter (Channel beta, 1.20.0, on Mac OS X 10.15.6 19G73, locale en-GB)
• Flutter version 1.20.0 at /Users/vjpranay/.asdf/installs/flutter/1.17.4
• Framework revision 916c3ac648 (7 days ago), 2020-08-01 09:01:12 -0700
• Engine revision d6ee1499c2
• Dart version 2.9.0 (build 2.9.0-21.10.beta)
[!] Android toolchain - develop for Android devices
• Android SDK at /usr/local/Caskroom/android-platform-tools/30.0.0
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, set ANDROID_HOME to that location.
You may also want to add it to your PATH environment variable.
✗ No valid Android SDK platforms found in /usr/local/Caskroom/android-platform-tools/30.0.0/platforms. Directory was empty.
• Try re-installing or updating your Android SDK,
visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions.
[✓] Xcode - develop for iOS and macOS (Xcode 11.5)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.5, Build version 11E608c
• CocoaPods version 1.9.3
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
[✓] Connected device (1 available)
• VJ’s iPhone (mobile) • 00008020-001514813433002E • ios • iOS 13.5
! Doctor found issues in 2 categories.
```
## Flutter Application Metadata
**Type**: app
**Version**: 1.0.0+1
**Material**: true
**Android X**: false
**Module**: false
**Plugin**: false
**Android package**: null
**iOS bundle identifier**: null
**Creation channel**: beta
**Creation framework version**: 659dc8129d4edb9166e9a0d600439d135740933f
|
1.0
|
[tool_crash] ProcessException: Process exited abnormally:Command line invocation: /Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -listxcodebuild: error: Unable to read project 'Runner.xcodeproj'. Reason: Could not open workspace file at ~/projects/-Mobile/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata Command: /usr/bin/xcodebuild, OS error code: 74 - ## Command
```
flutter build ios --release
```
## Steps to Reproduce
1. ...
2. ...
3. ...
## Logs
ProcessException: Process exited abnormally:
Command line invocation:
/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -list
xcodebuild: error: Unable to read project 'Runner.xcodeproj'.
Reason: Could not open workspace file at ~/projects/-Mobile/ios/Runner.xcodeproj/project.xcworkspace/contents.xcworkspacedata Command: /usr/bin/xcodebuild, OS error code: 74
```
#0 RunResult.throwException (package:flutter_tools/src/base/process.dart:172:5)
#1 _DefaultProcessUtils.run (package:flutter_tools/src/base/process.dart:323:19)
<asynchronous suspension>
#2 XcodeProjectInterpreter.getInfo (package:flutter_tools/src/ios/xcodeproj.dart:381:50)
#3 buildXcodeProject (package:flutter_tools/src/ios/mac.dart:122:78)
<asynchronous suspension>
#4 BuildIOSCommand.runCommand (package:flutter_tools/src/commands/build_ios.dart:93:43)
#5 _rootRunUnary (dart:async/zone.dart:1198:47)
#6 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
#7 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
#8 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
#9 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
#10 Future._completeWithValue (dart:async/future_impl.dart:529:5)
#11 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:40:15)
#12 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:311:13)
#13 ApplicationPackageStore.getPackageForPlatform (package:flutter_tools/src/application_package.dart)
#14 _rootRunUnary (dart:async/zone.dart:1198:47)
#15 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
#16 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
#17 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
#18 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
#19 Future._completeWithValue (dart:async/future_impl.dart:529:5)
#20 _AsyncAwaitCompleter.complete (dart:async-patch/async_patch.dart:40:15)
#21 _completeOnAsyncReturn (dart:async-patch/async_patch.dart:311:13)
#22 BuildableIOSApp.fromProject (package:flutter_tools/src/application_package.dart)
```
```
[✓] Flutter (Channel beta, 1.20.0, on Mac OS X 10.15.6 19G73, locale en-GB)
• Flutter version 1.20.0 at /Users/vjpranay/.asdf/installs/flutter/1.17.4
• Framework revision 916c3ac648 (7 days ago), 2020-08-01 09:01:12 -0700
• Engine revision d6ee1499c2
• Dart version 2.9.0 (build 2.9.0-21.10.beta)
[!] Android toolchain - develop for Android devices
• Android SDK at /usr/local/Caskroom/android-platform-tools/30.0.0
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, set ANDROID_HOME to that location.
You may also want to add it to your PATH environment variable.
✗ No valid Android SDK platforms found in /usr/local/Caskroom/android-platform-tools/30.0.0/platforms. Directory was empty.
• Try re-installing or updating your Android SDK,
visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions.
[✓] Xcode - develop for iOS and macOS (Xcode 11.5)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.5, Build version 11E608c
• CocoaPods version 1.9.3
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
[✓] Connected device (1 available)
• VJ’s iPhone (mobile) • 00008020-001514813433002E • ios • iOS 13.5
! Doctor found issues in 2 categories.
```
## Flutter Application Metadata
**Type**: app
**Version**: 1.0.0+1
**Material**: true
**Android X**: false
**Module**: false
**Plugin**: false
**Android package**: null
**iOS bundle identifier**: null
**Creation channel**: beta
**Creation framework version**: 659dc8129d4edb9166e9a0d600439d135740933f
|
non_process
|
processexception process exited abnormally command line invocation applications xcode app contents developer usr bin xcodebuild listxcodebuild error unable to read project runner xcodeproj reason could not open workspace file at projects mobile ios runner xcodeproj project xcworkspace contents xcworkspacedata command usr bin xcodebuild os error code command flutter build ios release steps to reproduce logs processexception process exited abnormally command line invocation applications xcode app contents developer usr bin xcodebuild list xcodebuild error unable to read project runner xcodeproj reason could not open workspace file at projects mobile ios runner xcodeproj project xcworkspace contents xcworkspacedata command usr bin xcodebuild os error code runresult throwexception package flutter tools src base process dart defaultprocessutils run package flutter tools src base process dart xcodeprojectinterpreter getinfo package flutter tools src ios xcodeproj dart buildxcodeproject package flutter tools src ios mac dart buildioscommand runcommand package flutter tools src commands build ios dart rootrununary dart async zone dart customzone rununary dart async zone dart futurelistener handlevalue dart async future impl dart future propagatetolisteners handlevaluecallback dart async future impl dart future propagatetolisteners dart async future impl dart future completewithvalue dart async future impl dart asyncawaitcompleter complete dart async patch async patch dart completeonasyncreturn dart async patch async patch dart applicationpackagestore getpackageforplatform package flutter tools src application package dart rootrununary dart async zone dart customzone rununary dart async zone dart futurelistener handlevalue dart async future impl dart future propagatetolisteners handlevaluecallback dart async future impl dart future propagatetolisteners dart async future impl dart future completewithvalue dart async future impl dart asyncawaitcompleter complete dart async patch async patch dart completeonasyncreturn dart async patch async patch dart buildableiosapp fromproject package flutter tools src application package dart flutter channel beta on mac os x locale en gb • flutter version at users vjpranay asdf installs flutter • framework revision days ago • engine revision • dart version build beta android toolchain develop for android devices • android sdk at usr local caskroom android platform tools ✗ unable to locate android sdk install android studio from on first launch it will assist you in installing the android sdk components or visit for detailed instructions if the android sdk has been installed to a custom location set android home to that location you may also want to add it to your path environment variable ✗ no valid android sdk platforms found in usr local caskroom android platform tools platforms directory was empty • try re installing or updating your android sdk visit for detailed instructions xcode develop for ios and macos xcode • xcode at applications xcode app contents developer • xcode build version • cocoapods version android studio not installed • android studio not found download from or visit for detailed instructions connected device available • vj’s iphone mobile • • ios • ios doctor found issues in categories flutter application metadata type app version material true android x false module false plugin false android package null ios bundle identifier null creation channel beta creation framework version
| 0
|
53,403
| 28,119,261,132
|
IssuesEvent
|
2023-03-31 13:08:20
|
ARK-Builders/arklib
|
https://api.github.com/repos/ARK-Builders/arklib
|
opened
|
Update method should return complete resources with their details
|
performance
|
`ResourceIndex` should pass complete resources with their details during `update`.
Right now, only `ResourceId`s are passed. This causes the library clients to reconstruct details again.
See `fun compute` in `Resource.kt` (https://github.com/ARK-Builders/arklib-android).
|
True
|
Update method should return complete resources with their details - `ResourceIndex` should pass complete resources with their details during `update`.
Right now, only `ResourceId`s are passed. This causes the library clients to reconstruct details again.
See `fun compute` in `Resource.kt` (https://github.com/ARK-Builders/arklib-android).
|
non_process
|
update method should return complete resources with their details resourceindex should pass complete resources with their details during update right now only resourceid s are passed this causes the library clients to reconstruct details again see fun compute in resource kt
| 0
|
214,226
| 16,552,700,186
|
IssuesEvent
|
2021-05-28 10:23:49
|
tweepy/tweepy
|
https://api.github.com/repos/tweepy/tweepy
|
closed
|
Automatically fill in method docstrings (and even new methods/parameters?) from the Twitter API Docs
|
Documentation RFC
|
I was inspired by @jackdied's PR: https://github.com/tweepy/tweepy/pull/259, which scrapes the docs, looking for mismatches with allowed_parameters (I don't think it works with the current docs/code).
This would allow Tweepy to be updated quickly whenever Twitter changes their API. It would also save us the work of documenting every single API method manually.
The downside is that dowloading all of the docs and scraping them can be slow, and the documentation HTML isn't versioned (obviously :smile:), so the script could break suddenly.
@joshthecoder: What are you thoughts?
|
1.0
|
Automatically fill in method docstrings (and even new methods/parameters?) from the Twitter API Docs - I was inspired by @jackdied's PR: https://github.com/tweepy/tweepy/pull/259, which scrapes the docs, looking for mismatches with allowed_parameters (I don't think it works with the current docs/code).
This would allow Tweepy to be updated quickly whenever Twitter changes their API. It would also save us the work of documenting every single API method manually.
The downside is that dowloading all of the docs and scraping them can be slow, and the documentation HTML isn't versioned (obviously :smile:), so the script could break suddenly.
@joshthecoder: What are you thoughts?
|
non_process
|
automatically fill in method docstrings and even new methods parameters from the twitter api docs i was inspired by jackdied s pr which scrapes the docs looking for mismatches with allowed parameters i don t think it works with the current docs code this would allow tweepy to be updated quickly whenever twitter changes their api it would also save us the work of documenting every single api method manually the downside is that dowloading all of the docs and scraping them can be slow and the documentation html isn t versioned obviously smile so the script could break suddenly joshthecoder what are you thoughts
| 0
|
153,466
| 13,505,564,851
|
IssuesEvent
|
2020-09-13 23:38:03
|
flareteam/flare-engine
|
https://api.github.com/repos/flareteam/flare-engine
|
closed
|
no VC++ solution?
|
documentation
|
You say this can be built with VC++ but there is no solution included anywhere?
|
1.0
|
no VC++ solution? - You say this can be built with VC++ but there is no solution included anywhere?
|
non_process
|
no vc solution you say this can be built with vc but there is no solution included anywhere
| 0
|
15,177
| 18,950,649,479
|
IssuesEvent
|
2021-11-18 14:52:33
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Please document which characters are valid in `counter` expressions and how they are treated
|
doc-enhancement devops/prod devops-cicd-process/tech
|
This is a follow-up issue for this issue, which seems to be muted because it's closed.
https://github.com/MicrosoftDocs/azure-devops-docs/issues/11323#issuecomment-946823878
The fix for the issue above just said that some unspecified set of special characters cannot be used, but didn't specify explicitly the valid set of characters, except that it mentioned that the `.` character cannot be used.
Can you please document which characters can be used for a prefix in the `counter` function as well as the behavior when invalid characters are used and whether they are replaced with some placeholder to form a prefix (i.e. `1.2.3` becomes `1_2_3`, if `_` is a placeholder) or removed (i.e. `1.2.3` becomes `123`) or left as unexpanded variable and used as is (i.e. `1.2.3+$(BLDN)` is left as a prefix `1.2.3+$(BLDN)` if `$(BLDN)` contains invalid characters).
There is no way to see the final prefix used inside `counter`, which makes troubleshooting of this issue extremely expensive - one needs to run a pipeline and see if the counter has been incremented or restarted for each sequence. It takes hours for all experiments.
It's very important to know which one of those 3 cases actually happens. Imagine if invalid characters, like `.` mentioned in the docs, are just removed. This would create cases with build numbers completely messed up. Imagine I use `1.12.4` as a prefix. If dots are removed, then `1.1.24` will produce the same sequence `1124` and the counter used as a build number between completely different version builds will mixed up.
Similarly, using a placeholder may produce unexpected results if different source sequences are replaced to use the same placeholder character, whatever the actual character is - it may be any Unicode character and this point would still stand.
Speaking of `.`, I tested a few sequences and it appears that `.`, `-`, `+` and a couple other punctuation characters work in that they are not removed and indeed participate in the prefix - each of the tests generated a new counter sequence. This makes the fix in the issue above questionable. However, I cannot possibly test if they are being replaced with some placeholders.
Can you please ask developers for this feature to describe exactly how `counter` treats the prefix sequence, what characters remain intact and which ones may be replaced in some way with placeholders?
`counter` in Azure DevOps is one of the best features that makes it possible to generate accurate build numbers for a number of build scenarios that other DevOps platforms lack. Documenting it well will go a long way.
Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops-2020#counter)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Please document which characters are valid in `counter` expressions and how they are treated - This is a follow-up issue for this issue, which seems to be muted because it's closed.
https://github.com/MicrosoftDocs/azure-devops-docs/issues/11323#issuecomment-946823878
The fix for the issue above just said that some unspecified set of special characters cannot be used, but didn't specify explicitly the valid set of characters, except that it mentioned that the `.` character cannot be used.
Can you please document which characters can be used for a prefix in the `counter` function as well as the behavior when invalid characters are used and whether they are replaced with some placeholder to form a prefix (i.e. `1.2.3` becomes `1_2_3`, if `_` is a placeholder) or removed (i.e. `1.2.3` becomes `123`) or left as unexpanded variable and used as is (i.e. `1.2.3+$(BLDN)` is left as a prefix `1.2.3+$(BLDN)` if `$(BLDN)` contains invalid characters).
There is no way to see the final prefix used inside `counter`, which makes troubleshooting of this issue extremely expensive - one needs to run a pipeline and see if the counter has been incremented or restarted for each sequence. It takes hours for all experiments.
It's very important to know which one of those 3 cases actually happens. Imagine if invalid characters, like `.` mentioned in the docs, are just removed. This would create cases with build numbers completely messed up. Imagine I use `1.12.4` as a prefix. If dots are removed, then `1.1.24` will produce the same sequence `1124` and the counter used as a build number between completely different version builds will mixed up.
Similarly, using a placeholder may produce unexpected results if different source sequences are replaced to use the same placeholder character, whatever the actual character is - it may be any Unicode character and this point would still stand.
Speaking of `.`, I tested a few sequences and it appears that `.`, `-`, `+` and a couple other punctuation characters work in that they are not removed and indeed participate in the prefix - each of the tests generated a new counter sequence. This makes the fix in the issue above questionable. However, I cannot possibly test if they are being replaced with some placeholders.
Can you please ask developers for this feature to describe exactly how `counter` treats the prefix sequence, what characters remain intact and which ones may be replaced in some way with placeholders?
`counter` in Azure DevOps is one of the best features that makes it possible to generate accurate build numbers for a number of build scenarios that other DevOps platforms lack. Documenting it well will go a long way.
Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops-2020#counter)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
please document which characters are valid in counter expressions and how they are treated this is a follow up issue for this issue which seems to be muted because it s closed the fix for the issue above just said that some unspecified set of special characters cannot be used but didn t specify explicitly the valid set of characters except that it mentioned that the character cannot be used can you please document which characters can be used for a prefix in the counter function as well as the behavior when invalid characters are used and whether they are replaced with some placeholder to form a prefix i e becomes if is a placeholder or removed i e becomes or left as unexpanded variable and used as is i e bldn is left as a prefix bldn if bldn contains invalid characters there is no way to see the final prefix used inside counter which makes troubleshooting of this issue extremely expensive one needs to run a pipeline and see if the counter has been incremented or restarted for each sequence it takes hours for all experiments it s very important to know which one of those cases actually happens imagine if invalid characters like mentioned in the docs are just removed this would create cases with build numbers completely messed up imagine i use as a prefix if dots are removed then will produce the same sequence and the counter used as a build number between completely different version builds will mixed up similarly using a placeholder may produce unexpected results if different source sequences are replaced to use the same placeholder character whatever the actual character is it may be any unicode character and this point would still stand speaking of i tested a few sequences and it appears that and a couple other punctuation characters work in that they are not removed and indeed participate in the prefix each of the tests generated a new counter sequence this makes the fix in the issue above questionable however i cannot possibly test if they are being replaced with some placeholders can you please ask developers for this feature to describe exactly how counter treats the prefix sequence what characters remain intact and which ones may be replaced in some way with placeholders counter in azure devops is one of the best features that makes it possible to generate accurate build numbers for a number of build scenarios that other devops platforms lack documenting it well will go a long way thanks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
54,352
| 13,616,058,125
|
IssuesEvent
|
2020-09-23 15:10:16
|
SAP/fundamental-styles
|
https://api.github.com/repos/SAP/fundamental-styles
|
closed
|
Defect Hunting Sep23
|
Defect Hunting
|
- [ ] Cozy Side Navigation Icon Active Underline

- [ ] Complex Cozy Side Nav focus

|
1.0
|
Defect Hunting Sep23 - - [ ] Cozy Side Navigation Icon Active Underline

- [ ] Complex Cozy Side Nav focus

|
non_process
|
defect hunting cozy side navigation icon active underline complex cozy side nav focus
| 0
|
14,270
| 17,225,640,440
|
IssuesEvent
|
2021-07-20 00:58:43
|
beyondhb1079/s4us
|
https://api.github.com/repos/beyondhb1079/s4us
|
closed
|
About page: update paragraphs
|
process
|
Currently the information is generated text with some sections. We should figure out what goes there. Separate issues for each section of it can be created if needed.
|
1.0
|
About page: update paragraphs - Currently the information is generated text with some sections. We should figure out what goes there. Separate issues for each section of it can be created if needed.
|
process
|
about page update paragraphs currently the information is generated text with some sections we should figure out what goes there separate issues for each section of it can be created if needed
| 1
|
20,578
| 27,240,046,553
|
IssuesEvent
|
2023-02-21 19:34:35
|
winter-telescope/winterdrp
|
https://api.github.com/repos/winter-telescope/winterdrp
|
opened
|
[FEATURE] Single Source Photometry
|
enhancement processors
|
**Is your feature request related to a problem? Please describe.**
As discussed on a previous call, we need a processor to do forced/PSF photometry for a particular source, and generate a one-source SourceTable rather than just sticking the info in the header.
That will enable us to use all downstream processors e.g exporting to Fritz etc.
|
1.0
|
[FEATURE] Single Source Photometry - **Is your feature request related to a problem? Please describe.**
As discussed on a previous call, we need a processor to do forced/PSF photometry for a particular source, and generate a one-source SourceTable rather than just sticking the info in the header.
That will enable us to use all downstream processors e.g exporting to Fritz etc.
|
process
|
single source photometry is your feature request related to a problem please describe as discussed on a previous call we need a processor to do forced psf photometry for a particular source and generate a one source sourcetable rather than just sticking the info in the header that will enable us to use all downstream processors e g exporting to fritz etc
| 1
|
24,189
| 16,992,008,220
|
IssuesEvent
|
2021-06-30 22:02:58
|
timescale/timescaledb-toolkit
|
https://api.github.com/repos/timescale/timescaledb-toolkit
|
closed
|
Tracking Issue: CI/CD setup
|
Infrastructure
|
Basic tasks needed for CI/CD:
- [x] Test PG 13 Job: just like [the pg12 job](https://github.com/timescale/timescale-analytics/blob/7f44eeeafe6be0df64aaf92b3bdc19c1e50566d0/.github/workflows/ci.yml#L12-L47) but for pg13
- [x] Update our [pgx image](https://github.com/timescale/timescale-analytics/blob/7f44eeeafe6be0df64aaf92b3bdc19c1e50566d0/docker/Dockerfile) to also include TimescaleDB in the relevant postgres instances (12 and 13?)
- [x] Create dockerfile for building our image
- [x] Set up automatic builds&pushes for our image
- [ ] stretch: only run tests when things under test actually change
|
1.0
|
Tracking Issue: CI/CD setup - Basic tasks needed for CI/CD:
- [x] Test PG 13 Job: just like [the pg12 job](https://github.com/timescale/timescale-analytics/blob/7f44eeeafe6be0df64aaf92b3bdc19c1e50566d0/.github/workflows/ci.yml#L12-L47) but for pg13
- [x] Update our [pgx image](https://github.com/timescale/timescale-analytics/blob/7f44eeeafe6be0df64aaf92b3bdc19c1e50566d0/docker/Dockerfile) to also include TimescaleDB in the relevant postgres instances (12 and 13?)
- [x] Create dockerfile for building our image
- [x] Set up automatic builds&pushes for our image
- [ ] stretch: only run tests when things under test actually change
|
non_process
|
tracking issue ci cd setup basic tasks needed for ci cd test pg job just like but for update our to also include timescaledb in the relevant postgres instances and create dockerfile for building our image set up automatic builds pushes for our image stretch only run tests when things under test actually change
| 0
|
17,528
| 23,341,021,216
|
IssuesEvent
|
2022-08-09 14:02:35
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
Wheels test on macOS on Python 3.10 fails
|
process + tools
|
There is no `cyvcf2` wheel for this combination, so we need to fix that. See https://github.com/brentp/cyvcf2/issues/245
Failure: https://github.com/pystatgen/sgkit/runs/7634527135?check_suite_focus=true
|
1.0
|
Wheels test on macOS on Python 3.10 fails - There is no `cyvcf2` wheel for this combination, so we need to fix that. See https://github.com/brentp/cyvcf2/issues/245
Failure: https://github.com/pystatgen/sgkit/runs/7634527135?check_suite_focus=true
|
process
|
wheels test on macos on python fails there is no wheel for this combination so we need to fix that see failure
| 1
|
200,540
| 15,111,883,716
|
IssuesEvent
|
2021-02-08 21:03:20
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Upgrade Locust to 1.4.1
|
Epic QA VSP-testing-team testing
|
## User Story
As a VFS team engineer, I need an updated version of Locust in order to perform load testing on my endpoints in vets-api. Python 2.X is being sunset and it is becoming more and more difficult to maintain an environment that supports the very old version of Locust that is currently prescribed in the devops repo.
## Expected Behavior
Locust 1.4.1 configured for use in the devops repository.
## Acceptance Criteria
- [x] Locust 1.4.1 is configured in devops/loadtest
- [x] Existing load tests contained devops/loadtest have been refactored to work with Locust 1.4.1
|
2.0
|
Upgrade Locust to 1.4.1 - ## User Story
As a VFS team engineer, I need an updated version of Locust in order to perform load testing on my endpoints in vets-api. Python 2.X is being sunset and it is becoming more and more difficult to maintain an environment that supports the very old version of Locust that is currently prescribed in the devops repo.
## Expected Behavior
Locust 1.4.1 configured for use in the devops repository.
## Acceptance Criteria
- [x] Locust 1.4.1 is configured in devops/loadtest
- [x] Existing load tests contained devops/loadtest have been refactored to work with Locust 1.4.1
|
non_process
|
upgrade locust to user story as a vfs team engineer i need an updated version of locust in order to perform load testing on my endpoints in vets api python x is being sunset and it is becoming more and more difficult to maintain an environment that supports the very old version of locust that is currently prescribed in the devops repo expected behavior locust configured for use in the devops repository acceptance criteria locust is configured in devops loadtest existing load tests contained devops loadtest have been refactored to work with locust
| 0
|
157,793
| 19,984,268,399
|
IssuesEvent
|
2022-01-30 12:08:03
|
davidjj58/Home-assignment
|
https://api.github.com/repos/davidjj58/Home-assignment
|
opened
|
CVE-2019-17267 (High) detected in jackson-databind-2.8.7.jar
|
security vulnerability
|
## CVE-2019-17267 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /itory/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/davidjj58/Home-assignment/commit/77c895db92c590fab8e72e59e8be84d821d75fb6">77c895db92c590fab8e72e59e8be84d821d75fb6</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to net.sf.ehcache.hibernate.EhcacheJtaTransactionManagerLookup.
<p>Publish Date: 2019-10-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17267>CVE-2019-17267</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2460">https://github.com/FasterXML/jackson-databind/issues/2460</a></p>
<p>Release Date: 2019-10-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.8.11.5,2.9.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-17267 (High) detected in jackson-databind-2.8.7.jar - ## CVE-2019-17267 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /itory/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/davidjj58/Home-assignment/commit/77c895db92c590fab8e72e59e8be84d821d75fb6">77c895db92c590fab8e72e59e8be84d821d75fb6</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to net.sf.ehcache.hibernate.EhcacheJtaTransactionManagerLookup.
<p>Publish Date: 2019-10-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17267>CVE-2019-17267</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2460">https://github.com/FasterXML/jackson-databind/issues/2460</a></p>
<p>Release Date: 2019-10-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.8.11.5,2.9.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library itory com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to net sf ehcache hibernate ehcachejtatransactionmanagerlookup publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
11,547
| 14,432,059,615
|
IssuesEvent
|
2020-12-07 00:42:31
|
thegooddocsproject/templates
|
https://api.github.com/repos/thegooddocsproject/templates
|
closed
|
Align Contributors' Guides
|
improve process
|
**Describe the bug**
We have several locations with Contributors' Guides, let's align them in the correct place:
* `.github` directory: This is the correct place, so that the repo shows that we have a Contributors' Guide in the Insights tab. However, doing a PR to this directory is a little tricky.
* Templates repo `CONTRIBUTING.md`: This has the right filename, but is only a stub.
* Incubator repo `base-template/contributors-guide.md`: This has content, but is in the wrong location.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to different locations in the project
1. See different contributors' Guides
1. Be confused
**Expected behavior**
A single guide with comprehensive information, in the `.github` directory, and available in the GH Insights tab.
|
1.0
|
Align Contributors' Guides - **Describe the bug**
We have several locations with Contributors' Guides, let's align them in the correct place:
* `.github` directory: This is the correct place, so that the repo shows that we have a Contributors' Guide in the Insights tab. However, doing a PR to this directory is a little tricky.
* Templates repo `CONTRIBUTING.md`: This has the right filename, but is only a stub.
* Incubator repo `base-template/contributors-guide.md`: This has content, but is in the wrong location.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to different locations in the project
1. See different contributors' Guides
1. Be confused
**Expected behavior**
A single guide with comprehensive information, in the `.github` directory, and available in the GH Insights tab.
|
process
|
align contributors guides describe the bug we have several locations with contributors guides let s align them in the correct place github directory this is the correct place so that the repo shows that we have a contributors guide in the insights tab however doing a pr to this directory is a little tricky templates repo contributing md this has the right filename but is only a stub incubator repo base template contributors guide md this has content but is in the wrong location to reproduce steps to reproduce the behavior go to different locations in the project see different contributors guides be confused expected behavior a single guide with comprehensive information in the github directory and available in the gh insights tab
| 1
|
17,233
| 22,947,299,987
|
IssuesEvent
|
2022-07-19 02:12:26
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[BUG][FLINK-27881] The key(String) in PulsarMessageBuilder returns null
|
compute/data-processing type/bug
|
This is a minor typo bug in PulsarMessageBuilder. The PulsarMessageBuild.key(String) always return null, which might cause NPE in the later call.
|
1.0
|
[BUG][FLINK-27881] The key(String) in PulsarMessageBuilder returns null - This is a minor typo bug in PulsarMessageBuilder. The PulsarMessageBuild.key(String) always return null, which might cause NPE in the later call.
|
process
|
the key string in pulsarmessagebuilder returns null this is a minor typo bug in pulsarmessagebuilder the pulsarmessagebuild key string always return null which might cause npe in the later call
| 1
|
81,326
| 23,441,069,106
|
IssuesEvent
|
2022-08-15 14:55:40
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Build 20 for 8.4 with status FAILURE
|
automation ci-reported Team:Elastic-Agent-Data-Plane build-failures
|
## :broken_heart: Build Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//pipeline) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//tests) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//changes) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//artifacts) [](http://beats_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-15T08:43:45.964Z&rangeTo=2022-08-15T09:03:45.964Z&transactionName=Beats/beats/8.4&transactionType=job&latencyAggregationType=avg&traceId=fe8b569f179bc37a147ab1305ba1215f&transactionId=e8da1c3d72cc2c7a)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-08-15T08:53:45.964+0000
* Duration: 72 min 59 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 0 |
| Passed | 23797 |
| Skipped | 2116 |
| Total | 25913 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
##### `Docker login`
<ul>
<li>Took 0 min 6 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/20/steps/20330/log/?start=0">here</a></li>
<li>Description: <code> set +x if command -v host 2>&1 > /dev/null; then host docker.io 2>&1 > /dev/null fi if command -v dig 2>&1 > /dev/null; then dig docker.io 2>&1 > /dev/null fi docker login -u "${DOCKER_USER}" -p "${DOCKER_PASSWORD}" "docker.io" 2>/dev/null </code></l1>
</ul>
##### `Docker login`
<ul>
<li>Took 0 min 15 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/20/steps/20532/log/?start=0">here</a></li>
<li>Description: <code> set +x if command -v host 2>&1 > /dev/null; then host docker.io 2>&1 > /dev/null fi if command -v dig 2>&1 > /dev/null; then dig docker.io 2>&1 > /dev/null fi docker login -u "${DOCKER_USER}" -p "${DOCKER_PASSWORD}" "docker.io" 2>/dev/null </code></l1>
</ul>
##### `Docker login`
<ul>
<li>Took 0 min 15 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/20/steps/20833/log/?start=0">here</a></li>
<li>Description: <code> set +x if command -v host 2>&1 > /dev/null; then host docker.io 2>&1 > /dev/null fi if command -v dig 2>&1 > /dev/null; then dig docker.io 2>&1 > /dev/null fi docker login -u "${DOCKER_USER}" -p "${DOCKER_PASSWORD}" "docker.io" 2>/dev/null </code></l1>
</ul>
</p>
</details>
|
1.0
|
Build 20 for 8.4 with status FAILURE -
## :broken_heart: Build Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//pipeline) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//tests) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//changes) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//artifacts) [](http://beats_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-15T08:43:45.964Z&rangeTo=2022-08-15T09:03:45.964Z&transactionName=Beats/beats/8.4&transactionType=job&latencyAggregationType=avg&traceId=fe8b569f179bc37a147ab1305ba1215f&transactionId=e8da1c3d72cc2c7a)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-08-15T08:53:45.964+0000
* Duration: 72 min 59 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 0 |
| Passed | 23797 |
| Skipped | 2116 |
| Total | 25913 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.4/detail/8.4/20//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
##### `Docker login`
<ul>
<li>Took 0 min 6 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/20/steps/20330/log/?start=0">here</a></li>
<li>Description: <code> set +x if command -v host 2>&1 > /dev/null; then host docker.io 2>&1 > /dev/null fi if command -v dig 2>&1 > /dev/null; then dig docker.io 2>&1 > /dev/null fi docker login -u "${DOCKER_USER}" -p "${DOCKER_PASSWORD}" "docker.io" 2>/dev/null </code></l1>
</ul>
##### `Docker login`
<ul>
<li>Took 0 min 15 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/20/steps/20532/log/?start=0">here</a></li>
<li>Description: <code> set +x if command -v host 2>&1 > /dev/null; then host docker.io 2>&1 > /dev/null fi if command -v dig 2>&1 > /dev/null; then dig docker.io 2>&1 > /dev/null fi docker login -u "${DOCKER_USER}" -p "${DOCKER_PASSWORD}" "docker.io" 2>/dev/null </code></l1>
</ul>
##### `Docker login`
<ul>
<li>Took 0 min 15 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.4/runs/20/steps/20833/log/?start=0">here</a></li>
<li>Description: <code> set +x if command -v host 2>&1 > /dev/null; then host docker.io 2>&1 > /dev/null fi if command -v dig 2>&1 > /dev/null; then dig docker.io 2>&1 > /dev/null fi docker login -u "${DOCKER_USER}" -p "${DOCKER_PASSWORD}" "docker.io" 2>/dev/null </code></l1>
</ul>
</p>
</details>
|
non_process
|
build for with status failure broken heart build failed the below badges are clickable and redirect to their specific view in the ci or docs expand to view the summary build stats start time duration min sec test stats test tube test results failed passed skipped total steps errors expand to view the steps failures docker login took min sec view more details a href description set x if command v host dev null then host docker io dev null fi if command v dig dev null then dig docker io dev null fi docker login u docker user p docker password docker io dev null docker login took min sec view more details a href description set x if command v host dev null then host docker io dev null fi if command v dig dev null then dig docker io dev null fi docker login u docker user p docker password docker io dev null docker login took min sec view more details a href description set x if command v host dev null then host docker io dev null fi if command v dig dev null then dig docker io dev null fi docker login u docker user p docker password docker io dev null
| 0
|
35,383
| 7,941,116,666
|
IssuesEvent
|
2018-07-10 02:33:46
|
tsuruclient/tsuru
|
https://api.github.com/repos/tsuruclient/tsuru
|
closed
|
マルチスレッド化
|
code problem enhancement specification
|
こんにちは。マルチスレッド化です。Web Worker APIが存在し、マルチスレッドにできるので、[Thread.js](https://threadjs.deep-rain.com/)等を使うか、あるいはマルチスレッド化したい部分をElectron(Main Process)側で処理をするかのどちらかにしたい。
参考資料
* https://scrapbox.io/shokai/WebWorker%E3%82%92production%E3%81%A7%E4%BD%BF%E3%81%A3%E3%81%A6%E3%82%8B%E8%A9%B1
* https://www.deep-rain.com/programming/javascript/728
|
1.0
|
マルチスレッド化 - こんにちは。マルチスレッド化です。Web Worker APIが存在し、マルチスレッドにできるので、[Thread.js](https://threadjs.deep-rain.com/)等を使うか、あるいはマルチスレッド化したい部分をElectron(Main Process)側で処理をするかのどちらかにしたい。
参考資料
* https://scrapbox.io/shokai/WebWorker%E3%82%92production%E3%81%A7%E4%BD%BF%E3%81%A3%E3%81%A6%E3%82%8B%E8%A9%B1
* https://www.deep-rain.com/programming/javascript/728
|
non_process
|
マルチスレッド化 こんにちは。マルチスレッド化です。web worker apiが存在し、マルチスレッドにできるので、 process 側で処理をするかのどちらかにしたい。 参考資料
| 0
|
6,792
| 9,923,200,992
|
IssuesEvent
|
2019-07-01 06:29:14
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[Tabs] Verify/fix layout for safeAreaInsets in Tab Bar View
|
Client-blocking [Tabs] type:Process
|
This was filed as an internal issue. If you are a Googler, please visit [b/136175944](http://b/136175944) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/136175944](http://b/136175944)
|
1.0
|
[Tabs] Verify/fix layout for safeAreaInsets in Tab Bar View - This was filed as an internal issue. If you are a Googler, please visit [b/136175944](http://b/136175944) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/136175944](http://b/136175944)
|
process
|
verify fix layout for safeareainsets in tab bar view this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
| 1
|
4,101
| 7,049,804,408
|
IssuesEvent
|
2018-01-03 00:36:13
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
cpu#TimesStat should be more precise, may be time.Duration
|
os:windows package:process
|
I need more precise time process#Times() return, second is too rough. time.Duration may be a better choice.
|
1.0
|
cpu#TimesStat should be more precise, may be time.Duration - I need more precise time process#Times() return, second is too rough. time.Duration may be a better choice.
|
process
|
cpu timesstat should be more precise may be time duration i need more precise time process times return second is too rough time duration may be a better choice
| 1
|
20,462
| 27,128,770,449
|
IssuesEvent
|
2023-02-16 08:11:32
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Cleanup flags and code paths for legacy Python version api and semantics
|
P3 type: process team-Rules-Python stale
|
Tracking bug for deleting `--incompatible_remove_old_python_version_api`, `--incompatible_allow_python_version_transitions`, and the code supporting these flags.
The flags are being flipped in Bazel 0.25. Google still uses these flags internally, but we expect to be migrated for them soon as well, so this might happen in the near term (O(weeks)).
Corresponds to implemented feature #6583.
|
1.0
|
Cleanup flags and code paths for legacy Python version api and semantics - Tracking bug for deleting `--incompatible_remove_old_python_version_api`, `--incompatible_allow_python_version_transitions`, and the code supporting these flags.
The flags are being flipped in Bazel 0.25. Google still uses these flags internally, but we expect to be migrated for them soon as well, so this might happen in the near term (O(weeks)).
Corresponds to implemented feature #6583.
|
process
|
cleanup flags and code paths for legacy python version api and semantics tracking bug for deleting incompatible remove old python version api incompatible allow python version transitions and the code supporting these flags the flags are being flipped in bazel google still uses these flags internally but we expect to be migrated for them soon as well so this might happen in the near term o weeks corresponds to implemented feature
| 1
|
12,200
| 14,742,534,027
|
IssuesEvent
|
2021-01-07 12:27:47
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
CC Payments Down
|
anc-process anp-1.5 ant-support has attachment
|
In GitLab by @kdjstudios on Apr 30, 2019, 12:42
**Submitted by:** Multiple
**Helpdesk:**
* http://www.servicedesk.answernet.com/profiles/ticket/2019-04-30-29178
* http://www.servicedesk.answernet.com/profiles/ticket/2019-04-30-53841
**Server:** Internal
**Client/Site:** Multiple
**Account:** Multiple
**Issue:**
David wrote:
> We have been trying to process some credit card payments through SAB with no luck, I have had my AGM attempt with his credentials. However, he receives the same error, please find the screenshot below;
Trawana wrote:
> Are there any reported issues with processing credit card payments in SAB? I have attempted to process payments for the last hour and receiving the message “We’re sorry, but something went wrong.”

|
1.0
|
CC Payments Down - In GitLab by @kdjstudios on Apr 30, 2019, 12:42
**Submitted by:** Multiple
**Helpdesk:**
* http://www.servicedesk.answernet.com/profiles/ticket/2019-04-30-29178
* http://www.servicedesk.answernet.com/profiles/ticket/2019-04-30-53841
**Server:** Internal
**Client/Site:** Multiple
**Account:** Multiple
**Issue:**
David wrote:
> We have been trying to process some credit card payments through SAB with no luck, I have had my AGM attempt with his credentials. However, he receives the same error, please find the screenshot below;
Trawana wrote:
> Are there any reported issues with processing credit card payments in SAB? I have attempted to process payments for the last hour and receiving the message “We’re sorry, but something went wrong.”

|
process
|
cc payments down in gitlab by kdjstudios on apr submitted by multiple helpdesk server internal client site multiple account multiple issue david wrote we have been trying to process some credit card payments through sab with no luck i have had my agm attempt with his credentials however he receives the same error please find the screenshot below trawana wrote are there any reported issues with processing credit card payments in sab i have attempted to process payments for the last hour and receiving the message “we’re sorry but something went wrong ” uploads image png
| 1
|
10,764
| 4,821,341,272
|
IssuesEvent
|
2016-11-05 08:54:34
|
GoldenCheetah/GoldenCheetah
|
https://api.github.com/repos/GoldenCheetah/GoldenCheetah
|
closed
|
DSO missing from command line
|
build error not a bug
|
Debian 8.6 x64
$ git clone ...
$ make clean; qmake; make;
....
`g++ -m64 -Wl,-O1 -Wl,-rpath-link,/usr/lib/x86_64-linux-gnu -o GoldenCheetah qtsoap.o QTFullScreen.o qtsegmentcontrol.o VideoWindow.o TwitterDialog.o Dropbox.o GoogleDrive.o MonarkController.o MonarkConnection.o Kettler.o KettlerController.o KettlerConnection.o ANTChannel.o ANT.o ANTlocalController.o ANTLogger.o ANTMessage.o Aerolab.o AerolabWindow.o AllPlot.o AllPlotInterval.o AllPlotSlopeCurve.o AllPlotWindow.o BingMap.o BlankState.o ChartBar.o ChartSettings.o CPPlot.o CpPlotCurve.o CriticalPowerWindow.o ExhaustionDialog.o GcOverlayWidget.o GcPane.o GoldenCheetah.o GoogleMapControl.o HistogramWindow.o HomeWindow.o HrPwPlot.o HrPwWindow.o IndendPlotMarker.o IntervalSummaryWindow.o LogTimeScaleDraw.o LTMCanvasPicker.o LTMChartParser.o LTMOutliers.o LTMPlot.o LTMPopup.o LTMSettings.o LTMTool.o LTMTrend.o LTMWindow.o MetadataWindow.o MUPlot.o MUWidget.o PfPvPlot.o PfPvWindow.o PowerHist.o ReferenceLineDialog.o RideEditor.o RideMapWindow.o RideSummaryWindow.o ScatterPlot.o ScatterWindow.o SmallPlot.o SummaryWindow.o TreeMapPlot.o TreeMapWindow.o RideWindow.o CalendarDownload.o FileStore.o LocalFileStore.o OAuthDialog.o ShareDialog.o SportPlusHealthUploader.o TPDownload.o TPDownloadDialog.o TPUpload.o TPUploadDialog.o TrainingstagebuchUploader.o VeloHeroUploader.o WithingsDownload.o Athlete.o Context.o DataFilter.o FreeSearch.o GcUpgrade.o IdleTimer.o IntervalItem.o main.o NamedSearch.o RideCache.o RideCacheModel.o RideItem.o Route.o RouteParser.o Season.o SeasonParser.o Settings.o Specification.o TimeUtils.o Units.o UserData.o Utils.o AthleteBackup.o Bin2RideFile.o BinRideFile.o CommPort.o Computrainer3dpFile.o CsvRideFile.o DataProcessor.o Device.o FitlogParser.o FitlogRideFile.o FitRideFile.o FixDeriveDistance.o FixDeriveHeadwind.o FixDerivePower.o FixDeriveTorque.o FixElevation.o FixLapSwim.o FixFreewheeling.o FixGaps.o FixGPS.o FixRunningCadence.o FixRunningPower.o FixHRSpikes.o FixMoxy.o FixPower.o FixSmO2.o FixSpeed.o FixSpikes.o FixTorque.o GcRideFile.o GpxParser.o GpxRideFile.o JouleDevice.o LapsEditor.o MacroDevice.o ManualRideFile.o MoxyDevice.o PolarRideFile.o PowerTapDevice.o PowerTapUtil.o PwxRideFile.o QuarqParser.o QuarqRideFile.o RawRideFile.o RideAutoImportConfig.o RideFileCache.o RideFileCommand.o RideFile.o RideFileTableModel.o Serial.o SlfParser.o SlfRideFile.o SmfParser.o SmfRideFile.o SmlParser.o SmlRideFile.o SrdRideFile.o SrmRideFile.o SyncRideFile.o TacxCafRideFile.o TcxParser.o TcxRideFile.o TxtRideFile.o WkoRideFile.o XDataDialog.o XDataTableModel.o AboutDialog.o AddIntervalDialog.o AnalysisSidebar.o ChooseCyclistDialog.o ColorButton.o Colors.o CompareDateRange.o CompareInterval.o ComparePane.o ConfigDialog.o DiarySidebar.o DragBar.o EstimateCPDialog.o GcCrashDialog.o GcScopeBar.o GcSideBarItem.o GcToolBar.o GcWindowLayout.o GcWindowRegistry.o GenerateHeatMapDialog.o GProgressDialog.o HelpWhatsThis.o HelpWindow.o IntervalTreeView.o LTMSidebar.o MainWindow.o NewCyclistDialog.o Pages.o RideNavigator.o SaveDialogs.o SearchBox.o SearchFilterBox.o SolveCPDialog.o Tab.o TabView.o ToolsRhoEstimator.o Views.o BatchExportDialog.o DownloadRideDialog.o ManualRideDialog.o EditUserMetricDialog.o MergeActivityWizard.o RideImportWizard.o SplitActivityWizard.o SolverDisplay.o aBikeScore.o aCoggan.o AerobicDecoupling.o BasicRideMetrics.o BikeScore.o Coggan.o CPSolver.o DanielsPoints.o ExtendedCriticalPower.o GOVSS.o HrTimeInZone.o HrZones.o LeftRightBalance.o PaceTimeInZone.o PaceZones.o PDModel.o PeakPace.o PeakPower.o PMCData.o RideMetadata.o RideMetric.o RunMetrics.o SwimMetrics.o SpecialFields.o Statistic.o SustainMetric.o SwimScore.o TimeInZone.o TRIMPPoints.o UserMetric.o UserMetricParser.o VDOTCalculator.o VDOT.o WattsPerKilogram.o WPrime.o Zones.o codeeditor.o mvjson.o qwt_plot_gapped_curve.o qxtspanslider.o qxtstringspinbox.o zip.o AddDeviceWizard.o ComputrainerController.o Computrainer.o DeviceConfiguration.o DeviceTypes.o DialWindow.o ErgDB.o ErgDBDownloadDialog.o ErgFile.o ErgFilePlot.o Library.o LibraryParser.o MeterWidget.o NullController.o RealtimeController.o RealtimeData.o RealtimePlot.o RealtimePlotWindow.o RemoteControl.o SpinScanPlot.o SpinScanPlotWindow.o SpinScanPolarPlot.o TrainBottom.o TrainDB.o TrainSidebar.o VideoLayoutParser.o VideoSyncFile.o WorkoutPlotWindow.o WorkoutWidget.o WorkoutWidgetItems.o WorkoutWindow.o WorkoutWizard.o ZwoParser.o qrc_application.o qrc_RideWindow.o moc_qtsoap.o moc_QTFullScreen.o moc_qtsegmentcontrol.o moc_VideoWindow.o moc_TwitterDialog.o moc_Dropbox.o moc_GoogleDrive.o moc_MonarkConnection.o moc_Kettler.o moc_KettlerConnection.o moc_ANTChannel.o moc_ANT.o moc_ANTlocalController.o moc_ANTLogger.o moc_Aerolab.o moc_AerolabWindow.o moc_AllPlot.o moc_AllPlotInterval.o moc_AllPlotWindow.o moc_BingMap.o moc_BlankState.o moc_ChartBar.o moc_ChartSettings.o moc_CPPlot.o moc_CriticalPowerWindow.o moc_ExhaustionDialog.o moc_GcOverlayWidget.o moc_GcPane.o moc_GoldenCheetah.o moc_GoogleMapControl.o moc_HistogramWindow.o moc_HomeWindow.o moc_HrPwPlot.o moc_HrPwWindow.o moc_IntervalSummaryWindow.o moc_LTMCanvasPicker.o moc_LTMChartParser.o moc_LTMPlot.o moc_LTMPopup.o moc_LTMSettings.o moc_LTMTool.o moc_LTMWindow.o moc_MetadataWindow.o moc_MUPlot.o moc_MUWidget.o moc_PfPvPlot.o moc_PfPvWindow.o moc_PowerHist.o moc_ReferenceLineDialog.o moc_RideEditor.o moc_RideMapWindow.o moc_RideSummaryWindow.o moc_ScatterPlot.o moc_ScatterWindow.o moc_SmallPlot.o moc_SummaryWindow.o moc_TreeMapPlot.o moc_TreeMapWindow.o moc_RideWindow.o moc_CalendarDownload.o moc_FileStore.o moc_LocalFileStore.o moc_OAuthDialog.o moc_ShareDialog.o moc_SportPlusHealthUploader.o moc_TPDownloadDialog.o moc_TPDownload.o moc_TPUploadDialog.o moc_TPUpload.o moc_TrainingstagebuchUploader.o moc_VeloHeroUploader.o moc_WithingsDownload.o moc_Athlete.o moc_Context.o moc_DataFilter.o moc_FreeSearch.o moc_GcCalendarModel.o moc_GcUpgrade.o moc_IdleTimer.o moc_IntervalItem.o moc_NamedSearch.o moc_RideCache.o moc_RideCacheModel.o moc_RideItem.o moc_Route.o moc_Season.o moc_TimeUtils.o moc_UserData.o moc_AthleteBackup.o moc_DataProcessor.o moc_Device.o moc_LapsEditor.o moc_RideAutoImportConfig.o moc_RideFileCommand.o moc_RideFile.o moc_RideFileTableModel.o moc_XDataDialog.o moc_XDataTableModel.o moc_AboutDialog.o moc_AddIntervalDialog.o moc_AnalysisSidebar.o moc_ChooseCyclistDialog.o moc_ColorButton.o moc_Colors.o moc_ComparePane.o moc_ConfigDialog.o moc_DiarySidebar.o moc_DragBar.o moc_EstimateCPDialog.o moc_GcCrashDialog.o moc_GcScopeBar.o moc_GcSideBarItem.o moc_GcToolBar.o moc_GenerateHeatMapDialog.o moc_GProgressDialog.o moc_HelpWhatsThis.o moc_HelpWindow.o moc_IntervalTreeView.o moc_LTMSidebar.o moc_MainWindow.o moc_NewCyclistDialog.o moc_Pages.o moc_RideNavigator.o moc_RideNavigatorProxy.o moc_SaveDialogs.o moc_SearchBox.o moc_SearchFilterBox.o moc_SolveCPDialog.o moc_Tab.o moc_TabView.o moc_ToolsRhoEstimator.o moc_Views.o moc_BatchExportDialog.o moc_DownloadRideDialog.o moc_ManualRideDialog.o moc_MergeActivityWizard.o moc_RideImportWizard.o moc_SplitActivityWizard.o moc_SolverDisplay.o moc_CPSolver.o moc_HrZones.o moc_PaceZones.o moc_PDModel.o moc_PMCData.o moc_RideMetadata.o moc_UserMetricSettings.o moc_VDOTCalculator.o moc_Zones.o moc_codeeditor.o moc_qxtspanslider.o moc_qxtspanslider_p.o moc_qxtstringspinbox.o moc_AddDeviceWizard.o moc_ComputrainerController.o moc_DialWindow.o moc_ErgDBDownloadDialog.o moc_ErgDB.o moc_ErgFilePlot.o moc_Library.o moc_NullController.o moc_RealtimeController.o moc_RealtimePlot.o moc_RealtimePlotWindow.o moc_SpinScanPlot.o moc_SpinScanPlotWindow.o moc_SpinScanPolarPlot.o moc_TrainBottom.o moc_TrainDB.o moc_TrainSidebar.o moc_WorkoutPlotWindow.o moc_WorkoutWidget.o moc_WorkoutWindow.o moc_WorkoutWizard.o DataFilter_yacc.o JsonRideFile_yacc.o WithingsParser_yacc.o RideDB_yacc.o DataFilter_lex.o JsonRideFile_lex.o WithingsParser_lex.o RideDB_lex.o -L/usr/X11R6/lib64 -L/media/disk/src/GoldenCheetah/src/../qwt/lib -lqwt -lm /media/disk/src/GoldenCheetah/src/../kqoauth/libkqoauth.a -lssl -lcrypto -lQt5MultimediaWidgets -lQt5WebKitWidgets -lQt5Svg -lQt5Multimedia -lQt5WebKit -lQt5Widgets -lQt5SerialPort -lQt5Concurrent -lQt5Network -lQt5Sql -lQt5Xml -lQt5Gui -lQt5Core -lGL -lpthread
/usr/bin/ld: zip.o: undefined reference to symbol 'inflateInit2_'
//lib/x86_64-linux-gnu/libz.so.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Makefile:1377: recipe for target 'GoldenCheetah' failed
make: *** [GoldenCheetah] Error 1'
|
1.0
|
DSO missing from command line - Debian 8.6 x64
$ git clone ...
$ make clean; qmake; make;
....
`g++ -m64 -Wl,-O1 -Wl,-rpath-link,/usr/lib/x86_64-linux-gnu -o GoldenCheetah qtsoap.o QTFullScreen.o qtsegmentcontrol.o VideoWindow.o TwitterDialog.o Dropbox.o GoogleDrive.o MonarkController.o MonarkConnection.o Kettler.o KettlerController.o KettlerConnection.o ANTChannel.o ANT.o ANTlocalController.o ANTLogger.o ANTMessage.o Aerolab.o AerolabWindow.o AllPlot.o AllPlotInterval.o AllPlotSlopeCurve.o AllPlotWindow.o BingMap.o BlankState.o ChartBar.o ChartSettings.o CPPlot.o CpPlotCurve.o CriticalPowerWindow.o ExhaustionDialog.o GcOverlayWidget.o GcPane.o GoldenCheetah.o GoogleMapControl.o HistogramWindow.o HomeWindow.o HrPwPlot.o HrPwWindow.o IndendPlotMarker.o IntervalSummaryWindow.o LogTimeScaleDraw.o LTMCanvasPicker.o LTMChartParser.o LTMOutliers.o LTMPlot.o LTMPopup.o LTMSettings.o LTMTool.o LTMTrend.o LTMWindow.o MetadataWindow.o MUPlot.o MUWidget.o PfPvPlot.o PfPvWindow.o PowerHist.o ReferenceLineDialog.o RideEditor.o RideMapWindow.o RideSummaryWindow.o ScatterPlot.o ScatterWindow.o SmallPlot.o SummaryWindow.o TreeMapPlot.o TreeMapWindow.o RideWindow.o CalendarDownload.o FileStore.o LocalFileStore.o OAuthDialog.o ShareDialog.o SportPlusHealthUploader.o TPDownload.o TPDownloadDialog.o TPUpload.o TPUploadDialog.o TrainingstagebuchUploader.o VeloHeroUploader.o WithingsDownload.o Athlete.o Context.o DataFilter.o FreeSearch.o GcUpgrade.o IdleTimer.o IntervalItem.o main.o NamedSearch.o RideCache.o RideCacheModel.o RideItem.o Route.o RouteParser.o Season.o SeasonParser.o Settings.o Specification.o TimeUtils.o Units.o UserData.o Utils.o AthleteBackup.o Bin2RideFile.o BinRideFile.o CommPort.o Computrainer3dpFile.o CsvRideFile.o DataProcessor.o Device.o FitlogParser.o FitlogRideFile.o FitRideFile.o FixDeriveDistance.o FixDeriveHeadwind.o FixDerivePower.o FixDeriveTorque.o FixElevation.o FixLapSwim.o FixFreewheeling.o FixGaps.o FixGPS.o FixRunningCadence.o FixRunningPower.o FixHRSpikes.o FixMoxy.o FixPower.o FixSmO2.o FixSpeed.o FixSpikes.o FixTorque.o GcRideFile.o GpxParser.o GpxRideFile.o JouleDevice.o LapsEditor.o MacroDevice.o ManualRideFile.o MoxyDevice.o PolarRideFile.o PowerTapDevice.o PowerTapUtil.o PwxRideFile.o QuarqParser.o QuarqRideFile.o RawRideFile.o RideAutoImportConfig.o RideFileCache.o RideFileCommand.o RideFile.o RideFileTableModel.o Serial.o SlfParser.o SlfRideFile.o SmfParser.o SmfRideFile.o SmlParser.o SmlRideFile.o SrdRideFile.o SrmRideFile.o SyncRideFile.o TacxCafRideFile.o TcxParser.o TcxRideFile.o TxtRideFile.o WkoRideFile.o XDataDialog.o XDataTableModel.o AboutDialog.o AddIntervalDialog.o AnalysisSidebar.o ChooseCyclistDialog.o ColorButton.o Colors.o CompareDateRange.o CompareInterval.o ComparePane.o ConfigDialog.o DiarySidebar.o DragBar.o EstimateCPDialog.o GcCrashDialog.o GcScopeBar.o GcSideBarItem.o GcToolBar.o GcWindowLayout.o GcWindowRegistry.o GenerateHeatMapDialog.o GProgressDialog.o HelpWhatsThis.o HelpWindow.o IntervalTreeView.o LTMSidebar.o MainWindow.o NewCyclistDialog.o Pages.o RideNavigator.o SaveDialogs.o SearchBox.o SearchFilterBox.o SolveCPDialog.o Tab.o TabView.o ToolsRhoEstimator.o Views.o BatchExportDialog.o DownloadRideDialog.o ManualRideDialog.o EditUserMetricDialog.o MergeActivityWizard.o RideImportWizard.o SplitActivityWizard.o SolverDisplay.o aBikeScore.o aCoggan.o AerobicDecoupling.o BasicRideMetrics.o BikeScore.o Coggan.o CPSolver.o DanielsPoints.o ExtendedCriticalPower.o GOVSS.o HrTimeInZone.o HrZones.o LeftRightBalance.o PaceTimeInZone.o PaceZones.o PDModel.o PeakPace.o PeakPower.o PMCData.o RideMetadata.o RideMetric.o RunMetrics.o SwimMetrics.o SpecialFields.o Statistic.o SustainMetric.o SwimScore.o TimeInZone.o TRIMPPoints.o UserMetric.o UserMetricParser.o VDOTCalculator.o VDOT.o WattsPerKilogram.o WPrime.o Zones.o codeeditor.o mvjson.o qwt_plot_gapped_curve.o qxtspanslider.o qxtstringspinbox.o zip.o AddDeviceWizard.o ComputrainerController.o Computrainer.o DeviceConfiguration.o DeviceTypes.o DialWindow.o ErgDB.o ErgDBDownloadDialog.o ErgFile.o ErgFilePlot.o Library.o LibraryParser.o MeterWidget.o NullController.o RealtimeController.o RealtimeData.o RealtimePlot.o RealtimePlotWindow.o RemoteControl.o SpinScanPlot.o SpinScanPlotWindow.o SpinScanPolarPlot.o TrainBottom.o TrainDB.o TrainSidebar.o VideoLayoutParser.o VideoSyncFile.o WorkoutPlotWindow.o WorkoutWidget.o WorkoutWidgetItems.o WorkoutWindow.o WorkoutWizard.o ZwoParser.o qrc_application.o qrc_RideWindow.o moc_qtsoap.o moc_QTFullScreen.o moc_qtsegmentcontrol.o moc_VideoWindow.o moc_TwitterDialog.o moc_Dropbox.o moc_GoogleDrive.o moc_MonarkConnection.o moc_Kettler.o moc_KettlerConnection.o moc_ANTChannel.o moc_ANT.o moc_ANTlocalController.o moc_ANTLogger.o moc_Aerolab.o moc_AerolabWindow.o moc_AllPlot.o moc_AllPlotInterval.o moc_AllPlotWindow.o moc_BingMap.o moc_BlankState.o moc_ChartBar.o moc_ChartSettings.o moc_CPPlot.o moc_CriticalPowerWindow.o moc_ExhaustionDialog.o moc_GcOverlayWidget.o moc_GcPane.o moc_GoldenCheetah.o moc_GoogleMapControl.o moc_HistogramWindow.o moc_HomeWindow.o moc_HrPwPlot.o moc_HrPwWindow.o moc_IntervalSummaryWindow.o moc_LTMCanvasPicker.o moc_LTMChartParser.o moc_LTMPlot.o moc_LTMPopup.o moc_LTMSettings.o moc_LTMTool.o moc_LTMWindow.o moc_MetadataWindow.o moc_MUPlot.o moc_MUWidget.o moc_PfPvPlot.o moc_PfPvWindow.o moc_PowerHist.o moc_ReferenceLineDialog.o moc_RideEditor.o moc_RideMapWindow.o moc_RideSummaryWindow.o moc_ScatterPlot.o moc_ScatterWindow.o moc_SmallPlot.o moc_SummaryWindow.o moc_TreeMapPlot.o moc_TreeMapWindow.o moc_RideWindow.o moc_CalendarDownload.o moc_FileStore.o moc_LocalFileStore.o moc_OAuthDialog.o moc_ShareDialog.o moc_SportPlusHealthUploader.o moc_TPDownloadDialog.o moc_TPDownload.o moc_TPUploadDialog.o moc_TPUpload.o moc_TrainingstagebuchUploader.o moc_VeloHeroUploader.o moc_WithingsDownload.o moc_Athlete.o moc_Context.o moc_DataFilter.o moc_FreeSearch.o moc_GcCalendarModel.o moc_GcUpgrade.o moc_IdleTimer.o moc_IntervalItem.o moc_NamedSearch.o moc_RideCache.o moc_RideCacheModel.o moc_RideItem.o moc_Route.o moc_Season.o moc_TimeUtils.o moc_UserData.o moc_AthleteBackup.o moc_DataProcessor.o moc_Device.o moc_LapsEditor.o moc_RideAutoImportConfig.o moc_RideFileCommand.o moc_RideFile.o moc_RideFileTableModel.o moc_XDataDialog.o moc_XDataTableModel.o moc_AboutDialog.o moc_AddIntervalDialog.o moc_AnalysisSidebar.o moc_ChooseCyclistDialog.o moc_ColorButton.o moc_Colors.o moc_ComparePane.o moc_ConfigDialog.o moc_DiarySidebar.o moc_DragBar.o moc_EstimateCPDialog.o moc_GcCrashDialog.o moc_GcScopeBar.o moc_GcSideBarItem.o moc_GcToolBar.o moc_GenerateHeatMapDialog.o moc_GProgressDialog.o moc_HelpWhatsThis.o moc_HelpWindow.o moc_IntervalTreeView.o moc_LTMSidebar.o moc_MainWindow.o moc_NewCyclistDialog.o moc_Pages.o moc_RideNavigator.o moc_RideNavigatorProxy.o moc_SaveDialogs.o moc_SearchBox.o moc_SearchFilterBox.o moc_SolveCPDialog.o moc_Tab.o moc_TabView.o moc_ToolsRhoEstimator.o moc_Views.o moc_BatchExportDialog.o moc_DownloadRideDialog.o moc_ManualRideDialog.o moc_MergeActivityWizard.o moc_RideImportWizard.o moc_SplitActivityWizard.o moc_SolverDisplay.o moc_CPSolver.o moc_HrZones.o moc_PaceZones.o moc_PDModel.o moc_PMCData.o moc_RideMetadata.o moc_UserMetricSettings.o moc_VDOTCalculator.o moc_Zones.o moc_codeeditor.o moc_qxtspanslider.o moc_qxtspanslider_p.o moc_qxtstringspinbox.o moc_AddDeviceWizard.o moc_ComputrainerController.o moc_DialWindow.o moc_ErgDBDownloadDialog.o moc_ErgDB.o moc_ErgFilePlot.o moc_Library.o moc_NullController.o moc_RealtimeController.o moc_RealtimePlot.o moc_RealtimePlotWindow.o moc_SpinScanPlot.o moc_SpinScanPlotWindow.o moc_SpinScanPolarPlot.o moc_TrainBottom.o moc_TrainDB.o moc_TrainSidebar.o moc_WorkoutPlotWindow.o moc_WorkoutWidget.o moc_WorkoutWindow.o moc_WorkoutWizard.o DataFilter_yacc.o JsonRideFile_yacc.o WithingsParser_yacc.o RideDB_yacc.o DataFilter_lex.o JsonRideFile_lex.o WithingsParser_lex.o RideDB_lex.o -L/usr/X11R6/lib64 -L/media/disk/src/GoldenCheetah/src/../qwt/lib -lqwt -lm /media/disk/src/GoldenCheetah/src/../kqoauth/libkqoauth.a -lssl -lcrypto -lQt5MultimediaWidgets -lQt5WebKitWidgets -lQt5Svg -lQt5Multimedia -lQt5WebKit -lQt5Widgets -lQt5SerialPort -lQt5Concurrent -lQt5Network -lQt5Sql -lQt5Xml -lQt5Gui -lQt5Core -lGL -lpthread
/usr/bin/ld: zip.o: undefined reference to symbol 'inflateInit2_'
//lib/x86_64-linux-gnu/libz.so.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Makefile:1377: recipe for target 'GoldenCheetah' failed
make: *** [GoldenCheetah] Error 1'
|
non_process
|
dso missing from command line debian git clone make clean qmake make g wl wl rpath link usr lib linux gnu o goldencheetah qtsoap o qtfullscreen o qtsegmentcontrol o videowindow o twitterdialog o dropbox o googledrive o monarkcontroller o monarkconnection o kettler o kettlercontroller o kettlerconnection o antchannel o ant o antlocalcontroller o antlogger o antmessage o aerolab o aerolabwindow o allplot o allplotinterval o allplotslopecurve o allplotwindow o bingmap o blankstate o chartbar o chartsettings o cpplot o cpplotcurve o criticalpowerwindow o exhaustiondialog o gcoverlaywidget o gcpane o goldencheetah o googlemapcontrol o histogramwindow o homewindow o hrpwplot o hrpwwindow o indendplotmarker o intervalsummarywindow o logtimescaledraw o ltmcanvaspicker o ltmchartparser o ltmoutliers o ltmplot o ltmpopup o ltmsettings o ltmtool o ltmtrend o ltmwindow o metadatawindow o muplot o muwidget o pfpvplot o pfpvwindow o powerhist o referencelinedialog o rideeditor o ridemapwindow o ridesummarywindow o scatterplot o scatterwindow o smallplot o summarywindow o treemapplot o treemapwindow o ridewindow o calendardownload o filestore o localfilestore o oauthdialog o sharedialog o sportplushealthuploader o tpdownload o tpdownloaddialog o tpupload o tpuploaddialog o trainingstagebuchuploader o veloherouploader o withingsdownload o athlete o context o datafilter o freesearch o gcupgrade o idletimer o intervalitem o main o namedsearch o ridecache o ridecachemodel o rideitem o route o routeparser o season o seasonparser o settings o specification o timeutils o units o userdata o utils o athletebackup o o binridefile o commport o o csvridefile o dataprocessor o device o fitlogparser o fitlogridefile o fitridefile o fixderivedistance o fixderiveheadwind o fixderivepower o fixderivetorque o fixelevation o fixlapswim o fixfreewheeling o fixgaps o fixgps o fixrunningcadence o fixrunningpower o fixhrspikes o fixmoxy o fixpower o o fixspeed o fixspikes o fixtorque o gcridefile o gpxparser o gpxridefile o jouledevice o lapseditor o macrodevice o manualridefile o moxydevice o polarridefile o powertapdevice o powertaputil o pwxridefile o quarqparser o quarqridefile o rawridefile o rideautoimportconfig o ridefilecache o ridefilecommand o ridefile o ridefiletablemodel o serial o slfparser o slfridefile o smfparser o smfridefile o smlparser o smlridefile o srdridefile o srmridefile o syncridefile o tacxcafridefile o tcxparser o tcxridefile o txtridefile o wkoridefile o xdatadialog o xdatatablemodel o aboutdialog o addintervaldialog o analysissidebar o choosecyclistdialog o colorbutton o colors o comparedaterange o compareinterval o comparepane o configdialog o diarysidebar o dragbar o estimatecpdialog o gccrashdialog o gcscopebar o gcsidebaritem o gctoolbar o gcwindowlayout o gcwindowregistry o generateheatmapdialog o gprogressdialog o helpwhatsthis o helpwindow o intervaltreeview o ltmsidebar o mainwindow o newcyclistdialog o pages o ridenavigator o savedialogs o searchbox o searchfilterbox o solvecpdialog o tab o tabview o toolsrhoestimator o views o batchexportdialog o downloadridedialog o manualridedialog o editusermetricdialog o mergeactivitywizard o rideimportwizard o splitactivitywizard o solverdisplay o abikescore o acoggan o aerobicdecoupling o basicridemetrics o bikescore o coggan o cpsolver o danielspoints o extendedcriticalpower o govss o hrtimeinzone o hrzones o leftrightbalance o pacetimeinzone o pacezones o pdmodel o peakpace o peakpower o pmcdata o ridemetadata o ridemetric o runmetrics o swimmetrics o specialfields o statistic o sustainmetric o swimscore o timeinzone o trimppoints o usermetric o usermetricparser o vdotcalculator o vdot o wattsperkilogram o wprime o zones o codeeditor o mvjson o qwt plot gapped curve o qxtspanslider o qxtstringspinbox o zip o adddevicewizard o computrainercontroller o computrainer o deviceconfiguration o devicetypes o dialwindow o ergdb o ergdbdownloaddialog o ergfile o ergfileplot o library o libraryparser o meterwidget o nullcontroller o realtimecontroller o realtimedata o realtimeplot o realtimeplotwindow o remotecontrol o spinscanplot o spinscanplotwindow o spinscanpolarplot o trainbottom o traindb o trainsidebar o videolayoutparser o videosyncfile o workoutplotwindow o workoutwidget o workoutwidgetitems o workoutwindow o workoutwizard o zwoparser o qrc application o qrc ridewindow o moc qtsoap o moc qtfullscreen o moc qtsegmentcontrol o moc videowindow o moc twitterdialog o moc dropbox o moc googledrive o moc monarkconnection o moc kettler o moc kettlerconnection o moc antchannel o moc ant o moc antlocalcontroller o moc antlogger o moc aerolab o moc aerolabwindow o moc allplot o moc allplotinterval o moc allplotwindow o moc bingmap o moc blankstate o moc chartbar o moc chartsettings o moc cpplot o moc criticalpowerwindow o moc exhaustiondialog o moc gcoverlaywidget o moc gcpane o moc goldencheetah o moc googlemapcontrol o moc histogramwindow o moc homewindow o moc hrpwplot o moc hrpwwindow o moc intervalsummarywindow o moc ltmcanvaspicker o moc ltmchartparser o moc ltmplot o moc ltmpopup o moc ltmsettings o moc ltmtool o moc ltmwindow o moc metadatawindow o moc muplot o moc muwidget o moc pfpvplot o moc pfpvwindow o moc powerhist o moc referencelinedialog o moc rideeditor o moc ridemapwindow o moc ridesummarywindow o moc scatterplot o moc scatterwindow o moc smallplot o moc summarywindow o moc treemapplot o moc treemapwindow o moc ridewindow o moc calendardownload o moc filestore o moc localfilestore o moc oauthdialog o moc sharedialog o moc sportplushealthuploader o moc tpdownloaddialog o moc tpdownload o moc tpuploaddialog o moc tpupload o moc trainingstagebuchuploader o moc veloherouploader o moc withingsdownload o moc athlete o moc context o moc datafilter o moc freesearch o moc gccalendarmodel o moc gcupgrade o moc idletimer o moc intervalitem o moc namedsearch o moc ridecache o moc ridecachemodel o moc rideitem o moc route o moc season o moc timeutils o moc userdata o moc athletebackup o moc dataprocessor o moc device o moc lapseditor o moc rideautoimportconfig o moc ridefilecommand o moc ridefile o moc ridefiletablemodel o moc xdatadialog o moc xdatatablemodel o moc aboutdialog o moc addintervaldialog o moc analysissidebar o moc choosecyclistdialog o moc colorbutton o moc colors o moc comparepane o moc configdialog o moc diarysidebar o moc dragbar o moc estimatecpdialog o moc gccrashdialog o moc gcscopebar o moc gcsidebaritem o moc gctoolbar o moc generateheatmapdialog o moc gprogressdialog o moc helpwhatsthis o moc helpwindow o moc intervaltreeview o moc ltmsidebar o moc mainwindow o moc newcyclistdialog o moc pages o moc ridenavigator o moc ridenavigatorproxy o moc savedialogs o moc searchbox o moc searchfilterbox o moc solvecpdialog o moc tab o moc tabview o moc toolsrhoestimator o moc views o moc batchexportdialog o moc downloadridedialog o moc manualridedialog o moc mergeactivitywizard o moc rideimportwizard o moc splitactivitywizard o moc solverdisplay o moc cpsolver o moc hrzones o moc pacezones o moc pdmodel o moc pmcdata o moc ridemetadata o moc usermetricsettings o moc vdotcalculator o moc zones o moc codeeditor o moc qxtspanslider o moc qxtspanslider p o moc qxtstringspinbox o moc adddevicewizard o moc computrainercontroller o moc dialwindow o moc ergdbdownloaddialog o moc ergdb o moc ergfileplot o moc library o moc nullcontroller o moc realtimecontroller o moc realtimeplot o moc realtimeplotwindow o moc spinscanplot o moc spinscanplotwindow o moc spinscanpolarplot o moc trainbottom o moc traindb o moc trainsidebar o moc workoutplotwindow o moc workoutwidget o moc workoutwindow o moc workoutwizard o datafilter yacc o jsonridefile yacc o withingsparser yacc o ridedb yacc o datafilter lex o jsonridefile lex o withingsparser lex o ridedb lex o l usr l media disk src goldencheetah src qwt lib lqwt lm media disk src goldencheetah src kqoauth libkqoauth a lssl lcrypto lgl lpthread usr bin ld zip o undefined reference to symbol lib linux gnu libz so error adding symbols dso missing from command line error ld returned exit status makefile recipe for target goldencheetah failed make error
| 0
|
41,712
| 6,927,926,152
|
IssuesEvent
|
2017-12-01 01:28:35
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
cmd/go: 'go test' output-hiding behavior is under-documented
|
Documentation
|
Using 1.9rc1 / amd64, but it's not particular to any version.
The fact that `go test` hides output from passing tests if you give it a package selector argument is surprising and is not properly documented.
```
$ cat x_test.go
package main
import (
"fmt"
"testing"
)
func TestFoo(t *testing.T) {
fmt.Println("hello")
}
$ go1.9rc1 test
hello
PASS
ok _/home/caleb/test/p 0.001s
$ go1.9rc1 test .
ok _/home/caleb/test/p 0.001s
```
The closest I can find to documentation of this difference is at the beginning of `go help test`:
```
usage: go test [build/test flags] [packages] [build/test flags & test binary flags]
'Go test' automates testing the packages named by the import paths.
It prints a summary of the test results in the format:
ok archive/tar 0.011s
FAIL archive/zip 0.022s
ok compress/gzip 0.033s
...
followed by detailed output for each failed package.
```
In particular, "detailed output for each failed package" hints that there might be a difference between the way output for failed and passed packages are handled, but the documentation should be explicit about when output is hidden from the user.
---
At a philosophical level, I think it's harmful to hide the output at all. Best practice seems to be writing tests that don't print stuff on success, so the hiding behavior just makes it take longer to discover and fix these bad tests.
I have also observed the behavior of sometimes hiding the output and sometimes not -- depending on how the test was invoked -- to be a source of confusion for `go test` users.
|
1.0
|
cmd/go: 'go test' output-hiding behavior is under-documented - Using 1.9rc1 / amd64, but it's not particular to any version.
The fact that `go test` hides output from passing tests if you give it a package selector argument is surprising and is not properly documented.
```
$ cat x_test.go
package main
import (
"fmt"
"testing"
)
func TestFoo(t *testing.T) {
fmt.Println("hello")
}
$ go1.9rc1 test
hello
PASS
ok _/home/caleb/test/p 0.001s
$ go1.9rc1 test .
ok _/home/caleb/test/p 0.001s
```
The closest I can find to documentation of this difference is at the beginning of `go help test`:
```
usage: go test [build/test flags] [packages] [build/test flags & test binary flags]
'Go test' automates testing the packages named by the import paths.
It prints a summary of the test results in the format:
ok archive/tar 0.011s
FAIL archive/zip 0.022s
ok compress/gzip 0.033s
...
followed by detailed output for each failed package.
```
In particular, "detailed output for each failed package" hints that there might be a difference between the way output for failed and passed packages are handled, but the documentation should be explicit about when output is hidden from the user.
---
At a philosophical level, I think it's harmful to hide the output at all. Best practice seems to be writing tests that don't print stuff on success, so the hiding behavior just makes it take longer to discover and fix these bad tests.
I have also observed the behavior of sometimes hiding the output and sometimes not -- depending on how the test was invoked -- to be a source of confusion for `go test` users.
|
non_process
|
cmd go go test output hiding behavior is under documented using but it s not particular to any version the fact that go test hides output from passing tests if you give it a package selector argument is surprising and is not properly documented cat x test go package main import fmt testing func testfoo t testing t fmt println hello test hello pass ok home caleb test p test ok home caleb test p the closest i can find to documentation of this difference is at the beginning of go help test usage go test go test automates testing the packages named by the import paths it prints a summary of the test results in the format ok archive tar fail archive zip ok compress gzip followed by detailed output for each failed package in particular detailed output for each failed package hints that there might be a difference between the way output for failed and passed packages are handled but the documentation should be explicit about when output is hidden from the user at a philosophical level i think it s harmful to hide the output at all best practice seems to be writing tests that don t print stuff on success so the hiding behavior just makes it take longer to discover and fix these bad tests i have also observed the behavior of sometimes hiding the output and sometimes not depending on how the test was invoked to be a source of confusion for go test users
| 0
|
22,485
| 31,395,274,248
|
IssuesEvent
|
2023-08-26 21:21:47
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Loch Henry: Truth Will Out from "Black Mirror" (Screenshots, Thumbnail, and Title Card added)
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Loch Henry: Truth Will Out
Type (film/tv show): film - documentary (true crime)
Film or show in which it appears: Black Mirror
Is the parent film/show streaming anywhere? Yes - Netflix
About when in the parent film/show does it appear? Ep. 6x02 - "Loch Henry"
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp:
- 47:50 - 50:00
Production Companies: Historik Productions & Streamberry (from the award winning producers of _The Callow Years_)
Production: Davis McCardle, Kate Cezar, Clara Ryce, & Ted Bellingham
Streamberry Synopsis: A sleepy lochside town is rocked by a string of horrific murders. Decades later, the former holiday destination has never recovered.
Tagline: They tried to tell a story. But the truth they found was too close to home.
Awards: BAFTA Television Awards - Best Factual Series
In memory of Pia.




































|
1.0
|
Add Loch Henry: Truth Will Out from "Black Mirror" (Screenshots, Thumbnail, and Title Card added) - Please add as much of the following info as you can:
Title: Loch Henry: Truth Will Out
Type (film/tv show): film - documentary (true crime)
Film or show in which it appears: Black Mirror
Is the parent film/show streaming anywhere? Yes - Netflix
About when in the parent film/show does it appear? Ep. 6x02 - "Loch Henry"
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp:
- 47:50 - 50:00
Production Companies: Historik Productions & Streamberry (from the award winning producers of _The Callow Years_)
Production: Davis McCardle, Kate Cezar, Clara Ryce, & Ted Bellingham
Streamberry Synopsis: A sleepy lochside town is rocked by a string of horrific murders. Decades later, the former holiday destination has never recovered.
Tagline: They tried to tell a story. But the truth they found was too close to home.
Awards: BAFTA Television Awards - Best Factual Series
In memory of Pia.




































|
process
|
add loch henry truth will out from black mirror screenshots thumbnail and title card added please add as much of the following info as you can title loch henry truth will out type film tv show film documentary true crime film or show in which it appears black mirror is the parent film show streaming anywhere yes netflix about when in the parent film show does it appear ep loch henry actual footage of the film show can be seen yes no yes timestamp production companies historik productions streamberry from the award winning producers of the callow years production davis mccardle kate cezar clara ryce ted bellingham streamberry synopsis a sleepy lochside town is rocked by a string of horrific murders decades later the former holiday destination has never recovered tagline they tried to tell a story but the truth they found was too close to home awards bafta television awards best factual series in memory of pia
| 1
|
21,281
| 28,451,876,511
|
IssuesEvent
|
2023-04-17 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 17 Apr 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
- **Authors:** Yangguang Wang, Xiang Zhang, Mingyuan Lin, Lei Yu, Boxin Shi, Wen Yang, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.06930
- **Pdf link:** https://arxiv.org/pdf/2304.06930
- **Abstract**
Scene Dynamic Recovery (SDR) by inverting distorted Rolling Shutter (RS) images to an undistorted high frame-rate Global Shutter (GS) video is a severely ill-posed problem, particularly when prior knowledge about camera/object motions is unavailable. Commonly used artificial assumptions on motion linearity and data-specific characteristics, regarding the temporal dynamics information embedded in the RS scanlines, are prone to producing sub-optimal solutions in real-world scenarios. To address this challenge, we propose an event-based RS2GS framework within a self-supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter/intra-frame information. % In this paper, we propose to leverage the event camera to provide inter/intra-frame information as the emitted events have an extremely high temporal resolution and learn an event-based RS2GS network within a self-supervised learning framework, where real-world events and RS images can be exploited to alleviate the performance degradation caused by the domain gap between the synthesized and real data. Specifically, an Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals, including the temporal transition and spatial translation. Exploring connections in terms of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual constraints with the proposed E-IC, resulting in supervisions without ground-truth GS images. Extensive evaluations over synthetic and real datasets demonstrate that the proposed method achieves state-of-the-art and shows remarkable performance for event-based RS2GS inversion in real-world scenarios. The dataset and code are available at https://w3un.github.io/selfunroll/.
## Keyword: event camera
### Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
- **Authors:** Yangguang Wang, Xiang Zhang, Mingyuan Lin, Lei Yu, Boxin Shi, Wen Yang, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.06930
- **Pdf link:** https://arxiv.org/pdf/2304.06930
- **Abstract**
Scene Dynamic Recovery (SDR) by inverting distorted Rolling Shutter (RS) images to an undistorted high frame-rate Global Shutter (GS) video is a severely ill-posed problem, particularly when prior knowledge about camera/object motions is unavailable. Commonly used artificial assumptions on motion linearity and data-specific characteristics, regarding the temporal dynamics information embedded in the RS scanlines, are prone to producing sub-optimal solutions in real-world scenarios. To address this challenge, we propose an event-based RS2GS framework within a self-supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter/intra-frame information. % In this paper, we propose to leverage the event camera to provide inter/intra-frame information as the emitted events have an extremely high temporal resolution and learn an event-based RS2GS network within a self-supervised learning framework, where real-world events and RS images can be exploited to alleviate the performance degradation caused by the domain gap between the synthesized and real data. Specifically, an Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals, including the temporal transition and spatial translation. Exploring connections in terms of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual constraints with the proposed E-IC, resulting in supervisions without ground-truth GS images. Extensive evaluations over synthetic and real datasets demonstrate that the proposed method achieves state-of-the-art and shows remarkable performance for event-based RS2GS inversion in real-world scenarios. The dataset and code are available at https://w3un.github.io/selfunroll/.
### Neuromorphic Optical Flow and Real-time Implementation with Event Cameras
- **Authors:** Yannick Schnider, Stanislaw Wozniak, Mathias Gehrig, Jules Lecomte, Axel von Arnim, Luca Benini, Davide Scaramuzza, Angeliki Pantazi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2304.07139
- **Pdf link:** https://arxiv.org/pdf/2304.07139
- **Abstract**
Optical flow provides information on relative motion that is an important component in many computer vision pipelines. Neural networks provide high accuracy optical flow, yet their complexity is often prohibitive for application at the edge or in robots, where efficiency and latency play crucial role. To address this challenge, we build on the latest developments in event-based vision and spiking neural networks. We propose a new network architecture, inspired by Timelens, that improves the state-of-the-art self-supervised optical flow accuracy when operated both in spiking and non-spiking mode. To implement a real-time pipeline with a physical event camera, we propose a methodology for principled model simplification based on activity and latency analysis. We demonstrate high speed optical flow prediction with almost two orders of magnitude reduced complexity while maintaining the accuracy, opening the path for real-time deployments.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks
- **Authors:** Mofassir ul Islam Arif, Mohsan Jameel, Josif Grabocka, Lars Schmidt-Thieme
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.07262
- **Pdf link:** https://arxiv.org/pdf/2304.07262
- **Abstract**
The strength of machine learning models stems from their ability to learn complex function approximations from data; however, this strength also makes training deep neural networks challenging. Notably, the complex models tend to memorize the training data, which results in poor regularization performance on test data. The regularization techniques such as L1, L2, dropout, etc. are proposed to reduce the overfitting effect; however, they bring in additional hyperparameters tuning complexity. These methods also fall short when the inter-class similarity is high due to the underlying data distribution, leading to a less accurate model. In this paper, we present a novel approach to regularize the models by leveraging the information-rich latent embeddings and their high intra-class correlation. We create phantom embeddings from a subset of homogenous samples and use these phantom embeddings to decrease the inter-class similarity of instances in their latent embedding space. The resulting models generalize better as a combination of their embedding and regularize them without requiring an expensive hyperparameter search. We evaluate our method on two popular and challenging image classification datasets (CIFAR and FashionMNIST) and show how our approach outperforms the standard baselines while displaying better training behavior.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### A Byte Sequence is Worth an Image: CNN for File Fragment Classification Using Bit Shift and n-Gram Embeddings
- **Authors:** Wenyang Liu, Yi Wang, Kejun Wu, Kim-Hui Yap, Lap-Pui Chau
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2304.06983
- **Pdf link:** https://arxiv.org/pdf/2304.06983
- **Abstract**
File fragment classification (FFC) on small chunks of memory is essential in memory forensics and Internet security. Existing methods mainly treat file fragments as 1d byte signals and utilize the captured inter-byte features for classification, while the bit information within bytes, i.e., intra-byte information, is seldom considered. This is inherently inapt for classifying variable-length coding files whose symbols are represented as the variable number of bits. Conversely, we propose Byte2Image, a novel data augmentation technique, to introduce the neglected intra-byte information into file fragments and re-treat them as 2d gray-scale images, which allows us to capture both inter-byte and intra-byte correlations simultaneously through powerful convolutional neural networks (CNNs). Specifically, to convert file fragments to 2d images, we employ a sliding byte window to expose the neglected intra-byte information and stack their n-gram features row by row. We further propose a byte sequence \& image fusion network as a classifier, which can jointly model the raw 1d byte sequence and the converted 2d image to perform FFC. Experiments on FFT-75 dataset validate that our proposed method can achieve notable accuracy improvements over state-of-the-art methods in nearly all scenarios. The code will be released at https://github.com/wenyang001/Byte2Image.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Mon, 17 Apr 23 - ## Keyword: events
### Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
- **Authors:** Yangguang Wang, Xiang Zhang, Mingyuan Lin, Lei Yu, Boxin Shi, Wen Yang, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.06930
- **Pdf link:** https://arxiv.org/pdf/2304.06930
- **Abstract**
Scene Dynamic Recovery (SDR) by inverting distorted Rolling Shutter (RS) images to an undistorted high frame-rate Global Shutter (GS) video is a severely ill-posed problem, particularly when prior knowledge about camera/object motions is unavailable. Commonly used artificial assumptions on motion linearity and data-specific characteristics, regarding the temporal dynamics information embedded in the RS scanlines, are prone to producing sub-optimal solutions in real-world scenarios. To address this challenge, we propose an event-based RS2GS framework within a self-supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter/intra-frame information. % In this paper, we propose to leverage the event camera to provide inter/intra-frame information as the emitted events have an extremely high temporal resolution and learn an event-based RS2GS network within a self-supervised learning framework, where real-world events and RS images can be exploited to alleviate the performance degradation caused by the domain gap between the synthesized and real data. Specifically, an Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals, including the temporal transition and spatial translation. Exploring connections in terms of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual constraints with the proposed E-IC, resulting in supervisions without ground-truth GS images. Extensive evaluations over synthetic and real datasets demonstrate that the proposed method achieves state-of-the-art and shows remarkable performance for event-based RS2GS inversion in real-world scenarios. The dataset and code are available at https://w3un.github.io/selfunroll/.
## Keyword: event camera
### Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events
- **Authors:** Yangguang Wang, Xiang Zhang, Mingyuan Lin, Lei Yu, Boxin Shi, Wen Yang, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.06930
- **Pdf link:** https://arxiv.org/pdf/2304.06930
- **Abstract**
Scene Dynamic Recovery (SDR) by inverting distorted Rolling Shutter (RS) images to an undistorted high frame-rate Global Shutter (GS) video is a severely ill-posed problem, particularly when prior knowledge about camera/object motions is unavailable. Commonly used artificial assumptions on motion linearity and data-specific characteristics, regarding the temporal dynamics information embedded in the RS scanlines, are prone to producing sub-optimal solutions in real-world scenarios. To address this challenge, we propose an event-based RS2GS framework within a self-supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter/intra-frame information. % In this paper, we propose to leverage the event camera to provide inter/intra-frame information as the emitted events have an extremely high temporal resolution and learn an event-based RS2GS network within a self-supervised learning framework, where real-world events and RS images can be exploited to alleviate the performance degradation caused by the domain gap between the synthesized and real data. Specifically, an Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals, including the temporal transition and spatial translation. Exploring connections in terms of RS-RS, RS-GS, and GS-RS, we explicitly formulate mutual constraints with the proposed E-IC, resulting in supervisions without ground-truth GS images. Extensive evaluations over synthetic and real datasets demonstrate that the proposed method achieves state-of-the-art and shows remarkable performance for event-based RS2GS inversion in real-world scenarios. The dataset and code are available at https://w3un.github.io/selfunroll/.
### Neuromorphic Optical Flow and Real-time Implementation with Event Cameras
- **Authors:** Yannick Schnider, Stanislaw Wozniak, Mathias Gehrig, Jules Lecomte, Axel von Arnim, Luca Benini, Davide Scaramuzza, Angeliki Pantazi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2304.07139
- **Pdf link:** https://arxiv.org/pdf/2304.07139
- **Abstract**
Optical flow provides information on relative motion that is an important component in many computer vision pipelines. Neural networks provide high accuracy optical flow, yet their complexity is often prohibitive for application at the edge or in robots, where efficiency and latency play crucial role. To address this challenge, we build on the latest developments in event-based vision and spiking neural networks. We propose a new network architecture, inspired by Timelens, that improves the state-of-the-art self-supervised optical flow accuracy when operated both in spiking and non-spiking mode. To implement a real-time pipeline with a physical event camera, we propose a methodology for principled model simplification based on activity and latency analysis. We demonstrate high speed optical flow prediction with almost two orders of magnitude reduced complexity while maintaining the accuracy, opening the path for real-time deployments.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks
- **Authors:** Mofassir ul Islam Arif, Mohsan Jameel, Josif Grabocka, Lars Schmidt-Thieme
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.07262
- **Pdf link:** https://arxiv.org/pdf/2304.07262
- **Abstract**
The strength of machine learning models stems from their ability to learn complex function approximations from data; however, this strength also makes training deep neural networks challenging. Notably, the complex models tend to memorize the training data, which results in poor regularization performance on test data. The regularization techniques such as L1, L2, dropout, etc. are proposed to reduce the overfitting effect; however, they bring in additional hyperparameters tuning complexity. These methods also fall short when the inter-class similarity is high due to the underlying data distribution, leading to a less accurate model. In this paper, we present a novel approach to regularize the models by leveraging the information-rich latent embeddings and their high intra-class correlation. We create phantom embeddings from a subset of homogenous samples and use these phantom embeddings to decrease the inter-class similarity of instances in their latent embedding space. The resulting models generalize better as a combination of their embedding and regularize them without requiring an expensive hyperparameter search. We evaluate our method on two popular and challenging image classification datasets (CIFAR and FashionMNIST) and show how our approach outperforms the standard baselines while displaying better training behavior.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### A Byte Sequence is Worth an Image: CNN for File Fragment Classification Using Bit Shift and n-Gram Embeddings
- **Authors:** Wenyang Liu, Yi Wang, Kejun Wu, Kim-Hui Yap, Lap-Pui Chau
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Signal Processing (eess.SP)
- **Arxiv link:** https://arxiv.org/abs/2304.06983
- **Pdf link:** https://arxiv.org/pdf/2304.06983
- **Abstract**
File fragment classification (FFC) on small chunks of memory is essential in memory forensics and Internet security. Existing methods mainly treat file fragments as 1d byte signals and utilize the captured inter-byte features for classification, while the bit information within bytes, i.e., intra-byte information, is seldom considered. This is inherently inapt for classifying variable-length coding files whose symbols are represented as the variable number of bits. Conversely, we propose Byte2Image, a novel data augmentation technique, to introduce the neglected intra-byte information into file fragments and re-treat them as 2d gray-scale images, which allows us to capture both inter-byte and intra-byte correlations simultaneously through powerful convolutional neural networks (CNNs). Specifically, to convert file fragments to 2d images, we employ a sliding byte window to expose the neglected intra-byte information and stack their n-gram features row by row. We further propose a byte sequence \& image fusion network as a classifier, which can jointly model the raw 1d byte sequence and the converted 2d image to perform FFC. Experiments on FFT-75 dataset validate that our proposed method can achieve notable accuracy improvements over state-of-the-art methods in nearly all scenarios. The code will be released at https://github.com/wenyang001/Byte2Image.
## Keyword: raw image
There is no result
|
process
|
new submissions for mon apr keyword events self supervised scene dynamic recovery from rolling shutter images and events authors yangguang wang xiang zhang mingyuan lin lei yu boxin shi wen yang gui song xia subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract scene dynamic recovery sdr by inverting distorted rolling shutter rs images to an undistorted high frame rate global shutter gs video is a severely ill posed problem particularly when prior knowledge about camera object motions is unavailable commonly used artificial assumptions on motion linearity and data specific characteristics regarding the temporal dynamics information embedded in the rs scanlines are prone to producing sub optimal solutions in real world scenarios to address this challenge we propose an event based framework within a self supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter intra frame information in this paper we propose to leverage the event camera to provide inter intra frame information as the emitted events have an extremely high temporal resolution and learn an event based network within a self supervised learning framework where real world events and rs images can be exploited to alleviate the performance degradation caused by the domain gap between the synthesized and real data specifically an event based inter intra frame compensator e ic is proposed to predict the per pixel dynamic between arbitrary time intervals including the temporal transition and spatial translation exploring connections in terms of rs rs rs gs and gs rs we explicitly formulate mutual constraints with the proposed e ic resulting in supervisions without ground truth gs images extensive evaluations over synthetic and real datasets demonstrate that the proposed method achieves state of the art and shows remarkable performance for event based inversion in real world scenarios the dataset and code are available at keyword event camera self supervised scene dynamic recovery from rolling shutter images and events authors yangguang wang xiang zhang mingyuan lin lei yu boxin shi wen yang gui song xia subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract scene dynamic recovery sdr by inverting distorted rolling shutter rs images to an undistorted high frame rate global shutter gs video is a severely ill posed problem particularly when prior knowledge about camera object motions is unavailable commonly used artificial assumptions on motion linearity and data specific characteristics regarding the temporal dynamics information embedded in the rs scanlines are prone to producing sub optimal solutions in real world scenarios to address this challenge we propose an event based framework within a self supervised learning paradigm that leverages the extremely high temporal resolution of event cameras to provide accurate inter intra frame information in this paper we propose to leverage the event camera to provide inter intra frame information as the emitted events have an extremely high temporal resolution and learn an event based network within a self supervised learning framework where real world events and rs images can be exploited to alleviate the performance degradation caused by the domain gap between the synthesized and real data specifically an event based inter intra frame compensator e ic is proposed to predict the per pixel dynamic between arbitrary time intervals including the temporal transition and spatial translation exploring connections in terms of rs rs rs gs and gs rs we explicitly formulate mutual constraints with the proposed e ic resulting in supervisions without ground truth gs images extensive evaluations over synthetic and real datasets demonstrate that the proposed method achieves state of the art and shows remarkable performance for event based inversion in real world scenarios the dataset and code are available at neuromorphic optical flow and real time implementation with event cameras authors yannick schnider stanislaw wozniak mathias gehrig jules lecomte axel von arnim luca benini davide scaramuzza angeliki pantazi subjects computer vision and pattern recognition cs cv neural and evolutionary computing cs ne arxiv link pdf link abstract optical flow provides information on relative motion that is an important component in many computer vision pipelines neural networks provide high accuracy optical flow yet their complexity is often prohibitive for application at the edge or in robots where efficiency and latency play crucial role to address this challenge we build on the latest developments in event based vision and spiking neural networks we propose a new network architecture inspired by timelens that improves the state of the art self supervised optical flow accuracy when operated both in spiking and non spiking mode to implement a real time pipeline with a physical event camera we propose a methodology for principled model simplification based on activity and latency analysis we demonstrate high speed optical flow prediction with almost two orders of magnitude reduced complexity while maintaining the accuracy opening the path for real time deployments keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp phantom embeddings using embedding space for model regularization in deep neural networks authors mofassir ul islam arif mohsan jameel josif grabocka lars schmidt thieme subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the strength of machine learning models stems from their ability to learn complex function approximations from data however this strength also makes training deep neural networks challenging notably the complex models tend to memorize the training data which results in poor regularization performance on test data the regularization techniques such as dropout etc are proposed to reduce the overfitting effect however they bring in additional hyperparameters tuning complexity these methods also fall short when the inter class similarity is high due to the underlying data distribution leading to a less accurate model in this paper we present a novel approach to regularize the models by leveraging the information rich latent embeddings and their high intra class correlation we create phantom embeddings from a subset of homogenous samples and use these phantom embeddings to decrease the inter class similarity of instances in their latent embedding space the resulting models generalize better as a combination of their embedding and regularize them without requiring an expensive hyperparameter search we evaluate our method on two popular and challenging image classification datasets cifar and fashionmnist and show how our approach outperforms the standard baselines while displaying better training behavior keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw a byte sequence is worth an image cnn for file fragment classification using bit shift and n gram embeddings authors wenyang liu yi wang kejun wu kim hui yap lap pui chau subjects computer vision and pattern recognition cs cv signal processing eess sp arxiv link pdf link abstract file fragment classification ffc on small chunks of memory is essential in memory forensics and internet security existing methods mainly treat file fragments as byte signals and utilize the captured inter byte features for classification while the bit information within bytes i e intra byte information is seldom considered this is inherently inapt for classifying variable length coding files whose symbols are represented as the variable number of bits conversely we propose a novel data augmentation technique to introduce the neglected intra byte information into file fragments and re treat them as gray scale images which allows us to capture both inter byte and intra byte correlations simultaneously through powerful convolutional neural networks cnns specifically to convert file fragments to images we employ a sliding byte window to expose the neglected intra byte information and stack their n gram features row by row we further propose a byte sequence image fusion network as a classifier which can jointly model the raw byte sequence and the converted image to perform ffc experiments on fft dataset validate that our proposed method can achieve notable accuracy improvements over state of the art methods in nearly all scenarios the code will be released at keyword raw image there is no result
| 1
|
15,515
| 19,703,267,211
|
IssuesEvent
|
2022-01-12 18:52:20
|
googleapis/java-debugger-client
|
https://api.github.com/repos/googleapis/java-debugger-client
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'debugger-client' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'debugger-client' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname debugger client invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
19,549
| 6,735,133,254
|
IssuesEvent
|
2017-10-18 20:36:10
|
mmatyas/pegasus-frontend
|
https://api.github.com/repos/mmatyas/pegasus-frontend
|
closed
|
Generate deployment packages
|
build
|
- [x] Create a simple TAR package for Linux
- ~(fails for the same reason, as it also uses the same tool)~
- fixed by static builds
- [x] Create a simple ZIP package for Windows
- [ ] Add AppImage support
- previously: *outdated Travis, Docker complications, missing libs: too much hacking. Works offline though. I'll build Qt on older Ubuntu next time*
- I'd need to bundle GStreamer, is that okay?
- [ ] Snap support
- there are some Qt parts for Snapcraft, but a lot missing; would require building Qt from scratch
- system access problems, as listed below
- [ ] Flatpak support
- it seems I'd need to depend on the whole KDE runtime to get it work
- maybe I can avoid it with static builds
- TODO test
|
1.0
|
Generate deployment packages - - [x] Create a simple TAR package for Linux
- ~(fails for the same reason, as it also uses the same tool)~
- fixed by static builds
- [x] Create a simple ZIP package for Windows
- [ ] Add AppImage support
- previously: *outdated Travis, Docker complications, missing libs: too much hacking. Works offline though. I'll build Qt on older Ubuntu next time*
- I'd need to bundle GStreamer, is that okay?
- [ ] Snap support
- there are some Qt parts for Snapcraft, but a lot missing; would require building Qt from scratch
- system access problems, as listed below
- [ ] Flatpak support
- it seems I'd need to depend on the whole KDE runtime to get it work
- maybe I can avoid it with static builds
- TODO test
|
non_process
|
generate deployment packages create a simple tar package for linux fails for the same reason as it also uses the same tool fixed by static builds create a simple zip package for windows add appimage support previously outdated travis docker complications missing libs too much hacking works offline though i ll build qt on older ubuntu next time i d need to bundle gstreamer is that okay snap support there are some qt parts for snapcraft but a lot missing would require building qt from scratch system access problems as listed below flatpak support it seems i d need to depend on the whole kde runtime to get it work maybe i can avoid it with static builds todo test
| 0
|
208,558
| 23,616,553,723
|
IssuesEvent
|
2022-08-24 16:21:22
|
kydell/onboarding-training
|
https://api.github.com/repos/kydell/onboarding-training
|
opened
|
spring-boot-starter-web-2.1.3.RELEASE.jar: 27 vulnerabilities (highest severity is: 9.8)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-web-2.1.3.RELEASE.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-22965](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | spring-beans-5.1.5.RELEASE.jar | Transitive | 2.4.0 | ❌ |
| [CVE-2016-1000027](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000027) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | spring-web-5.1.5.RELEASE.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2019-0232](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.5.RELEASE | ❌ |
| [CVE-2017-18640](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | snakeyaml-1.23.jar | Transitive | 2.3.0.RELEASE | ❌ |
| [CVE-2020-5398](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5398) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | spring-web-5.1.5.RELEASE.jar | Transitive | 2.1.12.RELEASE | ❌ |
| [CVE-2019-10072](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10072) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.6.RELEASE | ❌ |
| [CVE-2019-17563](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.12.RELEASE | ❌ |
| [CVE-2020-11996](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2020-13934](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.16.RELEASE | ❌ |
| [CVE-2020-13935](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13935) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-websocket-9.0.16.jar | Transitive | 2.1.16.RELEASE | ❌ |
| [CVE-2021-25122](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.3.9.RELEASE | ❌ |
| [CVE-2021-41079](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41079) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.3.10.RELEASE | ❌ |
| [CVE-2021-25329](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | tomcat-embed-core-9.0.16.jar | Transitive | 2.3.9.RELEASE | ❌ |
| [CVE-2019-12418](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12418) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.11.RELEASE | ❌ |
| [CVE-2020-9484](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9484) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2021-42550](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.6 | detected in multiple dependencies | Transitive | 2.5.8 | ❌ |
| [CVE-2022-22950](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | spring-expression-5.1.5.RELEASE.jar | Transitive | 2.4.0 | ❌ |
| [CVE-2020-5421](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | spring-web-5.1.5.RELEASE.jar | Transitive | 2.1.17.RELEASE | ❌ |
| [CVE-2019-0221](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.5.RELEASE | ❌ |
| [CVE-2019-10219](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10219) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | hibernate-validator-6.0.14.Final.jar | Transitive | 2.1.10.RELEASE | ❌ |
| [CVE-2021-24122](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24122) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | tomcat-embed-core-9.0.16.jar | Transitive | 2.2.12.RELEASE | ❌ |
| [CVE-2021-33037](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33037) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | tomcat-embed-core-9.0.16.jar | Transitive | 2.4.8 | ❌ |
| [CVE-2022-22970](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22970) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | spring-beans-5.1.5.RELEASE.jar | Transitive | 2.5.13 | ❌ |
| [CVE-2020-10693](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10693) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | hibernate-validator-6.0.14.Final.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2020-1935](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1935) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.13.RELEASE | ❌ |
| [CVE-2020-13943](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13943) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.3 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.17.RELEASE | ❌ |
| [CVE-2021-22096](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.3 | detected in multiple dependencies | Transitive | 2.4.0 | ❌ |
## Details
> Partial details (21 vulnerabilities) are displayed below due to a content size limitation in GitHub. To view information on the remaining vulnerabilities, navigate to the Mend Application.<br>
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-22965</summary>
### Vulnerable Library - <b>spring-beans-5.1.5.RELEASE.jar</b></p>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.1.5.RELEASE/58b10c61f6bf2362909d884813c4049b657735f5/spring-beans-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-webmvc-5.1.5.RELEASE.jar
- :x: **spring-beans-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Spring MVC or Spring WebFlux application running on JDK 9+ may be vulnerable to remote code execution (RCE) via data binding. The specific exploit requires the application to run on Tomcat as a WAR deployment. If the application is deployed as a Spring Boot executable jar, i.e. the default, it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it.
<p>Publish Date: 2022-04-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-04-01</p>
<p>Fix Resolution (org.springframework:spring-beans): 5.2.20.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2016-1000027</summary>
### Vulnerable Library - <b>spring-web-5.1.5.RELEASE.jar</b></p>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.5.RELEASE/c37c4363be4ad6c5f67e3f9f020497e2d599e325/spring-web-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pivotal Spring Framework through 5.3.16 suffers from a potential remote code execution (RCE) issue if used for Java deserialization of untrusted data. Depending on how the library is implemented within a product, this issue may or not occur, and authentication may be required. NOTE: the vendor's position is that untrusted data is not an intended use case. The product's behavior will not be changed because some users rely on deserialization of trusted data.
<p>Publish Date: 2020-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000027>CVE-2016-1000027</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-1000027">https://nvd.nist.gov/vuln/detail/CVE-2016-1000027</a></p>
<p>Release Date: 2020-01-02</p>
<p>Fix Resolution (org.springframework:spring-web): 5.1.16.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.15.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0232</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disable by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange's blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).
<p>Publish Date: 2019-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232>CVE-2019-0232</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.19</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.5.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-18640</summary>
### Vulnerable Library - <b>snakeyaml-1.23.jar</b></p>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.23/ec62d74fe50689c28c0ff5b35d3aebcaa8b5be68/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.3.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The Alias feature in SnakeYAML before 1.26 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640>CVE-2017-18640</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640</a></p>
<p>Release Date: 2019-12-12</p>
<p>Fix Resolution (org.yaml:snakeyaml): 1.26</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.0.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5398</summary>
### Vulnerable Library - <b>spring-web-5.1.5.RELEASE.jar</b></p>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.5.RELEASE/c37c4363be4ad6c5f67e3f9f020497e2d599e325/spring-web-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Spring Framework, versions 5.2.x prior to 5.2.3, versions 5.1.x prior to 5.1.13, and versions 5.0.x prior to 5.0.16, an application is vulnerable to a reflected file download (RFD) attack when it sets a "Content-Disposition" header in the response where the filename attribute is derived from user supplied input.
<p>Publish Date: 2020-01-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5398>CVE-2020-5398</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pivotal.io/security/cve-2020-5398">https://pivotal.io/security/cve-2020-5398</a></p>
<p>Release Date: 2020-01-17</p>
<p>Fix Resolution (org.springframework:spring-web): 5.1.13.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.12.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-10072</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The fix for CVE-2019-0199 was incomplete and did not address HTTP/2 connection window exhaustion on write in Apache Tomcat versions 9.0.0.M1 to 9.0.19 and 8.5.0 to 8.5.40 . By not sending WINDOW_UPDATE messages for the connection window (stream 0) clients were able to cause server-side threads to block eventually leading to thread exhaustion and a DoS.
<p>Publish Date: 2019-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10072>CVE-2019-10072</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41">http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41</a></p>
<p>Release Date: 2019-06-21</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.20</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.6.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-17563</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When using FORM authentication with Apache Tomcat 9.0.0.M1 to 9.0.29, 8.5.0 to 8.5.49 and 7.0.0 to 7.0.98 there was a narrow window where an attacker could perform a session fixation attack. The window was considered too narrow for an exploit to be practical but, erring on the side of caution, this issue has been treated as a security vulnerability.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563>CVE-2019-17563</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.30</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.12.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-11996</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.
<p>Publish Date: 2020-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996>CVE-2020-11996</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html">https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2020-06-26</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.36</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.15.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-13934</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An h2c direct connection to Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934>CVE-2020-13934</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.37</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.16.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-13935</summary>
### Vulnerable Library - <b>tomcat-embed-websocket-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-websocket/9.0.16/f5eac487823c68f5d20742a99df1d94350c24d21/tomcat-embed-websocket-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-websocket-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The payload length in a WebSocket frame was not correctly validated in Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M1 to 9.0.36, 8.5.0 to 8.5.56 and 7.0.27 to 7.0.104. Invalid payload lengths could trigger an infinite loop. Multiple requests with invalid payload lengths could lead to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13935>CVE-2020-13935</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rd48c72bd3255bda87564d4da3791517c074d94f8a701f93b85752651%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rd48c72bd3255bda87564d4da3791517c074d94f8a701f93b85752651%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-websocket): 9.0.37</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.16.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25122</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122>CVE-2021-25122</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.43</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.9.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-41079</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Tomcat 8.5.0 to 8.5.63, 9.0.0-M1 to 9.0.43 and 10.0.0-M1 to 10.0.2 did not properly validate incoming TLS packets. When Tomcat was configured to use NIO+OpenSSL or NIO2+OpenSSL for TLS, a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service.
<p>Publish Date: 2021-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41079>CVE-2021-41079</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/security-10.html">https://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2021-09-16</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.44</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.10.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25329</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.43</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.9.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-12418</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When Apache Tomcat 9.0.0.M1 to 9.0.28, 8.5.0 to 8.5.47, 7.0.0 and 7.0.97 is configured with the JMX Remote Lifecycle Listener, a local attacker without access to the Tomcat process or configuration files is able to manipulate the RMI registry to perform a man-in-the-middle attack to capture user names and passwords used to access the JMX interface. The attacker can then use these credentials to access the JMX interface and gain complete control over the Tomcat instance.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12418>CVE-2019-12418</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12418">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12418</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.29</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.11.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-9484</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When using Apache Tomcat versions 10.0.0-M1 to 10.0.0-M4, 9.0.0.M1 to 9.0.34, 8.5.0 to 8.5.54 and 7.0.0 to 7.0.103 if a) an attacker is able to control the contents and name of a file on the server; and b) the server is configured to use the PersistenceManager with a FileStore; and c) the PersistenceManager is configured with sessionAttributeValueClassNameFilter="null" (the default unless a SecurityManager is used) or a sufficiently lax filter to allow the attacker provided object to be deserialized; and d) the attacker knows the relative file path from the storage location used by FileStore to the file the attacker has control over; then, using a specifically crafted request, the attacker will be able to trigger remote code execution via deserialization of the file under their control. Note that all of conditions a) to d) must be true for the attack to succeed.
<p>Publish Date: 2020-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9484>CVE-2020-9484</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484</a></p>
<p>Release Date: 2020-05-20</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.35</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.15.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-42550</summary>
### Vulnerable Libraries - <b>logback-classic-1.2.3.jar</b>, <b>logback-core-1.2.3.jar</b></p>
<p>
### <b>logback-classic-1.2.3.jar</b></p>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.2.3/7c4f3c474fb2c041d8028740440937705ebb473a/logback-classic-1.2.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.2.3/7c4f3c474fb2c041d8028740440937705ebb473a/logback-classic-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.3.RELEASE.jar
- spring-boot-starter-logging-2.1.3.RELEASE.jar
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
### <b>logback-core-1.2.3.jar</b></p>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.2.3/864344400c3d4d92dfeb0a305dc87d953677c03c/logback-core-1.2.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.2.3/864344400c3d4d92dfeb0a305dc87d953677c03c/logback-core-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.3.RELEASE.jar
- spring-boot-starter-logging-2.1.3.RELEASE.jar
- logback-classic-1.2.3.jar
- :x: **logback-core-1.2.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
<p>Publish Date: 2021-12-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550>CVE-2021-42550</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550">https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550</a></p>
<p>Release Date: 2021-12-16</p>
<p>Fix Resolution (ch.qos.logback:logback-classic): 1.2.8</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.5.8</p><p>Fix Resolution (ch.qos.logback:logback-core): 1.2.8</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.5.8</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-22950</summary>
### Vulnerable Library - <b>spring-expression-5.1.5.RELEASE.jar</b></p>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-expression/5.1.5.RELEASE/b728a06924560ee69307a52d100e6b156d9a4a80/spring-expression-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-webmvc-5.1.5.RELEASE.jar
- :x: **spring-expression-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
n Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition.
<p>Publish Date: 2022-04-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-04-01</p>
<p>Fix Resolution (org.springframework:spring-expression): 5.2.20.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-5421</summary>
### Vulnerable Library - <b>spring-web-5.1.5.RELEASE.jar</b></p>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.5.RELEASE/c37c4363be4ad6c5f67e3f9f020497e2d599e325/spring-web-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-09-19</p>
<p>Fix Resolution (org.springframework:spring-web): 5.1.18.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.17.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-0221</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221>CVE-2019-0221</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221</a></p>
<p>Release Date: 2019-05-28</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.19</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.5.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-10219</summary>
### Vulnerable Library - <b>hibernate-validator-6.0.14.Final.jar</b></p>
<p>Hibernate's Bean Validation (JSR-380) reference implementation.</p>
<p>Library home page: <a href="http://hibernate.org/validator">http://hibernate.org/validator</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.hibernate.validator/hibernate-validator/6.0.14.Final/c424524aa7718c564d9199ac5892b05901cabae6/hibernate-validator-6.0.14.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **hibernate-validator-6.0.14.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was found in Hibernate-Validator. The SafeHtml validator annotation fails to properly sanitize payloads consisting of potentially malicious code in HTML comments and instructions. This vulnerability can result in an XSS attack.
<p>Publish Date: 2019-11-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10219>CVE-2019-10219</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10219">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10219</a></p>
<p>Release Date: 2019-11-08</p>
<p>Fix Resolution (org.hibernate.validator:hibernate-validator): 6.0.18.Final</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.10.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-24122</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When serving resources from a network location using the NTFS file system, Apache Tomcat versions 10.0.0-M1 to 10.0.0-M9, 9.0.0.M1 to 9.0.39, 8.5.0 to 8.5.59 and 7.0.0 to 7.0.106 were susceptible to JSP source code disclosure in some configurations. The root cause was the unexpected behaviour of the JRE API File.getCanonicalPath() which in turn was caused by the inconsistent behaviour of the Windows API (FindFirstFileW) in some circumstances.
<p>Publish Date: 2021-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24122>CVE-2021-24122</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-24122">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-24122</a></p>
<p>Release Date: 2021-01-14</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.40</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.2.12.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
spring-boot-starter-web-2.1.3.RELEASE.jar: 27 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-web-2.1.3.RELEASE.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-22965](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | spring-beans-5.1.5.RELEASE.jar | Transitive | 2.4.0 | ❌ |
| [CVE-2016-1000027](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000027) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | spring-web-5.1.5.RELEASE.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2019-0232](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.5.RELEASE | ❌ |
| [CVE-2017-18640](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | snakeyaml-1.23.jar | Transitive | 2.3.0.RELEASE | ❌ |
| [CVE-2020-5398](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5398) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | spring-web-5.1.5.RELEASE.jar | Transitive | 2.1.12.RELEASE | ❌ |
| [CVE-2019-10072](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10072) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.6.RELEASE | ❌ |
| [CVE-2019-17563](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.12.RELEASE | ❌ |
| [CVE-2020-11996](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2020-13934](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.16.RELEASE | ❌ |
| [CVE-2020-13935](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13935) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-websocket-9.0.16.jar | Transitive | 2.1.16.RELEASE | ❌ |
| [CVE-2021-25122](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.3.9.RELEASE | ❌ |
| [CVE-2021-41079](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41079) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | tomcat-embed-core-9.0.16.jar | Transitive | 2.3.10.RELEASE | ❌ |
| [CVE-2021-25329](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | tomcat-embed-core-9.0.16.jar | Transitive | 2.3.9.RELEASE | ❌ |
| [CVE-2019-12418](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12418) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.11.RELEASE | ❌ |
| [CVE-2020-9484](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9484) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2021-42550](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.6 | detected in multiple dependencies | Transitive | 2.5.8 | ❌ |
| [CVE-2022-22950](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | spring-expression-5.1.5.RELEASE.jar | Transitive | 2.4.0 | ❌ |
| [CVE-2020-5421](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | spring-web-5.1.5.RELEASE.jar | Transitive | 2.1.17.RELEASE | ❌ |
| [CVE-2019-0221](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.5.RELEASE | ❌ |
| [CVE-2019-10219](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10219) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | hibernate-validator-6.0.14.Final.jar | Transitive | 2.1.10.RELEASE | ❌ |
| [CVE-2021-24122](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24122) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | tomcat-embed-core-9.0.16.jar | Transitive | 2.2.12.RELEASE | ❌ |
| [CVE-2021-33037](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33037) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | tomcat-embed-core-9.0.16.jar | Transitive | 2.4.8 | ❌ |
| [CVE-2022-22970](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22970) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | spring-beans-5.1.5.RELEASE.jar | Transitive | 2.5.13 | ❌ |
| [CVE-2020-10693](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10693) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | hibernate-validator-6.0.14.Final.jar | Transitive | 2.1.15.RELEASE | ❌ |
| [CVE-2020-1935](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1935) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.13.RELEASE | ❌ |
| [CVE-2020-13943](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13943) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.3 | tomcat-embed-core-9.0.16.jar | Transitive | 2.1.17.RELEASE | ❌ |
| [CVE-2021-22096](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.3 | detected in multiple dependencies | Transitive | 2.4.0 | ❌ |
## Details
> Partial details (21 vulnerabilities) are displayed below due to a content size limitation in GitHub. To view information on the remaining vulnerabilities, navigate to the Mend Application.<br>
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-22965</summary>
### Vulnerable Library - <b>spring-beans-5.1.5.RELEASE.jar</b></p>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.1.5.RELEASE/58b10c61f6bf2362909d884813c4049b657735f5/spring-beans-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-webmvc-5.1.5.RELEASE.jar
- :x: **spring-beans-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Spring MVC or Spring WebFlux application running on JDK 9+ may be vulnerable to remote code execution (RCE) via data binding. The specific exploit requires the application to run on Tomcat as a WAR deployment. If the application is deployed as a Spring Boot executable jar, i.e. the default, it is not vulnerable to the exploit. However, the nature of the vulnerability is more general, and there may be other ways to exploit it.
<p>Publish Date: 2022-04-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-04-01</p>
<p>Fix Resolution (org.springframework:spring-beans): 5.2.20.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2016-1000027</summary>
### Vulnerable Library - <b>spring-web-5.1.5.RELEASE.jar</b></p>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.5.RELEASE/c37c4363be4ad6c5f67e3f9f020497e2d599e325/spring-web-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Pivotal Spring Framework through 5.3.16 suffers from a potential remote code execution (RCE) issue if used for Java deserialization of untrusted data. Depending on how the library is implemented within a product, this issue may or not occur, and authentication may be required. NOTE: the vendor's position is that untrusted data is not an intended use case. The product's behavior will not be changed because some users rely on deserialization of trusted data.
<p>Publish Date: 2020-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000027>CVE-2016-1000027</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-1000027">https://nvd.nist.gov/vuln/detail/CVE-2016-1000027</a></p>
<p>Release Date: 2020-01-02</p>
<p>Fix Resolution (org.springframework:spring-web): 5.1.16.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.15.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-0232</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When running on Windows with enableCmdLineArguments enabled, the CGI Servlet in Apache Tomcat 9.0.0.M1 to 9.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 is vulnerable to Remote Code Execution due to a bug in the way the JRE passes command line arguments to Windows. The CGI Servlet is disabled by default. The CGI option enableCmdLineArguments is disable by default in Tomcat 9.0.x (and will be disabled by default in all versions in response to this vulnerability). For a detailed explanation of the JRE behaviour, see Markus Wulftange's blog (https://codewhitesec.blogspot.com/2016/02/java-and-command-line-injections-in-windows.html) and this archived MSDN blog (https://web.archive.org/web/20161228144344/https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/).
<p>Publish Date: 2019-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0232>CVE-2019-0232</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0232</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.19</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.5.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-18640</summary>
### Vulnerable Library - <b>snakeyaml-1.23.jar</b></p>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.23/ec62d74fe50689c28c0ff5b35d3aebcaa8b5be68/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.3.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The Alias feature in SnakeYAML before 1.26 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
<p>Publish Date: 2019-12-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18640>CVE-2017-18640</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18640</a></p>
<p>Release Date: 2019-12-12</p>
<p>Fix Resolution (org.yaml:snakeyaml): 1.26</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.0.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-5398</summary>
### Vulnerable Library - <b>spring-web-5.1.5.RELEASE.jar</b></p>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.5.RELEASE/c37c4363be4ad6c5f67e3f9f020497e2d599e325/spring-web-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Spring Framework, versions 5.2.x prior to 5.2.3, versions 5.1.x prior to 5.1.13, and versions 5.0.x prior to 5.0.16, an application is vulnerable to a reflected file download (RFD) attack when it sets a "Content-Disposition" header in the response where the filename attribute is derived from user supplied input.
<p>Publish Date: 2020-01-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5398>CVE-2020-5398</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pivotal.io/security/cve-2020-5398">https://pivotal.io/security/cve-2020-5398</a></p>
<p>Release Date: 2020-01-17</p>
<p>Fix Resolution (org.springframework:spring-web): 5.1.13.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.12.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-10072</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The fix for CVE-2019-0199 was incomplete and did not address HTTP/2 connection window exhaustion on write in Apache Tomcat versions 9.0.0.M1 to 9.0.19 and 8.5.0 to 8.5.40 . By not sending WINDOW_UPDATE messages for the connection window (stream 0) clients were able to cause server-side threads to block eventually leading to thread exhaustion and a DoS.
<p>Publish Date: 2019-06-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10072>CVE-2019-10072</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41">http://tomcat.apache.org/security-8.html#Fixed_in_Apache_Tomcat_8.5.41</a></p>
<p>Release Date: 2019-06-21</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.20</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.6.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-17563</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When using FORM authentication with Apache Tomcat 9.0.0.M1 to 9.0.29, 8.5.0 to 8.5.49 and 7.0.0 to 7.0.98 there was a narrow window where an attacker could perform a session fixation attack. The window was considered too narrow for an exploit to be practical but, erring on the side of caution, this issue has been treated as a security vulnerability.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563>CVE-2019-17563</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.30</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.12.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-11996</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.
<p>Publish Date: 2020-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996>CVE-2020-11996</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html">https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2020-06-26</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.36</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.15.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-13934</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An h2c direct connection to Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934>CVE-2020-13934</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.37</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.16.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-13935</summary>
### Vulnerable Library - <b>tomcat-embed-websocket-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-websocket/9.0.16/f5eac487823c68f5d20742a99df1d94350c24d21/tomcat-embed-websocket-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-websocket-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The payload length in a WebSocket frame was not correctly validated in Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M1 to 9.0.36, 8.5.0 to 8.5.56 and 7.0.27 to 7.0.104. Invalid payload lengths could trigger an infinite loop. Multiple requests with invalid payload lengths could lead to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13935>CVE-2020-13935</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rd48c72bd3255bda87564d4da3791517c074d94f8a701f93b85752651%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rd48c72bd3255bda87564d4da3791517c074d94f8a701f93b85752651%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-websocket): 9.0.37</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.16.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25122</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122>CVE-2021-25122</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.43</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.9.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-41079</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Tomcat 8.5.0 to 8.5.63, 9.0.0-M1 to 9.0.43 and 10.0.0-M1 to 10.0.2 did not properly validate incoming TLS packets. When Tomcat was configured to use NIO+OpenSSL or NIO2+OpenSSL for TLS, a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service.
<p>Publish Date: 2021-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41079>CVE-2021-41079</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/security-10.html">https://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2021-09-16</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.44</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.10.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-25329</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.43</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.9.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-12418</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When Apache Tomcat 9.0.0.M1 to 9.0.28, 8.5.0 to 8.5.47, 7.0.0 and 7.0.97 is configured with the JMX Remote Lifecycle Listener, a local attacker without access to the Tomcat process or configuration files is able to manipulate the RMI registry to perform a man-in-the-middle attack to capture user names and passwords used to access the JMX interface. The attacker can then use these credentials to access the JMX interface and gain complete control over the Tomcat instance.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12418>CVE-2019-12418</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12418">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12418</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.29</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.11.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-9484</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When using Apache Tomcat versions 10.0.0-M1 to 10.0.0-M4, 9.0.0.M1 to 9.0.34, 8.5.0 to 8.5.54 and 7.0.0 to 7.0.103 if a) an attacker is able to control the contents and name of a file on the server; and b) the server is configured to use the PersistenceManager with a FileStore; and c) the PersistenceManager is configured with sessionAttributeValueClassNameFilter="null" (the default unless a SecurityManager is used) or a sufficiently lax filter to allow the attacker provided object to be deserialized; and d) the attacker knows the relative file path from the storage location used by FileStore to the file the attacker has control over; then, using a specifically crafted request, the attacker will be able to trigger remote code execution via deserialization of the file under their control. Note that all of conditions a) to d) must be true for the attack to succeed.
<p>Publish Date: 2020-05-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9484>CVE-2020-9484</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9484</a></p>
<p>Release Date: 2020-05-20</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.35</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.15.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-42550</summary>
### Vulnerable Libraries - <b>logback-classic-1.2.3.jar</b>, <b>logback-core-1.2.3.jar</b></p>
<p>
### <b>logback-classic-1.2.3.jar</b></p>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.2.3/7c4f3c474fb2c041d8028740440937705ebb473a/logback-classic-1.2.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.2.3/7c4f3c474fb2c041d8028740440937705ebb473a/logback-classic-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.3.RELEASE.jar
- spring-boot-starter-logging-2.1.3.RELEASE.jar
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
### <b>logback-core-1.2.3.jar</b></p>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.2.3/864344400c3d4d92dfeb0a305dc87d953677c03c/logback-core-1.2.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.2.3/864344400c3d4d92dfeb0a305dc87d953677c03c/logback-core-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.3.RELEASE.jar
- spring-boot-starter-logging-2.1.3.RELEASE.jar
- logback-classic-1.2.3.jar
- :x: **logback-core-1.2.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
<p>Publish Date: 2021-12-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550>CVE-2021-42550</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550">https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550</a></p>
<p>Release Date: 2021-12-16</p>
<p>Fix Resolution (ch.qos.logback:logback-classic): 1.2.8</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.5.8</p><p>Fix Resolution (ch.qos.logback:logback-core): 1.2.8</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.5.8</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-22950</summary>
### Vulnerable Library - <b>spring-expression-5.1.5.RELEASE.jar</b></p>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-expression/5.1.5.RELEASE/b728a06924560ee69307a52d100e6b156d9a4a80/spring-expression-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-webmvc-5.1.5.RELEASE.jar
- :x: **spring-expression-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
n Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition.
<p>Publish Date: 2022-04-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-04-01</p>
<p>Fix Resolution (org.springframework:spring-expression): 5.2.20.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.4.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-5421</summary>
### Vulnerable Library - <b>spring-web-5.1.5.RELEASE.jar</b></p>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.5.RELEASE/c37c4363be4ad6c5f67e3f9f020497e2d599e325/spring-web-5.1.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-web-5.1.5.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Spring Framework versions 5.2.0 - 5.2.8, 5.1.0 - 5.1.17, 5.0.0 - 5.0.18, 4.3.0 - 4.3.28, and older unsupported versions, the protections against RFD attacks from CVE-2015-5211 may be bypassed depending on the browser used through the use of a jsessionid path parameter.
<p>Publish Date: 2020-09-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5421>CVE-2020-5421</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2020-5421">https://tanzu.vmware.com/security/cve-2020-5421</a></p>
<p>Release Date: 2020-09-19</p>
<p>Fix Resolution (org.springframework:spring-web): 5.1.18.RELEASE</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.17.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-0221</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The SSI printenv command in Apache Tomcat 9.0.0.M1 to 9.0.0.17, 8.5.0 to 8.5.39 and 7.0.0 to 7.0.93 echoes user provided data without escaping and is, therefore, vulnerable to XSS. SSI is disabled by default. The printenv command is intended for debugging and is unlikely to be present in a production website.
<p>Publish Date: 2019-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0221>CVE-2019-0221</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0221</a></p>
<p>Release Date: 2019-05-28</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.19</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.5.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-10219</summary>
### Vulnerable Library - <b>hibernate-validator-6.0.14.Final.jar</b></p>
<p>Hibernate's Bean Validation (JSR-380) reference implementation.</p>
<p>Library home page: <a href="http://hibernate.org/validator">http://hibernate.org/validator</a></p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.hibernate.validator/hibernate-validator/6.0.14.Final/c424524aa7718c564d9199ac5892b05901cabae6/hibernate-validator-6.0.14.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- :x: **hibernate-validator-6.0.14.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was found in Hibernate-Validator. The SafeHtml validator annotation fails to properly sanitize payloads consisting of potentially malicious code in HTML comments and instructions. This vulnerability can result in an XSS attack.
<p>Publish Date: 2019-11-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10219>CVE-2019-10219</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10219">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10219</a></p>
<p>Release Date: 2019-11-08</p>
<p>Fix Resolution (org.hibernate.validator:hibernate-validator): 6.0.18.Final</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.1.10.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-24122</summary>
### Vulnerable Library - <b>tomcat-embed-core-9.0.16.jar</b></p>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: /Java/Gradle/kotlin-build-1/build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.16/d7069e3d0f760035b26b68b7b6af5eaa0c1862f/tomcat-embed-core-9.0.16.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.16.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kydell/onboarding-training/commit/4ff66014e73190546324666966b868318ad1a80b">4ff66014e73190546324666966b868318ad1a80b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When serving resources from a network location using the NTFS file system, Apache Tomcat versions 10.0.0-M1 to 10.0.0-M9, 9.0.0.M1 to 9.0.39, 8.5.0 to 8.5.59 and 7.0.0 to 7.0.106 were susceptible to JSP source code disclosure in some configurations. The root cause was the unexpected behaviour of the JRE API File.getCanonicalPath() which in turn was caused by the inconsistent behaviour of the Windows API (FindFirstFileW) in some circumstances.
<p>Publish Date: 2021-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24122>CVE-2021-24122</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-24122">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-24122</a></p>
<p>Release Date: 2021-01-14</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.40</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.2.12.RELEASE</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_process
|
spring boot starter web release jar vulnerabilities highest severity is vulnerable library spring boot starter web release jar path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high spring beans release jar transitive high spring web release jar transitive release high tomcat embed core jar transitive release high snakeyaml jar transitive release high spring web release jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release high tomcat embed websocket jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release high tomcat embed core jar transitive release medium detected in multiple dependencies transitive medium spring expression release jar transitive medium spring web release jar transitive release medium tomcat embed core jar transitive release medium hibernate validator final jar transitive release medium tomcat embed core jar transitive release medium tomcat embed core jar transitive medium spring beans release jar transitive medium hibernate validator final jar transitive release medium tomcat embed core jar transitive release medium tomcat embed core jar transitive release medium detected in multiple dependencies transitive details partial details vulnerabilities are displayed below due to a content size limitation in github to view information on the remaining vulnerabilities navigate to the mend application cve vulnerable library spring beans release jar spring beans library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org springframework spring beans release spring beans release jar dependency hierarchy spring boot starter web release jar root library spring webmvc release jar x spring beans release jar vulnerable library found in head commit a href found in base branch main vulnerability details a spring mvc or spring webflux application running on jdk may be vulnerable to remote code execution rce via data binding the specific exploit requires the application to run on tomcat as a war deployment if the application is deployed as a spring boot executable jar i e the default it is not vulnerable to the exploit however the nature of the vulnerability is more general and there may be other ways to exploit it publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring beans release direct dependency fix resolution org springframework boot spring boot starter web step up your open source security game with mend cve vulnerable library spring web release jar spring web library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org springframework spring web release spring web release jar dependency hierarchy spring boot starter web release jar root library x spring web release jar vulnerable library found in head commit a href found in base branch main vulnerability details pivotal spring framework through suffers from a potential remote code execution rce issue if used for java deserialization of untrusted data depending on how the library is implemented within a product this issue may or not occur and authentication may be required note the vendor s position is that untrusted data is not an intended use case the product s behavior will not be changed because some users rely on deserialization of trusted data publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web release direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when running on windows with enablecmdlinearguments enabled the cgi servlet in apache tomcat to to and to is vulnerable to remote code execution due to a bug in the way the jre passes command line arguments to windows the cgi servlet is disabled by default the cgi option enablecmdlinearguments is disable by default in tomcat x and will be disabled by default in all versions in response to this vulnerability for a detailed explanation of the jre behaviour see markus wulftange s blog and this archived msdn blog publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x snakeyaml jar vulnerable library found in head commit a href found in base branch main vulnerability details the alias feature in snakeyaml before allows entity expansion during a load operation a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library spring web release jar spring web library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org springframework spring web release spring web release jar dependency hierarchy spring boot starter web release jar root library x spring web release jar vulnerable library found in head commit a href found in base branch main vulnerability details in spring framework versions x prior to versions x prior to and versions x prior to an application is vulnerable to a reflected file download rfd attack when it sets a content disposition header in the response where the filename attribute is derived from user supplied input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web release direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details the fix for cve was incomplete and did not address http connection window exhaustion on write in apache tomcat versions to and to by not sending window update messages for the connection window stream clients were able to cause server side threads to block eventually leading to thread exhaustion and a dos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when using form authentication with apache tomcat to to and to there was a narrow window where an attacker could perform a session fixation attack the window was considered too narrow for an exploit to be practical but erring on the side of caution this issue has been treated as a security vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details a specially crafted sequence of http requests sent to apache tomcat to to and to could trigger high cpu usage for several seconds if a sufficient number of such requests were made on concurrent http connections the server could become unresponsive publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details an direct connection to apache tomcat to to and to did not release the http processor after the upgrade to http if a sufficient number of such requests were made an outofmemoryexception could occur leading to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed websocket jar core tomcat implementation library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed websocket tomcat embed websocket jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed websocket jar vulnerable library found in head commit a href found in base branch main vulnerability details the payload length in a websocket frame was not correctly validated in apache tomcat to to to and to invalid payload lengths could trigger an infinite loop multiple requests with invalid payload lengths could lead to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed websocket direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when responding to new connection requests apache tomcat versions to to and to could duplicate request headers and a limited amount of request body from one request to another meaning user a and user b could both see the results of user a s request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details apache tomcat to to and to did not properly validate incoming tls packets when tomcat was configured to use nio openssl or openssl for tls a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details the fix for cve was incomplete when using apache tomcat to to to or to with a configuration edge case that was highly unlikely to be used the tomcat instance was still vulnerable to cve note that both the previously published prerequisites for cve and the previously published mitigations for cve also apply to this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when apache tomcat to to and is configured with the jmx remote lifecycle listener a local attacker without access to the tomcat process or configuration files is able to manipulate the rmi registry to perform a man in the middle attack to capture user names and passwords used to access the jmx interface the attacker can then use these credentials to access the jmx interface and gain complete control over the tomcat instance publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when using apache tomcat versions to to to and to if a an attacker is able to control the contents and name of a file on the server and b the server is configured to use the persistencemanager with a filestore and c the persistencemanager is configured with sessionattributevalueclassnamefilter null the default unless a securitymanager is used or a sufficiently lax filter to allow the attacker provided object to be deserialized and d the attacker knows the relative file path from the storage location used by filestore to the file the attacker has control over then using a specifically crafted request the attacker will be able to trigger remote code execution via deserialization of the file under their control note that all of conditions a to d must be true for the attack to succeed publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable libraries logback classic jar logback core jar logback classic jar logback classic module library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files ch qos logback logback classic logback classic jar home wss scanner gradle caches modules files ch qos logback logback classic logback classic jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar spring boot starter logging release jar x logback classic jar vulnerable library logback core jar logback core module library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files ch qos logback logback core logback core jar home wss scanner gradle caches modules files ch qos logback logback core logback core jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar spring boot starter logging release jar logback classic jar x logback core jar vulnerable library found in head commit a href found in base branch main vulnerability details in logback version and prior versions an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from ldap servers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos logback logback classic direct dependency fix resolution org springframework boot spring boot starter web fix resolution ch qos logback logback core direct dependency fix resolution org springframework boot spring boot starter web step up your open source security game with mend cve vulnerable library spring expression release jar spring expression language spel library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org springframework spring expression release spring expression release jar dependency hierarchy spring boot starter web release jar root library spring webmvc release jar x spring expression release jar vulnerable library found in head commit a href found in base branch main vulnerability details n spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring expression release direct dependency fix resolution org springframework boot spring boot starter web step up your open source security game with mend cve vulnerable library spring web release jar spring web library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org springframework spring web release spring web release jar dependency hierarchy spring boot starter web release jar root library x spring web release jar vulnerable library found in head commit a href found in base branch main vulnerability details in spring framework versions and older unsupported versions the protections against rfd attacks from cve may be bypassed depending on the browser used through the use of a jsessionid path parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web release direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details the ssi printenv command in apache tomcat to to and to echoes user provided data without escaping and is therefore vulnerable to xss ssi is disabled by default the printenv command is intended for debugging and is unlikely to be present in a production website publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library hibernate validator final jar hibernate s bean validation jsr reference implementation library home page a href path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org hibernate validator hibernate validator final hibernate validator final jar dependency hierarchy spring boot starter web release jar root library x hibernate validator final jar vulnerable library found in head commit a href found in base branch main vulnerability details a vulnerability was found in hibernate validator the safehtml validator annotation fails to properly sanitize payloads consisting of potentially malicious code in html comments and instructions this vulnerability can result in an xss attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org hibernate validator hibernate validator final direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend cve vulnerable library tomcat embed core jar core tomcat implementation path to dependency file java gradle kotlin build build gradle kts path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when serving resources from a network location using the ntfs file system apache tomcat versions to to to and to were susceptible to jsp source code disclosure in some configurations the root cause was the unexpected behaviour of the jre api file getcanonicalpath which in turn was caused by the inconsistent behaviour of the windows api findfirstfilew in some circumstances publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend
| 0
|
630
| 2,798,583,331
|
IssuesEvent
|
2015-05-12 19:23:24
|
new-day-international/reddit
|
https://api.github.com/repos/new-day-international/reddit
|
closed
|
Investigate using fewer cloud search instances
|
Infrastructure
|
From @inflector
> Attached please find the usage report for September. It looks like we've got stuff firing off every hour uploading and somehow querying CloudSearch, what might be causing this? Are we getting spiders doing searches?
> Also, we seem to be getting separate charges for each of the different dev, test, and production searches, I thought we'd only be using one instance. We certainly don't need three actual machines for our work.
|
1.0
|
Investigate using fewer cloud search instances - From @inflector
> Attached please find the usage report for September. It looks like we've got stuff firing off every hour uploading and somehow querying CloudSearch, what might be causing this? Are we getting spiders doing searches?
> Also, we seem to be getting separate charges for each of the different dev, test, and production searches, I thought we'd only be using one instance. We certainly don't need three actual machines for our work.
|
non_process
|
investigate using fewer cloud search instances from inflector attached please find the usage report for september it looks like we ve got stuff firing off every hour uploading and somehow querying cloudsearch what might be causing this are we getting spiders doing searches also we seem to be getting separate charges for each of the different dev test and production searches i thought we d only be using one instance we certainly don t need three actual machines for our work
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.