| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
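Rows of this dataset can be checked against the schema above with a small helper; a minimal sketch in plain Python (the sample row and the decision to accept floats where the dump says `int64`/`float64` are assumptions for illustration):

```python
# Expected column types for one row of the flattened issues dataset.
SCHEMA = {
    "Unnamed: 0": int,
    "id": int,          # stored as float64 in the dump; integral in practice
    "type": str,
    "created_at": str,
    "repo": str,
    "repo_url": str,
    "action": str,
    "title": str,
    "labels": str,
    "body": str,
    "index": str,
    "text_combine": str,
    "label": str,
    "text": str,
    "binary_label": int,
}

def validate_row(row):
    """Return True when the row has every column with a plausible type."""
    return all(
        col in row
        and isinstance(row[col], (typ, float) if typ is int else typ)
        for col, typ in SCHEMA.items()
    )
```

A validator like this catches rows that lost a cell during flattening before they reach downstream training code.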
---
Unnamed: 0: 22,722
id: 32,042,550,473
type: IssuesEvent
created_at: 2023-09-22 20:45:59
repo: metabase/metabase
repo_url: https://api.github.com/repos/metabase/metabase
action: closed
title: [MLv2] Port `canRun` to MLv2
labels: Querying/Native .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
body:
There're a few MLv1 methods the editor uses that'd be nice to port to MLv2 as well:
`canRun`
### MBQL query implementation:
https://github.com/metabase/metabase/blob/dbfca6c6d173294ddcf97b394750574b4ef10221/frontend/src/metabase-lib/queries/StructuredQuery.ts#L150
### native query implementation:
https://github.com/metabase/metabase/blob/dbfca6c6d173294ddcf97b394750574b4ef10221/frontend/src/metabase-lib/queries/NativeQuery.ts#L110
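The linked MLv1 implementations gate query execution on whether the query is structurally complete. A minimal sketch of what such a `canRun` predicate checks, written in plain Python rather than the project's TypeScript (the field names here are hypothetical, not Metabase's actual query shape):

```python
def can_run(query):
    """Hypothetical canRun: a native query needs non-blank query text,
    an MBQL (structured) query needs at least a source table."""
    if query.get("type") == "native":
        return bool(query.get("native", "").strip())
    return query.get("source_table") is not None
```

The real port would live in MLv2 and consult its metadata provider, but the shape of the check is the same.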
index: 1.0
text_combine:
[MLv2] Port `canRun` to MLv2 - There're a few MLv1 methods the editor uses that'd be nice to port to MLv2 as well:
`canRun`
### MBQL query implementation:
https://github.com/metabase/metabase/blob/dbfca6c6d173294ddcf97b394750574b4ef10221/frontend/src/metabase-lib/queries/StructuredQuery.ts#L150
### native query implementation:
https://github.com/metabase/metabase/blob/dbfca6c6d173294ddcf97b394750574b4ef10221/frontend/src/metabase-lib/queries/NativeQuery.ts#L110
label: process
text:
port canrun to there re a few methods the editor uses that d be nice to port to as well canrun mbql query implementation native query implementation
binary_label: 1
---
Unnamed: 0: 5,776
id: 8,616,757,010
type: IssuesEvent
created_at: 2018-11-20 01:46:45
repo: metabase/metabase
repo_url: https://api.github.com/repos/metabase/metabase
action: closed
title: `On` Date filter broken in v0.31 for Datetime fields with visual query builder?
labels: Bug Query Processor
body:
Hey there,
We just upgraded Metabase to v0.31 from v0.30, and it seems the behavior of Date filters broke, or at least changed in an unexpected way.
Our underlying database is PostgreSQL (v9.6), and we are trying to filter a Datetime field (e.g. *created*) by the Day.
E.g.
`Order filtered by Created *on* 11/09/2018`
Returns no results, but if we use *between* 11/09/2018-11/10/2018, the results for 11/09 are successfully returned.
**On 11/09 fails** (used to work):
<img width="1023" alt="on day fails" src="https://user-images.githubusercontent.com/193187/48350403-0e389d00-e655-11e8-82ea-e053b48b3e38.png">
**Between 11/09-11/09 fails** (think this used to work too):
<img width="1072" alt="between same day fails" src="https://user-images.githubusercontent.com/193187/48350424-23adc700-e655-11e8-8703-60e448c6ef76.png">
**Between 11/09-11/10 works** (but only returns the results for 11/09, think it used to return the results for 11/09 *and* 11/10:
<img width="1398" alt="between works" src="https://user-images.githubusercontent.com/193187/48350442-36280080-e655-11e8-997c-280917d6daa4.png">
FWIW, we have the reporting time zone set to "US/Eastern" rather than UTC the system default.
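The behavior the reporter expects, an *on* filter matching the whole calendar day and *between* being inclusive of both endpoint days, can be expressed as half-open datetime ranges; a minimal illustration in Python, assuming timestamps are naive and already in the reporting time zone:

```python
from datetime import date, datetime, timedelta

def day_range(day):
    """Half-open [midnight, next midnight) range covering one calendar day."""
    start = datetime.combine(day, datetime.min.time())
    return start, start + timedelta(days=1)

def on_day(ts, day):
    """'Created on <day>' should match any timestamp inside that day."""
    start, end = day_range(day)
    return start <= ts < end

def between_days(ts, first, last):
    """'Between <first> and <last>', inclusive of both endpoint days."""
    start, _ = day_range(first)
    _, end = day_range(last)
    return start <= ts < end
```

Truncating the timestamp (or comparing against a half-open range like this) is what makes an equality-style day filter match datetime values; comparing a midnight literal directly against a datetime column matches only rows created at exactly midnight.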
index: 1.0
text_combine:
`On` Date filter broken in v0.31 for Datetime fields with visual query builder? - Hey there,
We just upgraded Metabase to v0.31 from v0.30, and it seems the behavior of Date filters broke, or at least changed in an unexpected way.
Our underlying database is PostgreSQL (v9.6), and we are trying to filter a Datetime field (e.g. *created*) by the Day.
E.g.
`Order filtered by Created *on* 11/09/2018`
Returns no results, but if we use *between* 11/09/2018-11/10/2018, the results for 11/09 are successfully returned.
**On 11/09 fails** (used to work):
<img width="1023" alt="on day fails" src="https://user-images.githubusercontent.com/193187/48350403-0e389d00-e655-11e8-82ea-e053b48b3e38.png">
**Between 11/09-11/09 fails** (think this used to work too):
<img width="1072" alt="between same day fails" src="https://user-images.githubusercontent.com/193187/48350424-23adc700-e655-11e8-8703-60e448c6ef76.png">
**Between 11/09-11/10 works** (but only returns the results for 11/09, think it used to return the results for 11/09 *and* 11/10:
<img width="1398" alt="between works" src="https://user-images.githubusercontent.com/193187/48350442-36280080-e655-11e8-997c-280917d6daa4.png">
FWIW, we have the reporting time zone set to "US/Eastern" rather than UTC the system default.
label: process
text:
on date filter broken in for datetime fields with visual query builder hey there we just upgraded metabase to from and it seems the behavior of date filters broke or at least changed in an unexpected way our underlying database is postgresql and we are trying to filter a datetime field e g created by the day e g order filtered by created on returns no results but if we use between the results for are successfully returned on fails used to work img width alt on day fails src between fails think this used to work too img width alt between same day fails src between works but only returns the results for think it used to return the results for and img width alt between works src fwiw we have the reporting time zone set to us eastern rather than utc the system default
binary_label: 1
---
Unnamed: 0: 20,174
id: 26,727,879,604
type: IssuesEvent
created_at: 2023-01-29 23:01:51
repo: evidence-dev/evidence
repo_url: https://api.github.com/repos/evidence-dev/evidence
action: opened
title: Improved testing in development workspace
labels: dev-process
body:
It's clear to contributors how to add tests to the development workspace. Immediate goal here is _not_ high test coverage, just getting the systems in place in the development workspace.
Tests should include at least one example of:
1. Unit testing supporting modules
2. Unit testing an internal API
3. Browser testing components and/or a page in the dev workspace.
Playwright for browser testing, and Vitetest for unit testing seem popular in other Svelte projects, but no strong preference here.
Tests should run in CI/CD along with existing suite.
Additional considerations
* We should have a mechanism for tagging a subset of tests to run with different database connections (different environment variables set in the test runner)
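The last bullet, tagging subsets of tests to run against different database connections, can be sketched with an environment variable as the selector; a minimal illustration in Python (the `TEST_DB` variable name and the tag registry are assumptions, not Evidence's actual setup):

```python
import os

# Hypothetical registry mapping test names to the connections they support.
TAGGED_TESTS = {
    "test_query_basic": {"sqlite", "postgres", "duckdb"},
    "test_postgres_schemas": {"postgres"},
    "test_duckdb_parquet": {"duckdb"},
}

def select_tests(registry, connection=None):
    """Pick the tests tagged for the active connection.

    Falls back to the TEST_DB environment variable, so a CI matrix can
    run the same suite once per connection by varying one variable."""
    connection = connection or os.environ.get("TEST_DB", "sqlite")
    return sorted(name for name, conns in registry.items() if connection in conns)
```

Test runners like Playwright and Vitest expose equivalent tag/grep filters, so the registry could also live in test annotations rather than a dict.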
index: 1.0
text_combine:
Improved testing in development workspace - It's clear to contributors how to add tests to the development workspace. Immediate goal here is _not_ high test coverage, just getting the systems in place in the development workspace.
Tests should include at least one example of:
1. Unit testing supporting modules
2. Unit testing an internal API
3. Browser testing components and/or a page in the dev workspace.
Playwright for browser testing, and Vitetest for unit testing seem popular in other Svelte projects, but no strong preference here.
Tests should run in CI/CD along with existing suite.
Additional considerations
* We should have a mechanism for tagging a subset of tests to run with different database connections (different environment variables set in the test runner)
label: process
text:
improved testing in development workspace it s clear to contributors how to add tests to the development workspace immediate goal here is not high test coverage just getting the systems in place in the development workspace tests should include at least one example of unit testing supporting modules unit testing an internal api browser testing components and or a page in the dev workspace playwright for browser testing and vitetest for unit testing seem popular in other svelte projects but no strong preference here tests should run in ci cd along with existing suite additional considerations we should have a mechanism for tagging a subset of tests to run with different database connections different environment variables set in the test runner
binary_label: 1
---
Unnamed: 0: 8,903
id: 11,996,412,512
type: IssuesEvent
created_at: 2020-04-08 16:43:27
repo: MicrosoftDocs/vsts-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/vsts-docs
action: closed
title: String parameter can't be empty
labels: Pri1 devops-cicd-process/tech devops/prod
body:
Considering this definition:
```
parameters:
- name: deployEnvironment
displayName: Select your target environment.
type: string
default: Build_only
values:
- Build_only
- TST
- PP
- P
- name: releaseName
type: string
default: ''
steps:
- script: echo "Hi!"
```
Why is releaseName a required parameter? I was hoping that by specifying `default: ''` I could allow it to be left empty. I couldn't find anything about this in the documentation here, but I'm pretty sure that's how it works for template parameters.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
index: 1.0
text_combine:
String parameter can't be empty - Considering this definition:
```
parameters:
- name: deployEnvironment
displayName: Select your target environment.
type: string
default: Build_only
values:
- Build_only
- TST
- PP
- P
- name: releaseName
type: string
default: ''
steps:
- script: echo "Hi!"
```
Why is releaseName a required parameter? I was hoping that by specifying `default: ''` I could allow it to be left empty. I couldn't find anything about this in the documentation here, but I'm pretty sure that's how it works for template parameters.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
label: process
text:
string parameter can t be empty considering this definition parameters name deployenvironment displayname select your target environment type string default build only values build only tst pp p name releasename type string default steps script echo hi why is releasename a required parameter i was hoping that by specifying default i could allow it to be left empty i couldn t find anything about this in the documentation here but i m pretty sure that s how it works for template parameters document details β do not edit this section it is required for docs microsoft com β github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
binary_label: 1
---
Unnamed: 0: 5,566
id: 8,406,693,755
type: IssuesEvent
created_at: 2018-10-11 18:40:28
repo: symfony/symfony
repo_url: https://api.github.com/repos/symfony/symfony
action: closed
title: Allow user to attach callback in Symfony process that gets fired when status changes
labels: Feature Process
body:
| Q | A
| ---------------- | -----
| Bug report? | no
| Feature request? | yes
| BC Break report? | no
| RFC? | yes
| Symfony version | 3.3
Currently Symfony process allows user to attach callbacks to listen for output but it does not allow user to attach callback that gets fired during status change like completion, success, abort, error etc for async one. Right now only way to monitor the status is to check constantly in a loop which is bit inconvenient in my opinion.
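The request amounts to an observer hook on process status; a minimal sketch of the pattern in Python rather than PHP (the status names and the API surface here are illustrative, not Symfony's):

```python
class ObservableProcess:
    """Toy process whose status transitions notify registered callbacks,
    replacing the poll-in-a-loop approach the issue complains about."""

    def __init__(self):
        self.status = "ready"
        self._listeners = []

    def on_status_change(self, callback):
        """Register a callback fired with each new status."""
        self._listeners.append(callback)

    def _set_status(self, status):
        self.status = status
        for callback in self._listeners:
            callback(status)

    def run(self):
        # A real async implementation would transition as the child
        # process actually starts, succeeds, aborts, or errors.
        self._set_status("started")
        self._set_status("terminated")
```

The existing Symfony output callback works the same way; the feature request is to extend that mechanism from output events to lifecycle events.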
index: 1.0
text_combine:
Allow user to attach callback in Symfony process that gets fired when status changes - | Q | A
| ---------------- | -----
| Bug report? | no
| Feature request? | yes
| BC Break report? | no
| RFC? | yes
| Symfony version | 3.3
Currently Symfony process allows user to attach callbacks to listen for output but it does not allow user to attach callback that gets fired during status change like completion, success, abort, error etc for async one. Right now only way to monitor the status is to check constantly in a loop which is bit inconvenient in my opinion.
label: process
text:
allow user to attach callback in symfony process that gets fired when status changes q a bug report no feature request yes bc break report no rfc yes symfony version currently symfony process allows user to attach callbacks to listen for output but it does not allow user to attach callback that gets fired during status change like completion success abort error etc for async one right now only way to monitor the status is to check constantly in a loop which is bit inconvenient in my opinion
binary_label: 1
---
Unnamed: 0: 9,532
id: 12,501,574,411
type: IssuesEvent
created_at: 2020-06-02 01:47:29
repo: pytorch/pytorch
repo_url: https://api.github.com/repos/pytorch/pytorch
action: closed
title: OMP: Warning #190 because of fork not waiting for parallel region to end
labels: module: dataloader module: multiprocessing module: openmp triaged
body:
## 🐛 Bug
After updating to v1.1.0, I get `OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.` from DataLoader when `num_workers>1`. This was not the case when I used v1.0.1 and v1.0.0.
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
```
train_loader, test_loader = cifar10_loaders(batch_size=32, num_workers=num_workers)
for i in range(100):
for _ in zip(test_loader, train_loader):
pass
```
* len(test_loader.dataset) < len(train_loader.dataset)
* When `num_workers==1`, no warning.
* I also tested mnist and got the same warning.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.14.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.2.148
...
Nvidia driver version: 396.44
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1
Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] torch==1.1.0
[pip3] torchvision==0.2.2
[conda] blas 1.0 mkl
[conda] cuda92 1.0 0 pytorch
[conda] mkl 2019.3 199
[conda] mkl_fft 1.0.12 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] torchvision 0.2.2 py_3 pytorch
## Additional context
<!-- Add any other context about the problem here. -->
The code above was run inside a docker container.
macOS version does not cause this warning.
cc @ezyang @gchanan @zou3519 @SsnL
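A mitigation commonly suggested for this class of warning is to cap OpenMP threading in the environment before worker processes fork; a minimal sketch in plain Python of building such an environment for a launched run, with the caveat that whether this silences Warning #190 for a given PyTorch build is an assumption, not something the issue confirms:

```python
def omp_safe_env(overrides=None):
    """Build an environment (e.g. for subprocess.run(..., env=...)) that
    defaults OMP_NUM_THREADS to 1, so OpenMP parallel regions are not
    active when DataLoader workers fork. Caller overrides win."""
    env = dict(overrides or {})
    env.setdefault("OMP_NUM_THREADS", "1")
    return env
```

Forking while another thread holds an OpenMP parallel region open is what the warning text itself flags as potentially unsafe, which is why the variable must be set before the parallel region first runs.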
index: 1.0
text_combine:
OMP: Warning #190 because of fork not waiting for parallel region to end - ## 🐛 Bug
After updating to v1.1.0, I get `OMP: Warning #190: Forking a process while a parallel region is active is potentially unsafe.` from DataLoader when `num_workers>1`. This was not the case when I used v1.0.1 and v1.0.0.
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
```
train_loader, test_loader = cifar10_loaders(batch_size=32, num_workers=num_workers)
for i in range(100):
for _ in zip(test_loader, train_loader):
pass
```
* len(test_loader.dataset) < len(train_loader.dataset)
* When `num_workers==1`, no warning.
* I also tested mnist and got the same warning.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.14.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.2.148
...
Nvidia driver version: 396.44
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1
Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] torch==1.1.0
[pip3] torchvision==0.2.2
[conda] blas 1.0 mkl
[conda] cuda92 1.0 0 pytorch
[conda] mkl 2019.3 199
[conda] mkl_fft 1.0.12 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch
[conda] torchvision 0.2.2 py_3 pytorch
## Additional context
<!-- Add any other context about the problem here. -->
The code above was run inside a docker container.
macOS version does not cause this warning.
cc @ezyang @gchanan @zou3519 @SsnL
label: process
text:
omp warning because of fork not waiting for parallel region to end π bug after updating to i get omp warning forking a process while a parallel region is active is potentially unsafe from dataloader when num workers this was not the case when i used and to reproduce train loader test loader loaders batch size num workers num workers for i in range for in zip test loader train loader pass len test loader dataset len train loader dataset when num workers no warning i also tested mnist and got the same warning environment pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version version python version is cuda available yes cuda runtime version nvidia driver version cudnn version usr lib linux gnu libcudnn so versions of relevant libraries numpy torch torchvision blas mkl pytorch mkl mkl fft mkl random pytorch pytorch torchvision py pytorch additional context the code above was run inside a docker container macos version does not cause this warning cc ezyang gchanan ssnl
binary_label: 1
---
Unnamed: 0: 10,525
id: 13,307,371,871
type: IssuesEvent
created_at: 2020-08-25 22:02:42
repo: GoogleCloudPlatform/cloud-ops-sandbox
repo_url: https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
action: closed
title: 7/9 provisioning tests fail/error on manual run instructions
labels: priority: p2 type: process
body:
- [x] Tests are passing on e2e
But while manually running provisioning tests on Mac/unix, 7/9 tests fail on various errors likely due to the missing .config volumes that persist credentials
Steps to reproduce:
1. Provision sandbox on personal account
2. docker run [per instructions](https://github.com/GoogleCloudPlatform/cloud-ops-sandbox/tree/master/tests/provisioning)
[Full error log output](https://docs.google.com/document/d/1pA6bS30_S8VWhfS5_OyUxJnnX6RhE3Zt9P_U5eiH9ho/edit?hl=en)
**2/9 tests passing:**
```
to select an already authenticated account to use.
testNodeMachineType (__main__.TestGKECluster)
Test if the machine type for the nodes is as specified ... ERROR
testNumberOfNode (__main__.TestGKECluster)
Test if the number of nodes in the node pool is as specified ... ERROR
testReachOfHipsterShop (__main__.TestGKECluster)
Test if querying hipster shop returns 200 ... ERROR
testStatusOfServices (__main__.TestGKECluster)
Test if all the service deployments are ready ... ERROR
testDifferentZone (__main__.TestLoadGenerator)
Test if load generator is in a different zone from the GKE cluster ... ok
testNumberOfLoadgen (__main__.TestLoadGenerator)
Test if there's only one load generator instance ... ok
testReachOfLoadgen (__main__.TestLoadGenerator)
Test if querying load generator returns 200 ... ERROR
testAPIEnabled (__main__.TestProjectResources)
Test if all APIs requested are enabled ... FAIL
testErrorReporting (__main__.TestProjectResources)
Test if we can report error using Error Reporting API ... ERROR
```
Note: I have an active, credentialed GCloud user account. Though some errors could still be set up issues on my end.
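The pass/fail tally in the title (7/9 failing or erroring, 2/9 passing) can be recovered mechanically from unittest's verbose output; a minimal Python sketch that counts the trailing status token on each result line:

```python
def tally_results(lines):
    """Count unittest verbose result lines by their trailing status.

    Lines that do not end in one of the known tokens (test headers,
    stray prompt text) are ignored."""
    counts = {"ok": 0, "ERROR": 0, "FAIL": 0}
    for line in lines:
        status = line.rstrip().rsplit(" ", 1)[-1]
        if status in counts:
            counts[status] += 1
    return counts
```

Applied to the log above, ok=2 and ERROR+FAIL=7 matches the 7/9 in the issue title.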
index: 1.0
text_combine:
7/9 provisioning tests fail/error on manual run instructions - - [x] Tests are passing on e2e
But while manually running provisioning tests on Mac/unix, 7/9 tests fail on various errors likely due to the missing .config volumes that persist credentials
Steps to reproduce:
1. Provision sandbox on personal account
2. docker run [per instructions](https://github.com/GoogleCloudPlatform/cloud-ops-sandbox/tree/master/tests/provisioning)
[Full error log output](https://docs.google.com/document/d/1pA6bS30_S8VWhfS5_OyUxJnnX6RhE3Zt9P_U5eiH9ho/edit?hl=en)
**2/9 tests passing:**
```
to select an already authenticated account to use.
testNodeMachineType (__main__.TestGKECluster)
Test if the machine type for the nodes is as specified ... ERROR
testNumberOfNode (__main__.TestGKECluster)
Test if the number of nodes in the node pool is as specified ... ERROR
testReachOfHipsterShop (__main__.TestGKECluster)
Test if querying hipster shop returns 200 ... ERROR
testStatusOfServices (__main__.TestGKECluster)
Test if all the service deployments are ready ... ERROR
testDifferentZone (__main__.TestLoadGenerator)
Test if load generator is in a different zone from the GKE cluster ... ok
testNumberOfLoadgen (__main__.TestLoadGenerator)
Test if there's only one load generator instance ... ok
testReachOfLoadgen (__main__.TestLoadGenerator)
Test if querying load generator returns 200 ... ERROR
testAPIEnabled (__main__.TestProjectResources)
Test if all APIs requested are enabled ... FAIL
testErrorReporting (__main__.TestProjectResources)
Test if we can report error using Error Reporting API ... ERROR
```
Note: I have an active, credentialed GCloud user account. Though some errors could still be set up issues on my end.
label: process
text:
provisioning tests fail error on manual run instructions tests are passing on but while manually running provisioning tests on mac unix tests fail on various errors likely due to the missing config volumes that persist credentials steps to reproduce provision sandbox on personal account docker run tests passing to select an already authenticated account to use testnodemachinetype main testgkecluster test if the machine type for the nodes is as specified error testnumberofnode main testgkecluster test if the number of nodes in the node pool is as specified error testreachofhipstershop main testgkecluster test if querying hipster shop returns error teststatusofservices main testgkecluster test if all the service deployments are ready error testdifferentzone main testloadgenerator test if load generator is in a different zone from the gke cluster ok testnumberofloadgen main testloadgenerator test if there s only one load generator instance ok testreachofloadgen main testloadgenerator test if querying load generator returns error testapienabled main testprojectresources test if all apis requested are enabled fail testerrorreporting main testprojectresources test if we can report error using error reporting api error note i have an active credentialed gcloud user account though some errors could still be set up issues on my end
binary_label: 1
---
Unnamed: 0: 1,990
id: 4,817,606,865
type: IssuesEvent
created_at: 2016-11-04 14:12:38
repo: mkdocs/mkdocs
repo_url: https://api.github.com/repos/mkdocs/mkdocs
action: closed
title: Prepare 0.16 release
labels: Process
body:
I have created a [milestone](https://github.com/mkdocs/mkdocs/milestones/0.16), we just need to add outstanding issues to it that we want to fix soon. (@waylan I'd be interested in knowing what you think should be added to this, as you have been more in touch with things lately.) So far I have just added a few documentation tasks that I would like to tackle.
Ideally I would like to make this the last release before working towards a 1.0 and fixing some of our larger and longer standing issues that will require breaking compatibility. However, I am not sure I will have the time to dedicate to that yet - but wishful thinking etc.
index: 1.0
text_combine:
Prepare 0.16 release - I have created a [milestone](https://github.com/mkdocs/mkdocs/milestones/0.16), we just need to add outstanding issues to it that we want to fix soon. (@waylan I'd be interested in knowing what you think should be added to this, as you have been more in touch with things lately.) So far I have just added a few documentation tasks that I would like to tackle.
Ideally I would like to make this the last release before working towards a 1.0 and fixing some of our larger and longer standing issues that will require breaking compatibility. However, I am not sure I will have the time to dedicate to that yet - but wishful thinking etc.
label: process
text:
prepare release i have created a we just need to add outstanding issues to it that we want to fix soon waylan i d be interested in knowing what you think should be added to this as you have been more in touch with things lately so far i have just added a few documentation tasks that i would like to tackle ideally i would like to make this the last release before working towards a and fixing some of our larger and longer standing issues that will require breaking compatibility however i am not sure i will have the time to dedicate to that yet but wishful thinking etc
binary_label: 1
---
Unnamed: 0: 22,661
id: 31,895,956,544
type: IssuesEvent
created_at: 2023-09-18 01:43:45
repo: tdwg/dwc
repo_url: https://api.github.com/repos/tdwg/dwc
action: closed
title: Change term - references
labels: Term - change Class - Record-level non-normative Task Group - Material Sample Process - complete
body:
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): It would be understood that MaterialEntity would be an informal superclass to `dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`. Examples involving the use of MaterialSample should be expanded to include MaterialEntity.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): Usage as currently occurs in Global Biodiversity Information Facility (GBIF) Darwin Core Archives would not be affected by these changes. Darwin Core does not include formal class hierarchies, but if we ignore that formality and imagine what the hierarchy would look like for the classes, we have MaterialEntity as the highest for material things. All of the other material-based classes in Darwin Core (`dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`) might be expected to have references. As there are no other classes in between MaterialEntity and those subtypes, examples including the subtypes should be expanded to include MaterialEntity.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dcterms_references
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): references
* Term label (English, not normative): References
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Dublin Core terms namespace
* Definition of the term (normative): A related resource that is referenced, cited, or otherwise pointed to by the described resource.
* Usage comments (recommendations regarding content, etc., not normative): From Dublin Core, "This property is intended to be used with non-literal values. This property is an inverse property of Is Referenced By." The intended usage of this term in Darwin Core is to point to the definitive source representation of the resource (e.g., **dwc**:Taxon, **dwc**:Occurrence, **dwc**:Event in Darwin Core), if one is available. Note that the intended usage of dcterms:bibliographicCitation in Darwin Core, by contrast, is to provide the preferred way to cite the resource itself.
* Examples (not normative): **MaterialEntity**~~MaterialSample~~ example: http://arctos.database.museum/guid/MVZ:Mamm:165861, Taxon example: https://www.catalogueoflife.org/data/taxon/32664
* Refines (identifier of the broader term this term refines; normative):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
index: 1.0
text_combine:
Change term - references - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): It would be understood that MaterialEntity would be an informal superclass to `dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`. Examples involving the use of MaterialSample should be expanded to include MaterialEntity.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): Usage as currently occurs in Global Biodiversity Information Facility (GBIF) Darwin Core Archives would not be affected by these changes. Darwin Core does not include formal class hierarchies, but if we ignore that formality and imagine what the hierarchy would look like for the classes, we have MaterialEntity as the highest for material things. All of the other material-based classes in Darwin Core (`dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`) might be expected to have references. As there are no other classes in between MaterialEntity and those subtypes, examples including the subtypes should be expanded to include MaterialEntity.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dcterms_references
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): references
* Term label (English, not normative): References
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Dublin Core terms namespace
* Definition of the term (normative): A related resource that is referenced, cited, or otherwise pointed to by the described resource.
* Usage comments (recommendations regarding content, etc., not normative): From Dublin Core, "This property is intended to be used with non-literal values. This property is an inverse property of Is Referenced By." The intended usage of this term in Darwin Core is to point to the definitive source representation of the resource (e.g., **dwc**:Taxon, **dwc**:Occurrence, **dwc**:Event in Darwin Core), if one is available. Note that the intended usage of dcterms:bibliographicCitation in Darwin Core, by contrast, is to provide the preferred way to cite the resource itself.
* Examples (not normative): **MaterialEntity**~~MaterialSample~~ example: http://arctos.database.museum/guid/MVZ:Mamm:165861, Taxon example: https://www.catalogueoflife.org/data/taxon/32664
* Refines (identifier of the broader term this term refines; normative):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
label: process
text:
change term references term change submitter efficacy justification why is this change necessary it would be understood that materialentity would be an informal superclass to dwc materialsample dwc preservedspecimen dwc livingspecimen dwc fossilspecimen examples involving the use of materialsample should be expanded to include materialentity demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations usage as currently occurs in global biodiversity information facility gbif darwin core archives would not be affected by these changes darwin core does not include formal class hierarchies but if we ignore that formality and imagine what the hierarchy would look like for the classes we have materialentity as the highest for material things all of the other material based classes in darwin core dwc materialsample dwc preservedspecimen dwc livingspecimen dwc fossilspecimen might be expected to have references as there are no other classes in between materialentity and those subtypes examples including the subtypes should be expanded to include materialentity implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes references term label english not normative references organized in class e g occurrence event location taxon dublin core terms namespace definition of the term normative a related resource that is referenced cited or otherwise pointed to by the described resource usage comments recommendations regarding content etc not normative from dublin core this property is intended to be used with non literal values this property is 
an inverse property of is referenced by the intended usage of this term in darwin core is to point to the definitive source representation of the resource e g dwc taxon dwc occurrence dwc event in darwin core if one is available note that the intended usage of dcterms bibliographiccitation in darwin core by contrast is to provide the preferred way to cite the resource itself examples not normative materialentity materialsample example taxon example refines identifier of the broader term this term refines normative replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
5,075
| 7,870,256,791
|
IssuesEvent
|
2018-06-24 23:36:47
|
gvwilson/teachtogether.tech
|
https://api.github.com/repos/gvwilson/teachtogether.tech
|
closed
|
Ch06 Gerard Capes
|
Ch06 Process
|
- "The most important thing about a lesson isnβt having it," Italic isn't all that distinct from normal type face using this font.
|
1.0
|
Ch06 Gerard Capes - - "The most important thing about a lesson isnβt having it," Italic isn't all that distinct from normal type face using this font.
|
process
|
gerard capes the most important thing about a lesson isn t having it italic isn t all that distinct from normal type face using this font
| 1
|
7,503
| 10,587,150,429
|
IssuesEvent
|
2019-10-08 21:21:59
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Solution deployment fails with a conflict failure
|
Pri1 automation/svc cxp doc-bug process-automation/subsvc triaged
|
I can't get this to work. When I try to deploy that solution, it fails with this error:
{
"id":"/subscriptions/<subscriptionid>/resourceGroups/<resourcegroup>/providers/Microsoft.Resources/deployments/Automation/operations/C71E0376786C61EC",
"operationId":"C71E0376786C61EC",
"properties":{
"provisioningOperation":"Create",
"provisioningState":"Failed",
"timestamp":"2019-10-05T10:43:52.9423083Z",
"duration":"PT1M36.7535462S",
"trackingId":"15fb7a62-4c57-4ebe-a352-65699d976ed6",
"statusCode":"Conflict",
"statusMessage":{
"status":"Failed",
"error":{
"code":"ResourceDeploymentFailure",
"message":"The resource operation completed with terminal provisioning state 'Failed'."
}
},
"targetResource":{
"id":"/subscriptions/<subscriptionid>/resourceGroups/<resourcegroup>/providers/Microsoft.Automation/automationAccounts/snred-cue-test-auto/modules/AzureRm.Resources",
"resourceType":"Microsoft.Automation/automationAccounts/modules",
"resourceName":"snred-cue-test-auto/AzureRm.Resources"
}
}
}
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
Solution deployment fails with a conflict failure - I can't get this to work. When I try to deploy that solution, it fails with this error:
{
"id":"/subscriptions/<subscriptionid>/resourceGroups/<resourcegroup>/providers/Microsoft.Resources/deployments/Automation/operations/C71E0376786C61EC",
"operationId":"C71E0376786C61EC",
"properties":{
"provisioningOperation":"Create",
"provisioningState":"Failed",
"timestamp":"2019-10-05T10:43:52.9423083Z",
"duration":"PT1M36.7535462S",
"trackingId":"15fb7a62-4c57-4ebe-a352-65699d976ed6",
"statusCode":"Conflict",
"statusMessage":{
"status":"Failed",
"error":{
"code":"ResourceDeploymentFailure",
"message":"The resource operation completed with terminal provisioning state 'Failed'."
}
},
"targetResource":{
"id":"/subscriptions/<subscriptionid>/resourceGroups/<resourcegroup>/providers/Microsoft.Automation/automationAccounts/snred-cue-test-auto/modules/AzureRm.Resources",
"resourceType":"Microsoft.Automation/automationAccounts/modules",
"resourceName":"snred-cue-test-auto/AzureRm.Resources"
}
}
}
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
solution deployment fails with a conflict failure i can t get this to work when i try to deploy that solution it fails with this error id subscriptions lt subscriptionid gt resourcegroups lt resourcegroup gt providers microsoft resources deployments automation operations operationid properties provisioningoperation create provisioningstate failed timestamp duration trackingid statuscode conflict statusmessage status failed error code resourcedeploymentfailure message the resource operation completed with terminal provisioning state failed targetresource id subscriptions lt subscriptionid gt resourcegroups lt resourcegroup gt providers microsoft automation automationaccounts snred cue test auto modules azurerm resources resourcetype microsoft automation automationaccounts modules resourcename snred cue test auto azurerm resources document details β do not edit this section it is required for docs microsoft com β github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
29,001
| 5,476,185,014
|
IssuesEvent
|
2017-03-11 18:21:21
|
richgel999/miniz
|
https://api.github.com/repos/richgel999/miniz
|
closed
|
enumeral and non-enumeral type in conditional expression
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. compile with -Wall
2. calls from MZ_MIN and MZ_MAX with different var types like in dict_size =
MZ_MIN(dict_size + cur_match_len, TDEFL_LZ_DICT_SIZE);
What is the expected output? What do you see instead?
expected are no warning, but i got the warning "enumeral and non-enumeral type
in conditional expression [enabled by default]"
What version of the product are you using? On what operating system?
1.15r4 with qt 5.0.2 on win7 64bit
Please provide any additional information below.
solved issue with explcit cast. i.e. dict_size = MZ_MIN(dict_size +
cur_match_len, (mz_uint)TDEFL_LZ_DICT_SIZE);
```
Original issue reported on code.google.com by `dragon...@googlemail.com` on 10 Dec 2013 at 5:00
|
1.0
|
enumeral and non-enumeral type in conditional expression - ```
What steps will reproduce the problem?
1. compile with -Wall
2. calls from MZ_MIN and MZ_MAX with different var types like in dict_size =
MZ_MIN(dict_size + cur_match_len, TDEFL_LZ_DICT_SIZE);
What is the expected output? What do you see instead?
expected are no warning, but i got the warning "enumeral and non-enumeral type
in conditional expression [enabled by default]"
What version of the product are you using? On what operating system?
1.15r4 with qt 5.0.2 on win7 64bit
Please provide any additional information below.
solved issue with explcit cast. i.e. dict_size = MZ_MIN(dict_size +
cur_match_len, (mz_uint)TDEFL_LZ_DICT_SIZE);
```
Original issue reported on code.google.com by `dragon...@googlemail.com` on 10 Dec 2013 at 5:00
|
non_process
|
enumeral and non enumeral type in conditional expression what steps will reproduce the problem compile with wall calls from mz min and mz max with different var types like in dict size mz min dict size cur match len tdefl lz dict size what is the expected output what do you see instead expected are no warning but i got the warning enumeral and non enumeral type in conditional expression what version of the product are you using on what operating system with qt on please provide any additional information below solved issue with explcit cast i e dict size mz min dict size cur match len mz uint tdefl lz dict size original issue reported on code google com by dragon googlemail com on dec at
| 0
|
3,385
| 6,507,175,479
|
IssuesEvent
|
2017-08-24 12:14:10
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Remove useTesting and isTesting from COptionsBase
|
libs-utillib status-inprocess type-enhancement
|
All test cases are run with an environment variable TEST_MODE set to 'true'. If we need to distinguish between testing and not-testing we should use this mechanism not the --testing command line option because many tools actually need the -t option for their own use. Plus, it's too complicated having two test modes. One from the environment variable and the other from the command line.
|
1.0
|
Remove useTesting and isTesting from COptionsBase - All test cases are run with an environment variable TEST_MODE set to 'true'. If we need to distinguish between testing and not-testing we should use this mechanism not the --testing command line option because many tools actually need the -t option for their own use. Plus, it's too complicated having two test modes. One from the environment variable and the other from the command line.
|
process
|
remove usetesting and istesting from coptionsbase all test cases are run with an environment variable test mode set to true if we need to distinguish between testing and not testing we should use this mechanism not the testing command line option because many tools actually need the t option for their own use plus it s too complicated having two test modes one from the environment variable and the other from the command line
| 1
|
22,033
| 30,546,804,007
|
IssuesEvent
|
2023-07-20 05:04:46
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
prodbuild 1.7.0 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":" const server = spawn(\n 'node', \n [path.join(pb.root, 'node_modules/prodbuild/lib/server_service.js'), root, config.port], \n {detached: true, stdio: \"ignore\"}\n )","location":"package/lib/server.js:56","message":"This package is silently executing another executable"}]}```
|
1.0
|
prodbuild 1.7.0 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" const server = spawn(\n 'node', \n [path.join(pb.root, 'node_modules/prodbuild/lib/server_service.js'), root, config.port], \n {detached: true, stdio: \"ignore\"}\n )","location":"package/lib/server.js:56","message":"This package is silently executing another executable"}]}```
|
process
|
prodbuild has guarddog issues npm silent process execution n detached true stdio ignore n location package lib server js message this package is silently executing another executable
| 1
|
14,818
| 18,154,199,544
|
IssuesEvent
|
2021-09-26 19:48:28
|
edmobe/android-video-magnification
|
https://api.github.com/repos/edmobe/android-video-magnification
|
opened
|
A pesar de ser exactamente el mismo algoritmo, la magnificación con filtros IIR no funciona
|
video-processing obstacle
|
Por una razón desconocida, el vídeo de salida de la magnificación con filtros IIR es el mismo que el vídeo de entrada.
|
1.0
|
A pesar de ser exactamente el mismo algoritmo, la magnificación con filtros IIR no funciona - Por una razón desconocida, el vídeo de salida de la magnificación con filtros IIR es el mismo que el vídeo de entrada.
|
process
|
a pesar de ser exactamente el mismo algoritmo la magnificación con filtros iir no funciona por una razón desconocida el vídeo de salida de la magnificación con filtros iir es el mismo que el vídeo de entrada
| 1
|
74,051
| 7,373,791,560
|
IssuesEvent
|
2018-03-13 18:16:58
|
yalapeno/gitar
|
https://api.github.com/repos/yalapeno/gitar
|
closed
|
Test - Test data
|
test
|
generate - find some sample data for the database for test - development purposes.
|
1.0
|
Test - Test data - generate - find some sample data for the database for test - development purposes.
|
non_process
|
test test data generate find some sample data for the database for test development purposes
| 0
|
9,641
| 12,603,413,814
|
IssuesEvent
|
2020-06-11 13:27:01
|
HackYourFutureBelgium/class-9-10
|
https://api.github.com/repos/HackYourFutureBelgium/class-9-10
|
opened
|
Your Name: module, week
|
class-9 process-week wednesday-check-in
|
# Wednesday Check-In
Incremental Development, process-week
## Progress
- Read suggestive studies
- Review Javascript from freecodecamp.
## Blocked
- Not yet.
## Next Steps
- Prep-work for the next module
## Tip(s) of the week
- Take time to read issues of classmates
|
1.0
|
Your Name: module, week - # Wednesday Check-In
Incremental Development, process-week
## Progress
- Read suggestive studies
- Review Javascript from freecodecamp.
## Blocked
- Not yet.
## Next Steps
- Prep-work for the next module
## Tip(s) of the week
- Take time to read issues of classmates
|
process
|
your name module week wednesday check in incremental development process week progress read suggestive studies review javascript from freecodecamp blocked not yet next steps prep work for the next module tip s of the week take time to read issues of classmates
| 1
|
252,515
| 21,582,207,046
|
IssuesEvent
|
2022-05-02 20:02:21
|
damccorm/test-migration-target
|
https://api.github.com/repos/damccorm/test-migration-target
|
opened
|
beam_PostCommit_XVR_GoUsingJava_Dataflow fails on some test transforms
|
bug test-failures cross-language sdk-go P2
|
Example failure: https://ci-beam.apache.org/job/beam_PostCommit_XVR_GoUsingJava_Dataflow/7/
I couldn't find accurate details about why the tests are failing, but TestXLang_Prefix, TestXLang_Multi, and TestXLang_Partition are failing while running for some reason. Investigating the Dataflow logs, we can see SDK harnesses are failing to connect for some reason. For example:
`noformat`
"getPodContainerStatuses for pod "df-go-testxlang-multi-03300551-62xv-harness-3msv_default(a7f1d8dfb2c3d2b4e80f5d92c1728787)" failed: rpc error: code = Unknown desc = Error: No such container: bea0d9bde42bf890f6fe1d4f589932471037a5948fb9588d01a06425cd14c177"
`noformat`
However I haven't been able to find any further details showing why the harness fails, and the tests keep running beyond that for a while with other errors that are also pretty inscrutable.
Imported from Jira [BEAM-14214](https://issues.apache.org/jira/browse/BEAM-14214). Original Jira may contain additional context.
Reported by: danoliveira.
|
1.0
|
beam_PostCommit_XVR_GoUsingJava_Dataflow fails on some test transforms - Example failure: https://ci-beam.apache.org/job/beam_PostCommit_XVR_GoUsingJava_Dataflow/7/
I couldn't find accurate details about why the tests are failing, but TestXLang_Prefix, TestXLang_Multi, and TestXLang_Partition are failing while running for some reason. Investigating the Dataflow logs, we can see SDK harnesses are failing to connect for some reason. For example:
`noformat`
"getPodContainerStatuses for pod "df-go-testxlang-multi-03300551-62xv-harness-3msv_default(a7f1d8dfb2c3d2b4e80f5d92c1728787)" failed: rpc error: code = Unknown desc = Error: No such container: bea0d9bde42bf890f6fe1d4f589932471037a5948fb9588d01a06425cd14c177"
`noformat`
However I haven't been able to find any further details showing why the harness fails, and the tests keep running beyond that for a while with other errors that are also pretty inscrutable.
Imported from Jira [BEAM-14214](https://issues.apache.org/jira/browse/BEAM-14214). Original Jira may contain additional context.
Reported by: danoliveira.
|
non_process
|
beam postcommit xvr gousingjava dataflow fails on some test transforms example failure i couldn t find accurate details about why the tests are failing but testxlang prefix testxlang multi and testxlang partition are failing while running for some reason investigating the dataflow logs we can see sdk harnesses are failing to connect for some reason for example noformat getpodcontainerstatuses for pod df go testxlang multi harness default failed rpc error code unknown desc error no such container noformat however i haven t been able to find any further details showing why the harness fails and the tests keep running beyond that for a while with other errors that are also pretty inscrutable imported from jira original jira may contain additional context reported by danoliveira
| 0
|
13,516
| 16,055,318,875
|
IssuesEvent
|
2021-04-23 03:26:58
|
rjsears/chia_plot_manager
|
https://api.github.com/repos/rjsears/chia_plot_manager
|
closed
|
Check for failed or stalled local plot moves
|
In Process TODO
|
Figure out how to check and see if a local plot copy has failed and restart the process so we don't overfill our local `-d` drive.
|
1.0
|
Check for failed or stalled local plot moves - Figure out how to check and see if a local plot copy has failed and restart the process so we don't overfill our local `-d` drive.
|
process
|
check for failed or stalled local plot moves figure out how to check and see if a local plot copy has failed and restart the process so we don t overfill our local d drive
| 1
|
86,966
| 24,997,641,581
|
IssuesEvent
|
2022-11-03 03:07:35
|
DarkflameUniverse/DarkflameServer
|
https://api.github.com/repos/DarkflameUniverse/DarkflameServer
|
closed
|
BUILD: Error while setting up assets
|
build P-high
|
### Make sure you've done the following:
- [X] I have read the [installation guide](https://github.com/DarkflameUniverse/DarkflameServer/blob/main/README.md).
### DarkflameServer Version
4a6f3e4
### Platform
Ubuntu
### Architecture
x86
### Error Logs
<details>
[03-11-22 01:07:54] [MasterServer] Starting Master server...
[03-11-22 01:07:54] [MasterServer] Version: 1.0
[03-11-22 01:07:54] [MasterServer] Compiled on: Thu Nov 3 00:13:21 2022
[03-11-22 01:07:54] [MasterServer] Got an error while setting up assets: Attempted to load asset bundle () however it is not a valid directory.
</details>
Any idea what I might be doing wrong?
|
1.0
|
BUILD: Error while setting up assets - ### Make sure you've done the following:
- [X] I have read the [installation guide](https://github.com/DarkflameUniverse/DarkflameServer/blob/main/README.md).
### DarkflameServer Version
4a6f3e4
### Platform
Ubuntu
### Architecture
x86
### Error Logs
<details>
[03-11-22 01:07:54] [MasterServer] Starting Master server...
[03-11-22 01:07:54] [MasterServer] Version: 1.0
[03-11-22 01:07:54] [MasterServer] Compiled on: Thu Nov 3 00:13:21 2022
[03-11-22 01:07:54] [MasterServer] Got an error while setting up assets: Attempted to load asset bundle () however it is not a valid directory.
</details>
Any idea what I might be doing wrong?
|
non_process
|
build error while setting up assets make sure you ve done the following i have read the darkflameserver version platform ubuntu architecture error logs starting master server version compiled on thu nov got an error while setting up assets attempted to load asset bundle however it is not a valid directory any idea what i might be doing wrong
| 0
|
78,098
| 22,141,855,194
|
IssuesEvent
|
2022-06-03 07:47:46
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Doc building on PRs do not push to circleci yet
|
Build / CI
|
With https://github.com/scikit-learn/scikit-learn/pull/21137 do push to circleci yet.
This is being tested in https://github.com/scikit-learn/scikit-learn/pull/23508 and a possible solution is: https://github.com/scikit-learn/scikit-learn/pull/23508#issuecomment-1143087348
|
1.0
|
Doc building on PRs do not push to circleci yet - With https://github.com/scikit-learn/scikit-learn/pull/21137 do push to circleci yet.
This is being tested in https://github.com/scikit-learn/scikit-learn/pull/23508 and a possible solution is: https://github.com/scikit-learn/scikit-learn/pull/23508#issuecomment-1143087348
|
non_process
|
doc building on prs do not push to circleci yet with do push to circleci yet this is being tested in and a possible solution is
| 0
|
10,051
| 13,044,161,661
|
IssuesEvent
|
2020-07-29 03:47:25
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDateIntInt` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDateIntInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `SubDateIntInt` from TiDB -
## Description
Port the scalar function `SubDateIntInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function subdateintint from tidb description port the scalar function subdateintint from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
93,287
| 15,885,355,840
|
IssuesEvent
|
2021-04-09 20:25:24
|
AlexRogalskiy/code-formats
|
https://api.github.com/repos/AlexRogalskiy/code-formats
|
opened
|
CVE-2021-23358 (High) detected in underscore-1.6.0.tgz
|
security vulnerability
|
## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.6.0.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz">https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz</a></p>
<p>Path to dependency file: code-formats/package.json</p>
<p>Path to vulnerable library: code-formats/node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- jsonlint-1.6.3.tgz (Root Library)
- nomnom-1.8.1.tgz
- :x: **underscore-1.6.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/code-formats/commit/889ec6ee62100fbd6a2c1fc342fb6e1f61cf3e9d">889ec6ee62100fbd6a2c1fc342fb6e1f61cf3e9d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23358 (High) detected in underscore-1.6.0.tgz - ## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.6.0.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz">https://registry.npmjs.org/underscore/-/underscore-1.6.0.tgz</a></p>
<p>Path to dependency file: code-formats/package.json</p>
<p>Path to vulnerable library: code-formats/node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- jsonlint-1.6.3.tgz (Root Library)
- nomnom-1.8.1.tgz
- :x: **underscore-1.6.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/code-formats/commit/889ec6ee62100fbd6a2c1fc342fb6e1f61cf3e9d">889ec6ee62100fbd6a2c1fc342fb6e1f61cf3e9d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in underscore tgz cve high severity vulnerability vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file code formats package json path to vulnerable library code formats node modules underscore package json dependency hierarchy jsonlint tgz root library nomnom tgz x underscore tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code execution via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution underscore step up your open source security game with whitesource
| 0
|
4,707
| 7,546,145,955
|
IssuesEvent
|
2018-04-18 01:14:51
|
UnbFeelings/unb-feelings-docs
|
https://api.github.com/repos/UnbFeelings/unb-feelings-docs
|
opened
|
[Não Conformidade] Relatório Final não existe
|
Processo Qualidade invalid
|
@UnbFeelings/process
Perante critérios definidos para as [Auditorias](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Crit%C3%A9rios-de-Avalia%C3%A7%C3%A3o-e-T%C3%A9cnicas-de-Auditoria#plano-de-medi%C3%A7%C3%A3o) fora auditada o artefato Relatório Final, resultante da atividade [Relatório Final de Medição](https://github.com/UnbFeelings/unb-feelings-docs/wiki/Processo#317-atividade-relatório-final-de-medição).
### Descrição
Foi identificado que não foram realizadas coletas de métricas de código fonte, ou se foram, não estão descritas em um artefato de acordo com o proposto pelo processo.
#### Recomendações
É recomendado a integração de ferramentas de análise de código para que as métricas possam ser geradas automaticamente, sem que haja a necessidade de atribuir esta tarefa a uma pessoa. No entanto recomenda-se definir um responsável pela elaboração do relatório de métricas bem como a definição das ações a serem tomadas para os casos em que os valores aferidos são insatisfatórios tendo como base os indicadores definidos
#### Detalhes
**Auditor**: Jonathan Rufino
**Técnica de Audição**: Checklist
**Tipo:** Medição e Análise
**Prazo:** 23/04/2018
|
1.0
|
[Não Conformidade] Relatório Final não existe - @UnbFeelings/process
Perante critérios definidos para as [Auditorias](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Crit%C3%A9rios-de-Avalia%C3%A7%C3%A3o-e-T%C3%A9cnicas-de-Auditoria#plano-de-medi%C3%A7%C3%A3o) fora auditada o artefato Relatório Final, resultante da atividade [Relatório Final de Medição](https://github.com/UnbFeelings/unb-feelings-docs/wiki/Processo#317-atividade-relatório-final-de-medição).
### Descrição
Foi identificado que não foram realizadas coletas de métricas de código fonte, ou se foram, não estão descritas em um artefato de acordo com o proposto pelo processo.
#### Recomendações
É recomendado a integração de ferramentas de análise de código para que as métricas possam ser geradas automaticamente, sem que haja a necessidade de atribuir esta tarefa a uma pessoa. No entanto recomenda-se definir um responsável pela elaboração do relatório de métricas bem como a definição das ações a serem tomadas para os casos em que os valores aferidos são insatisfatórios tendo como base os indicadores definidos
#### Detalhes
**Auditor**: Jonathan Rufino
**Técnica de Audição**: Checklist
**Tipo:** Medição e Análise
**Prazo:** 23/04/2018
|
process
|
relatório final não existe unbfeelings process perante critérios definidos para as fora auditada o artefato relatório final resultante da atividade descrição foi identificado que não foram realizadas coletas de métricas de código fonte ou se foram não estão descritas em um artefato de acordo com o proposto pelo processo recomendações é recomendado a integração de ferramentas de análise de código para que as métricas possam ser geradas automaticamente sem que haja a necessidade de atribuir esta tarefa a uma pessoa no entanto recomenda se definir um responsável pela elaboração do relatório de métricas bem como a definição das ações a serem tomadas para os casos em que os valores aferidos são insatisfatórios tendo como base os indicadores definidos detalhes auditor jonathan rufino técnica de audição checklist tipo medição e análise prazo
| 1
|
113,844
| 24,498,764,797
|
IssuesEvent
|
2022-10-10 10:59:48
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
weird effect applied to battery and supercapacitor state of charge progress bars.
|
Bug Code Low prio
|
### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
The state of charge progress bar on the super capacitor and battery seem to 'flicker' up and down 1 pixel (or up 2 pixels and then resets?)
The effect seems quite bad while the sub is moving, but still occurs at a slower rate (about 1hz) when locky and freecam is used to eliminate movement. The black outline around the progress bar also moves.
You may wish to avoid using line function(?) to draw it, and instead render a texture so the result gets properly filtered like all other graphics. There seems to be something else at play here though, since if it was aliasing it should only move between two positions and not three.
### Reproduction steps
_No response_
### Bug prevalence
Happens every time I play
### Version
0.17.15.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_
|
1.0
|
weird effect applied to battery and supercapacitor state of charge progress bars. - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
The state of charge progress bar on the super capacitor and battery seem to 'flicker' up and down 1 pixel (or up 2 pixels and then resets?)
The effect seems quite bad while the sub is moving, but still occurs at a slower rate (about 1hz) when locky and freecam is used to eliminate movement. The black outline around the progress bar also moves.
You may wish to avoid using line function(?) to draw it, and instead render a texture so the result gets properly filtered like all other graphics. There seems to be something else at play here though, since if it was aliasing it should only move between two positions and not three.
### Reproduction steps
_No response_
### Bug prevalence
Happens every time I play
### Version
0.17.15.0
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_
|
non_process
|
weird effect applied to battery and supercapacitor state of charge progress bars disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened the state of charge progress bar on the super capacitor and battery seem to flicker up and down pixel or up pixels and then resets the effect seems quite bad while the sub is moving but still occurs at a slower rate about when locky and freecam is used to eliminate movement the black outline around the progress bar also moves you may wish to avoid using line function to draw it and instead render a texture so the result gets properly filtered like all other graphics there seems to be something else at play here though since if it was aliasing it should only move between two positions and not three reproduction steps no response bug prevalence happens every time i play version no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response
| 0
|
1,966
| 4,787,790,257
|
IssuesEvent
|
2016-10-30 06:42:12
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
setOptions in symfony process is not available.
|
Feature Process
|
Hi,
In [symfony/process](https://github.com/symfony/process) if I want to set multiple options then I'm not able to set it, for example, I can use `setOption(key, value)` to set option but what if I want to set multiple options then I should do is `setOptions([key1 => value1, key2 => value2...])` I have gone trough the [API](http://api.symfony.com/2.7/Symfony/Component/Process/ProcessBuilder.html) but I'm not able to find alternative.
Thanks.
|
1.0
|
setOptions in symfony process is not available. - Hi,
In [symfony/process](https://github.com/symfony/process) if I want to set multiple options then I'm not able to set it, for example, I can use `setOption(key, value)` to set option but what if I want to set multiple options then I should do is `setOptions([key1 => value1, key2 => value2...])` I have gone trough the [API](http://api.symfony.com/2.7/Symfony/Component/Process/ProcessBuilder.html) but I'm not able to find alternative.
Thanks.
|
process
|
setoptions in symfony process is not available hi in if i want to set multiple options then i m not able to set it for example i can use setoption key value to set option but what if i want to set multiple options then i should do is setoptions i have gone trough the but i m not able to find alternative thanks
| 1
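The symfony/process record above asks for a bulk `setOptions` alongside the single-value `setOption` on the ProcessBuilder. The usual workaround is to loop the single setter; a minimal Python sketch of that pattern (the `ProcessBuilder` class here is a hypothetical stand-in, not Symfony's actual PHP API):

```python
class ProcessBuilder:
    """Hypothetical stand-in for a builder that only exposes set_option."""

    def __init__(self):
        self._options = {}

    def set_option(self, key, value):
        # The single-value setter the library already provides.
        self._options[key] = value
        return self  # fluent interface, as in Symfony's builder

    def set_options(self, options):
        # The missing bulk setter, emulated by delegating to set_option.
        for key, value in options.items():
            self.set_option(key, value)
        return self


builder = ProcessBuilder().set_options({"timeout": 60, "cwd": "/tmp"})
```

In PHP the same workaround is simply a `foreach` over `setOption`; nothing above claims to be Symfony's implementation.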
|
18,996
| 24,988,350,697
|
IssuesEvent
|
2022-11-02 16:39:34
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Better debugging story for recursive calls
|
P4 type: support / not a bug (process) team-Starlark-Interpreter awaiting-user-response
|
### Description of the problem / feature request:
In non-obvious cases of recursive function calls (function A calls B calls C .... calls A), it's very difficult to debug where the recursion is coming from. There should be guidance on how to debug such issues, possibly with changes to tooling to make it easier. Eg., macros that can print the current stack trace, or even just printing the current stack trace whenever recursion is detected.
### Feature requests: what underlying problem are you trying to solve with this feature?
Being able to debug recursive calls to eliminate them.
### Have you found anything relevant by searching the web?
Nope, mostly just feature requests for allowing recursion.
|
1.0
|
Better debugging story for recursive calls - ### Description of the problem / feature request:
In non-obvious cases of recursive function calls (function A calls B calls C .... calls A), it's very difficult to debug where the recursion is coming from. There should be guidance on how to debug such issues, possibly with changes to tooling to make it easier. Eg., macros that can print the current stack trace, or even just printing the current stack trace whenever recursion is detected.
### Feature requests: what underlying problem are you trying to solve with this feature?
Being able to debug recursive calls to eliminate them.
### Have you found anything relevant by searching the web?
Nope, mostly just feature requests for allowing recursion.
|
process
|
better debugging story for recursive calls description of the problem feature request in non obvious cases of recursive function calls function a calls b calls c calls a it s very difficult to debug where the recursion is coming from there should be guidance on how to debug such issues possibly with changes to tooling to make it easier eg macros that can print the current stack trace or even just printing the current stack trace whenever recursion is detected feature requests what underlying problem are you trying to solve with this feature being able to debug recursive calls to eliminate them have you found anything relevant by searching the web nope mostly just feature requests for allowing recursion
| 1
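The Bazel request above is for tooling that reports the offending chain when recursion A calls B calls C calls A is detected. A minimal Python sketch of cycle detection over a call graph (the dict-of-lists representation is an assumption for illustration, not Starlark's internal model):

```python
def find_recursion(call_graph, start):
    """Return the call chain forming a cycle reachable from `start`, or None.

    `call_graph` maps a function name to the list of functions it calls.
    """
    stack = []  # current call chain, innermost call last

    def visit(fn):
        if fn in stack:
            # Cycle found: report the chain from the first repeat onward.
            return stack[stack.index(fn):] + [fn]
        stack.append(fn)
        for callee in call_graph.get(fn, []):
            chain = visit(callee)
            if chain:
                return chain
        stack.pop()
        return None

    return visit(start)


graph = {"A": ["B"], "B": ["C"], "C": ["A"]}
# find_recursion(graph, "A") returns the chain ["A", "B", "C", "A"]
```

Printing such a chain at the point of detection is exactly the kind of stack trace the issue asks for.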
|
76,322
| 21,337,009,868
|
IssuesEvent
|
2022-04-18 15:42:45
|
RocketChat/Rocket.Chat.Electron
|
https://api.github.com/repos/RocketChat/Rocket.Chat.Electron
|
closed
|
All the ZIP Donwloads of the RocketChat Client are invlaid ZIP files!
|
plat: windows subj: release building
|
<!--
Thanks for opening an issue! A few things to keep in mind:
- Before reporting a bug, please try reproducing your issue with the latest
version.
- Please verify that the bug is related to the desktop app, and NOT the main web
app by testing in Chrome/Firefox.
- If the issue also occurs in the browser, report to
https://github.com/RocketChat/Rocket.Chat instead
-->
## My Setup
- Operating System: Windows 10 with winzip 21
- App Version: not relevant, issue with ZIP package itself
- Installation type: ZIP (portable?)
<!-- Answer questions by putting x in box, e.g. [x] -->
- [X] I have tested with the latest version
- [X] I can simulate the issue easily
## Description
Go to https://github.com/RocketChat/Rocket.Chat.Electron/releases/
Try to download and open any ZIP Version. BTW: Why are 3.5.7 and 3.5.6 twice as big (~145 MB) as the releases before (~78MB)?
### Current Behavior

### Expected Behavior
ZIP can be downloaded and extracted
|
1.0
|
All the ZIP Donwloads of the RocketChat Client are invlaid ZIP files! - <!--
Thanks for opening an issue! A few things to keep in mind:
- Before reporting a bug, please try reproducing your issue with the latest
version.
- Please verify that the bug is related to the desktop app, and NOT the main web
app by testing in Chrome/Firefox.
- If the issue also occurs in the browser, report to
https://github.com/RocketChat/Rocket.Chat instead
-->
## My Setup
- Operating System: Windows 10 with winzip 21
- App Version: not relevant, issue with ZIP package itself
- Installation type: ZIP (portable?)
<!-- Answer questions by putting x in box, e.g. [x] -->
- [X] I have tested with the latest version
- [X] I can simulate the issue easily
## Description
Go to https://github.com/RocketChat/Rocket.Chat.Electron/releases/
Try to download and open any ZIP Version. BTW: Why are 3.5.7 and 3.5.6 twice as big (~145 MB) as the releases before (~78MB)?
### Current Behavior

### Expected Behavior
ZIP can be downloaded and extracted
|
non_process
|
all the zip donwloads of the rocketchat client are invlaid zip files thanks for opening an issue a few things to keep in mind before reporting a bug please try reproducing your issue with the latest version please verify that the bug is related to the desktop app and not the main web app by testing in chrome firefox if the issue also occurs in the browser report to instead my setup operating system windows with winzip app version not relevant issue with zip package itself installation type zip portable i have tested with the latest version i can simulate the issue easily description go to try to download and open any zip version btw why are and twice as big mb as the releases before current behavior expected behavior zip can be downloaded and extracted
| 0
|
9,381
| 12,389,026,646
|
IssuesEvent
|
2020-05-20 08:21:04
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Not possible to run processing script on master
|
Bug Processing Regression
|
**Describe the bug**
It's not possible to launch a Processing script on master.
The following steps are working on 3.10.5
**How to Reproduce**
1. Processing toolbox
2. Create a new script from template
3. Launch the default script
4. Execute the script, nothing happen, nothing in the log too
**QGIS and OS versions**
Version de QGIS | 3.13.0-Master | Révision du code | 262568e6fb
-- | -- | -- | --
Compilé avec Qt | 5.12.4 | Utilisant Qt | 5.12.4
Compilé avec GDAL/OGR | 2.4.2 | Utilisé avec GDAL/OGR | 2.4.2
Compilé avec GEOS | 3.7.2-CAPI-1.11.2 | Utilisé avec GEOS | 3.7.2-CAPI-1.11.2 b55d2125
Compiled against SQLite | 3.29.0 | Running against SQLite | 3.29.0
Version du client PostgreSQL | 12.3 (Ubuntu 12.3-1.pgdg19.10+1) | Version de SpatiaLite | 4.3.0a
Version de QWT | 6.1.4 | Version de QScintilla2 | 2.10.4
Compilé avec PROJ | 5.2.0 | Fonctionne avec PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | Ubuntu 19.10 | Cette copie de QGIS dispose d'une sortie de débogage.
Extensions Python actives | processing; db_manager; MetaSearch
|
1.0
|
Not possible to run processing script on master - **Describe the bug**
It's not possible to launch a Processing script on master.
The following steps are working on 3.10.5
**How to Reproduce**
1. Processing toolbox
2. Create a new script from template
3. Launch the default script
4. Execute the script, nothing happen, nothing in the log too
**QGIS and OS versions**
Version de QGIS | 3.13.0-Master | Révision du code | 262568e6fb
-- | -- | -- | --
Compilé avec Qt | 5.12.4 | Utilisant Qt | 5.12.4
Compilé avec GDAL/OGR | 2.4.2 | Utilisé avec GDAL/OGR | 2.4.2
Compilé avec GEOS | 3.7.2-CAPI-1.11.2 | Utilisé avec GEOS | 3.7.2-CAPI-1.11.2 b55d2125
Compiled against SQLite | 3.29.0 | Running against SQLite | 3.29.0
Version du client PostgreSQL | 12.3 (Ubuntu 12.3-1.pgdg19.10+1) | Version de SpatiaLite | 4.3.0a
Version de QWT | 6.1.4 | Version de QScintilla2 | 2.10.4
Compilé avec PROJ | 5.2.0 | Fonctionne avec PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | Ubuntu 19.10 | Cette copie de QGIS dispose d'une sortie de débogage.
Extensions Python actives | processing; db_manager; MetaSearch
|
process
|
not possible to run processing script on master describe the bug it s not possible to launch a processing script on master the following steps are working on how to reproduce processing toolbox create a new script from template launch the default script execute the script nothing happen nothing in the log too qgis and os versions version de qgis master révision du code compilé avec qt utilisant qt compilé avec gdal ogr utilisé avec gdal ogr compilé avec geos capi utilisé avec geos capi compiled against sqlite running against sqlite version du client postgresql ubuntu version de spatialite version de qwt version de compilé avec proj fonctionne avec proj rel september os version ubuntu cette copie de qgis dispose d une sortie de débogage extensions python actives processing db manager metasearch
| 1
|
59,078
| 6,627,998,055
|
IssuesEvent
|
2017-09-23 12:11:46
|
fossasia/badgeyay
|
https://api.github.com/repos/fossasia/badgeyay
|
closed
|
Use phantomjs only on travis
|
has-PR test-and-quality
|
**I'm submitting a ...**
- [X] feature request
**Current behavior:**
<!-- How the bug manifests. -->
Currently, the phantomjs webdriver is used.
https://github.com/fossasia/badgeyay/blob/development/app/tests/test.py#L10
**Expected behavior:**
<!-- Behavior would be without the bug. -->
- Please add an if based on a useful environment variable to use
- phantomjs in case of travis
- firefox/... in case of desktop usage
- add documentation to teh Test Section in the documentation how to use this environment variable
- add the environment variable to travis
You can work on this step by step and create several pull requests if oyu like.
|
1.0
|
Use phantomjs only on travis - **I'm submitting a ...**
- [X] feature request
**Current behavior:**
<!-- How the bug manifests. -->
Currently, the phantomjs webdriver is used.
https://github.com/fossasia/badgeyay/blob/development/app/tests/test.py#L10
**Expected behavior:**
<!-- Behavior would be without the bug. -->
- Please add an if based on a useful environment variable to use
- phantomjs in case of travis
- firefox/... in case of desktop usage
- add documentation to teh Test Section in the documentation how to use this environment variable
- add the environment variable to travis
You can work on this step by step and create several pull requests if oyu like.
|
non_process
|
use phantomjs only on travis i m submitting a feature request current behavior currently the phantomjs webdriver is used expected behavior please add an if based on a useful environment variable to use phantomjs in case of travis firefox in case of desktop usage add documentation to teh test section in the documentation how to use this environment variable add the environment variable to travis you can work on this step by step and create several pull requests if oyu like
| 0
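The badgeyay request above boils down to an environment-variable switch between a CI driver and a desktop driver. A minimal Python sketch of that selection (the `TRAVIS` variable name follows Travis CI's convention; the project may have chosen a different one):

```python
import os


def pick_webdriver(env=os.environ):
    """Choose a Selenium driver name from the environment.

    Travis CI conventionally sets TRAVIS="true"; on a developer desktop
    the variable is absent and a regular browser driver is used.
    """
    if env.get("TRAVIS") == "true":
        return "phantomjs"  # headless driver for CI
    return "firefox"        # desktop default


# pick_webdriver({"TRAVIS": "true"}) selects the CI driver
```

Passing the environment as a parameter keeps the switch testable without touching the real `os.environ`.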
|
19,035
| 25,042,283,395
|
IssuesEvent
|
2022-11-04 22:28:20
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
opened
|
BP: Add "Select State/Region" dropdown
|
Batch Processor
|
Part of #1455
- [ ] Create a dropdown that says "Select State / Region:"
- [ ] Make a service call to https://streamstats.usgs.gov/nssservices/regions to retrieve the list of all the Regions
- [ ] Populate the dropdown with the "name" of each Region
Note: we may need to refine this, as there are some regions like "Undefined" that we may not want to include.
|
1.0
|
BP: Add "Select State/Region" dropdown - Part of #1455
- [ ] Create a dropdown that says "Select State / Region:"
- [ ] Make a service call to https://streamstats.usgs.gov/nssservices/regions to retrieve the list of all the Regions
- [ ] Populate the dropdown with the "name" of each Region
Note: we may need to refine this, as there are some regions like "Undefined" that we may not want to include.
|
process
|
bp add select state region dropdown part of create a dropdown that says select state region make a service call to to retrieve the list of all the regions populate the dropdown with the name of each region note we may need to refine this as there are some regions like undefined that we may not want to include
| 1
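The StreamStats checklist above populates a dropdown from the regions service and notes that entries like "Undefined" may need filtering out. A minimal Python sketch of extracting and filtering the names (the list-of-dicts payload shape is assumed from the description, not taken from the actual NSS service schema):

```python
def dropdown_names(regions, excluded=("Undefined",)):
    """Pull display names out of a regions payload, skipping unwanted ones.

    `regions` is assumed to be a list of dicts with a "name" key, as the
    issue describes for https://streamstats.usgs.gov/nssservices/regions.
    """
    return [r["name"] for r in regions if r.get("name") not in excluded]


payload = [{"name": "Alabama"}, {"name": "Undefined"}, {"name": "Alaska"}]
# dropdown_names(payload) keeps only the real regions
```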
|
520,078
| 15,078,343,346
|
IssuesEvent
|
2021-02-05 08:34:27
|
Azure/static-web-apps-cli
|
https://api.github.com/repos/Azure/static-web-apps-cli
|
closed
|
Simulate authentication page
|
priority: high scope: auth type: enhancement
|
**Describe the solution you'd like**
Instead of directly integrating with all supported providers, use a simulated login page that allows a user to specify the provider, user info, and roles that they want to log in as.
When the app navigates to `/.auth/<provider>/login`, display the following screen:

(the `anonymous` and `authenticated` roles are always included when authenticated)
Upon "login", store the provided information in a cookie. When an API endpoint is accessed, add a `x-ms-client-principal` header using the identity from this cookie.
Accessing `/.auth/me` returns the info from this cookie.
`/.auth/logout` clears this cookie.
|
1.0
|
Simulate authentication page - **Describe the solution you'd like**
Instead of directly integrating with all supported providers, use a simulated login page that allows a user to specify the provider, user info, and roles that they want to log in as.
When the app navigates to `/.auth/<provider>/login`, display the following screen:

(the `anonymous` and `authenticated` roles are always included when authenticated)
Upon "login", store the provided information in a cookie. When an API endpoint is accessed, add a `x-ms-client-principal` header using the identity from this cookie.
Accessing `/.auth/me` returns the info from this cookie.
`/.auth/logout` clears this cookie.
|
non_process
|
simulate authentication page describe the solution you d like instead of directly integrating with all supported providers use a simulated login page that allows a user to specify the provider user info and roles that they want to log in as when the app navigates to auth login display the following screen the anonymous and authenticated roles are always included when authenticated upon login store the provided information in a cookie when an api endpoint is accessed add a x ms client principal header using the identity from this cookie accessing auth me returns the info from this cookie auth logout clears this cookie
| 0
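The proposal above stores the simulated identity in a cookie and forwards it to API endpoints as an `x-ms-client-principal` header. On the real Static Web Apps platform that header carries a base64-encoded JSON principal; a minimal Python sketch of encoding and decoding one (field names are my assumption of the common shape, not a spec):

```python
import base64
import json


def encode_principal(provider, user_id, user_details, roles):
    """Serialize a client principal the way an x-ms-client-principal
    header would carry it: JSON, then base64."""
    principal = {
        "identityProvider": provider,
        "userId": user_id,
        "userDetails": user_details,
        # anonymous and authenticated are always included when logged in
        "userRoles": ["anonymous", "authenticated", *roles],
    }
    return base64.b64encode(json.dumps(principal).encode()).decode()


def decode_principal(header_value):
    """Inverse of encode_principal, as an API backend would do."""
    return json.loads(base64.b64decode(header_value))


header = encode_principal("github", "1234", "octocat", ["admin"])
```

`/.auth/me` in this scheme is just `decode_principal` applied to the stored cookie value.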
|
19,687
| 26,036,760,838
|
IssuesEvent
|
2022-12-22 06:12:52
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Needs Run As account update
|
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri1
|
Docs state Run As accounts will be discontinued in 2023 but the Prerequisites of this page do not state any workarounds or warnings of discontinuation. How-To needs to be updated with using Managed Identities
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Azure Automation Start/Stop VMs during off-hours overview](https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
Needs Run As account update -
Docs state Run As accounts will be discontinued in 2023 but the Prerequisites of this page do not state any workarounds or warnings of discontinuation. How-To needs to be updated with using Managed Identities
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Azure Automation Start/Stop VMs during off-hours overview](https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
process
|
needs run as account update docs state run as accounts will be discontinued in but the prerequisites of this page do not state any workarounds or warnings of discontinuation how to needs to be updated with using managed identities document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
| 1
|
8,040
| 11,216,542,599
|
IssuesEvent
|
2020-01-07 06:43:07
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Need more clarity
|
Pri1 automation/svc cxp process-automation/subsvc product-question triaged
|
CPU usage based power off will not allow non usage. Can this be tweaked to disconnect amd logoff the user and then power off. Also instead of tahs, can these systems be grouped via logical entities? Not dependent on tags.
Power on action is still static. It needs more intelligence.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Need more clarity - CPU usage based power off will not allow non usage. Can this be tweaked to disconnect amd logoff the user and then power off. Also instead of tahs, can these systems be grouped via logical entities? Not dependent on tags.
Power on action is still static. It needs more intelligence.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
need more clarity cpu usage based power off will not allow non usage can this be tweaked to disconnect amd logoff the user and then power off also instead of tahs can these systems be grouped via logical entities not dependent on tags power on action is still static it needs more intelligence document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
98
| 2,537,622,217
|
IssuesEvent
|
2015-01-26 21:50:44
|
abb-iss/issues
|
https://api.github.com/repos/abb-iss/issues
|
closed
|
Section 1 items research questions
|
p-process
|
Reviewer 3: In section 1, you might consider using a Latex description environment instead of itemize for the research questions. (You do something like this in section 4.)
|
1.0
|
Section 1 items research questions - Reviewer 3: In section 1, you might consider using a Latex description environment instead of itemize for the research questions. (You do something like this in section 4.)
|
process
|
section items research questions reviewer in section you might consider using a latex description environment instead of itemize for the research questions you do something like this in section
| 1
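The reviewer comment above suggests LaTeX's `description` environment instead of `itemize` so each research question carries its own bold label. A minimal LaTeX sketch of the swap (the question wording is illustrative, not taken from the paper):

```latex
% Instead of:
% \begin{itemize}
%   \item RQ1: How often does X occur?
% \end{itemize}
\begin{description}
  \item[RQ1] How often does X occur?
  \item[RQ2] What factors influence X?
\end{description}
```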
|
785,961
| 27,629,659,127
|
IssuesEvent
|
2023-03-10 09:53:13
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
closed
|
Provide mechanism or documentation for cache cleanup
|
type:feature help wanted priority-3-medium status:in-progress core:cache
|
### What would you like Renovate to be able to do?
It would be nice to offer a method of pruning the cache. On a run with and empty cache directory it populates 32M of space in our instance and after 6 months of hourly runs this grows to 88G.
This is also an issue when using GitLab cache which zips -> uploads and downloads -> unzips the cache on each run, so over time runs begin to take longer and longer.
The current workaround it to completely wipe the cache on a regular basis or when a disk fills up.
Alternatively, if this is not implemented as a feature in code then documentation should be updated provide information on how cache can be pruned by running other commands like `find /renovate/cache -type f -mtime +30 -delete` or if this is not possible and the whole cache should be wiped regularly.
### If you have any ideas on how this should be implemented, please tell us here.
I think it would be preferable to have a config option that can enable cache cleanup (true/false).
A second option to tune age of cache items may also be desirable (e.g. 30d)
### Is this a feature you are interested in implementing yourself?
No
|
1.0
|
Provide mechanism or documentation for cache cleanup - ### What would you like Renovate to be able to do?
It would be nice to offer a method of pruning the cache. On a run with and empty cache directory it populates 32M of space in our instance and after 6 months of hourly runs this grows to 88G.
This is also an issue when using GitLab cache which zips -> uploads and downloads -> unzips the cache on each run, so over time runs begin to take longer and longer.
The current workaround it to completely wipe the cache on a regular basis or when a disk fills up.
Alternatively, if this is not implemented as a feature in code then documentation should be updated provide information on how cache can be pruned by running other commands like `find /renovate/cache -type f -mtime +30 -delete` or if this is not possible and the whole cache should be wiped regularly.
### If you have any ideas on how this should be implemented, please tell us here.
I think it would be preferable to have a config option that can enable cache cleanup (true/false).
A second option to tune age of cache items may also be desirable (e.g. 30d)
### Is this a feature you are interested in implementing yourself?
No
|
non_process
|
provide mechanism or documentation for cache cleanup what would you like renovate to be able to do it would be nice to offer a method of pruning the cache on a run with and empty cache directory it populates of space in our instance and after months of hourly runs this grows to this is also an issue when using gitlab cache which zips uploads and downloads unzips the cache on each run so over time runs begin to take longer and longer the current workaround it to completely wipe the cache on a regular basis or when a disk fills up alternatively if this is not implemented as a feature in code then documentation should be updated provide information on how cache can be pruned by running other commands like find renovate cache type f mtime delete or if this is not possible and the whole cache should be wiped regularly if you have any ideas on how this should be implemented please tell us here i think it would be preferable to have a config option that can enable cache cleanup true false a second option to tune age of cache items may also be desirable e g is this a feature you are interested in implementing yourself no
| 0
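The Renovate workaround quoted above (`find /renovate/cache -type f -mtime +30 -delete`) has a direct equivalent in Python; a minimal sketch of pruning cache files older than a cutoff (the directory path and age are illustrative):

```python
import os
import time


def prune_cache(cache_dir, max_age_days=30, now=None):
    """Delete regular files under cache_dir older than max_age_days.

    Mirrors `find <dir> -type f -mtime +N -delete`.
    Returns the list of paths removed.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    removed = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

Run on a schedule (cron, CI job), this keeps the cache bounded instead of wiping it wholesale.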
|
117,018
| 9,906,208,622
|
IssuesEvent
|
2019-06-27 13:24:41
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
opened
|
Backport: Allow S3 Snapshot service to trust internally signed certs when talking to the S3 endpoint
|
[zube]: To Test area/backup-recover area/etcd internal kind/enhancement team/ca
|
Backport: https://github.com/rancher/rancher/issues/19222
Note: This is the issue for the API which is related to the backport RKE changes: https://github.com/rancher/rancher/issues/21103
and backport UI: https://github.com/rancher/rancher/issues/21104
|
1.0
|
Backport: Allow S3 Snapshot service to trust internally signed certs when talking to the S3 endpoint - Backport: https://github.com/rancher/rancher/issues/19222
Note: This is the issue for the API which is related to the backport RKE changes: https://github.com/rancher/rancher/issues/21103
and backport UI: https://github.com/rancher/rancher/issues/21104
|
non_process
|
backport allow snapshot service to trust internally signed certs when talking to the endpoint backport note this is the issue for the api which is related to the backport rke changes and backport ui
| 0
|
16,611
| 21,671,503,637
|
IssuesEvent
|
2022-05-08 02:33:43
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
decode_xml and decode_xml_wineventlog fail on UTF-16 charsets
|
bug Winlogbeat :Processors Team:Security-External Integrations
|
I'm trying to parse a field that contains XML and it logs the following error:
`failed in decode_xml on the "winlog.event_data.param2" field: error decoding XML field: xml: encoding "utf-16" declared but Decoder.CharsetReader is nil`
The field itself is formatted as such:
```
<?xml version="1.0" encoding="utf-16"?>
<AuditBase xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="FreshCredentialAudit">
snip
</ContextComponents>
</AuditBase>
```
This was reported, but closed due to no responses, in February here: https://discuss.elastic.co/t/decode-xml-fails-on-utf-16-formatted-input/295996
When on logstash, the filter worked fine, but since moving to elastic agent (until 8.2 when Logstash output may be available) this means most of the data in ADFS authentication logs, for example, isn't available as the user, IP, user agent, and other relevant info is all in the XML.
|
1.0
|
decode_xml and decode_xml_wineventlog fail on UTF-16 charsets - I'm trying to parse a field that contains XML and it logs the following error:
`failed in decode_xml on the "winlog.event_data.param2" field: error decoding XML field: xml: encoding "utf-16" declared but Decoder.CharsetReader is nil`
The field itself is formatted as such:
```
<?xml version="1.0" encoding="utf-16"?>
<AuditBase xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="FreshCredentialAudit">
snip
</ContextComponents>
</AuditBase>
```
This was reported, but closed due to no responses, in February here: https://discuss.elastic.co/t/decode-xml-fails-on-utf-16-formatted-input/295996
When on logstash, the filter worked fine, but since moving to elastic agent (until 8.2 when Logstash output may be available) this means most of the data in ADFS authentication logs, for example, isn't available as the user, IP, user agent, and other relevant info is all in the XML.
|
process
|
decode xml and decode xml wineventlog fail on utf charsets i m trying to parse a field that contains xml and it logs the following error failed in decode xml on the winlog event data field error decoding xml field xml encoding utf declared but decoder charsetreader is nil the field itself is formatted as such snip this was reported but closed due to no responses in february here when on logstash the filter worked fine but since moving to elastic agent until when logstash output may be available this means most of the data in adfs authentication logs for example isn t available as the user ip user agent and other relevant info is all in the xml
| 1
|
11,547
| 3,497,539,385
|
IssuesEvent
|
2016-01-06 01:52:18
|
Azure/azure-iot-sdks
|
https://api.github.com/repos/Azure/azure-iot-sdks
|
closed
|
Question: Best way to provision devices with connection string
|
documentation
|
I know this has to be pretty simple, but what's considered best practice for getting the device connection string into the device?
Actually trying to think through the flow as if the user has a device in their hands and has to set it up.
|
1.0
|
Question: Best way to provision devices with connection string - I know this has to be pretty simple, but what's considered best practice for getting the device connection string into the device?
Actually trying to think through the flow as if the user has a device in their hands and has to set it up.
|
non_process
|
question best way to provision devices with connection string i know this has to be pretty simple but what s considered best practice for getting the device connection string into the device actually trying to think through the flow as if the user has a device in their hands and has to set it up
| 0
|
2,395
| 2,725,875,439
|
IssuesEvent
|
2015-04-15 05:38:30
|
adobe/brackets
|
https://api.github.com/repos/adobe/brackets
|
closed
|
brace-fold frequently throws exceptions
|
F Code Folding fix in progress medium priority
|
Put this in a JS file:
```
var test = {
"foo": {
"one": true,
"two": true,
"three": true
},
"bar": false
};
```
1. Select lines 3-5 entirely (i.e. everything inside the foo `{}`s)
2. Press delete
Result: `Uncaught TypeError: Cannot read property 'lastIndexOf' of undefined`
1. Undo
2. Select lines 2-6 entirely (i.e. the entire `"foo": { ... },` block)
3. Press delete
Result: same exception thrown again
Nothing immediately obvious is broken afterwards, but it spams the console and looks bad, so I think we need to fix this for 1.3.
|
1.0
|
brace-fold frequently throws exceptions - Put this in a JS file:
```
var test = {
"foo": {
"one": true,
"two": true,
"three": true
},
"bar": false
};
```
1. Select lines 3-5 entirely (i.e. everything inside the foo `{}`s)
2. Press delete
Result: `Uncaught TypeError: Cannot read property 'lastIndexOf' of undefined`
1. Undo
2. Select lines 2-6 entirely (i.e. the entire `"foo": { ... },` block)
3. Press delete
Result: same exception thrown again
Nothing immediately obvious is broken afterwards, but it spams the console and looks bad, so I think we need to fix this for 1.3.
|
non_process
|
brace fold frequently throws exceptions put this in a js file var test foo one true two true three true bar false select lines entirely i e everything inside the foo s press delete result uncaught typeerror cannot read property lastindexof of undefined undo select lines entirely i e the entire foo block press delete result same exception thrown again nothing immediately obvious is broken afterwards but it spams the console and looks bad so i think we need to fix this for
| 0
|
324,437
| 9,895,594,405
|
IssuesEvent
|
2019-06-26 08:12:58
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.rtve.es - video or audio doesn't play
|
browser-firefox engine-gecko priority-normal
|
<!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/
**Browser / Version**: Firefox 69.0
**Operating System**: Mac OS X 10.14
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: The video doenn't play. Its Java Scrip
**Steps to Reproduce**:
Don'y play. In SAFARI it's play.
[](https://webcompat.com/uploads/2019/6/d5f03168-013e-4617-9d3d-363517710ebd.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190624092246</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://cdns.gigya.com/js/gigya.js?apikey=3_MVrnwOzwUbykkDXOuSGkJfyMFEpG8k4MI3A66RsIKg91icSRNMZ4XnfzYXsU6cmm fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "Un XMLHttpRequest sncrono en el hilo principal est desaprobado por sus efectos negativos en la experiencia del usuario final. Para ms ayuda vea http://xhr.spec.whatwg.org/" {file: "http://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js" line: 2}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en http://connect.facebook.net/es_ES/sdk.js#appId=78994661336&xfbml=1&version=v2.0 fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://apis.google.com/js/plusone.js fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en http://platform.twitter.com/widgets.js fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Error: "NotFoundError: Node was not found" {file: "https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js" line: 13}]\nc@https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js:13:9667\n@https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js:13:8408\nEventListener.handleEvent*@https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js:13:8274\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://connect.facebook.net/es_ES/sdk.js#appId=78994661336&xfbml=1&version=v2.0 fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.es.itKDhejUXMY.O/m=plusone/rt=j/sv=1/d=1/ed=1/am=wQE/rs=AGLTcCOu2GgvbRp0DF3HYUAhKOiQ3Fm0mA/cb=gapi.loaded_0 fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[console.timeStamp(CSI/tbsd_) https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.es.itKDhejUXMY.O/m=plusone/rt=j/sv=1/d=1/ed=1/am=wQE/rs=AGLTcCOu2GgvbRp0DF3HYUAhKOiQ3Fm0mA/cb=gapi.loaded_0:271:142]', u'[console.timeStamp(CSI/_tbnd) https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.es.itKDhejUXMY.O/m=plusone/rt=j/sv=1/d=1/ed=1/am=wQE/rs=AGLTcCOu2GgvbRp0DF3HYUAhKOiQ3Fm0mA/cb=gapi.loaded_0:271:142]', u'[JavaScript Error: "TypeError: `target` argument of Proxy must be an object, got null" {file: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es" line: 11}]\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:11:23\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:39:9\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 5 column: 4111 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Error: "TypeError: LOCAL_STORAGE is null" {file: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es" line: 32}]\ngetLocalStorageItems@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:32:25\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:40:33\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:48:7\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:50:4\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://syndication.twitter.com/settings fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 10 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 11 column: 22 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 6 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 48 column: 0 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 5 column: 4111 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 5 column: 4111 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Error: "TypeError: `target` argument of Proxy must be an object, got null" {file: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es" line: 11}]\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:11:23\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:39:9\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 46 column: 3864 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 10 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 11 column: 22 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Error: "TypeError: LOCAL_STORAGE is null" {file: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es" line: 32}]\ngetLocalStorageItems@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:32:25\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:40:33\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:48:7\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:50:4\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 6 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 48 column: 0 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.rtve.es - video or audio doesn't play - <!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/
**Browser / Version**: Firefox 69.0
**Operating System**: Mac OS X 10.14
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: The video doenn't play. Its Java Scrip
**Steps to Reproduce**:
Don'y play. In SAFARI it's play.
[](https://webcompat.com/uploads/2019/6/d5f03168-013e-4617-9d3d-363517710ebd.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190624092246</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://cdns.gigya.com/js/gigya.js?apikey=3_MVrnwOzwUbykkDXOuSGkJfyMFEpG8k4MI3A66RsIKg91icSRNMZ4XnfzYXsU6cmm fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "Un XMLHttpRequest sncrono en el hilo principal est desaprobado por sus efectos negativos en la experiencia del usuario final. Para ms ayuda vea http://xhr.spec.whatwg.org/" {file: "http://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js" line: 2}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en http://connect.facebook.net/es_ES/sdk.js#appId=78994661336&xfbml=1&version=v2.0 fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://apis.google.com/js/plusone.js fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en http://platform.twitter.com/widgets.js fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Error: "NotFoundError: Node was not found" {file: "https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js" line: 13}]\nc@https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js:13:9667\n@https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js:13:8408\nEventListener.handleEvent*@https://js2.rtve.es/js/v/alacarta/2.74.2/rtve.pack.documentales.init.js:13:8274\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://connect.facebook.net/es_ES/sdk.js#appId=78994661336&xfbml=1&version=v2.0 fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.es.itKDhejUXMY.O/m=plusone/rt=j/sv=1/d=1/ed=1/am=wQE/rs=AGLTcCOu2GgvbRp0DF3HYUAhKOiQ3Fm0mA/cb=gapi.loaded_0 fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[console.timeStamp(CSI/tbsd_) https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.es.itKDhejUXMY.O/m=plusone/rt=j/sv=1/d=1/ed=1/am=wQE/rs=AGLTcCOu2GgvbRp0DF3HYUAhKOiQ3Fm0mA/cb=gapi.loaded_0:271:142]', u'[console.timeStamp(CSI/_tbnd) https://apis.google.com/_/scs/apps-static/_/js/k=oz.gapi.es.itKDhejUXMY.O/m=plusone/rt=j/sv=1/d=1/ed=1/am=wQE/rs=AGLTcCOu2GgvbRp0DF3HYUAhKOiQ3Fm0mA/cb=gapi.loaded_0:271:142]', u'[JavaScript Error: "TypeError: `target` argument of Proxy must be an object, got null" {file: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es" line: 11}]\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:11:23\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:39:9\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 5 column: 4111 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Error: "TypeError: LOCAL_STORAGE is null" {file: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es" line: 32}]\ngetLocalStorageItems@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:32:25\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:40:33\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:48:7\n@https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es:50:4\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://syndication.twitter.com/settings fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 10 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 11 column: 22 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 6 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 48 column: 0 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 5 column: 4111 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 5 column: 4111 source: "https://platform.twitter.com/widgets/widget_iframe.d753e00c3e838c1b2558149bd3f6ecb8.html?origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 0}]', u'[JavaScript Error: "TypeError: `target` argument of Proxy must be an object, got null" {file: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es" line: 11}]\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:11:23\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:39:9\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 46 column: 3864 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 10 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 11 column: 22 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Error: "TypeError: LOCAL_STORAGE is null" {file: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es" line: 32}]\ngetLocalStorageItems@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:32:25\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:40:33\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:48:7\n@https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es:50:4\n', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." {file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 4 column: 6 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]', u'[JavaScript Warning: "La solicitud de acceso a las cookies o al almacenamiento en https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es fue bloqueada porque provino de un rastreador y est habilitado el bloqueo de contenido." 
{file: "http://www.rtve.es/alacarta/videos/otros-documentales/otros-documentales-hicimos-fue-secreto/5290898/" line: 48 column: 0 source: "https://staticxx.facebook.com/connect/xd_arbiter.php?version=44#channel=f1f6ff0a33b15e&origin=http%3A%2F%2Fwww.rtve.es"}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
video or audio doesn t play url browser version firefox operating system mac os x tested another browser yes problem type video or audio doesn t play description the video doennΒ΄t play its java scrip steps to reproduce donΒ΄y play in safari it s play browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages u u u u u nc u u u u u u n u u u ngetlocalstorageitems u u u u u u u u u u n u u u u ngetlocalstorageitems u u from with ❤️
| 0
|
16,159
| 20,594,478,372
|
IssuesEvent
|
2022-03-05 09:03:37
|
googleapis/java-certificate-manager
|
https://api.github.com/repos/googleapis/java-certificate-manager
|
opened
|
Your .repo-metadata.json file has a problem π€
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan π:
* api_shortname 'certificate-manager' invalid in .repo-metadata.json
βοΈ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem π€ - You have a problem with your .repo-metadata.json file:
Result of scan π:
* api_shortname 'certificate-manager' invalid in .repo-metadata.json
βοΈ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem π€ you have a problem with your repo metadata json file result of scan π api shortname certificate manager invalid in repo metadata json βοΈ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
417,963
| 28,112,368,498
|
IssuesEvent
|
2023-03-31 08:13:08
|
ShanHng/ped
|
https://api.github.com/repos/ShanHng/ped
|
opened
|
Naming of `Explanation` segment under UG as `Example of Usage` instead
|
severity.VeryLow type.DocumentationBug
|
I love that you guys provide examples/scenarios for your functions. However, I feel like 'Example of Usage' or something that indicates the presence of an example use case might be a better name for the `Explanation` segment.

<!--session: 1680243621722-3462b2da-c685-4580-b6e2-accb506c307a-->
<!--Version: Web v3.4.7-->
|
1.0
|
Naming of `Explanation` segment under UG as `Example of Usage` instead - I love that you guys provide examples/scenarios for your functions. However, I feel like 'Example of Usage' or something that indicates the presence of an example use case might be a better name for the `Explanation` segment.

<!--session: 1680243621722-3462b2da-c685-4580-b6e2-accb506c307a-->
<!--Version: Web v3.4.7-->
|
non_process
|
naming of explanation segment under ug as example of usage instead i love that you guys provide examples scenarios for your functions however i feel like example of usage or something that indicates the presence of an example use case might be a better name for the explanation segment
| 0
|
172,161
| 21,040,459,516
|
IssuesEvent
|
2022-03-31 11:51:17
|
Tim-sandbox/webgoat-trng
|
https://api.github.com/repos/Tim-sandbox/webgoat-trng
|
opened
|
CVE-2022-27772 (Medium) detected in spring-boot-2.2.2.RELEASE.jar
|
security vulnerability
|
## CVE-2022-27772 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-2.2.2.RELEASE.jar</b></p></summary>
<p>Spring Boot</p>
<p>Library home page: <a href="https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot">https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot</a></p>
<p>Path to dependency file: /webwolf/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.2.2.RELEASE/spring-boot-2.2.2.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.2.2.RELEASE/spring-boot-2.2.2.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-devtools-2.2.2.RELEASE.jar (Root Library)
- :x: **spring-boot-2.2.2.RELEASE.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.
<p>Publish Date: 2022-03-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772>CVE-2022-27772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85">https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85</a></p>
<p>Release Date: 2022-03-30</p>
<p>Fix Resolution: org.springframework.boot:spring-boot:2.2.11.RELEASE</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"2.2.2.RELEASE","packageFilePaths":["/webwolf/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-devtools:2.2.2.RELEASE;org.springframework.boot:spring-boot:2.2.2.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-27772","vulnerabilityDetails":"** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2022-27772 (Medium) detected in spring-boot-2.2.2.RELEASE.jar - ## CVE-2022-27772 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-2.2.2.RELEASE.jar</b></p></summary>
<p>Spring Boot</p>
<p>Library home page: <a href="https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot">https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot</a></p>
<p>Path to dependency file: /webwolf/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.2.2.RELEASE/spring-boot-2.2.2.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.2.2.RELEASE/spring-boot-2.2.2.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-devtools-2.2.2.RELEASE.jar (Root Library)
- :x: **spring-boot-2.2.2.RELEASE.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.
<p>Publish Date: 2022-03-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772>CVE-2022-27772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85">https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85</a></p>
<p>Release Date: 2022-03-30</p>
<p>Fix Resolution: org.springframework.boot:spring-boot:2.2.11.RELEASE</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"2.2.2.RELEASE","packageFilePaths":["/webwolf/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-devtools:2.2.2.RELEASE;org.springframework.boot:spring-boot:2.2.2.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-27772","vulnerabilityDetails":"** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in spring boot release jar cve medium severity vulnerability vulnerable library spring boot release jar spring boot library home page a href path to dependency file webwolf pom xml path to vulnerable library home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy spring boot devtools release jar root library x spring boot release jar vulnerable library found in base branch main vulnerability details unsupported when assigned spring boot versions prior to version release was vulnerable to temporary directory hijacking this vulnerability impacted the org springframework boot web server abstractconfigurablewebserverfactory createtempdir method note this vulnerability only affects products and or versions that are no longer supported by the maintainer publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework boot spring boot release isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot devtools release org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails unsupported when assigned spring boot versions prior to version release was vulnerable to temporary directory hijacking this vulnerability impacted the org springframework boot web server 
abstractconfigurablewebserverfactory createtempdir method note this vulnerability only affects products and or versions that are no longer supported by the maintainer vulnerabilityurl
| 0
|
22,063
| 30,585,072,324
|
IssuesEvent
|
2023-07-21 12:47:12
|
ukri-excalibur/excalibur-tests
|
https://api.github.com/repos/ukri-excalibur/excalibur-tests
|
opened
|
Publish post-processing plots in some way
|
postprocessing
|
Once the plotting in a simple html file is done, we have to decide on a way to publish those (github pages most likely) and embed them in a more substantial website. See also #18 and #70 .
- Make it interactive? This will substitute/overwrite the config file
- If the above is done, add functionality to export a config file to be able to reproduce the plot that has been made interactively.
|
1.0
|
Publish post-processing plots in some way - Once the plotting in a simple html file is done, we have to decide on a way to publish those (github pages most likely) and embed them in a more substantial website. See also #18 and #70 .
- Make it interactive? This will substitute/overwrite the config file
- If the above is done, add functionality to export a config file to be able to reproduce the plot that has been made interactively.
|
process
|
publish post processing plots in some way once the plotting in a simple html file is done we have to decide on a way to publish those github pages most likely and embed them in a more substantial website see also and make it interactive this will substitute overwrite the config file if the above is done add functionality to export a config file to be able to reproduce the plot that has been made interactively
| 1
|
205,752
| 15,685,949,079
|
IssuesEvent
|
2021-03-25 11:53:49
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] FullClusterRestartIT testDataStreams failing for certain BWC version
|
:Core/Features/Data streams >test-failure Team:Core/Features
|
This looks to be failing against 7.9.x and 7.10.x versions.
**Build scan:**
https://gradle-enterprise.elastic.co/s/kwfuj2tkzkdmo/tests/:x-pack:qa:full-cluster-restart:v7.9.1%23oldClusterTest/org.elasticsearch.xpack.restart.FullClusterRestartIT/testDataStreams
**Reproduction line:**
`./gradlew ':x-pack:qa:full-cluster-restart:v7.9.1#oldClusterTest' -Dtests.class="org.elasticsearch.xpack.restart.FullClusterRestartIT" -Dtests.method="testDataStreams" -Dtests.seed=8ECA05EC157E482A -Dtests.security.manager=true -Dtests.bwc=true -Dtests.locale=sr-BA -Dtests.timezone=Asia/Saigon -Druntime.java=11`
**Applicable branches:**
master
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.restart.FullClusterRestartIT&tests.test=testDataStreams
**Failure excerpt:**
```
org.junit.ComparisonFailure: expected:<.ds-ds-[2021.03.24-]000001> but was:<.ds-ds-[]000001>
at __randomizedtesting.SeedInfo.seed([8ECA05EC157E482A:C86E9C006C3E25C4]:0)
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at org.elasticsearch.xpack.restart.FullClusterRestartIT.testDataStreams(FullClusterRestartIT.java:734)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:834)
```
|
1.0
|
[CI] FullClusterRestartIT testDataStreams failing for certain BWC version - This looks to be failing against 7.9.x and 7.10.x versions.
**Build scan:**
https://gradle-enterprise.elastic.co/s/kwfuj2tkzkdmo/tests/:x-pack:qa:full-cluster-restart:v7.9.1%23oldClusterTest/org.elasticsearch.xpack.restart.FullClusterRestartIT/testDataStreams
**Reproduction line:**
`./gradlew ':x-pack:qa:full-cluster-restart:v7.9.1#oldClusterTest' -Dtests.class="org.elasticsearch.xpack.restart.FullClusterRestartIT" -Dtests.method="testDataStreams" -Dtests.seed=8ECA05EC157E482A -Dtests.security.manager=true -Dtests.bwc=true -Dtests.locale=sr-BA -Dtests.timezone=Asia/Saigon -Druntime.java=11`
**Applicable branches:**
master
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.restart.FullClusterRestartIT&tests.test=testDataStreams
**Failure excerpt:**
```
org.junit.ComparisonFailure: expected:<.ds-ds-[2021.03.24-]000001> but was:<.ds-ds-[]000001>
at __randomizedtesting.SeedInfo.seed([8ECA05EC157E482A:C86E9C006C3E25C4]:0)
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at org.elasticsearch.xpack.restart.FullClusterRestartIT.testDataStreams(FullClusterRestartIT.java:734)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:834)
```
|
non_process
|
fullclusterrestartit testdatastreams failing for certain bwc version this looks to be failing against x and x versions build scan reproduction line gradlew x pack qa full cluster restart oldclustertest dtests class org elasticsearch xpack restart fullclusterrestartit dtests method testdatastreams dtests seed dtests security manager true dtests bwc true dtests locale sr ba dtests timezone asia saigon druntime java applicable branches master reproduces locally yes failure history failure excerpt org junit comparisonfailure expected but was at randomizedtesting seedinfo seed at org junit assert assertequals assert java at org junit assert assertequals assert java at org elasticsearch xpack restart fullclusterrestartit testdatastreams fullclusterrestartit java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch 
randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch 
randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 0
|
120,986
| 10,145,397,181
|
IssuesEvent
|
2019-08-05 04:05:18
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed test: TestDrainingProcessorSwallowsUncertaintyError
|
C-test-failure O-robot
|
The following tests appear to have failed on master (test): TestDrainingProcessorSwallowsUncertaintyError/dummy=false, TestDrainingProcessorSwallowsUncertaintyError, TestDrainingProcessorSwallowsUncertaintyError/dummy=true
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestDrainingProcessorSwallowsUncertaintyError).
[#1421800](https://teamcity.cockroachdb.com/viewLog.html?buildId=1421800):
```
TestDrainingProcessorSwallowsUncertaintyError/dummy=false
--- FAIL: test/TestDrainingProcessorSwallowsUncertaintyError/dummy=false (0.000s)
Test ended in panic.
------- Stdout: -------
I190804 05:07:55.381101 14849 gossip/gossip.go:1512 [n1] node has connected to cluster via gossip
I190804 05:07:55.381370 14849 storage/stores.go:259 [n1] wrote 2 node addresses to persistent storage
I190804 05:07:56.123420 15622 gossip/gossip.go:1512 [n2] node has connected to cluster via gossip
I190804 05:07:56.123730 15622 storage/stores.go:259 [n2] wrote 2 node addresses to persistent storage
I190804 05:07:56.165782 15881 gossip/gossip.go:1512 [n3] node has connected to cluster via gossip
I190804 05:07:56.166033 15881 storage/stores.go:259 [n3] wrote 2 node addresses to persistent storage
I190804 05:08:03.707247 14972 server/status/runtime.go:498 [n1] runtime stats: 310 MiB RSS, 631 goroutines, 178 MiB/66 MiB/269 MiB GO alloc/idle/total, 28 MiB/57 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (28x), 11 MiB/11 MiB (r/w)net
W190804 05:08:03.780037 14983 server/node.go:814 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:20}]}
I190804 05:08:04.031861 15755 server/status/runtime.go:498 [n2] runtime stats: 312 MiB RSS, 631 goroutines, 184 MiB/61 MiB/269 MiB GO alloc/idle/total, 28 MiB/57 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (28x), 11 MiB/11 MiB (r/w)net
W190804 05:08:04.042285 15781 server/node.go:814 [n2,summaries] health alerts detected: {Alerts:[{StoreID:2 Category:METRICS Description:ranges.underreplicated Value:1}]}
I190804 05:08:04.147773 16038 server/status/runtime.go:498 [n3] runtime stats: 313 MiB RSS, 631 goroutines, 189 MiB/57 MiB/270 MiB GO alloc/idle/total, 30 MiB/58 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (29x), 11 MiB/11 MiB (r/w)net
I190804 05:08:07.731179 16810 storage/replica_consistency.go:191 [n1,consistencyChecker,s1,r4/1:/System{/tsd-tse}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1564895284220145659 IntentAge:0 GCBytesAge:0 LiveBytes:-21501 LiveCount:-1866 KeyBytes:-90531 KeyCount:-1866 ValBytes:69030 ValCount:-1866 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I190804 05:08:13.710037 14972 server/status/runtime.go:498 [n1] runtime stats: 314 MiB RSS, 631 goroutines, 127 MiB/100 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 387.9 CGO/sec, 4.1/0.4 %(u/s)time, 0.0 %gc (1x), 9.0 MiB/9.0 MiB (r/w)net
W190804 05:08:13.794271 14983 server/node.go:814 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:20}]}
I190804 05:08:14.031902 15755 server/status/runtime.go:498 [n2] runtime stats: 314 MiB RSS, 631 goroutines, 133 MiB/95 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 388.2 CGO/sec, 4.1/0.4 %(u/s)time, 0.0 %gc (1x), 8.8 MiB/8.8 MiB (r/w)net
W190804 05:08:14.043197 15781 server/node.go:814 [n2,summaries] health alerts detected: {Alerts:[{StoreID:2 Category:METRICS Description:ranges.underreplicated Value:1}]}
I190804 05:08:14.147567 16038 server/status/runtime.go:498 [n3] runtime stats: 314 MiB RSS, 631 goroutines, 139 MiB/90 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 386.5 CGO/sec, 3.0/0.3 %(u/s)time, 0.0 %gc (0x), 8.8 MiB/8.8 MiB (r/w)net
I190804 05:08:23.710243 14972 server/status/runtime.go:498 [n1] runtime stats: 314 MiB RSS, 631 goroutines, 146 MiB/84 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 61.8 CGO/sec, 2.4/0.2 %(u/s)time, 0.0 %gc (0x), 4.6 MiB/4.6 MiB (r/w)net
TestDrainingProcessorSwallowsUncertaintyError/dummy=true
...istsqlrun/tablereader.go:228 +0x308
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*indexJoiner).Start(0xc00ce89500, 0x3a86900, 0xc009988540, 0x3a86840, 0xc00a1110c0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/indexjoiner.go:135 +0x52
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*sortAllProcessor).Start(0xc002fb0500, 0x3a86900, 0xc009988540, 0x597c540, 0x2e89640)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter.go:281 +0x52
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*ProcessorBase).Run(0xc002fb0500, 0x3a86900, 0xc009988540)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/processors.go:792 +0x52
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*Flow).Run(0xc0057737a0, 0x3a86900, 0xc009988540, 0x3414198, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/flow.go:578 +0x1e9
github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).Run(0xc0013fa5a0, 0xc00dff3200, 0xc00503e5a0, 0xc00e20c948, 0xc006aad340, 0xc005f467a8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:328 +0x368
github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).PlanAndRun(0xc0013fa5a0, 0x3a86900, 0xc0099934d0, 0xc005f467a8, 0xc00dff3200, 0xc00503e5a0, 0x3a88280, 0xc005773560, 0xc006aad340, 0xc005f46558)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:937 +0x217
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execWithDistSQLEngine(0xc005f46380, 0x3a86900, 0xc0099934d0, 0xc005f466d0, 0x3, 0x7f43de9a4960, 0xc00dff2b40, 0xc00e20cb01, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:902 +0x370
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).dispatchToExecutionEngine(0xc005f46380, 0x3a86900, 0xc0099934d0, 0xc005f466d0, 0x7f43de9a4960, 0xc00dff2b40, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:734 +0x6e5
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmtInOpenState(0xc005f46380, 0x3a86900, 0xc0099934d0, 0x3a8bb40, 0xc00a110bc0, 0x337350f, 0x51, 0x3, 0x1, 0xc00a111040, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:417 +0xb64
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmt(0xc005f46380, 0x3a86900, 0xc0099934d0, 0x3a8bb40, 0xc00a110bc0, 0x337350f, 0x51, 0x3, 0x1, 0xc00a111040, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:99 +0x4ec
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execCmd(0xc005f46380, 0x3a86840, 0xc00a1110c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:1273 +0xee8
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).run(0xc005f46380, 0x3a86900, 0xc009499470, 0xc0047aa460, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:1140 +0x1a3
github.com/cockroachdb/cockroach/pkg/sql.(*internalExecutorImpl).initConnEx.func1(0xc005f46380, 0x3a86900, 0xc009499470, 0xc00ab7f0e0, 0xc009bb5fa0, 0x0, 0xc00ab81990)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/internal.go:201 +0x64
created by github.com/cockroachdb/cockroach/pkg/sql.(*internalExecutorImpl).initConnEx
/go/src/github.com/cockroachdb/cockroach/pkg/sql/internal.go:200 +0x4bc
goroutine 33694 [select]:
github.com/cockroachdb/cockroach/pkg/sql/stats.(*Refresher).Start.func1.1(0x3a86900, 0xc004a91050)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/stats/automatic_stats.go:258 +0x11f
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc006793560, 0x3a86900, 0xc004a91050, 0xc00eabcba0, 0x2a, 0x0, 0x0, 0xc0014b2390)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:321 +0xe6
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:316 +0x131
I190804 05:07:58.977554 1 rand.go:83 Random seed: 6357555487528681551
TestDrainingProcessorSwallowsUncertaintyError
...ore [n3,s3]
I190804 05:07:54.142991 14808 server/node.go:500 [n3] node=3: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I190804 05:07:54.143072 14808 server/server.go:1911 [n3] Could not start goroutine dumper worker due to: directory to store dumps could not be determined
I190804 05:07:54.143116 15536 storage/stores.go:259 [n2] wrote 2 node addresses to persistent storage
I190804 05:07:54.143144 14808 server/server.go:1590 [n3] starting https server at 127.0.0.1:34021 (use: 127.0.0.1:34021)
I190804 05:07:54.143162 14808 server/server.go:1592 [n3] starting grpc/postgres server at 127.0.0.1:46841
I190804 05:07:54.143178 14808 server/server.go:1593 [n3] advertising CockroachDB node at 127.0.0.1:46841
I190804 05:07:54.150965 14808 server/server.go:1661 [n3] done ensuring all necessary migrations have run
I190804 05:07:54.151001 14808 server/server.go:1664 [n3] serving sql connections
I190804 05:07:54.182151 16057 sql/event_log.go:130 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:127.0.0.1:46841 Attrs: Locality:region=test,dc=dc3 ServerVersion:19.1-7 BuildTag:v19.2.0-alpha.20190606-1213-g3b9a95b StartedAt:1564895274139875988 LocalityAddress:[]} ClusterID:19bf9cf5-21bd-4411-ae2d-0395dbcd58db StartedAt:1564895274139875988 LastUp:1564895274139875988}
I190804 05:07:54.191379 16054 server/server_update.go:68 [n3] no need to upgrade, cluster already at the newest version
I190804 05:07:54.194500 15458 sql/event_log.go:130 [n1,client=127.0.0.1:58644,user=root] Event: "create_database", target: 52, info: {DatabaseName:test Statement:CREATE DATABASE IF NOT EXISTS test User:root}
I190804 05:07:54.219154 15458 sql/event_log.go:130 [n1,client=127.0.0.1:58644,user=root] Event: "create_table", target: 53, info: {TableName:test.public.t Statement:CREATE TABLE test.public.t (x INT8 PRIMARY KEY) User:root}
I190804 05:07:54.242325 15458 storage/replica_command.go:283 [n1,s1,r20/1:/{Table/24-Max}] initiating a split of this range at key /Table/53/1/6 [r21] (manual)
I190804 05:07:54.259462 15458 storage/store_snapshot.go:775 [n1,s1,r21/1:/{Table/53/1/6-Max}] sending PREEMPTIVE snapshot 7f1039fa at applied index 12
I190804 05:07:54.259610 15458 storage/store_snapshot.go:818 [n1,s1,r21/1:/{Table/53/1/6-Max}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 0, rate-limit: 8.0 MiB/sec, 0.00s
I190804 05:07:54.260012 15935 storage/replica_raftstorage.go:827 [n2,s2,r21/?:{-}] applying PREEMPTIVE snapshot at index 12 (id=7f1039fa, encoded size=344, 1 rocksdb batches, 0 log entries)
I190804 05:07:54.260276 15935 storage/replica_raftstorage.go:833 [n2,s2,r21/?:/{Table/53/1/6-Max}] applied PREEMPTIVE snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I190804 05:07:54.260803 15458 storage/replica_command.go:1188 [n1,s1,r21/1:/{Table/53/1/6-Max}] change replicas (ADD_REPLICA (n2,s2):2): existing descriptor r21:/{Table/53/1/6-Max} [(n1,s1):1, next=2, gen=1]
I190804 05:07:54.264073 15458 storage/replica_raft.go:289 [n1,s1,r21/1:/{Table/53/1/6-Max}] proposing ADD_REPLICA((n2,s2):2): updated=(n1,s1):1,(n2,s2):2 next=3
I190804 05:07:54.321334 16219 storage/replica_command.go:1188 [n2,s2,r21/2:/{Table/53/1/6-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): existing descriptor r21:/{Table/53/1/6-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=2]
I190804 05:07:54.331682 16219 storage/replica_raft.go:289 [n2,s2,r21/2:/{Table/53/1/6-Max}] proposing REMOVE_REPLICA((n1,s1):1): updated=(n2,s2):2 next=3
I190804 05:07:54.343320 16306 storage/store.go:2530 [n1,replicaGC,s1,r21/1:/{Table/53/1/6-Max}] removing replica r21/1
I190804 05:07:54.343549 16306 storage/replica_destroy.go:146 [n1,replicaGC,s1,r21/1:/{Table/53/1/6-Max}] removed 10 (5+5) keys in 0ms [clear=0ms commit=0ms]
I190804 05:07:54.343333 15458 sql/event_log.go:130 [n1,client=127.0.0.1:58644,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:sql.defaults.results_buffer.size Value:0 User:root}
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed test: TestDrainingProcessorSwallowsUncertaintyError - The following tests appear to have failed on master (test): TestDrainingProcessorSwallowsUncertaintyError/dummy=false, TestDrainingProcessorSwallowsUncertaintyError, TestDrainingProcessorSwallowsUncertaintyError/dummy=true
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestDrainingProcessorSwallowsUncertaintyError).
[#1421800](https://teamcity.cockroachdb.com/viewLog.html?buildId=1421800):
```
TestDrainingProcessorSwallowsUncertaintyError/dummy=false
--- FAIL: test/TestDrainingProcessorSwallowsUncertaintyError/dummy=false (0.000s)
Test ended in panic.
------- Stdout: -------
I190804 05:07:55.381101 14849 gossip/gossip.go:1512 [n1] node has connected to cluster via gossip
I190804 05:07:55.381370 14849 storage/stores.go:259 [n1] wrote 2 node addresses to persistent storage
I190804 05:07:56.123420 15622 gossip/gossip.go:1512 [n2] node has connected to cluster via gossip
I190804 05:07:56.123730 15622 storage/stores.go:259 [n2] wrote 2 node addresses to persistent storage
I190804 05:07:56.165782 15881 gossip/gossip.go:1512 [n3] node has connected to cluster via gossip
I190804 05:07:56.166033 15881 storage/stores.go:259 [n3] wrote 2 node addresses to persistent storage
I190804 05:08:03.707247 14972 server/status/runtime.go:498 [n1] runtime stats: 310 MiB RSS, 631 goroutines, 178 MiB/66 MiB/269 MiB GO alloc/idle/total, 28 MiB/57 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (28x), 11 MiB/11 MiB (r/w)net
W190804 05:08:03.780037 14983 server/node.go:814 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:20}]}
I190804 05:08:04.031861 15755 server/status/runtime.go:498 [n2] runtime stats: 312 MiB RSS, 631 goroutines, 184 MiB/61 MiB/269 MiB GO alloc/idle/total, 28 MiB/57 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (28x), 11 MiB/11 MiB (r/w)net
W190804 05:08:04.042285 15781 server/node.go:814 [n2,summaries] health alerts detected: {Alerts:[{StoreID:2 Category:METRICS Description:ranges.underreplicated Value:1}]}
I190804 05:08:04.147773 16038 server/status/runtime.go:498 [n3] runtime stats: 313 MiB RSS, 631 goroutines, 189 MiB/57 MiB/270 MiB GO alloc/idle/total, 30 MiB/58 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (29x), 11 MiB/11 MiB (r/w)net
I190804 05:08:07.731179 16810 storage/replica_consistency.go:191 [n1,consistencyChecker,s1,r4/1:/System{/tsd-tse}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1564895284220145659 IntentAge:0 GCBytesAge:0 LiveBytes:-21501 LiveCount:-1866 KeyBytes:-90531 KeyCount:-1866 ValBytes:69030 ValCount:-1866 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I190804 05:08:13.710037 14972 server/status/runtime.go:498 [n1] runtime stats: 314 MiB RSS, 631 goroutines, 127 MiB/100 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 387.9 CGO/sec, 4.1/0.4 %(u/s)time, 0.0 %gc (1x), 9.0 MiB/9.0 MiB (r/w)net
W190804 05:08:13.794271 14983 server/node.go:814 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:20}]}
I190804 05:08:14.031902 15755 server/status/runtime.go:498 [n2] runtime stats: 314 MiB RSS, 631 goroutines, 133 MiB/95 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 388.2 CGO/sec, 4.1/0.4 %(u/s)time, 0.0 %gc (1x), 8.8 MiB/8.8 MiB (r/w)net
W190804 05:08:14.043197 15781 server/node.go:814 [n2,summaries] health alerts detected: {Alerts:[{StoreID:2 Category:METRICS Description:ranges.underreplicated Value:1}]}
I190804 05:08:14.147567 16038 server/status/runtime.go:498 [n3] runtime stats: 314 MiB RSS, 631 goroutines, 139 MiB/90 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 386.5 CGO/sec, 3.0/0.3 %(u/s)time, 0.0 %gc (0x), 8.8 MiB/8.8 MiB (r/w)net
I190804 05:08:23.710243 14972 server/status/runtime.go:498 [n1] runtime stats: 314 MiB RSS, 631 goroutines, 146 MiB/84 MiB/270 MiB GO alloc/idle/total, 29 MiB/58 MiB CGO alloc/total, 61.8 CGO/sec, 2.4/0.2 %(u/s)time, 0.0 %gc (0x), 4.6 MiB/4.6 MiB (r/w)net
TestDrainingProcessorSwallowsUncertaintyError/dummy=true
...istsqlrun/tablereader.go:228 +0x308
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*indexJoiner).Start(0xc00ce89500, 0x3a86900, 0xc009988540, 0x3a86840, 0xc00a1110c0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/indexjoiner.go:135 +0x52
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*sortAllProcessor).Start(0xc002fb0500, 0x3a86900, 0xc009988540, 0x597c540, 0x2e89640)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/sorter.go:281 +0x52
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*ProcessorBase).Run(0xc002fb0500, 0x3a86900, 0xc009988540)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/processors.go:792 +0x52
github.com/cockroachdb/cockroach/pkg/sql/distsqlrun.(*Flow).Run(0xc0057737a0, 0x3a86900, 0xc009988540, 0x3414198, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsqlrun/flow.go:578 +0x1e9
github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).Run(0xc0013fa5a0, 0xc00dff3200, 0xc00503e5a0, 0xc00e20c948, 0xc006aad340, 0xc005f467a8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:328 +0x368
github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).PlanAndRun(0xc0013fa5a0, 0x3a86900, 0xc0099934d0, 0xc005f467a8, 0xc00dff3200, 0xc00503e5a0, 0x3a88280, 0xc005773560, 0xc006aad340, 0xc005f46558)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:937 +0x217
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execWithDistSQLEngine(0xc005f46380, 0x3a86900, 0xc0099934d0, 0xc005f466d0, 0x3, 0x7f43de9a4960, 0xc00dff2b40, 0xc00e20cb01, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:902 +0x370
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).dispatchToExecutionEngine(0xc005f46380, 0x3a86900, 0xc0099934d0, 0xc005f466d0, 0x7f43de9a4960, 0xc00dff2b40, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:734 +0x6e5
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmtInOpenState(0xc005f46380, 0x3a86900, 0xc0099934d0, 0x3a8bb40, 0xc00a110bc0, 0x337350f, 0x51, 0x3, 0x1, 0xc00a111040, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:417 +0xb64
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execStmt(0xc005f46380, 0x3a86900, 0xc0099934d0, 0x3a8bb40, 0xc00a110bc0, 0x337350f, 0x51, 0x3, 0x1, 0xc00a111040, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:99 +0x4ec
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).execCmd(0xc005f46380, 0x3a86840, 0xc00a1110c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:1273 +0xee8
github.com/cockroachdb/cockroach/pkg/sql.(*connExecutor).run(0xc005f46380, 0x3a86900, 0xc009499470, 0xc0047aa460, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor.go:1140 +0x1a3
github.com/cockroachdb/cockroach/pkg/sql.(*internalExecutorImpl).initConnEx.func1(0xc005f46380, 0x3a86900, 0xc009499470, 0xc00ab7f0e0, 0xc009bb5fa0, 0x0, 0xc00ab81990)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/internal.go:201 +0x64
created by github.com/cockroachdb/cockroach/pkg/sql.(*internalExecutorImpl).initConnEx
/go/src/github.com/cockroachdb/cockroach/pkg/sql/internal.go:200 +0x4bc
goroutine 33694 [select]:
github.com/cockroachdb/cockroach/pkg/sql/stats.(*Refresher).Start.func1.1(0x3a86900, 0xc004a91050)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/stats/automatic_stats.go:258 +0x11f
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask.func1(0xc006793560, 0x3a86900, 0xc004a91050, 0xc00eabcba0, 0x2a, 0x0, 0x0, 0xc0014b2390)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:321 +0xe6
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:316 +0x131
I190804 05:07:58.977554 1 rand.go:83 Random seed: 6357555487528681551
TestDrainingProcessorSwallowsUncertaintyError
...ore [n3,s3]
I190804 05:07:54.142991 14808 server/node.go:500 [n3] node=3: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I190804 05:07:54.143072 14808 server/server.go:1911 [n3] Could not start goroutine dumper worker due to: directory to store dumps could not be determined
I190804 05:07:54.143116 15536 storage/stores.go:259 [n2] wrote 2 node addresses to persistent storage
I190804 05:07:54.143144 14808 server/server.go:1590 [n3] starting https server at 127.0.0.1:34021 (use: 127.0.0.1:34021)
I190804 05:07:54.143162 14808 server/server.go:1592 [n3] starting grpc/postgres server at 127.0.0.1:46841
I190804 05:07:54.143178 14808 server/server.go:1593 [n3] advertising CockroachDB node at 127.0.0.1:46841
I190804 05:07:54.150965 14808 server/server.go:1661 [n3] done ensuring all necessary migrations have run
I190804 05:07:54.151001 14808 server/server.go:1664 [n3] serving sql connections
I190804 05:07:54.182151 16057 sql/event_log.go:130 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:127.0.0.1:46841 Attrs: Locality:region=test,dc=dc3 ServerVersion:19.1-7 BuildTag:v19.2.0-alpha.20190606-1213-g3b9a95b StartedAt:1564895274139875988 LocalityAddress:[]} ClusterID:19bf9cf5-21bd-4411-ae2d-0395dbcd58db StartedAt:1564895274139875988 LastUp:1564895274139875988}
I190804 05:07:54.191379 16054 server/server_update.go:68 [n3] no need to upgrade, cluster already at the newest version
I190804 05:07:54.194500 15458 sql/event_log.go:130 [n1,client=127.0.0.1:58644,user=root] Event: "create_database", target: 52, info: {DatabaseName:test Statement:CREATE DATABASE IF NOT EXISTS test User:root}
I190804 05:07:54.219154 15458 sql/event_log.go:130 [n1,client=127.0.0.1:58644,user=root] Event: "create_table", target: 53, info: {TableName:test.public.t Statement:CREATE TABLE test.public.t (x INT8 PRIMARY KEY) User:root}
I190804 05:07:54.242325 15458 storage/replica_command.go:283 [n1,s1,r20/1:/{Table/24-Max}] initiating a split of this range at key /Table/53/1/6 [r21] (manual)
I190804 05:07:54.259462 15458 storage/store_snapshot.go:775 [n1,s1,r21/1:/{Table/53/1/6-Max}] sending PREEMPTIVE snapshot 7f1039fa at applied index 12
I190804 05:07:54.259610 15458 storage/store_snapshot.go:818 [n1,s1,r21/1:/{Table/53/1/6-Max}] streamed snapshot to (n2,s2):?: kv pairs: 9, log entries: 0, rate-limit: 8.0 MiB/sec, 0.00s
I190804 05:07:54.260012 15935 storage/replica_raftstorage.go:827 [n2,s2,r21/?:{-}] applying PREEMPTIVE snapshot at index 12 (id=7f1039fa, encoded size=344, 1 rocksdb batches, 0 log entries)
I190804 05:07:54.260276 15935 storage/replica_raftstorage.go:833 [n2,s2,r21/?:/{Table/53/1/6-Max}] applied PREEMPTIVE snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I190804 05:07:54.260803 15458 storage/replica_command.go:1188 [n1,s1,r21/1:/{Table/53/1/6-Max}] change replicas (ADD_REPLICA (n2,s2):2): existing descriptor r21:/{Table/53/1/6-Max} [(n1,s1):1, next=2, gen=1]
I190804 05:07:54.264073 15458 storage/replica_raft.go:289 [n1,s1,r21/1:/{Table/53/1/6-Max}] proposing ADD_REPLICA((n2,s2):2): updated=(n1,s1):1,(n2,s2):2 next=3
I190804 05:07:54.321334 16219 storage/replica_command.go:1188 [n2,s2,r21/2:/{Table/53/1/6-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): existing descriptor r21:/{Table/53/1/6-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=2]
I190804 05:07:54.331682 16219 storage/replica_raft.go:289 [n2,s2,r21/2:/{Table/53/1/6-Max}] proposing REMOVE_REPLICA((n1,s1):1): updated=(n2,s2):2 next=3
I190804 05:07:54.343320 16306 storage/store.go:2530 [n1,replicaGC,s1,r21/1:/{Table/53/1/6-Max}] removing replica r21/1
I190804 05:07:54.343549 16306 storage/replica_destroy.go:146 [n1,replicaGC,s1,r21/1:/{Table/53/1/6-Max}] removed 10 (5+5) keys in 0ms [clear=0ms commit=0ms]
I190804 05:07:54.343333 15458 sql/event_log.go:130 [n1,client=127.0.0.1:58644,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:sql.defaults.results_buffer.size Value:0 User:root}
```
Please assign, take a look and update the issue accordingly.
|
non_process
|
teamcity failed test testdrainingprocessorswallowsuncertaintyerror the following tests appear to have failed on master test testdrainingprocessorswallowsuncertaintyerror dummy false testdrainingprocessorswallowsuncertaintyerror testdrainingprocessorswallowsuncertaintyerror dummy true you may want to check testdrainingprocessorswallowsuncertaintyerror dummy false fail test testdrainingprocessorswallowsuncertaintyerror dummy false test ended in panic stdout gossip gossip go node has connected to cluster via gossip storage stores go wrote node addresses to persistent storage gossip gossip go node has connected to cluster via gossip storage stores go wrote node addresses to persistent storage gossip gossip go node has connected to cluster via gossip storage stores go wrote node addresses to persistent storage server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net server node go health alerts detected alerts server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net server node go health alerts detected alerts server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net storage replica consistency go triggering stats recomputation to resolve delta of containsestimates true lastupdatenanos intentage gcbytesage livebytes livecount keybytes keycount valbytes valcount intentbytes intentcount sysbytes syscount server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net server node go health alerts detected alerts server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net server node go health alerts detected alerts server 
status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net testdrainingprocessorswallowsuncertaintyerror dummy true istsqlrun tablereader go github com cockroachdb cockroach pkg sql distsqlrun indexjoiner start go src github com cockroachdb cockroach pkg sql distsqlrun indexjoiner go github com cockroachdb cockroach pkg sql distsqlrun sortallprocessor start go src github com cockroachdb cockroach pkg sql distsqlrun sorter go github com cockroachdb cockroach pkg sql distsqlrun processorbase run go src github com cockroachdb cockroach pkg sql distsqlrun processors go github com cockroachdb cockroach pkg sql distsqlrun flow run go src github com cockroachdb cockroach pkg sql distsqlrun flow go github com cockroachdb cockroach pkg sql distsqlplanner run go src github com cockroachdb cockroach pkg sql distsql running go github com cockroachdb cockroach pkg sql distsqlplanner planandrun go src github com cockroachdb cockroach pkg sql distsql running go github com cockroachdb cockroach pkg sql connexecutor execwithdistsqlengine go src github com cockroachdb cockroach pkg sql conn executor exec go github com cockroachdb cockroach pkg sql connexecutor dispatchtoexecutionengine go src github com cockroachdb cockroach pkg sql conn executor exec go github com cockroachdb cockroach pkg sql connexecutor execstmtinopenstate go src github com cockroachdb cockroach pkg sql conn executor exec go github com cockroachdb cockroach pkg sql connexecutor execstmt go src github com cockroachdb cockroach pkg sql conn executor exec go github com cockroachdb cockroach pkg sql connexecutor execcmd go src github com cockroachdb cockroach pkg sql conn executor go github com cockroachdb cockroach pkg sql connexecutor run go src github com cockroachdb cockroach pkg 
sql conn executor go github com cockroachdb cockroach pkg sql internalexecutorimpl initconnex go src github com cockroachdb cockroach pkg sql internal go created by github com cockroachdb cockroach pkg sql internalexecutorimpl initconnex go src github com cockroachdb cockroach pkg sql internal go goroutine github com cockroachdb cockroach pkg sql stats refresher start go src github com cockroachdb cockroach pkg sql stats automatic stats go github com cockroachdb cockroach pkg util stop stopper runasynctask go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runasynctask go src github com cockroachdb cockroach pkg util stop stopper go rand go random seed testdrainingprocessorswallowsuncertaintyerror ore server node go node started with engine s and attributes server server go could not start goroutine dumper worker due to directory to store dumps could not be determined storage stores go wrote node addresses to persistent storage server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections sql event log go event node join target info descriptor nodeid address attrs locality region test dc serverversion buildtag alpha startedat localityaddress clusterid startedat lastup server server update go no need to upgrade cluster already at the newest version sql event log go event create database target info databasename test statement create database if not exists test user root sql event log go event create table target info tablename test public t statement create table test public t x primary key user root storage replica command go initiating a split of this range at key table manual storage store snapshot go sending preemptive snapshot at applied index storage store snapshot go streamed snapshot to kv 
pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica existing descriptor table max storage replica raft go proposing add replica updated next storage replica command go change replicas remove replica existing descriptor table max storage replica raft go proposing remove replica updated next storage store go removing replica storage replica destroy go removed keys in sql event log go event set cluster setting target info settingname sql defaults results buffer size value user root please assign take a look and update the issue accordingly
| 0
|
5,031
| 7,851,543,527
|
IssuesEvent
|
2018-06-20 12:06:34
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
How to parse multiple logs for multiple hosts
|
log-processing question
|
Hey Guys,
we are running a couple of sites on one of our servers. Each of them has one location for the access logs. I was wondering if it is possible to use a command like this to combine all log files into one output.
> goaccess --log-format=COMBINED /sites/**/logs/access.log
Inside /sites/ there are a couple of folders, each with another subfolder and the mentioned log files. What would be the best approach to combine all of those and also display the hostname inside the goaccess dashboard?
Here is an example of one of those access logs. I added the hostname at the beginning to identify the host more easily.
> www.domain.com 79.234.48.148 - - [15/Jun/2018:10:17:10 +0200] "GET /dsgvo-und-webseiten/ HTTP/2.0" 200 23023 "https://www.google.de/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36" "4.04"
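A small merge script can do the per-host prefixing before the logs ever reach goaccess. This is only a sketch under the `/sites/<host>/logs/access.log` layout described above — the site folder names are assumed to match the hostnames, as in the example line:

```python
from pathlib import Path

def combine_logs(site_root):
    """Merge every access.log under site_root, prefixing each line with the
    name of its site folder (assumed to be the hostname, as in the example
    above) so one goaccess report can still tell the hosts apart."""
    lines = []
    for log in sorted(Path(site_root).glob("*/logs/access.log")):
        # parts[-3] is the site folder, e.g. "www.domain.com" in
        # /sites/www.domain.com/logs/access.log
        host = log.parts[-3]
        for line in log.read_text().splitlines():
            lines.append(f"{host} {line}")
    return "\n".join(lines)
```

The merged output can then be fed to goaccess with a log format whose first token is `%v` (goaccess's virtual-host specifier), so the dashboard can group hits per host.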
|
1.0
|
How to parse multiple logs for multiple hosts - Hey Guys,
we are running a couple of sites on one of our servers. Each of them has one location for the access logs. I was wondering if it is possible to use a command like this to combine all log files into one output.
> goaccess --log-format=COMBINED /sites/**/logs/access.log
Inside /sites/ there are a couple of folders, each with another subfolder and the mentioned log files. What would be the best approach to combine all of those and also display the hostname inside the goaccess dashboard?
Here is an example of one of those access logs. I added the hostname at the beginning to identify the host more easily.
> www.domain.com 79.234.48.148 - - [15/Jun/2018:10:17:10 +0200] "GET /dsgvo-und-webseiten/ HTTP/2.0" 200 23023 "https://www.google.de/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36" "4.04"
|
process
|
how to parse multiple logs for multiple hosts hey guys we are running a couple of sites on one of our servers each of them has one location for the access logs i was wondering if it is possible to use a command like this to combine all log files to one output goaccess log format combined sites logs access log inside sites there are a couple of folders with another subfolder and the mentioned log files what would be the best attempt to combine all of those and display also the hostname inside the goaccess dashboard here is an example of one of those access logs i added the hostname at the beginning to identify the host easier get dsgvo und webseiten http mozilla windows nt applewebkit khtml like gecko chrome safari
| 1
|
516,707
| 14,986,724,195
|
IssuesEvent
|
2021-01-28 21:39:23
|
ChainSafe/gossamer
|
https://api.github.com/repos/ChainSafe/gossamer
|
closed
|
Create set of Polkadot.js test scripts
|
Priority: 2 - High approved
|
<!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
- Scripts should list each of the available Polkadot.js methods and have an example of executing each call
-
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
-
## Possible Solution
<!---
Not obligatory, but this is the place to suggest the underlying cause and
possible fix for the bug, if you have one, or ideas on how to implement the
fix. We'll be sure to credit your ideas in the commit log, or better yet,
submit a PR and you'll get credit for the whole thing.
-->
-
## Steps to Reproduce
<!---
This is the most important information you can give us for a bug report.
Without good information, it will take much longer to resolve your issue!
The best strategy here is to assume the maintainer reading this just started
working on the project yesterday.
If possible, please provide a link to a live example, or an unambiguous set of
steps to reproduce this bug. Include code to reproduce, if relevant.
-->
1.
2.
3.
4.
## Specification
<!---
Example specification (feel free to copy and paste if applicable or delete the
specification section if a specification is not applicable):
- go version: `1.13.7`
- gossamer version: `development`
- gossamer commit tag: NA
- gossamer commit hash: NA
- operating system: Ubuntu 19.10
- additional links: NA
-->
- go version:
- gossamer version:
- gossamer commit tag:
- gossamer commit hash:
- operating system:
- additional links:
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [x] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
|
1.0
|
Create set of Polkadot.js test scripts - <!---
PLEASE READ CAREFULLY
-->
## Expected Behavior
- Scripts should list each of the available Polkadot.js methods and have an example of executing each call
-
## Current Behavior
<!---
If describing a bug, tell us what happens instead of the expected behavior.
If suggesting a change or an improvement, explain the difference between your
suggestion and current behavior.
-->
-
## Possible Solution
<!---
Not obligatory, but this is the place to suggest the underlying cause and
possible fix for the bug, if you have one, or ideas on how to implement the
fix. We'll be sure to credit your ideas in the commit log, or better yet,
submit a PR and you'll get credit for the whole thing.
-->
-
## Steps to Reproduce
<!---
This is the most important information you can give us for a bug report.
Without good information, it will take much longer to resolve your issue!
The best strategy here is to assume the maintainer reading this just started
working on the project yesterday.
If possible, please provide a link to a live example, or an unambiguous set of
steps to reproduce this bug. Include code to reproduce, if relevant.
-->
1.
2.
3.
4.
## Specification
<!---
Example specification (feel free to copy and paste if applicable or delete the
specification section if a specification is not applicable):
- go version: `1.13.7`
- gossamer version: `development`
- gossamer commit tag: NA
- gossamer commit hash: NA
- operating system: Ubuntu 19.10
- additional links: NA
-->
- go version:
- gossamer version:
- gossamer commit tag:
- gossamer commit hash:
- operating system:
- additional links:
## Checklist
<!---
Each empty square brackets below is a checkbox. Replace [ ] with [x] to check
the box after completing the task.
--->
- [x] I have read [CODE_OF_CONDUCT](https://github.com/ChainSafe/gossamer/blob/development/.github/CODE_OF_CONDUCT.md) and [CONTRIBUTING](https://github.com/ChainSafe/gossamer/blob/development/.github/CONTRIBUTING.md)
- [x] I have provided as much information as possible and necessary
- [ ] I am planning to submit a pull request to fix this issue myself
|
non_process
|
create set of polkadot js test scripts please read carefully expected behavior scripts should list each of the available polkadot js methods and have an example of executing each of the call current behavior if describing a bug tell us what happens instead of the expected behavior if suggesting a change or an improvement explain the difference between your suggestion and current behavior possible solution not obligatory but this is the place to suggest the underlying cause and possible fix for the bug if you have one or ideas on how to implement the fix we ll be sure to credit your ideas in the commit log or better yet submit a pr and you ll get credit for the whole thing steps to reproduce this is the most important information you can give us for a bug report without good information it will take much longer to resolve your issue the best strategy here is to assume the maintainer reading this just started working on the project yesterday if possible please provide a link to a live example or an unambiguous set of steps to reproduce this bug include code to reproduce if relevant specification example specification feel free to copy and paste if applicable or delete the specification section if a specification is not applicable go version gossamer version development gossamer commit tag na gossamer commit hash na operating system ubuntu additional links na go version gossamer version gossamer commit tag gossamer commit hash operating system additional links checklist each empty square brackets below is a checkbox replace with to check the box after completing the task i have read and i have provided as much information as possible and necessary i am planning to submit a pull request to fix this issue myself
| 0
|
22,760
| 15,436,389,617
|
IssuesEvent
|
2021-03-07 12:51:43
|
teambit/bit
|
https://api.github.com/repos/teambit/bit
|
closed
|
Move from flow to Typescript
|
area/infrastructure type/feature
|
### Tasks list
- [x] rename files from .js to .ts
- [x] rename e2e files from .js to .ts
- [x] remove flow plugins from babel config
- [x] remove flow babel plugins from package.json dependencies
- [x] add ts plugins to babel
- [x] create tsconfig file
- [x] configure babel to work with ts
- [x] tests need to be configured to let Babel/register be aware of .ts extension.
- [x] add ts coverage script
- [ ] add ts coverage minimal rate for CI
- [x] remove flow from all npm scripts
- [ ] eslint needs to be set with typescript rules/plugins. :white_check_mark: :x:
- [x] remove flow eslint plugins from package.json dependencies
- [x] prettier needs to be changed to work with typescript. :white_check_mark: :x:
- [x] remove @flow annotations
- [x] remove all $FlowFixMe
- [x] configure a new task for the CI to run "tsc" for type checking.
- [x] configure tsc to run only on a set of files that are clean.
- [ ] try to clean as much as possible typescript errors.
- [x] improve eslint-prettier-ts integration (eslint-prettier support old ts version only) - check how it affects lint-staged as well
- [x] make sure lint is green
- [x] make sure all unit-tests are passed.
- [x] make sure all e2e-tests are passed.
|
1.0
|
Move from Flow to TypeScript - ### Tasks list
- [x] rename files from .js to .ts
- [x] rename e2e files from .js to .ts
- [x] remove flow plugins from babel config
- [x] remove flow babel plugins from package.json dependencies
- [x] add ts plugins to babel
- [x] create tsconfig file
- [x] configure babel to work with ts
- [x] tests need to be configured to let Babel/register be aware of .ts extension.
- [x] add ts coverage script
- [ ] add ts coverage minimal rate for CI
- [x] remove flow from all npm scripts
- [ ] eslint needs to be set with typescript rules/plugins. :white_check_mark: :x:
- [x] remove flow eslint plugins from package.json dependencies
- [x] prettier needs to be changed to work with typescript. :white_check_mark: :x:
- [x] remove @flow annotations
- [x] remove all $FlowFixMe
- [x] configure a new task for the CI to run "tsc" for type checking.
- [x] configure tsc to run only on a set of files that are clean.
- [ ] try to clean as much as possible typescript errors.
- [x] improve eslint-prettier-ts integration (eslint-prettier support old ts version only) - check how it affects lint-staged as well
- [x] make sure lint is green
- [x] make sure all unit-tests are passed.
- [x] make sure all e2e-tests are passed.
|
non_process
|
move from flow to typescript tasks list rename files from js to ts rename files from js to ts remove flow plugins from babel config remove flow babel plugins from package json dependencies add ts plugins to babel create tsconfig file configure babel to work with ts tests need to be configured to let babel register be aware of ts extension add ts coverage script add ts coverage minimal rate for ci remove flow from all npm scripts eslint needs to be set with typescript rules plugins white check mark x remove flow eslint plugins from package json dependencies prettier needs to be changed to work with typescript white check mark x remove flow annotations remove all flowfixme configure a new task for the ci to run tsc for type checking configure tsc to run only on a set of files that are clean try to clean as much as possible typescript errors improve eslint prettier ts integration eslint prettier support old ts version only check how it affects lint staged as well make sure lint is green make sure all unit tests are passed make sure all tests are passed
| 0
|
13,954
| 16,737,202,624
|
IssuesEvent
|
2021-06-11 04:22:45
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
reopened
|
Change term - MaterialSample
|
Class - MaterialSample Controversial Extensions Process - needs Task Group Task Group - MaterialSample Term - change normative
|
## Change term
* Submitter: @jegelewicz
* Justification (why is this change necessary?): The definition of MaterialSample is essentially the same as that for PreservedSpecimen. Members of the Arctos Working Group feel that these two terms are currently interchangeable. See https://github.com/ArctosDB/arctos/issues/2432 for further discussion.
From https://dwc.tdwg.org/terms/#materialsample
> MaterialSample | info
> -- | --
> Definition | A physical result of a sampling (or subsampling) event. In biological collections, the material sample is typically collected, and either preserved or destructively processed.
> Examples | A whole organism preserved in a collection. A part of an organism isolated for some purpose. A soil sample. A marine microbial sample.
From https://dwc.tdwg.org/terms/#preservedspecimen
> PreservedSpecimen | info
> -- | --
> Definition | A specimen that has been preserved.
> Comments |
> Examples | A plant on an herbarium sheet. A cataloged lot of fish in a jar.
Given the above, we propose that MaterialSample should be more specific to something less than what might be considered a "voucher" in order to delineate it from PreservedSpecimen.
* Proponents (who needs this change): Arctos Working Group
Proposed new attributes of the term:
* Term name (in lowerCamelCase): MaterialSample (no change)
* Organized in Class (e.g. Location, Taxon):
* Definition of the term: **A physical result of a subsampling event. In biological collections, the material sample is typically collected as a subsample from a preserved or living organism, and either preserved or destructively processed. In geological and environmental collections the material sample is typically collected as a subsample of a larger geologic or environmental construct.**
* Usage comments (recommendations regarding content, etc.):
* Examples: **A part of an organism isolated for some purpose. A tissue sample. A soil sample. A marine microbial sample.**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/MaterialSample-2018-09-06 (added by @tucotuco)
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit (added by @tucotuco)
Note: all of the above is my interpretation of the Arctos Working Group conversation.
|
1.0
|
Change term - MaterialSample - ## Change term
* Submitter: @jegelewicz
* Justification (why is this change necessary?): The definition of MaterialSample is essentially the same as that for PreservedSpecimen. Members of the Arctos Working Group feel that these two terms are currently interchangeable. See https://github.com/ArctosDB/arctos/issues/2432 for further discussion.
From https://dwc.tdwg.org/terms/#materialsample
> MaterialSample | info
> -- | --
> Definition | A physical result of a sampling (or subsampling) event. In biological collections, the material sample is typically collected, and either preserved or destructively processed.
> Examples | A whole organism preserved in a collection. A part of an organism isolated for some purpose. A soil sample. A marine microbial sample.
From https://dwc.tdwg.org/terms/#preservedspecimen
> PreservedSpecimen | info
> -- | --
> Definition | A specimen that has been preserved.
> Comments |
> Examples | A plant on an herbarium sheet. A cataloged lot of fish in a jar.
Given the above, we propose that MaterialSample should be more specific to something less than what might be considered a "voucher" in order to delineate it from PreservedSpecimen.
* Proponents (who needs this change): Arctos Working Group
Proposed new attributes of the term:
* Term name (in lowerCamelCase): MaterialSample (no change)
* Organized in Class (e.g. Location, Taxon):
* Definition of the term: **A physical result of a subsampling event. In biological collections, the material sample is typically collected as a subsample from a preserved or living organism, and either preserved or destructively processed. In geological and environmental collections the material sample is typically collected as a subsample of a larger geologic or environmental construct.**
* Usage comments (recommendations regarding content, etc.):
* Examples: **A part of an organism isolated for some purpose. A tissue sample. A soil sample. A marine microbial sample.**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/MaterialSample-2018-09-06 (added by @tucotuco)
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit (added by @tucotuco)
Note: all of the above is my interpretation of the Arctos Working Group conversation.
|
process
|
change term materialsample change term submitter jegelewicz justification why is this change necessary the definition of materialsample is essentially the same as that for preservedspecimen members of the arctos working group feel that these two terms are currently interchangeable see for further discussion from materialsample info definition a physical result of a sampling or subsampling event in biological collections the material sample is typically collected and either preserved or destructively processed examples a whole organism preserved in a collection a part of an organism isolated for some purpose a soil sample a marine microbial sample from preservedspecimen info definition a specimen that has been preserved comments Β examples a plant on an herbarium sheet a cataloged lot of fish in a jar given the above we propose that materialsample should be more specific to something less than what might be considered a voucher in order to delineate it from preservedspecimen proponents who needs this change arctos working group proposed new attributes of the term term name in lowercamelcase materialsample no change organized in class e g location taxon definition of the term a physical result of a subsampling event in biological collections the material sample is typically collected as a subsample from a preserved or living organism and either preserved or destructively processed in geological and environmental collections the material sample is typically collected as a subsample of a larger geologic or environmental construct usage comments recommendations regarding content etc examples a part of an organism isolated for some purpose a tissue sample a soil sample a marine microbial sample refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable added by tucotuco abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset 
units unit added by tucotuco note all of the above is my interpretation of the arctos working group conversation
| 1
|
618,213
| 19,429,394,365
|
IssuesEvent
|
2021-12-21 10:09:28
|
bounswe/2021SpringGroup4
|
https://api.github.com/repos/bounswe/2021SpringGroup4
|
closed
|
Frontend: Add Map Functionality for Location Selection on Event Create Page
|
Priority: High Status: Completed Type: Development Frontend
|
Google Maps will be added to the Event Creation Page. Users should be able to select a location from the map. A marker should be placed on the clicked place; the latitude and longitude values and the address should be sent to the Backend api events endpoint
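On the submit side, the clicked coordinates only need to be packaged into a request for the events endpoint. A minimal sketch follows; the `/api/events` path, the payload field names, and the token auth header are illustrative assumptions, since the issue only says the latitude, longitude and address go to the backend events endpoint:

```python
import json
from urllib import request

def build_event_location_request(base_url, token, latitude, longitude, address):
    """Package the coordinates picked on the map into a POST request for the
    backend. Endpoint path, field names and auth header are assumptions for
    illustration; match them to the real events API."""
    payload = json.dumps(
        {"latitude": latitude, "longitude": longitude, "address": address}
    ).encode("utf-8")
    return request.Request(
        f"{base_url}/api/events",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Token {token}"},
        method="POST",
    )
```

The caller would then send it with `urllib.request.urlopen(...)` once the marker is placed.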
|
1.0
|
Frontend: Add Map Functionality for Location Selection on Event Create Page - Google Maps will be added to the Event Creation Page. Users should be able to select a location from the map. A marker should be placed on the clicked place; the latitude and longitude values and the address should be sent to the Backend api events endpoint
|
non_process
|
frontend add map functionality for location selection on event create page google maps will be added to the event creation page users should be able to select location from map a marker should be placed on the clicked place latitude and longitude values and address should be sent to the backend api events endpoint
| 0
|
598,840
| 18,256,815,298
|
IssuesEvent
|
2021-10-03 06:52:40
|
joonas-yoon/boj-extended
|
https://api.github.com/repos/joonas-yoon/boj-extended
|
closed
|
:number: format bug (partial fix)
|
🐛 bug priority: 5
|
<!-- 👋 Hello! Thank you for the idea suggesting a ✨new feature✨. -->
# Details
<!-- Briefly describe what the feature is -->
<!-- It is also fine to mention why you felt it was needed. -->

- A numeric placeholder is applied via `:number:`, but for subtask problems the total count should also be displayed, like 12/100, and I am not sure how to configure that part.

If I turn the setting off, subtask problems are displayed as follows.

And this is how I currently have it configured.

Thank you.
This issue was split off from #35 and filed separately.
|
1.0
|
:number: format bug (partial fix) - <!-- 👋 Hello! Thank you for the idea suggesting a ✨new feature✨. -->
# Details
<!-- Briefly describe what the feature is -->
<!-- It is also fine to mention why you felt it was needed. -->

- A numeric placeholder is applied via `:number:`, but for subtask problems the total count should also be displayed, like 12/100, and I am not sure how to configure that part.

If I turn the setting off, subtask problems are displayed as follows.

And this is how I currently have it configured.

Thank you.
This issue was split off from #35 and filed separately.
|
non_process
|
number format bug partial fix details a numeric placeholder is applied via number but for subtask problems the total count should also be displayed like and i am not sure how to configure that part if i turn the setting off subtask problems are displayed as follows and this is how i currently have it configured thank you this issue was split off from and filed separately
| 0
|
69,584
| 22,550,278,395
|
IssuesEvent
|
2022-06-27 04:19:42
|
beefproject/beef
|
https://api.github.com/repos/beefproject/beef
|
reopened
|
Can't hook from within Firefox chrome zone
|
Defect Core Priority Low medium effort
|
A BeEF hook injected into a Firefox extension will not hook. Fix this.
Blocked on #875
|
1.0
|
Can't hook from within Firefox chrome zone - A BeEF hook injected into a Firefox extension will not hook. Fix this.
Blocked on #875
|
non_process
|
can t hook from within firefox chrome zone a beef hook injected into a firefox extension will not hook fix this blocked on
| 0
|
132,055
| 18,266,064,816
|
IssuesEvent
|
2021-10-04 08:36:05
|
artsking/linux-3.0.35_CVE-2020-15436_withPatch
|
https://api.github.com/repos/artsking/linux-3.0.35_CVE-2020-15436_withPatch
|
closed
|
CVE-2016-2187 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed
|
security vulnerability
|
## CVE-2016-2187 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35_CVE-2020-15436_withPatch/commit/594a70cb9871ddd73cf61197bb1a2a1b1777a7ae">594a70cb9871ddd73cf61197bb1a2a1b1777a7ae</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/tablet/gtco.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/tablet/gtco.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/tablet/gtco.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The gtco_probe function in drivers/input/tablet/gtco.c in the Linux kernel through 4.5.2 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) via a crafted endpoints value in a USB device descriptor.
<p>Publish Date: 2016-05-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2187>CVE-2016-2187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-2187">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-2187</a></p>
<p>Release Date: 2016-05-02</p>
<p>Fix Resolution: v4.6-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-2187 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2016-2187 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35_CVE-2020-15436_withPatch/commit/594a70cb9871ddd73cf61197bb1a2a1b1777a7ae">594a70cb9871ddd73cf61197bb1a2a1b1777a7ae</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/tablet/gtco.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/tablet/gtco.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/tablet/gtco.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The gtco_probe function in drivers/input/tablet/gtco.c in the Linux kernel through 4.5.2 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) via a crafted endpoints value in a USB device descriptor.
<p>Publish Date: 2016-05-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2187>CVE-2016-2187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-2187">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-2187</a></p>
<p>Release Date: 2016-05-02</p>
<p>Fix Resolution: v4.6-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers input tablet gtco c drivers input tablet gtco c drivers input tablet gtco c vulnerability details the gtco probe function in drivers input tablet gtco c in the linux kernel through allows physically proximate attackers to cause a denial of service null pointer dereference and system crash via a crafted endpoints value in a usb device descriptor publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
39,191
| 19,723,958,385
|
IssuesEvent
|
2022-01-13 17:59:07
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
closed
|
version `2112231523001` breaks amp-toolbox
|
Type: Bug P0: Drop Everything WG: performance
|
### Description
The new release version `2112231523001` breaks an integration in the amp-toolbox repository. Specifically, this commit https://github.com/ampproject/amphtml/commit/4df29d5cb8c054a42fafc8cfe0e07259c678754d, as the toolbox relies on `latestVersion`
### Reproduction Steps
use the toolbox auto importer feature when optimizing a document on the 2.x branch
### Relevant Logs
_No response_
### Browser(s) Affected
_No response_
### OS(s) Affected
_No response_
### Device(s) Affected
_No response_
### AMP Version Affected
2112231523001
|
True
|
version `2112231523001` breaks amp-toolbox - ### Description
The new release version `2112231523001` breaks an integration in the amp-toolbox repository. Specifically, this commit https://github.com/ampproject/amphtml/commit/4df29d5cb8c054a42fafc8cfe0e07259c678754d, as the toolbox relies on `latestVersion`
### Reproduction Steps
use the toolbox auto importer feature when optimizing a document on the 2.x branch
### Relevant Logs
_No response_
### Browser(s) Affected
_No response_
### OS(s) Affected
_No response_
### Device(s) Affected
_No response_
### AMP Version Affected
2112231523001
|
non_process
|
version breaks amp toolbox description the new release version breaks an integration in the amp toolbox repository specifically this commit as the toolbox relies on latestversion reproduction steps use the toolbox auto importer feature when optimizing a document on the x branch relevant logs no response browser s affected no response os s affected no response device s affected no response amp version affected
| 0
|
12,702
| 15,078,506,140
|
IssuesEvent
|
2021-02-05 08:49:51
|
ovh/public-cloud-roadmap
|
https://api.github.com/repos/ovh/public-cloud-roadmap
|
closed
|
ISO 27001 compliance
|
AI Training Block Storage Compute Data Processing GPU ML Serving Managed Kubernetes Service Network Object Storage Security & Compliance
|
As a OVHcloud public cloud user
I want all my services (MKS, MPR and all IaaS it orchestrates) to be ISO 27001 certified
so that I can communicate on this to my end users to confirm the high level of security of the cloud service I host my apps on.
Note :
We currently target a public-cloud-wide certification
|
1.0
|
ISO 27001 compliance - As a OVHcloud public cloud user
I want all my services (MKS, MPR and all IaaS it orchestrates) to be ISO 27001 certified
so that I can communicate on this to my end users to confirm the high level of security of the cloud service I host my apps on.
Note :
We currently target a public-cloud-wide certification
|
process
|
iso compliance as a ovhcloud public cloud user i want all my services mks mpr and all iaas it orchestrates to be iso certified so that i can communicate on this to my end users to confirm the high level of security of the cloud service i host my apps on note we currently target a public cloud wide certification
| 1
|
6,573
| 9,658,912,637
|
IssuesEvent
|
2019-05-20 12:15:31
|
Gelbpunkt/IdleRPG
|
https://api.github.com/repos/Gelbpunkt/IdleRPG
|
closed
|
Less prefixes
|
enhancement multiprocessing
|
Right now, every process loads all prefixes into memory. We should check for the guild ID to be in bot.guilds in the for loop that adds the prefixes to bot.prefixes
|
1.0
|
Less prefixes - Right now, every process loads all prefixes into memory. We should check for the guild ID to be in bot.guilds in the for loop that adds the prefixes to bot.prefixes
|
process
|
less prefixes right now every process loads all prefixes into memory we should check for the guild id to be in bot guilds in the for loop that adds the prefixes to bot prefixes
| 1
|
17,627
| 23,444,737,586
|
IssuesEvent
|
2022-08-15 18:24:24
|
0xffset/rOSt
|
https://api.github.com/repos/0xffset/rOSt
|
closed
|
Thread Sleep and Yield support
|
help wanted syscalls processes
|
Threads should be able to sleep and yield.
We can do this by:
1. Splitting the threads in the scheduler into viable for running and not viable
2. Having the threads have some kind of state, e.g. Sleeping, NotStarted, Running etc.
3. Having the scheduler move from one list to the other when ok
|
1.0
|
Thread Sleep and Yield support - Threads should be able to sleep and yield.
We can do this by:
1. Splitting the threads in the scheduler into viable for running and not viable
2. Having the threads have some kind of state, e.g. Sleeping, NotStarted, Running etc.
3. Having the scheduler move from one list to the other when ok
|
process
|
thread sleep and yield support threads should be able to sleep and yield we can do this by splitting the threads in the scheduler into viable for running and not viable having the threads have some kind of state e g sleeping notstarted running etc having the scheduler move from one list to the other when ok
| 1
|
32,302
| 12,102,284,555
|
IssuesEvent
|
2020-04-20 16:27:04
|
AOSC-Dev/aosc-os-abbs
|
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
|
closed
|
chromium, google-chrome, opera, vivaldi: security update to 80.0.3987.149
|
security to-stable upgrade
|
<!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2020-6422, CVE-2020-6424, CVE-2020-6425, CVE-2020-6426, CVE-2020-6427, CVE-2020-6428, CVE-2020-6429, CVE-2019-20503, CVE-2020-6449
**Other security advisory IDs:** ASA-202003-12
**Descriptions:**
- CVE-2020-6422: Use after free in WebGL. Reported by David Manouchehri on 2020-02-13
- CVE-2020-6424: Use after free in media. Reported by Sergei Glazunov of Google Project Zero on 2019-12-05
- CVE-2020-6425: Insufficient policy enforcement in extensions. Reported by Sergei Glazunov of Google Project Zero on 2019-12-06
- CVE-2020-6426: Inappropriate implementation in V8. Reported by Avihay Cohen @ SeraphicAlgorithms on 2020-02-16
- CVE-2020-6427: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-02-25
- CVE-2020-6428: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-03-02
- CVE-2020-6429: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-03-02
- CVE-2019-20503: Out of bounds read in usersctplib. Reported by Natalie Silvanovich of Google Project Zero on 2020-03-06
- CVE-2020-6449: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-03-09
**Architectural progress (Chromium):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
**Architectural progress (Google Chrome):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
**Architectural progress (Opera):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
**Architectural progress (Vivaldi):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
True
|
chromium, google-chrome, opera, vivaldi: security update to 80.0.3987.149 - <!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2020-6422, CVE-2020-6424, CVE-2020-6425, CVE-2020-6426, CVE-2020-6427, CVE-2020-6428, CVE-2020-6429, CVE-2019-20503, CVE-2020-6449
**Other security advisory IDs:** ASA-202003-12
**Descriptions:**
- CVE-2020-6422: Use after free in WebGL. Reported by David Manouchehri on 2020-02-13
- CVE-2020-6424: Use after free in media. Reported by Sergei Glazunov of Google Project Zero on 2019-12-05
- CVE-2020-6425: Insufficient policy enforcement in extensions. Reported by Sergei Glazunov of Google Project Zero on 2019-12-06
- CVE-2020-6426: Inappropriate implementation in V8. Reported by Avihay Cohen @ SeraphicAlgorithms on 2020-02-16
- CVE-2020-6427: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-02-25
- CVE-2020-6428: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-03-02
- CVE-2020-6429: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-03-02
- CVE-2019-20503: Out of bounds read in usersctplib. Reported by Natalie Silvanovich of Google Project Zero on 2020-03-06
- CVE-2020-6449: Use after free in audio. Reported by Man Yue Mo of Semmle Security Research Team on 2020-03-09
**Architectural progress (Chromium):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
**Architectural progress (Google Chrome):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
**Architectural progress (Opera):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
**Architectural progress (Vivaldi):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
non_process
|
chromium google chrome opera vivaldi security update to cve ids cve cve cve cve cve cve cve cve cve other security advisory ids asa descriptions cve use after free in webgl reported by david manouchehri on cve use after free in media reported by sergei glazunov of google project zero on cve insufficient policy enforcement in extensions reported by sergei glazunov of google project zero on cve inappropriate implementation in reported by avihay cohen seraphicalgorithms on cve use after free in audio reported by man yue mo of semmle security research team on cve use after free in audio reported by man yue mo of semmle security research team on cve use after free in audio reported by man yue mo of semmle security research team on cve out of bounds read in usersctplib reported by natalie silvanovich of google project zero on cve use after free in audio reported by man yue mo of semmle security research team on architectural progress chromium armel architectural progress google chrome architectural progress opera architectural progress vivaldi
| 0
|
5,052
| 7,860,884,633
|
IssuesEvent
|
2018-06-21 21:33:29
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Create a library structure for post_processing (Trac #2)
|
Migrated from Trac enhancement fasching@uwm.edu post_processing
|
Several files are used repeatedly by post_processing scripts.
These files include:
header_read.m
read_grads_hoc_endian.m
convert.m
It would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/2
```json
{
"status": "closed",
"changetime": "2009-05-16T10:11:18",
"description": "Several files are used repeatedly by post_processing scripts.\n\nThese files include:\n\nheader_read.m\nread_grads_hoc_endian.m\nconvert.m\n\nIt would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )",
"reporter": "fasching@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1242468678000000",
"component": "post_processing",
"summary": "Create a library structure for post_processing",
"priority": "minor",
"keywords": "matlab, conversion",
"time": "2009-05-01T21:14:16",
"milestone": "",
"owner": "fasching@uwm.edu",
"type": "enhancement"
}
```
|
1.0
|
Create a library structure for post_processing (Trac #2) - Several files are used repeatedly by post_processing scripts.
These files include:
header_read.m
read_grads_hoc_endian.m
convert.m
It would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/2
```json
{
"status": "closed",
"changetime": "2009-05-16T10:11:18",
"description": "Several files are used repeatedly by post_processing scripts.\n\nThese files include:\n\nheader_read.m\nread_grads_hoc_endian.m\nconvert.m\n\nIt would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )",
"reporter": "fasching@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1242468678000000",
"component": "post_processing",
"summary": "Create a library structure for post_processing",
"priority": "minor",
"keywords": "matlab, conversion",
"time": "2009-05-01T21:14:16",
"milestone": "",
"owner": "fasching@uwm.edu",
"type": "enhancement"
}
```
|
process
|
create a library structure for post processing trac several files are used repeatedly by post processing scripts these files include header read m read grads hoc endian m convert m it would be useful if these each had only one version in one place e g post processing library attachments migrated from json status closed changetime description several files are used repeatedly by post processing scripts n nthese files include n nheader read m nread grads hoc endian m nconvert m n nit would be useful if these each had only one version in one place e g post processing library reporter fasching uwm edu cc resolution verified by v larson ts component post processing summary create a library structure for post processing priority minor keywords matlab conversion time milestone owner fasching uwm edu type enhancement
| 1
|
8,128
| 11,307,240,185
|
IssuesEvent
|
2020-01-18 19:32:24
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
v2: CSS files empty in case data is defined in .sassrc.js for scss files
|
:bug: Bug CSS Preprocessing
|
# bug report
When using a `sassrc.js` file and define a `data` config, only the code from the configuration is used but the real file is ignored.
## Configuration (.babelrc, package.json, cli command)
.sassrc.js
```js
module.exports = {
data: "$bg: red;"
}
```
## Expected Behavior
Should merge the content of `data` together with the content of the imported file.
## Current Behavior
It only takes the content from the `data` config.
## Possible Solution
This looks like a compatibility issue with node-sass and I think this would be worth to support on dart-sass. Therefore I opened an issue on their repo: https://github.com/sass/dart-sass/issues/864
## Context
While trying to migrate to v2 all my css files are empty.
## Code Sample
See https://github.com/sass/dart-sass/issues/864
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | v2 beta 2.1
| Node | v12
|
1.0
|
v2: CSS files empty in case data is defined in .sassrc.js for scss files - # bug report
When using a `sassrc.js` file and define a `data` config, only the code from the configuration is used but the real file is ignored.
## Configuration (.babelrc, package.json, cli command)
.sassrc.js
```js
module.exports = {
data: "$bg: red;"
}
```
## Expected Behavior
Should merge the content of `data` together with the content of the imported file.
## Current Behavior
It only takes the content from the `data` config.
## Possible Solution
This looks like a compatibility issue with node-sass and I think this would be worth to support on dart-sass. Therefore I opened an issue on their repo: https://github.com/sass/dart-sass/issues/864
## Context
While trying to migrate to v2 all my css files are empty.
## Code Sample
See https://github.com/sass/dart-sass/issues/864
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | v2 beta 2.1
| Node | v12
|
process
|
css files empty in case data is defined in sassrc js for scss files bug report when using a sassrc js file and define a data config only the code from the configuration is used but the real file is ignored configuration babelrc package json cli command sassrc js js module exports data bg red expected behavior should merge the content of data together with the content of the imported file current behavior it only takes the content from the data config possible solution this looks like a compatibility issue with node sass and i think this would be worth to support on dart sass therefore i opened an issue on their repo context while trying to migrate to all my css files are empty code sample see your environment software version s parcel beta node
| 1
|
4,271
| 7,189,518,469
|
IssuesEvent
|
2018-02-02 14:19:21
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
Parcel & React & Per-Component SCSS/CSS
|
#Feature CSS Preprocessing Help Wanted
|
I'm pretty sure this might not be a feature-request and already possible but I just don't know how to do it, if it's not currently a doable here it is:
Could we set up an environment with parcel where each component has its own, compartmentalized, SCSS style file that do not interfere with each other (E.G. you can even have the same class names in two files and they won't apply to both elements on your Webapp that use them, each component will receive its designated one.)
I've seen this done before in an angular 4 project and I know it used to be possible with webpack : https://javascriptplayground.com/css-modules-webpack-react/
but I'd like to accomplish this with parcel instead.
I've followed this guide :
https://www.valentinog.com/blog/tutorial-react-parcel-bundler/
to set up Parcel-React. (and I added node-sass and my scss are working fine)
|
1.0
|
Parcel & React & Per-Component SCSS/CSS - I'm pretty sure this might not be a feature-request and already possible but I just don't know how to do it, if it's not currently a doable here it is:
Could we set up an environment with parcel where each component has its own, compartmentalized, SCSS style file that do not interfere with each other (E.G. you can even have the same class names in two files and they won't apply to both elements on your Webapp that use them, each component will receive its designated one.)
I've seen this done before in an angular 4 project and I know it used to be possible with webpack : https://javascriptplayground.com/css-modules-webpack-react/
but I'd like to accomplish this with parcel instead.
I've followed this guide :
https://www.valentinog.com/blog/tutorial-react-parcel-bundler/
to set up Parcel-React. (and I added node-sass and my scss are working fine)
|
process
|
parcel react per component scss css i m pretty sure this might not be a feature request and already possible but i just don t know how to do it if it s not currently a doable here it is could we set up an environment with parcel where each component has its own compartmentalized scss style file that do not interfere with each other e g you can even have the same class names in two files and they won t apply to both elements on your webapp that use them each component will receive its designated one i ve seen this done before in an angular project and i know it used to be possible with webpack but i d like to accomplish this with parcel instead i ve followed this guide to set up parcel react and i added node sass and my scss are working fine
| 1
|
722
| 3,210,728,618
|
IssuesEvent
|
2015-10-06 06:09:14
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
On the wf-base build, when sending mail, search for enum{[*]} tags, where "*" is the id of a field of type enum, whose value should be taken and substituted for the given tag
|
active bug hi priority In process of testing test version
|
Section 2, item 4. The ability to send in the emails not the enum's id but its name!
https://docs.google.com/document/d/1fUJlMptp0npeXNShMwZedfcqnhfz6mhyDcw7qoxtZqU/edit#
|
1.0
|
On the wf-base build, when sending mail, search for enum{[*]} tags, where "*" is the id of a field of type enum, whose value should be taken and substituted for the given tag - Section 2, item 4. The ability to send in the emails not the enum's id but its name!
https://docs.google.com/document/d/1fUJlMptp0npeXNShMwZedfcqnhfz6mhyDcw7qoxtZqU/edit#
|
process
|
on the wf base build when sending mail search for enum tags where the id of a field of type enum should have its value taken and substituted for the given tag section item the ability to send in the emails not the enum id but its name
| 1
|
17,621
| 23,441,719,894
|
IssuesEvent
|
2022-08-15 15:29:28
|
ArneBinder/pie-utils
|
https://api.github.com/repos/ArneBinder/pie-utils
|
closed
|
adding reversed relations
|
document processor
|
Add a document processor that allows to add reversed relations. This may profit from the [previous implementation](https://github.com/ArneBinder/pytorch-ie-sam-template/blob/main/src/document_processors/relation.py#L135-L164). This should also collect relevant statistics about original and added relations.
|
1.0
|
adding reversed relations - Add a document processor that allows to add reversed relations. This may profit from the [previous implementation](https://github.com/ArneBinder/pytorch-ie-sam-template/blob/main/src/document_processors/relation.py#L135-L164). This should also collect relevant statistics about original and added relations.
|
process
|
adding reversed relations add a document processor that allows to add reversed relations this may profit from the this should also collect relevant statistics about original and added relations
| 1
|
22,211
| 30,762,181,736
|
IssuesEvent
|
2023-07-29 21:11:58
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 2.25.111 has 3 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "2.25.111",
"result": {
"issues": 3,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-2.25.111/robloxpyc/installationmanager.py:19",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-2.25.111/robloxpyc/installationmanager.py:26",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-2.25.111/robloxpyc/installationmanager.py:79",
"code": " subprocess.call([\"npm\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmptb_j9kxv/roblox-pyc"
}
}```
|
1.0
|
roblox-pyc 2.25.111 has 3 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "2.25.111",
"result": {
"issues": 3,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-2.25.111/robloxpyc/installationmanager.py:19",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-2.25.111/robloxpyc/installationmanager.py:26",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-2.25.111/robloxpyc/installationmanager.py:79",
"code": " subprocess.call([\"npm\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmptb_j9kxv/roblox-pyc"
}
}```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc robloxpyc installationmanager py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc robloxpyc installationmanager py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc robloxpyc installationmanager py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmptb roblox pyc
| 1
|
815,636
| 30,565,338,046
|
IssuesEvent
|
2023-07-20 17:19:46
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
opened
|
Infer IP address for serving Brave Ads if a user has joined Brave Rewards
|
enhancement priority/P3 QA/Yes release-notes/exclude feature/ads OS/Desktop
|
- [ ] Infer IP address for serving Brave Ads if a user has joined Brave Rewards
- [ ] Fallback to existing business logic if a user has not joined Brave Rewards
|
1.0
|
Infer IP address for serving Brave Ads if a user has joined Brave Rewards - - [ ] Infer IP address for serving Brave Ads if a user has joined Brave Rewards
- [ ] Fallback to existing business logic if a user has not joined Brave Rewards
|
non_process
|
infer ip address for serving brave ads if a user has joined brave rewards infer ip address for serving brave ads if a user has joined brave rewards fallback to existing business logic if a user has not joined brave rewards
| 0
|
15,763
| 19,912,599,339
|
IssuesEvent
|
2022-01-25 18:45:16
|
MunchBit/MunchLove
|
https://api.github.com/repos/MunchBit/MunchLove
|
opened
|
Delivery driver gets £1 for every mile travelled to consumers address.
|
feature Payment Process
|
**Title**
Delivery driver gets £1 for every mile travelled to consumers address.
**Description**
Delivery driver gets £1 for every mile travelled to consumers address. The miles are rounded up to the nearest mile. The distance to the Consumer is predetermined using google api shortest path algorithm. The price of the delivery should be configurable
|
1.0
|
Delivery driver gets £1 for every mile travelled to consumers address. - **Title**
Delivery driver gets £1 for every mile travelled to consumers address.
**Description**
Delivery driver gets £1 for every mile travelled to consumers address. The miles are rounded up to the nearest mile. The distance to the Consumer is predetermined using google api shortest path algorithm. The price of the delivery should be configurable
|
process
|
delivery driver gets £ for every mile travelled to consumers address title delivery driver gets £ for every mile travelled to consumers address description delivery driver gets £ for every mile travelled to consumers address the miles are rounded up to the nearest mile the distance to the consumer is predetermined using google api shortest path algorithm the price of the delivery should be configurable
| 1
|
147,083
| 23,162,927,663
|
IssuesEvent
|
2022-07-29 19:59:57
|
Endless-Creation-32nd/refill-front
|
https://api.github.com/repos/Endless-Creation-32nd/refill-front
|
closed
|
[Feat]: Bookmark, comment features
|
feature design
|
### Feature description
- You can leave a comment on your own or another person's transcription post.
- You can bookmark another person's transcription post.
- Clicking a user's avatar (image) navigates to that member's profile page.
### Work items
- [x] Comment-posting API integration
- [x] Transcription post bookmark API integration
- [x] Comment UI implementation
- [x] Navigate to the transcription detail view when a transcription is clicked in the group list
- [x] Implement comment and bookmark features in the transcription detail view
- [x] Implement `CustomAvatar` so that it navigates to the corresponding user's profile
- [x] Personal transcription upload implementation..
|
1.0
|
[Feat]: Bookmark, comment features - ### Feature description
- You can leave a comment on your own or another person's transcription post.
- You can bookmark another person's transcription post.
- Clicking a user's avatar (image) navigates to that member's profile page.
### Work items
- [x] Comment-posting API integration
- [x] Transcription post bookmark API integration
- [x] Comment UI implementation
- [x] Navigate to the transcription detail view when a transcription is clicked in the group list
- [x] Implement comment and bookmark features in the transcription detail view
- [x] Implement `CustomAvatar` so that it navigates to the corresponding user's profile
- [x] Personal transcription upload implementation..
|
non_process
|
bookmark comment features feature description you can leave a comment on your own or another person s transcription post you can bookmark another person s transcription post clicking a user s avatar image navigates to that member s profile page work items comment posting api integration transcription post bookmark api integration comment ui implementation navigate to the transcription detail view when a transcription is clicked in the group list implement comment and bookmark features in the transcription detail view implement customavatar so that it navigates to the corresponding user s profile personal transcription upload implementation
| 0
|
60,725
| 17,023,503,660
|
IssuesEvent
|
2021-07-03 02:21:46
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
osm2pgsql adds all changeset tags to the first parsed node
|
Component: utils Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 3.58pm, Monday, 9th November 2009]**
While parsing the planet file, osm2pgsql makes a list of all tags found under <changeset> elements, then bulk-adds them to the first <node> element.
A possible fix would be to clear the tag list when the end of a <changeset> element is encountered:
/* File osm2pgsql.c */
void EndElement(const xmlChar *name)
{
...
} else if (xmlStrEqual(name, BAD_CAST "changeset")) {
/* Add this code */
resetList(&tags);
/* ignore */
} ...
|
1.0
|
osm2pgsql adds all changeset tags to the first parsed node - **[Submitted to the original trac issue database at 3.58pm, Monday, 9th November 2009]**
While parsing the planet file, osm2pgsql makes a list of all tags found under <changeset> elements, then bulk-adds them to the first <node> element.
A possible fix would be to clear the tag list when the end of a <changeset> element is encountered:
/* File osm2pgsql.c */
void EndElement(const xmlChar *name)
{
...
} else if (xmlStrEqual(name, BAD_CAST "changeset")) {
/* Add this code */
resetList(&tags);
/* ignore */
} ...
|
non_process
|
adds all changeset tags to the first parsed node while parsing the planet file makes a list of all tags found under elements then bulk adds them to the first element a possible fix would be to clear the tag list when the end of a element is encountered file c void endelement const xmlchar name else if xmlstrequal name bad cast changeset add this code resetlist tags ignore
| 0
|
118,725
| 11,987,065,308
|
IssuesEvent
|
2020-04-07 20:29:44
|
trustbloc/edge-service
|
https://api.github.com/repos/trustbloc/edge-service
|
closed
|
REST APIs need OpenAPI annotations
|
documentation
|
- add swagger annotations for OpenAPI.
- add Makefile target to generate OpenAPI/swagger documentation.
|
1.0
|
REST APIs need OpenAPI annotations - - add swagger annotations for OpenAPI.
- add Makefile target to generate OpenAPI/swagger documentation.
|
non_process
|
rest apis need openapi annotations add swagger annotations for openapi add makefile target to generate openapi swagger documentation
| 0
|
382,630
| 26,507,968,350
|
IssuesEvent
|
2023-01-18 15:07:32
|
sovity/edc-extensions
|
https://api.github.com/repos/sovity/edc-extensions
|
closed
|
Readme: Remove manually determining SKI AKI
|
documentation
|
# Feature Request
## Type of Issue
Cleanup
## Describe the ideal solution or feature request
- [x] Remove Option B https://github.com/sovity/edc-extensions#option-2-manually-with-keystore-explorer
- [x] Keep other parts of the Readme.md consistent
- [x] Add hint with regards to necessary command line packages (openssl, keytool) and wsl/linux
|
1.0
|
Readme: Remove manually determining SKI AKI - # Feature Request
## Type of Issue
Cleanup
## Describe the ideal solution or feature request
- [x] Remove Option B https://github.com/sovity/edc-extensions#option-2-manually-with-keystore-explorer
- [x] Keep other parts of the Readme.md consistent
- [x] Add hint with regards to necessary command line packages (openssl, keytool) and wsl/linux
|
non_process
|
readme remove manually determining ski aki feature request type of issue cleanup describe the ideal solution or feature request remove option b keep other parts of the readme md consistent add hint with regards to necessary command line packages openssl keytool and wsl linux
| 0
|
20,715
| 27,410,860,631
|
IssuesEvent
|
2023-03-01 10:25:44
|
Deltares/Ribasim.jl
|
https://api.github.com/repos/Deltares/Ribasim.jl
|
opened
|
Resistance based (bi)furcation
|
physical process
|
Possible this already can be done with LinearLevelConnectors, but we need to confirm that we can accommodate bifurcations which are based on some sort of resistance
|
1.0
|
Resistance based (bi)furcation - Possible this already can be done with LinearLevelConnectors, but we need to confirm that we can accommodate bifurcations which are based on some sort of resistance
|
process
|
resistance based bi furcation possible this already can be done with linearlevelconnectors but we need to confirm that we can accommodate bifurcations which are based on some sort of resistance
| 1
|
19,740
| 26,087,574,167
|
IssuesEvent
|
2022-12-26 06:13:33
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
[5.2.0] When build tag have dbg, works well, but when have no dbg, it have problems.
|
more data needed type: support / not a bug (process)
|
```
Command: bazelisk build --tool_tag=ijwb:AndroidStudio
--output_groups=+android_deploy_info
--curses=no --color=yes --progress_in_terminal_title=no
-c opt --strip=always --config=android_arm64 -s --fission=no
--build_event_binary_file=/var/folders/pf/2v4q3ng17ls2wpxh7pjpdlr40000gn/T/intellij-bep-1a5aee30-dc0f-48d1-b7af-ffa092e45a13
--nobuild_event_binary_file_path_conversion
-- //mediapipe/examples/android/src/java/com/google/mediapipe/apps/faceeffect_self:glassview
```
```
Command: bazelisk build --tool_tag=ijwb:AndroidStudio
--output_groups=+android_deploy_info
--curses=no --color=yes --progress_in_terminal_title=no
-c opt --strip=always --config=android_arm64 -s --fission=no
-c dbg
--build_event_binary_file=/var/folders/pf/2v4q3ng17ls2wpxh7pjpdlr40000gn/T/intellij-bep-a467c873-642a-4961-8aef-0a0572ab0d56
--nobuild_event_binary_file_path_conversion
-- //mediapipe/examples/android/src/java/com/google/mediapipe/apps/faceeffect_self:glassview
```
The second command compare with the first one , it added -c dbg, and --build_event_binary_file difference.
This is my project code:
```
glGenTextures(1, &brdfLUTMap);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, brdfLUTMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 512, 512, 0, GL_RGB, GL_FLOAT, 0);
// be sure to set wrapping mode to GL_CLAMP_TO_EDGE
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// then re-configure capture framebuffer object and render screen-space quad with BRDF shader.
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, captureRBO);
checkGLError("--glFramebufferRenderbuffer ");
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, brdfLUTMap, 0);
glViewport(0, 0, 512, 512);
brdfShader.use();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```
If i use the first command , i can sampling data from brdfLUTMap, if i use the second command, can't sampling data from brdfLUTMap. it's very wired , because other textures have no problem using both two command.
|
1.0
|
[5.2.0] When build tag have dbg, works well, but when have no dbg, it have problems. - ```
Command: bazelisk build --tool_tag=ijwb:AndroidStudio
--output_groups=+android_deploy_info
--curses=no --color=yes --progress_in_terminal_title=no
-c opt --strip=always --config=android_arm64 -s --fission=no
--build_event_binary_file=/var/folders/pf/2v4q3ng17ls2wpxh7pjpdlr40000gn/T/intellij-bep-1a5aee30-dc0f-48d1-b7af-ffa092e45a13
--nobuild_event_binary_file_path_conversion
-- //mediapipe/examples/android/src/java/com/google/mediapipe/apps/faceeffect_self:glassview
```
```
Command: bazelisk build --tool_tag=ijwb:AndroidStudio
--output_groups=+android_deploy_info
--curses=no --color=yes --progress_in_terminal_title=no
-c opt --strip=always --config=android_arm64 -s --fission=no
-c dbg
--build_event_binary_file=/var/folders/pf/2v4q3ng17ls2wpxh7pjpdlr40000gn/T/intellij-bep-a467c873-642a-4961-8aef-0a0572ab0d56
--nobuild_event_binary_file_path_conversion
-- //mediapipe/examples/android/src/java/com/google/mediapipe/apps/faceeffect_self:glassview
```
The second command compare with the first one , it added -c dbg, and --build_event_binary_file difference.
This is my project code:
```
glGenTextures(1, &brdfLUTMap);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, brdfLUTMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 512, 512, 0, GL_RGB, GL_FLOAT, 0);
// be sure to set wrapping mode to GL_CLAMP_TO_EDGE
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// then re-configure capture framebuffer object and render screen-space quad with BRDF shader.
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, captureRBO);
checkGLError("--glFramebufferRenderbuffer ");
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, brdfLUTMap, 0);
glViewport(0, 0, 512, 512);
brdfShader.use();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```
If i use the first command , i can sampling data from brdfLUTMap, if i use the second command, can't sampling data from brdfLUTMap. it's very wired , because other textures have no problem using both two command.
|
process
|
when build tag have dbg works well but when have no dbg it have problems command bazelisk build tool tag ijwb androidstudio output groups android deploy info curses no color yes progress in terminal title no c opt strip always config android s fission no build event binary file var folders pf t intellij bep nobuild event binary file path conversion mediapipe examples android src java com google mediapipe apps faceeffect self glassview command bazelisk build tool tag ijwb androidstudio output groups android deploy info curses no color yes progress in terminal title no c opt strip always config android s fission no c dbg build event binary file var folders pf t intellij bep nobuild event binary file path conversion mediapipe examples android src java com google mediapipe apps faceeffect self glassview the second command compare with the first one it added c dbg and build event binary file difference this is my project code glgentextures brdflutmap glactivetexture gl glbindtexture gl texture brdflutmap gl texture gl gl rgb gl float be sure to set wrapping mode to gl clamp to edge gltexparameteri gl texture gl texture wrap s gl clamp to edge gltexparameteri gl texture gl texture wrap t gl clamp to edge gltexparameteri gl texture gl texture min filter gl linear gltexparameteri gl texture gl texture mag filter gl linear then re configure capture framebuffer object and render screen space quad with brdf shader glbindframebuffer gl framebuffer capturefbo glbindrenderbuffer gl renderbuffer capturerbo glrenderbufferstorage gl renderbuffer gl depth glframebufferrenderbuffer gl framebuffer gl depth attachment gl renderbuffer capturerbo checkglerror glframebufferrenderbuffer gl framebuffer gl color gl texture brdflutmap glviewport brdfshader use glclear gl color buffer bit gl depth buffer bit renderquad glbindframebuffer gl framebuffer if i use the first command i can sampling data from brdflutmap if i use the second command can t sampling data from brdflutmap it s very wired 
because other textures have no problem using both two command
| 1
|
17,187
| 22,768,705,028
|
IssuesEvent
|
2022-07-08 07:55:39
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion notice: GO:0039596 modulation by virus of host protein dephosphorylation
|
obsoletion multi-species process
|
Dear all,
The proposal has been made to obsolete
* GO:0039596 modulation by virus of host protein serine/threonine phosphatase activity and
* GO:0039596 modulation by virus of host protein dephosphorylation
There are no annotations or mappings to these terms; these terms are not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23641
Thanks, Pascale
|
1.0
|
Obsoletion notice: GO:0039596 modulation by virus of host protein dephosphorylation - Dear all,
The proposal has been made to obsolete
* GO:0039596 modulation by virus of host protein serine/threonine phosphatase activity and
* GO:0039596 modulation by virus of host protein dephosphorylation
There are no annotations or mappings to these terms; these terms are not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23641
Thanks, Pascale
|
process
|
obsoletion notice go modulation by virus of host protein dephosphorylation dear all the proposal has been made to obsolete go modulation by virus of host protein serine threonine phosphatase activity and go modulation by virus of host protein dephosphorylation there are no annotations or mappings to these terms these terms are not present in any subsets you can comment on the ticket thanks pascale
| 1
|
163,804
| 12,745,178,374
|
IssuesEvent
|
2020-06-26 13:49:53
|
Thy-Vipe/BeastsOfBermuda-issues
|
https://api.github.com/repos/Thy-Vipe/BeastsOfBermuda-issues
|
opened
|
[Bug] players that have the same display name will share the ID and profile picture
|
UI bug tester-team
|
_Originally written by **Cleafspear | 76561198077984700**_
Game Version: 1.1.955
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1080
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Map: Rival_Shores
*Expected Results:* players that have the same name while on the same server should be visually distinct on the tab menu/commands
*Actual Results:* when multiple players that share the exact same in game name join, they will share the same id and profile picture as the last person to join. usage of commands require usage of the steam id only to work
*Replication:* have multiple players with the same exact name join and open the tab menu.
|
1.0
|
[Bug] players that have the same display name will share the ID and profile picture - _Originally written by **Cleafspear | 76561198077984700**_
Game Version: 1.1.955
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1080
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Map: Rival_Shores
*Expected Results:* players that have the same name while on the same server should be visually distinct on the tab menu/commands
*Actual Results:* when multiple players that share the exact same in game name join, they will share the same id and profile picture as the last person to join. usage of commands require usage of the steam id only to work
*Replication:* have multiple players with the same exact name join and open the tab menu.
|
non_process
|
players that have the same display name will share the id and profile picture originally written by cleafspear game version system specs cpu brand intel r core tm cpu vendor genuineintel gpu brand nvidia geforce gtx gpu driver info unknown num cpu cores map rival shores expected results players that have the same name while on the same server should be visually distinct on the tab menu commands actual results when multiple players that share the exact same in game name join they will share the same id and profile picture as the last person to join usage of commands require usage of the steam id only to work replication have multiple players with the same exact name join and open the tab menu
| 0
|
559,691
| 16,568,892,381
|
IssuesEvent
|
2021-05-30 01:39:20
|
Parabeac/Parabeac-Core
|
https://api.github.com/repos/Parabeac/Parabeac-Core
|
opened
|
Figma Component Masters that aren't at the root of page don't get detected.
|
Figma High Priority bug
|
## Change Proposal
Replace Component (Symbol) Masters that are a child of some parent with Component Instances. Then Move the Component Master to the `Symbols` in PBDL.
### Module and Current Solution
Not sure how the current implementation works but if we're looking at root[0] like we do for Sketch, we will not find all of the Component Masters.
|
1.0
|
Figma Component Masters that aren't at the root of page don't get detected. - ## Change Proposal
Replace Component (Symbol) Masters that are a child of some parent with Component Instances. Then Move the Component Master to the `Symbols` in PBDL.
### Module and Current Solution
Not sure how the current implementation works but if we're looking at root[0] like we do for Sketch, we will not find all of the Component Masters.
|
non_process
|
figma component masters that aren t at the root of page don t get detected change proposal replace component symbol masters that are a child of some parent with component instances then move the component master to the symbols in pbdl module and current solution not sure how the current implementation works but if we re looking at root like we do for sketch we will not find all of the component masters
| 0
|
5,009
| 7,841,694,544
|
IssuesEvent
|
2018-06-18 20:29:18
|
googleapis/nodejs-common-grpc
|
https://api.github.com/repos/googleapis/nodejs-common-grpc
|
closed
|
Do a release
|
status: blocked type: process
|
The last release (0.6.1) was in March. A lot of changes have piled up.
Specifically, grpc@1.11.0 fails to install with Node 10 on a mac.
|
1.0
|
Do a release - The last release (0.6.1) was in March. A lot of changes have piled up.
Specifically, grpc@1.11.0 fails to install with Node 10 on a mac.
|
process
|
do a release the last release was in march a lot of changes have piled up specifically grpc fails to install with node on a mac
| 1
|
236,270
| 18,090,957,978
|
IssuesEvent
|
2021-09-22 01:31:59
|
fga-eps-mds/2021-1-Bot
|
https://api.github.com/repos/fga-eps-mds/2021-1-Bot
|
opened
|
Update the devops roadmap for sprint 8
|
documentation Time-PlusUltra devops
|
## Issue description
Add:
- migration to Jekyll
- automation of the documentation deploy
## Tasks:
- [ ] Update the devops roadmap
|
1.0
|
Update the devops roadmap for sprint 8 - ## Issue description
Add:
- migration to Jekyll
- automation of the documentation deploy
## Tasks:
- [ ] Update the devops roadmap
|
non_process
|
update the devops roadmap for sprint issue description add migration to jekyll automation of the documentation deploy tasks update the devops roadmap
| 0
|
4,333
| 7,242,199,094
|
IssuesEvent
|
2018-02-14 06:16:37
|
muflihun/residue
|
https://api.github.com/repos/muflihun/residue
|
closed
|
Long pending requests fail after dead client's key is reset
|
area: log-processing edge-case type: bug
|
### Details
If we have long running log queue and following things happen all at once:
* process hasn't finished before client is dead
* client has short life
* no request made to client (touch) in the mean time
* after client is dead and before integrity task is executed, request is made (this will reset the key)
* there are remaining requests in backlog
all the subsequent request in next backlog process will fail
This is very rare situation and we only found this out after crazy load testing and really hammering the server.
|
1.0
|
Long pending requests fail after dead client's key is reset - ### Details
If we have long running log queue and following things happen all at once:
* process hasn't finished before client is dead
* client has short life
* no request made to client (touch) in the mean time
* after client is dead and before integrity task is executed, request is made (this will reset the key)
* there are remaining requests in backlog
all the subsequent request in next backlog process will fail
This is very rare situation and we only found this out after crazy load testing and really hammering the server.
|
process
|
long pending requests fail after dead client s key is reset details if we have long running log queue and following things happen all at once process hasn t finished before client is dead client has short life no request made to client touch in the mean time after client is dead and before integrity task is executed request is made this will reset the key there are remaining requests in backlog all the subsequent request in next backlog process will fail this is very rare situation and we only found this out after crazy load testing and really hammering the server
| 1
|
11,258
| 14,040,237,634
|
IssuesEvent
|
2020-11-01 01:12:00
|
PPHubApp/PPHub-Feedback
|
https://api.github.com/repos/PPHubApp/PPHub-Feedback
|
closed
|
v2.4.10 - no map data.
GitHub ...
|
Bug 🐛 Processing 👨🏻‍💻🔧
|
no map data.
GitHub's little grid squares are not showing.
Runtime environment: iPhone XR - iOS14.1 - v2.4.10(189)
|
1.0
|
v2.4.10 - no map data.
GitHub ... - no map data.
GitHub's little grid squares are not showing.
Runtime environment: iPhone XR - iOS14.1 - v2.4.10(189)
|
process
|
no map data github no map data github s little grid squares are not showing runtime environment iphone xr
| 1
|
12,823
| 8,000,209,774
|
IssuesEvent
|
2018-07-22 13:20:13
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Reusing TinyMCE instances
|
Framework Performance [Component] TinyMCE
|
If we're going to have multiple TinyMCE instances (one or more for each block), it might be good to try to reuse one for all of the fields. The instance could be attached to the field that is currently focussed, undo levels could be disabled etc. Creating new blocks would also be a bit faster.
|
True
|
Reusing TinyMCE instances - If we're going to have multiple TinyMCE instances (one or more for each block), it might be good to try to reuse one for all of the fields. The instance could be attached to the field that is currently focussed, undo levels could be disabled etc. Creating new blocks would also be a bit faster.
|
non_process
|
reusing tinymce instances if we re going to have multiple tinymce instances one or more for each block it might be good to try to reuse one for all of the fields the instance could be attached to the field that is currently focussed undo levels could be disabled etc creating new blocks would also be a bit faster
| 0
|
18,849
| 24,763,209,509
|
IssuesEvent
|
2022-10-22 06:53:00
|
home-climate-control/dz
|
https://api.github.com/repos/home-climate-control/dz
|
closed
|
Economizer: implement InfluxDB integration
|
enhancement visualization instrumentation metrics InfluxDB telemetry process control reactive-only
|
Details will be published when the work is complete.
|
1.0
|
Economizer: implement InfluxDB integration - Details will be published when the work is complete.
|
process
|
economizer implement influxdb integration details will be published when the work is complete
| 1
|
19,122
| 25,171,944,273
|
IssuesEvent
|
2022-11-11 04:43:09
|
emily-writes-poems/emily-writes-poems-processing
|
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
|
closed
|
react: poem & details page
|
processing
|
have these at least functioning from the react page:
- [x] display all poems list
- [x] create new poem/details file
- [x] process poem file
- [x] remove poem
- [x] process details file
|
1.0
|
react: poem & details page - have these at least functioning from the react page:
- [x] display all poems list
- [x] create new poem/details file
- [x] process poem file
- [x] remove poem
- [x] process details file
|
process
|
react poem details page have these at least functioning from the react page display all poems list create new poem details file process poem file remove poem process details file
| 1
|
365,819
| 10,798,212,483
|
IssuesEvent
|
2019-11-06 09:35:56
|
digital-york/ncelp
|
https://api.github.com/repos/digital-york/ncelp
|
closed
|
Add 'accessibility' link in the footer of the Resource Portal
|
Done High priority enhancement
|
Email 2 October:
Frank - is it ok if you add an 'Accessibility' link to the footer of the NCELP Resource Portal? Perhaps bottom right before the Legal statement e.g. Accessibility | Legal statement @ University of York?
The link should go to this page: (https://ncelp.org/accessibility/)
|
1.0
|
Add 'accessibility' link in the footer of the Resource Portal - Email 2 October:
Frank - is it ok if you add an 'Accessibility' link to the footer of the NCELP Resource Portal? Perhaps bottom right before the Legal statement e.g. Accessibility | Legal statement @ University of York?
The link should go to this page: (https://ncelp.org/accessibility/)
|
non_process
|
add accessibility link in the footer of the resource portal email october frank is it ok if you add an accessibility link to the footer of the ncelp resource portal perhaps bottom right before the legal statement e g accessibility legal statement university of york the link should go to this page
| 0
|
6,419
| 9,522,800,593
|
IssuesEvent
|
2019-04-27 11:59:28
|
plazi/arcadia-project
|
https://api.github.com/repos/plazi/arcadia-project
|
opened
|
adding zookeys back issues 1 -50
|
Article processing treatment
|
in order to process the back issues 1- 50 of Zookeys, some work is needed. I listed this for easier initial discussion here https://docs.google.com/document/d/1pCkzTTM_uahQuJxF-rx1giZ1DUT41T9npQt9EtfW-qw/edit#
Once we agree and split this into issues, we can start adding issues here and add to the project https://github.com/plazi/arcadia-project/projects/3
|
1.0
|
adding zookeys back issues 1 -50 - in order to process the back issues 1- 50 of Zookeys, some work is needed. I listed this for easier initial discussion here https://docs.google.com/document/d/1pCkzTTM_uahQuJxF-rx1giZ1DUT41T9npQt9EtfW-qw/edit#
Once we agree and split this into issues, we can start adding issues here and add to the project https://github.com/plazi/arcadia-project/projects/3
|
process
|
adding zookeys back issues in order to process the back issues of zookeys some work is needed i listed this for easier initial discussion here once we agree and split this into issues we can start adding issues here and add to the project
| 1
|
20,852
| 27,631,945,693
|
IssuesEvent
|
2023-03-10 11:33:10
|
pwittchen/ReactiveSensors
|
https://api.github.com/repos/pwittchen/ReactiveSensors
|
opened
|
Automate deployment to Maven Central
|
enhancement release process
|
It should be possible to execute the following gradle commands on CI:
- `uploadArchives`
- `closeAndReleaseRepository`
I've already added secrets to this repo via GH secret configuration.
Things to be done:
- build project
- add private gpg key to the project during the build (`GPG_PRIVATE_KEY` secret)
- configure `signing.keyId` gradle param with `GPG_KEY_ID` secret
- configure `signing.password` gradle param with `GPG_PASSWORD` secret
- configure `signing.secretKeyRingFile` (it's defined locally on my machine) or a specific certificate (2nd point) - maybe this can be skipped, left empty or replaced with something else -> to be verified and tested
- configure `NEXUS_USERNAME` gradle param with `NEXUS_USERNAME` secret
- configure `NEXUS_PASSWORD` gradle param with `NEXUS_PASSWORD` secret
- run `uploadArchives` gradle command
- run `closeAndReleaseRepository` gradle command (this should be tested once command above will work because during tests we can upload and then remove artifacts without releasing them and when command above will fail, this one will fail too)
References:
- https://stackoverflow.com/questions/61096521/how-to-use-gpg-key-in-github-actions
|
1.0
|
Automate deployment to Maven Central - It should be possible to execute the following gradle commands on CI:
- `uploadArchives`
- `closeAndReleaseRepository`
I've already added secrets to this repo via GH secret configuration.
Things to be done:
- build project
- add private gpg key to the project during the build (`GPG_PRIVATE_KEY` secret)
- configure `signing.keyId` gradle param with `GPG_KEY_ID` secret
- configure `signing.password` gradle param with `GPG_PASSWORD` secret
- configure `signing.secretKeyRingFile` (it's defined locally on my machine) or a specific certificate (2nd point) - maybe this can be skipped, left empty or replaced with something else -> to be verified and tested
- configure `NEXUS_USERNAME` gradle param with `NEXUS_USERNAME` secret
- configure `NEXUS_PASSWORD` gradle param with `NEXUS_PASSWORD` secret
- run `uploadArchives` gradle command
- run `closeAndReleaseRepository` gradle command (this should be tested once command above will work because during tests we can upload and then remove artifacts without releasing them and when command above will fail, this one will fail too)
References:
- https://stackoverflow.com/questions/61096521/how-to-use-gpg-key-in-github-actions
|
process
|
automate deployment to maven central it should be possible to execute the following gradle commands on ci uploadarchives closeandreleaserepository i ve already added secrets to this repo via gh secret configuration things to be done build project add private gpg key to the project during the build gpg private key secret configure signing keyid gradle param with gpg key id secret configure signing password gradle param with gpg password secret configure signing secretkeyringfile it s defined locally on my machine or a specific certificate point maybe this can be skipped left empty or replaced with something else to be verified and tested configure nexus username gradle param with nexus username secret configure nexus password gradle param with nexus password secret run uploadarchives gradle command run closeandreleaserepository gradle command this should be tested once command above will work because during tests we can upload and then remove artifacts without releasing them and when command above will fail this one will fail too references
| 1
|
18,248
| 24,323,972,316
|
IssuesEvent
|
2022-09-30 13:18:53
|
km4ack/patmenu2
|
https://api.github.com/repos/km4ack/patmenu2
|
closed
|
Success notice in autopat broken
|
bug in process
|
The success notice of autopat isn't displaying correctly. This is due to a change in the Pat url scheme. The new url scheme includes a "&" character. This causes a failure in YAD when it tries to display the message. Need to modify the URL to remove the "&" before displaying the connection success message. See [this section](https://github.com/km4ack/patmenu2/blob/master/autopat#L208-L211) of code.
|
1.0
|
Success notice in autopat broken - The success notice of autopat isn't displaying correctly. This is due to a change in the Pat url scheme. The new url scheme includes a "&" character. This causes a failure in YAD when it tries to display the message. Need to modify the URL to remove the "&" before displaying the connection success message. See [this section](https://github.com/km4ack/patmenu2/blob/master/autopat#L208-L211) of code.
|
process
|
success notice in autopat broken the success notice of autopat isn t displaying correctly this is due to a change in the pat url scheme the new url scheme includes a character this causes a failure in yad when it tries to display the message need to modify the url to remove the before displaying the connection success message see of code
| 1
|
3,038
| 6,039,749,623
|
IssuesEvent
|
2017-06-10 06:58:43
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
opened
|
Removing a module causes unrecoverable Parse Error.
|
bug critical parse-tree-processing
|
Steps to reproduce.
1. New Workbook
2. Open VBE, allow auto-parse
3. Add module (either Standard or Class)
4. Allow the auto parse to complete
5. Remove the module without saving, using the PE context menu
6. Auto parse results in Parse Error. The VBE isn't frozen, but Excel needs to restart for RD to recover
|
1.0
|
Removing a module causes unrecoverable Parse Error. - Steps to reproduce.
1. New Workbook
2. Open VBE, allow auto-parse
3. Add module (either Standard or Class)
4. Allow the auto parse to complete
5. Remove the module without saving, using the PE context menu
6. Auto parse results in Parse Error. The VBE isn't frozen, but Excel needs to restart for RD to recover
|
process
|
removing a module causes unrecoverable parse error steps to reproduce new workbook open vbe allow auto parse add module either standard or class allow the auto parse to complete remove the module without saving using the pe context menu auto parse results in parse error the vbe isn t frozen but excel needs to restart for rd to recover
| 1
|
103,844
| 8,952,059,543
|
IssuesEvent
|
2019-01-25 15:35:02
|
ValveSoftware/steamvr_unity_plugin
|
https://api.github.com/repos/ValveSoftware/steamvr_unity_plugin
|
closed
|
Pose binding and 2018.3
|
Need Retest
|
Hello,
I am currently working on a VR project, using 2018.3.0b9.
I've bound multiple actions and also had the poses working for both controller, but since today the Poses are not send to unity. In the steam menu the controllers work fine. The boolean actions also still work, just the poses stopped working. Rebuild actions multiple times already, doesn't seem to fix it though.
Also I have a problem with the openvr package. Without the package, VR doesn't transfer to the steamvr application at all but the openvr_api script has errors complaining that there isn't a openvr_api.dll. I added the dll manually to the folder local Plugins folder, but now Unity complains that there are multiple dlls. Without the package VR doesn't work at all, without the other the script complains.
Thanks a lot in advance.
Best regards
|
1.0
|
Pose binding and 2018.3 - Hello,
I am currently working on a VR project, using 2018.3.0b9.
I've bound multiple actions and also had the poses working for both controllers, but since today the poses are not sent to unity. In the steam menu the controllers work fine. The boolean actions also still work, just the poses stopped working. I've rebuilt the actions multiple times already; that doesn't seem to fix it, though.
Also I have a problem with the openvr package. Without the package, VR doesn't transfer to the steamvr application at all, but the openvr_api script has errors complaining that there isn't an openvr_api.dll. I added the dll manually to the local Plugins folder, but now Unity complains that there are multiple dlls. Without the package VR doesn't work at all; without the other, the script complains.
Thanks a lot in advance.
Best regards
|
non_process
|
pose binding and hello i am currently working on a vr project using i ve bound multiple actions and also had the poses working for both controllers but since today the poses are not sent to unity in the steam menu the controllers work fine the boolean actions also still work just the poses stopped working rebuilt actions multiple times already doesn t seem to fix it though also i have a problem with the openvr package without the package vr doesn t transfer to the steamvr application at all but the openvr api script has errors complaining that there isn t an openvr api dll i added the dll manually to the local plugins folder but now unity complains that there are multiple dlls without the package vr doesn t work at all without the other the script complains thanks a lot in advance best regards
| 0
|
35,393
| 14,680,234,764
|
IssuesEvent
|
2020-12-31 09:27:22
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
list of speakers in a json
|
cognitive-services/svc cxp product-question speech-service/subsvc triaged
|
Would it be possible to get this list of available speaker by language in a more programmatic manner? Like in a JSON instead of a HTML web page?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4de08d1f-8114-7bef-7ef4-022f79c69742
* Version Independent ID: 4c0ceab9-24ed-de01-db30-31acf1de0f48
* Content: [Language support - Speech service - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech)
* Content Source: [articles/cognitive-services/Speech-Service/language-support.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cognitive-services/Speech-Service/language-support.md)
* Service: **cognitive-services**
* Sub-service: **speech-service**
* GitHub Login: @trevorbye
* Microsoft Alias: **trbye**
|
2.0
|
list of speakers in a json -
Would it be possible to get this list of available speaker by language in a more programmatic manner? Like in a JSON instead of a HTML web page?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4de08d1f-8114-7bef-7ef4-022f79c69742
* Version Independent ID: 4c0ceab9-24ed-de01-db30-31acf1de0f48
* Content: [Language support - Speech service - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#text-to-speech)
* Content Source: [articles/cognitive-services/Speech-Service/language-support.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cognitive-services/Speech-Service/language-support.md)
* Service: **cognitive-services**
* Sub-service: **speech-service**
* GitHub Login: @trevorbye
* Microsoft Alias: **trbye**
|
non_process
|
list of speakers in a json would it be possible to get this list of available speaker by language in a more programmatic manner like in a json instead of a html web page document details do not edit this section it is required for docs microsoft com github issue linking id version independent id content content source service cognitive services sub service speech service github login trevorbye microsoft alias trbye
| 0
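The feature request in the record above asks for a machine-readable voice list; the Speech service exposes a `voices/list`-style REST endpoint returning such a payload, though the endpoint name and the sample entries below are assumptions for illustration, not captured API output. A minimal sketch of grouping such a response by locale:

```python
from collections import defaultdict

def speakers_by_language(voices: list[dict]) -> dict[str, list[str]]:
    """Group voice entries (as returned by a voices/list-style endpoint) by locale."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for voice in voices:
        grouped[voice["Locale"]].append(voice["ShortName"])
    return dict(grouped)

# Illustrative payload shape only; field names mirror the JSON the issue asks for.
sample = [
    {"ShortName": "en-US-JennyNeural", "Locale": "en-US"},
    {"ShortName": "en-US-GuyNeural", "Locale": "en-US"},
    {"ShortName": "fr-FR-DeniseNeural", "Locale": "fr-FR"},
]
```

With the sample above, `speakers_by_language(sample)` yields one list of short names per locale, which is the programmatic view the HTML table lacked.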
|
170,364
| 20,862,630,196
|
IssuesEvent
|
2022-03-22 01:29:32
|
michaelweiss092/k3s
|
https://api.github.com/repos/michaelweiss092/k3s
|
opened
|
CVE-2021-44716 (High) detected in github.com/golang/net/http2-5ee1b9f4859acd2e99987ef94ec7a58427c53bef
|
security vulnerability
|
## CVE-2021-44716 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net/http2-5ee1b9f4859acd2e99987ef94ec7a58427c53bef</b></p></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>
Dependency Hierarchy:
- github.com/rancher/dynamiclistener/storage/kubernetes-v0.2.0 (Root Library)
- github.com/rancher/dynamiclistener-v0.2.0
- github.com/kubernetes/api/core/v1-v0.18.5
- k8s.io/apimachinery/pkg/apis/meta/v1-c1bd2c2a276f7fa89b8b8dc4e5b70e85d9bdc147
- github.com/kubernetes/apimachinery/pkg/watch-v0.19.0-beta.2
- github.com/kubernetes/apimachinery/pkg/util/net-v0.18.6
- :x: **github.com/golang/net/http2-5ee1b9f4859acd2e99987ef94ec7a58427c53bef** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net/http in Go before 1.16.12 and 1.17.x before 1.17.5 allows uncontrolled memory consumption in the header canonicalization cache via HTTP/2 requests.
<p>Publish Date: 2022-01-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716>CVE-2021-44716</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vc3p-29h2-gpcp">https://github.com/advisories/GHSA-vc3p-29h2-gpcp</a></p>
<p>Release Date: 2022-01-01</p>
<p>Fix Resolution: github.com/golang/net - 491a49abca63de5e07ef554052d180a1b5fe2d70</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"GO","packageName":"github.com/golang/net/http2","packageVersion":"5ee1b9f4859acd2e99987ef94ec7a58427c53bef","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"github.com/rancher/dynamiclistener/storage/kubernetes:v0.2.0;github.com/rancher/dynamiclistener:v0.2.0;k8s.io/api/core/v1:v1.18.2-k3s.1;k8s.io/apimachinery/pkg/apis/meta/v1:v1.18.2-k3s.1;k8s.io/apimachinery/pkg/watch:v1.18.2-k3s.1;k8s.io/apimachinery/pkg/util/net:v1.18.2-k3s.1;github.com/golang/net/http2:5ee1b9f4859acd2e99987ef94ec7a58427c53bef","isMinimumFixVersionAvailable":true,"minimumFixVersion":"github.com/golang/net - 491a49abca63de5e07ef554052d180a1b5fe2d70","isBinary":true}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-44716","vulnerabilityDetails":"net/http in Go before 1.16.12 and 1.17.x before 1.17.5 allows uncontrolled memory consumption in the header canonicalization cache via HTTP/2 requests.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-44716 (High) detected in github.com/golang/net/http2-5ee1b9f4859acd2e99987ef94ec7a58427c53bef - ## CVE-2021-44716 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net/http2-5ee1b9f4859acd2e99987ef94ec7a58427c53bef</b></p></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>
Dependency Hierarchy:
- github.com/rancher/dynamiclistener/storage/kubernetes-v0.2.0 (Root Library)
- github.com/rancher/dynamiclistener-v0.2.0
- github.com/kubernetes/api/core/v1-v0.18.5
- k8s.io/apimachinery/pkg/apis/meta/v1-c1bd2c2a276f7fa89b8b8dc4e5b70e85d9bdc147
- github.com/kubernetes/apimachinery/pkg/watch-v0.19.0-beta.2
- github.com/kubernetes/apimachinery/pkg/util/net-v0.18.6
- :x: **github.com/golang/net/http2-5ee1b9f4859acd2e99987ef94ec7a58427c53bef** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net/http in Go before 1.16.12 and 1.17.x before 1.17.5 allows uncontrolled memory consumption in the header canonicalization cache via HTTP/2 requests.
<p>Publish Date: 2022-01-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716>CVE-2021-44716</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vc3p-29h2-gpcp">https://github.com/advisories/GHSA-vc3p-29h2-gpcp</a></p>
<p>Release Date: 2022-01-01</p>
<p>Fix Resolution: github.com/golang/net - 491a49abca63de5e07ef554052d180a1b5fe2d70</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"GO","packageName":"github.com/golang/net/http2","packageVersion":"5ee1b9f4859acd2e99987ef94ec7a58427c53bef","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"github.com/rancher/dynamiclistener/storage/kubernetes:v0.2.0;github.com/rancher/dynamiclistener:v0.2.0;k8s.io/api/core/v1:v1.18.2-k3s.1;k8s.io/apimachinery/pkg/apis/meta/v1:v1.18.2-k3s.1;k8s.io/apimachinery/pkg/watch:v1.18.2-k3s.1;k8s.io/apimachinery/pkg/util/net:v1.18.2-k3s.1;github.com/golang/net/http2:5ee1b9f4859acd2e99987ef94ec7a58427c53bef","isMinimumFixVersionAvailable":true,"minimumFixVersion":"github.com/golang/net - 491a49abca63de5e07ef554052d180a1b5fe2d70","isBinary":true}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-44716","vulnerabilityDetails":"net/http in Go before 1.16.12 and 1.17.x before 1.17.5 allows uncontrolled memory consumption in the header canonicalization cache via HTTP/2 requests.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44716","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in github com golang net cve high severity vulnerability vulnerable library github com golang net go supplementary network libraries dependency hierarchy github com rancher dynamiclistener storage kubernetes root library github com rancher dynamiclistener github com kubernetes api core io apimachinery pkg apis meta github com kubernetes apimachinery pkg watch beta github com kubernetes apimachinery pkg util net x github com golang net vulnerable library vulnerability details net http in go before and x before allows uncontrolled memory consumption in the header canonicalization cache via http requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution github com golang net isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree github com rancher dynamiclistener storage kubernetes github com rancher dynamiclistener io api core io apimachinery pkg apis meta io apimachinery pkg watch io apimachinery pkg util net github com golang net isminimumfixversionavailable true minimumfixversion github com golang net isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails net http in go before and x before allows uncontrolled memory consumption in the header canonicalization cache via http requests vulnerabilityurl
| 0
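The CVSS v3 vectors quoted in this record and in the later jackson-databind record share the same exploitability submetrics (AV:N/AC:L/PR:N/UI:N, scope unchanged), and the published CVSS v3.1 base-score formula reproduces the quoted scores of 7.5 and 9.8. A minimal sketch, hard-coding that shared exploitability:

```python
# Metric weights from the CVSS v3.1 specification (scope unchanged).
AV_NETWORK, AC_LOW, PR_NONE, UI_NONE = 0.85, 0.77, 0.85, 0.85
CIA = {"None": 0.0, "Low": 0.22, "High": 0.56}

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest number, to one decimal place, >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(c: str, i: str, a: str) -> float:
    """Base score for an AV:N/AC:L/PR:N/UI:N, scope-unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss  # scope unchanged
    exploitability = 8.22 * AV_NETWORK * AC_LOW * PR_NONE * UI_NONE
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
```

`base_score("None", "None", "High")` matches this record's 7.5, and `base_score("High", "High", "High")` matches the 9.8 quoted for the jackson-databind vulnerability.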
|
103,873
| 12,977,422,791
|
IssuesEvent
|
2020-07-21 20:37:28
|
Opentrons/opentrons
|
https://api.github.com/repos/Opentrons/opentrons
|
closed
|
feat: PD Sensible default- Delay
|
:spider: SPDDRS protocol designer
|
# Overview
As a PD user I would like my advanced settings to have sensible defaults
# Implementation Details
- Time: 1s
- Tip position: Default is whatever the aspirate/dispense tip position is
|
1.0
|
feat: PD Sensible default- Delay - # Overview
As a PD user I would like my advanced settings to have sensible defaults
# Implementation Details
- Time: 1s
- Tip position: Default is whatever the aspirate/dispense tip position is
|
non_process
|
feat pd sensible default delay overview as a pd user i would like my advanced settings to have sensible defaults implementation details time tip position default is whatever the aspirate dispense tip position is
| 0
|
17,567
| 23,382,791,985
|
IssuesEvent
|
2022-08-11 11:05:28
|
hashicorp/terraform-cdk
|
https://api.github.com/repos/hashicorp/terraform-cdk
|
closed
|
Publish RC versions of prebuilt providers
|
enhancement providers priority/important-soon dev-process size/medium theme/construct-ecosystem
|
<!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. --->
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Currently we make it very hard for users of prebuilt providers to work on the rc version of the cdk. This makes it hard for them to test out fixes (not only to providers but also other parts). By pushing a `cdktf-provider-xyz@cdktf-0.8.rc1` version we could enable them to update and test fixes more easily. This becomes more relevant the more we have a construct ecosystem.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
- https://github.com/hashicorp/terraform-cdk/issues/1203
|
1.0
|
Publish RC versions of prebuilt providers - <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. --->
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Currently we make it very hard for users of prebuilt providers to work on the rc version of the cdk. This makes it hard for them to test out fixes (not only to providers but also other parts). By pushing a `cdktf-provider-xyz@cdktf-0.8.rc1` version we could enable them to update and test fixes more easily. This becomes more relevant the more we have a construct ecosystem.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
- https://github.com/hashicorp/terraform-cdk/issues/1203
|
process
|
publish rc versions of prebuilt providers community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description currently we make it very hard for users of prebuilt providers to work on the rc version of the cdk this makes it hard for them to test out fixes not only to providers but also other parts by pushing a cdktf provider xyz cdktf version we could enable them to update and test fixes more easily this becomes more relevant the more we have a construct ecosystem references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation
| 1
|
24,813
| 5,104,821,085
|
IssuesEvent
|
2017-01-05 03:22:34
|
coala/coala
|
https://api.github.com/repos/coala/coala
|
closed
|
Newcomers guide should reference to coala.io/devsetup
|
area/documentation difficulty/newcomer
|
We probably want to elaborate on that in step zero. Found by @satwikkansal
difficutly newcomer type/documentation
Opened via [gitter](https://gitter.im/coala/coala/?at=586b547c9d4cc4fc536df4fb) by @sils
|
1.0
|
Newcomers guide should reference to coala.io/devsetup -
We probably want to elaborate on that in step zero. Found by @satwikkansal
difficutly newcomer type/documentation
Opened via [gitter](https://gitter.im/coala/coala/?at=586b547c9d4cc4fc536df4fb) by @sils
|
non_process
|
newcomers guide should reference to coala io devsetup we probably want to elaborate on that in step zero found by satwikkansal difficutly newcomer type documentation opened via by sils
| 0
|
14,024
| 16,835,589,159
|
IssuesEvent
|
2021-06-18 11:36:01
|
ckeditor/ckeditor5
|
https://api.github.com/repos/ckeditor/ckeditor5
|
opened
|
[GHS] Applying attributes to existing features - image
|
domain:v4-compatibility squad:compat type:bug
|
## 📝 Provide detailed reproduction steps (if any)
Due to the complex model structure of the [Image](https://ckeditor.com/docs/ckeditor5/latest/features/image.html) feature, it's not possible to easily extend it with additional attributes. We will need to create separate integration for it.
---
If you'd like to see this fixed sooner, add a 👍 reaction to this post.
|
True
|
[GHS] Applying attributes to existing features - image - ## 📝 Provide detailed reproduction steps (if any)
Due to the complex model structure of the [Image](https://ckeditor.com/docs/ckeditor5/latest/features/image.html) feature, it's not possible to easily extend it with additional attributes. We will need to create separate integration for it.
---
If you'd like to see this fixed sooner, add a 👍 reaction to this post.
|
non_process
|
applying attributes to existing features image 📝 provide detailed reproduction steps if any due to the complex model structure of the feature it s not possible to easily extend it with additional attributes we will need to create separate integration for it if you d like to see this fixed sooner add a 👍 reaction to this post
| 0
|
17,076
| 22,575,028,811
|
IssuesEvent
|
2022-06-28 06:25:32
|
weiquany/KTVAnywhere
|
https://api.github.com/repos/weiquany/KTVAnywhere
|
closed
|
Get name for song from mp3 tags
|
feature: song management feature: song preprocessing
|
Automatically detect song name and artist from mp3 file when uploading to the application.
## User story
As a user,
* I do not want to manually name all the songs.
### Acceptance criteria
The application should:
- [x] Get song name and artist from mp3 tags if available
- [x] Get song name from file name if mp3 tags are not available
- [x] Prevent upload if no song file is selected
## Complexities
* Getting mp3 tags from song file
|
1.0
|
Get name for song from mp3 tags - Automatically detect song name and artist from mp3 file when uploading to the application.
## User story
As a user,
* I do not want to manually name all the songs.
### Acceptance criteria
The application should:
- [x] Get song name and artist from mp3 tags if available
- [x] Get song name from file name if mp3 tags are not available
- [x] Prevent upload if no song file is selected
## Complexities
* Getting mp3 tags from song file
|
process
|
get name for song from tags automatically detect song name and artist from file when uploading to the application user story as a user i do not want to manually name all the songs acceptance criteria the application should get song name and artist from tags if available get song name from file name if tags are not available prevent upload if no song file is selected complexities getting tags from song file
| 1
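The acceptance criteria in the record above describe a tag-first, filename-fallback rule for naming uploaded songs. A minimal sketch of that rule as pure logic; `tags` is simplified here to a plain string dict standing in for whatever an mp3 tag reader (e.g. mutagen, which actually returns list-valued fields) would provide, and the helper name is hypothetical:

```python
from pathlib import Path
from typing import Optional

def resolve_song_metadata(tags: Optional[dict], file_path: str) -> dict:
    """Prefer ID3-style tag fields; fall back to the file name for the title."""
    title = (tags or {}).get("title")
    artist = (tags or {}).get("artist", "Unknown artist")
    if not title:
        # No usable tags: derive the song name from the file name stem.
        title = Path(file_path).stem
    return {"songName": title, "artist": artist}
```

For example, an untagged upload of `/music/My Song.mp3` would be named "My Song", satisfying the second acceptance criterion.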
|
21,618
| 30,022,525,403
|
IssuesEvent
|
2023-06-27 01:34:54
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
marketplace.gcr.io/google/bazel is missing recent Bazel releases
|
P2 type: process team-OSS stale
|
I reported in https://github.com/GoogleCloudPlatform/container-definitions/issues/12037 but this is perhaps the more appropriate place:
`latest` release is currently 3.5.0 here https://console.cloud.google.com/gcr/images/cloud-marketplace/GLOBAL/google/bazel
|
1.0
|
marketplace.gcr.io/google/bazel is missing recent Bazel releases - I reported in https://github.com/GoogleCloudPlatform/container-definitions/issues/12037 but this is perhaps the more appropriate place:
`latest` release is currently 3.5.0 here https://console.cloud.google.com/gcr/images/cloud-marketplace/GLOBAL/google/bazel
|
process
|
marketplace gcr io google bazel is missing recent bazel releases i reported in but this is perhaps the more appropriate place latest release is currently here
| 1
|
346,465
| 10,412,557,057
|
IssuesEvent
|
2019-09-13 16:13:59
|
opentargets/platform
|
https://api.github.com/repos/opentargets/platform
|
closed
|
quantitative data for genes on which cell types they are expressed in
|
Kind: Data Kind: Enhancement Priority: Low Topic: Atlas
|
@elipapa commented on [Fri Jan 12 2018](https://github.com/opentargets/data_release/issues/55)
@elipapa commented on [Tue Feb 21 2017](https://github.com/opentargets/platform_issues/issues/9)
> I'd like to see quantitative data for genes on which cell types they are expressed in. For example, "Gene A" is expressed highly in "Cell type B" but not at all in "Cell type C".
---
@elipapa commented on [Mon Apr 24 2017](https://github.com/opentargets/platform_issues/issues/9#issuecomment-281212821)
Quantitative expression data could be extracted from http://www.immgen.org/databrowser/index.html for lymphocytes and http://servers.binf.ku.dk/bloodspot/ for other blood cells
---
@elipapa commented on [Mon Apr 24 2017](https://github.com/opentargets/platform_issues/issues/9#issuecomment-296716747)
@ckongEbi will ask/compare with expression atlas submission
---
@ckongEbi commented on [Tue Apr 25 2017](https://github.com/opentargets/platform_issues/issues/9#issuecomment-296963061)
@elipapa can you move this to data-providers repo https://github.com/opentargets/data-providers-docs/issues, i take it up with atlas
|
1.0
|
quantitative data for genes on which cell types they are expressed in - @elipapa commented on [Fri Jan 12 2018](https://github.com/opentargets/data_release/issues/55)
@elipapa commented on [Tue Feb 21 2017](https://github.com/opentargets/platform_issues/issues/9)
> I'd like to see quantitative data for genes on which cell types they are expressed in. For example, "Gene A" is expressed highly in "Cell type B" but not at all in "Cell type C".
---
@elipapa commented on [Mon Apr 24 2017](https://github.com/opentargets/platform_issues/issues/9#issuecomment-281212821)
Quantitative expression data could be extracted from http://www.immgen.org/databrowser/index.html for lymphocytes and http://servers.binf.ku.dk/bloodspot/ for other blood cells
---
@elipapa commented on [Mon Apr 24 2017](https://github.com/opentargets/platform_issues/issues/9#issuecomment-296716747)
@ckongEbi will ask/compare with expression atlas submission
---
@ckongEbi commented on [Tue Apr 25 2017](https://github.com/opentargets/platform_issues/issues/9#issuecomment-296963061)
@elipapa can you move this to data-providers repo https://github.com/opentargets/data-providers-docs/issues, i take it up with atlas
|
non_process
|
quantitative data for genes on which cell types they are expressed in elipapa commented on elipapa commented on i d like to see quantitative data for genes on which cell types they are expressed in for example gene a is expressed highly in cell type b but not at all in cell type c elipapa commented on quantitative expression data could be extracted from for lymphocytes and for other blood cells elipapa commented on ckongebi will ask compare with expression atlas submission ckongebi commented on elipapa can you move this to data providers repo i take it up with atlas
| 0
|
35,196
| 12,321,116,767
|
IssuesEvent
|
2020-05-13 08:11:48
|
tamirverthim/fitbit-api-example-java
|
https://api.github.com/repos/tamirverthim/fitbit-api-example-java
|
opened
|
CVE-2019-16335 (High) detected in jackson-databind-2.8.1.jar
|
security vulnerability
|
## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/tamirverthim/fitbit-api-example-java/commits/1d4a86820b5ccc9e51b82198be488c68e9299e40">1d4a86820b5ccc9e51b82198be488c68e9299e40</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.10"}],"vulnerabilityIdentifier":"CVE-2019-16335","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-16335 (High) detected in jackson-databind-2.8.1.jar - ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/tamirverthim/fitbit-api-example-java/commits/1d4a86820b5ccc9e51b82198be488c68e9299e40">1d4a86820b5ccc9e51b82198be488c68e9299e40</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
- Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
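The 9.8 base score can be reproduced from the metrics listed above. A minimal sketch, assuming the CVSS v3.0 base-score equations and their published metric weights (Network = 0.85, Low complexity = 0.77, None for PR/UI = 0.85, High C/I/A = 0.56); the class and method names here are hypothetical, not part of the report:

```java
// Sketch: recompute a CVSS v3.0 base score for a Scope: Unchanged vulnerability
// from the metric weights assumed above.
public class CvssCheck {
    // Assumed CVSS v3.0 weights for the metric values in this report.
    static final double AV_NETWORK = 0.85;
    static final double AC_LOW = 0.77;
    static final double PR_NONE = 0.85;
    static final double UI_NONE = 0.85;
    static final double CIA_HIGH = 0.56;

    static double baseScoreScopeUnchanged(double c, double i, double a) {
        // Impact sub-score: 1 - (1-C)(1-I)(1-A), scaled by 6.42 when scope is unchanged.
        double iss = 1 - (1 - c) * (1 - i) * (1 - a);
        double impact = 6.42 * iss;
        // Exploitability sub-score from the attack-related metrics.
        double exploitability = 8.22 * AV_NETWORK * AC_LOW * PR_NONE * UI_NONE;
        double raw = Math.min(impact + exploitability, 10.0);
        // CVSS "round up" to one decimal place.
        return Math.ceil(raw * 10.0) / 10.0;
    }

    public static void main(String[] args) {
        System.out.println(baseScoreScopeUnchanged(CIA_HIGH, CIA_HIGH, CIA_HIGH)); // 9.8
    }
}
```

With High/High/High impacts and the network/low/none/none exploitability metrics, the rounded-up sum lands on the 9.8 reported above.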
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
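Since jackson-databind arrives transitively through `spring-boot-starter-web` here, one way to apply the suggested fix is a `dependencyManagement` override in the project's `pom.xml`. This is only a sketch under the report's coordinates and fix resolution (2.9.10), not a snippet from the repository itself:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin the transitive jackson-databind to the patched release from the report. -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Afterwards, `mvn dependency:tree` should show 2.9.10 resolved in place of 2.8.1.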
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.10"}],"vulnerabilityIdentifier":"CVE-2019-16335","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm fitbit api example java pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve vulnerabilityurl
| 0
|