Column schema:

- `Unnamed: 0` — int64, 0 to 832k
- `id` — float64, 2.49B to 32.1B
- `type` — string, 1 class
- `created_at` — string, length 19
- `repo` — string, length 7 to 112
- `repo_url` — string, length 36 to 141
- `action` — string, 3 classes
- `title` — string, length 1 to 744
- `labels` — string, length 4 to 574
- `body` — string, length 9 to 211k
- `index` — string, 10 classes
- `text_combine` — string, length 96 to 211k
- `label` — string, 2 classes
- `text` — string, length 96 to 188k
- `binary_label` — int64, 0 or 1

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
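The schema above can be exercised with pandas. The sketch below is a minimal, hypothetical sample (the two records and their values are stand-ins mirroring the rows shown here, not the real dataset, which spans ids up to ~32.1B and bodies up to ~211k characters); it shows the relationship between `label` and `binary_label`:

```python
import pandas as pd

# Hypothetical sample mirroring a few columns of the schema above.
df = pd.DataFrame({
    "id": [5_843_584_760.0, 7_472_697_351.0],
    "type": ["IssuesEvent", "IssuesEvent"],
    "action": ["closed", "closed"],
    "label": ["non_process", "process"],
    "binary_label": [0, 1],
})

# binary_label is the numeric encoding of label: process -> 1, non_process -> 0.
process_issues = df[df["binary_label"] == 1]
print(process_issues["label"].tolist())  # -> ['process']
```

The same filter applied to the full frame would separate the process-labelled issues used as positives from the rest.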
44,535
| 5,843,584,760
|
IssuesEvent
|
2017-05-10 09:32:58
|
canonical-websites/www.ubuntu.com
|
https://api.github.com/repos/canonical-websites/www.ubuntu.com
|
closed
|
Top and bottom padding of footer should be the same (L screen)
|
Review: Design +1 Status: Review
|
Top seems bigger now. Reviewing /about section
|
1.0
|
Top and bottom padding of footer should be the same (L screen) - Top seems bigger now. Reviewing /about section
|
non_process
|
top and bottom padding of footer should be the same l screen top seems bigger now reviewing about section
| 0
|
4,626
| 7,472,697,351
|
IssuesEvent
|
2018-04-03 13:26:43
|
ODiogoSilva/assemblerflow
|
https://api.github.com/repos/ODiogoSilva/assemblerflow
|
closed
|
Add dependency for spades and trimmomatic processes
|
process
|
The `integrity_coverage` module should be specified as a dependency of `spades` (because of the read max length) and `trimmomatic` and `fastqc_trimmomatic` (because of the phred).
|
1.0
|
Add dependency for spades and trimmomatic processes - The `integrity_coverage` module should be specified as a dependency of `spades` (because of the read max length) and `trimmomatic` and `fastqc_trimmomatic` (because of the phred).
|
process
|
add dependency for spades and trimmomatic processes the integrity coverage module should be specified as a dependency of spades because of the read max length and trimmomatic and fastqc trimmomatic because of the phred
| 1
|
12,361
| 14,888,833,850
|
IssuesEvent
|
2021-01-20 20:27:51
|
cypress-io/cypress-documentation
|
https://api.github.com/repos/cypress-io/cypress-documentation
|
closed
|
Look into speeding up parallel builds via RAM disk
|
process: ci
|
When we install and run tests, we get widely different times for attaching the workspace. CircleCI has suggested we use RAM disk instead of the workspace
https://discuss.circleci.com/t/attaching-workspace-has-an-unpredicably-different-timing/38521
|
1.0
|
Look into speeding up parallel builds via RAM disk - When we install and run tests, we get widely different times for attaching the workspace. CircleCI has suggested we use RAM disk instead of the workspace
https://discuss.circleci.com/t/attaching-workspace-has-an-unpredicably-different-timing/38521
|
process
|
look into speeding up parallel builds via ram disk when we install and run tests we get widely different times for attaching the workspace circleci has suggested we use ram disk instead of the workspace
| 1
|
5,622
| 8,477,035,089
|
IssuesEvent
|
2018-10-25 00:48:43
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
FASTO: Initial submission failed to parse
|
in progress ontology processing problem
|
User reached out to us on the support list to indicate that their initial upload of the [FASTO ontology](http://bioportal.bioontology.org/ontologies/FASTO) didn't parse.
There were two issues that I asked the user to address:
1). The ontology was submitted as a ZIP file, but didn't follow our required convention where the name of the ZIP file matches the name of the main ontology file. User fixed this issue and submitted a new ZIP file.
2). The parsing process then failed with the following error:
```
"Illegal Element Name (Element Is Not A QName): http://www.co-ode.org/ontologies/ont.owl#DDO:0000002"
```
There were 3 occurrences in the FASTO.owl source file of the above URI, where the colon character isn’t allowed in the end fragment (DDO:0000002). User fixed this issue and submitted a new version.
|
1.0
|
FASTO: Initial submission failed to parse - User reached out to us on the support list to indicate that their initial upload of the [FASTO ontology](http://bioportal.bioontology.org/ontologies/FASTO) didn't parse.
There were two issues that I asked the user to address:
1). The ontology was submitted as a ZIP file, but didn't follow our required convention where the name of the ZIP file matches the name of the main ontology file. User fixed this issue and submitted a new ZIP file.
2). The parsing process then failed with the following error:
```
"Illegal Element Name (Element Is Not A QName): http://www.co-ode.org/ontologies/ont.owl#DDO:0000002"
```
There were 3 occurrences in the FASTO.owl source file of the above URI, where the colon character isn’t allowed in the end fragment (DDO:0000002). User fixed this issue and submitted a new version.
|
process
|
fasto initial submission failed to parse user reached out to us on the support list to indicate that their initial upload of the didn t parse there were two issues that i asked the user to address the ontology was submitted as a zip file but didn t follow our required convention where the name of the zip file matches the name of the main ontology file user fixed this issue and submitted a new zip file the parsing process then failed with the following error illegal element name element is not a qname there were occurrences in the fasto owl source file of the above uri where the colon character isn’t allowed in the end fragment ddo user fixed this issue and submitted a new version
| 1
|
736
| 3,214,317,146
|
IssuesEvent
|
2015-10-07 00:46:24
|
broadinstitute/hellbender
|
https://api.github.com/repos/broadinstitute/hellbender
|
closed
|
Choose approach to fix scaling of ReadBAMTransform, and implement fix
|
Dataflow DataflowPreprocessingPipeline profiling
|
From an analysis by @jean-philippe-martin:
**The Problem**
Doing a preliminary performance analysis of Hellbender, I found that ReadBAM did not scale with the number of workers.

Logs indicated it was running on a single worker, regardless of how many were specified for the job.
**The Cause**
The underlying cause is a combination of ReadBAM's design and Dataflow's own perhaps over-eager optimization.
ReadBAM is implemented as a series of transforms. It could also have been implemented as a Dataflow BoundedSource, but the latter is much more complicated.
The transforms are as follows:
Start with a collection of filenames and a collection of contigs.
Transform 1 -- input: filenames, side input: contigs. Generates a list of regions to read ("BAMShard")
Transform 2 -- input: `PCollection<BAMShard>`, output: `PCollection<Read>`. Each worker reads from the BAM file, using the index to find where to read from.
Dataflow sees that transform 2 takes as input transform 1's output, and so these two can be run in sequence on the same machines, skipping a serialization/deserialization step. This optimization is called "fusing" and it's generally a very good thing.
However in this case, the input PCollection has a single element (the file we want to read), so only one worker is involved. Because of the fusion, that same worker then ends up doing all of the reading work, ruining our day.
**The Solutions**
There are multiple ways to solve this problem.
1. change transform 1 to have the contig collection as a primary input in the hope that we always have more than one contig.
This solution's very brittle (our benchmark, for example, reads a single chromosome so the contig list has effectively only one element). I did not pursue it.
2. Insert a groupby step between the two transforms.
pro: this gets all the workers involved again
con: the groupby itself takes some time, unnecessarily.
3. Compute the BAMShards at the client and then send those to workers.
pro: this gets all the workers involved again, and they do not have to spend any time on groupby
con: an existing Dataflow bug will cause the program to crash if the shard list is too long. We can work around this, though, by increasing the shard size when we have many.
4. Bite the bullet and implement a BoundedSource.
I implemented solutions 2 and 3. Solution 3 is the fastest. I suspect solution 4 wouldn't be any faster, though it would be more idiomatic for Dataflow. The graph below shows the time in the Dataflow Read phase with the new code when using the groupby method (this includes sharding, groupby, and actually reading the BAM file).

Next Steps
The next step is to pick either solution 3 or 4 (or 2, I suppose, if we want to be expedient). If 3, then we need to change the sharding to deal with large files. If 4, then we need to spend the time and effort writing the new source (and of course testing that it scales as we're expecting).
Comments & feedback welcome!
|
1.0
|
Choose approach to fix scaling of ReadBAMTransform, and implement fix - From an analysis by @jean-philippe-martin:
**The Problem**
Doing a preliminary performance analysis of Hellbender, I found that ReadBAM did not scale with the number of workers.

Logs indicated it was running on a single worker, regardless of how many were specified for the job.
**The Cause**
The underlying cause is a combination of ReadBAM's design and Dataflow's own perhaps over-eager optimization.
ReadBAM is implemented as a series of transforms. It could also have been implemented as a Dataflow BoundedSource, but the latter is much more complicated.
The transforms are as follows:
Start with a collection of filenames and a collection of contigs.
Transform 1 -- input: filenames, side input: contigs. Generates a list of regions to read ("BAMShard")
Transform 2 -- input: `PCollection<BAMShard>`, output: `PCollection<Read>`. Each worker reads from the BAM file, using the index to find where to read from.
Dataflow sees that transform 2 takes as input transform 1's output, and so these two can be run in sequence on the same machines, skipping a serialization/deserialization step. This optimization is called "fusing" and it's generally a very good thing.
However in this case, the input PCollection has a single element (the file we want to read), so only one worker is involved. Because of the fusion, that same worker then ends up doing all of the reading work, ruining our day.
**The Solutions**
There are multiple ways to solve this problem.
1. change transform 1 to have the contig collection as a primary input in the hope that we always have more than one contig.
This solution's very brittle (our benchmark, for example, reads a single chromosome so the contig list has effectively only one element). I did not pursue it.
2. Insert a groupby step between the two transforms.
pro: this gets all the workers involved again
con: the groupby itself takes some time, unnecessarily.
3. Compute the BAMShards at the client and then send those to workers.
pro: this gets all the workers involved again, and they do not have to spend any time on groupby
con: an existing Dataflow bug will cause the program to crash if the shard list is too long. We can work around this, though, by increasing the shard size when we have many.
4. Bite the bullet and implement a BoundedSource.
I implemented solutions 2 and 3. Solution 3 is the fastest. I suspect solution 4 wouldn't be any faster, though it would be more idiomatic for Dataflow. The graph below shows the time in the Dataflow Read phase with the new code when using the groupby method (this includes sharding, groupby, and actually reading the BAM file).

Next Steps
The next step is to pick either solution 3 or 4 (or 2, I suppose, if we want to be expedient). If 3, then we need to change the sharding to deal with large files. If 4, then we need to spend the time and effort writing the new source (and of course testing that it scales as we're expecting).
Comments & feedback welcome!
|
process
|
choose approach to fix scaling of readbamtransform and implement fix from an analysis by jean philippe martin the problem doing a preliminary performance analysis of hellbender i found that readbam did not scale with the number of workers logs indicated it was running on a single worker regardless of how many were specified for the job the cause the underlying cause is a combination of readbam s design and dataflow s own perhaps over eager optimization readbam is implemented as a series of transforms it could also have been implemented as a dataflow boundedsource but the latter is much more complicated the transforms are as follows start with a collection of filenames and a collection of contigs transform input filenames side input contigs generates a list of regions to read bamshard transform input pcollection output pcollection each worker reads from the bam file using the index to find where to read from dataflow sees that transform takes as input transform s output and so these two can be run in sequence on the same machines skipping a serialization deserialization step this optimization is called fusing and it s generally a very good thing however in this case the input pcollection has a single element the file we want to read so only one worker is involved because of the fusion that same worker then ends up doing all of the reading work ruining our day the solutions there are multiple ways to solve this problem change transform to have the contig collection as a primary input in the hope that we always have more than one contig this solution s very brittle our benchmark for example reads a single chromosome so the contig list has effectively only one element i did not pursue it insert a groupby step between the two transforms pro this gets all the workers involved again con the groupby itself takes some time unnecessarily compute the bamshards at the client and then send those to workers pro this gets all the workers involved again and they do not have to 
spend any time on groupby con an existing dataflow bug will cause the program to crash if the shard list is too long we can work around this though by increasing the shard size when we have many bite the bullet and implement a boundedsource i implemented solutions and solution is the fastest i suspect solution wouldn t be any faster though it would be more idiomatic for dataflow the graph below shows the time in the dataflow read phase with the new code when using the groupby method this includes sharding groupby and actually reading the bam file next steps the next step is to pick either solution or or i suppose if we want to be expedient if then we need to change the sharding to deal with large files if then we need to spend the time and effort writing the new source and of course testing that it scales as we re expecting comments feedback welcome
| 1
|
15,249
| 19,188,277,569
|
IssuesEvent
|
2021-12-05 15:23:34
|
RobertCraigie/prisma-client-py
|
https://api.github.com/repos/RobertCraigie/prisma-client-py
|
opened
|
Support setting the connect timeout from the Client constructor
|
kind/feature process/candidate
|
## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
For #103, we should support configuring the connection timeout from the class constructor.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should still support changing the timeout on a per-connect basis.
```py
client = Client(
connect_timeout=5,
)
await client.connect() # timeout 5
await client.connect(timeout=10) # timeout 10
```
|
1.0
|
Support setting the connect timeout from the Client constructor - ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
For #103, we should support configuring the connection timeout from the class constructor.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should still support changing the timeout on a per-connect basis.
```py
client = Client(
connect_timeout=5,
)
await client.connect() # timeout 5
await client.connect(timeout=10) # timeout 10
```
|
process
|
support setting the connect timeout from the client constructor problem for we should support configuring the connection timeout from the class constructor suggested solution we should still support changing the timeout on a per connect basis py client client connect timeout await client connect timeout await client connect timeout timeout
| 1
|
272,274
| 8,506,766,324
|
IssuesEvent
|
2018-10-30 17:20:57
|
fecgov/openFEC
|
https://api.github.com/repos/fecgov/openFEC
|
closed
|
Add Amendment number to schedule e endpoint
|
High priority
|
What we are after: We recently added amendment indicator and previous file_num to the schedule e endpoint. In order for the users to have the most clarity from the downloaded data we need to add the amendment number as well.
The URL for the independent expenditures:
https://www.fec.gov/data/independent-expenditures/?data_type=processed&is_notice=true
The previous issue with
https://github.com/fecgov/openFEC/issues/3448
MV with extra column already created in dev:
ofec_schedule_e_temp_mv
Completion criteria:
- [ ] exported data needs to include the amndt_ind column
|
1.0
|
Add Amendment number to schedule e endpoint - What we are after: We recently added amendment indicator and previous file_num to the schedule e endpoint. In order for the users to have the most clarity from the downloaded data we need to add the amendment number as well.
The URL for the independent expenditures:
https://www.fec.gov/data/independent-expenditures/?data_type=processed&is_notice=true
The previous issue with
https://github.com/fecgov/openFEC/issues/3448
MV with extra column already created in dev:
ofec_schedule_e_temp_mv
Completion criteria:
- [ ] exported data needs to include the amndt_ind column
|
non_process
|
add amendment number to schedule e endpoint what we are after we recently added amendment indicator and previous file num to the schedule e endpoint in order for the users to have the most clarity from the downloaded data we need to add the amendment number as well the url for the independent expenditures the previous issue with mv with extra column already created in dev ofec schedule e temp mv completion criteria exported data needs to include the amndt ind column
| 0
|
4,304
| 7,197,738,575
|
IssuesEvent
|
2018-02-05 10:12:32
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
opened
|
Source code of inline calculations to be put in the repository
|
priority/nice to have requires discussion topic/JobCalculationAndProcess
|
The source code of an inline calc. can represent a large amount of data, and I don't think it makes sense to have it queryable. The function name is, instead, enough for queries.
So we should instead store the source code in the repository, rather than in the Attributes.
|
1.0
|
Source code of inline calculations to be put in the repository - The source code of an inline calc. can represent a large amount of data, and I don't think it makes sense to have it queryable. The function name is, instead, enough for queries.
So we should instead store the source code in the repository, rather than in the Attributes.
|
process
|
source code of inline calculations to be put in the repository the source code of an inline calc can represent a large amount of data and i don t think it makes sense to have it queryable the function name is instead enough for queries so we should instead store the source code in the repository rather than in the attributes
| 1
|
376,921
| 11,158,164,863
|
IssuesEvent
|
2019-12-25 18:14:50
|
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
|
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
|
closed
|
No character can join society after game reloading
|
:beetle: bug :beetle: :exclamation: priority high :x: info needed :page_facing_up:
|
**Mod Version**
136c94f4
**What expansions do you have installed?**
All
**Please explain your issue in as much detail as possible:**
Sometimes "No character" with "Rebels" name joins society.
This things probably can be related:
> [character.cpp:27448]: Character 'Fitch of ---(2100888)' has invalid religion
> [character.cpp:27448]: Character 'Darrel of ---(2100889)' has invalid religion
<details>
<summary>Click to expand</summary>

</details>
**Steps to reproduce the issue:**
Save and reload the game
**Upload an attachment below: .zip of your save, or screenshots:**
<details>
<summary>Click to expand</summary>

</details>
|
1.0
|
No character can join society after game reloading - **Mod Version**
136c94f4
**What expansions do you have installed?**
All
**Please explain your issue in as much detail as possible:**
Sometimes "No character" with "Rebels" name joins society.
This things probably can be related:
> [character.cpp:27448]: Character 'Fitch of ---(2100888)' has invalid religion
> [character.cpp:27448]: Character 'Darrel of ---(2100889)' has invalid religion
<details>
<summary>Click to expand</summary>

</details>
**Steps to reproduce the issue:**
Save and reload the game
**Upload an attachment below: .zip of your save, or screenshots:**
<details>
<summary>Click to expand</summary>

</details>
|
non_process
|
no character can join society after game reloading mod version what expansions do you have installed all please explain your issue in as much detail as possible sometimes no character with rebels name joins society this things probably can be related character fitch of has invalid religion character darrel of has invalid religion click to expand steps to reproduce the issue save and reload the game upload an attachment below zip of your save or screenshots click to expand
| 0
|
260,471
| 22,623,631,808
|
IssuesEvent
|
2022-06-30 08:45:41
|
gra-m/DBServer
|
https://api.github.com/repos/gra-m/DBServer
|
closed
|
Investigate testing Private Methods/seek & write
|
testing
|
I was curious of how seek/write works (seek to length of record/file write writes FROM there, obviously meaning the next available byte AFTER that point).
I was curious of how to test private methods, and despite the general opinion seeming to be that this means code-smell I took it as an opportunity to try Reflection for the first time.
https://stackoverflow.com/questions/34571/how-do-i-test-a-class-that-has-private-methods-fields-or-inner-classes
|
1.0
|
Investigate testing Private Methods/seek & write - I was curious of how seek/write works (seek to length of record/file write writes FROM there, obviously meaning the next available byte AFTER that point).
I was curious of how to test private methods, and despite the general opinion seeming to be that this means code-smell I took it as an opportunity to try Reflection for the first time.
https://stackoverflow.com/questions/34571/how-do-i-test-a-class-that-has-private-methods-fields-or-inner-classes
|
non_process
|
investigate testing private methods seek write i was curious of how seek write works seek to length of record file write writes from there obviously meaning the next available byte after that point i was curious of how to test private methods and despite the general opinion seeming to be that this means code smell i took it as an opportunity to try reflection for the first time
| 0
|
230,315
| 25,463,880,800
|
IssuesEvent
|
2022-11-25 00:27:03
|
neinteractiveliterature/intercode
|
https://api.github.com/repos/neinteractiveliterature/intercode
|
closed
|
babel-loader-8.3.0.tgz: 2 vulnerabilities (highest severity is: 7.5) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>babel-loader-8.3.0.tgz</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/neinteractiveliterature/intercode/commit/da0c9c84fdbc82b3b8e2221482a86225136e26be">da0c9c84fdbc82b3b8e2221482a86225136e26be</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (babel-loader version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-37603](https://www.mend.io/vulnerability-database/CVE-2022-37603) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | loader-utils-2.0.0.tgz | Transitive | 9.1.0 | ❌ |
| [CVE-2022-37599](https://www.mend.io/vulnerability-database/CVE-2022-37599) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | loader-utils-2.0.0.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-37603</summary>
### Vulnerable Library - <b>loader-utils-2.0.0.tgz</b></p>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz</a></p>
<p>
Dependency Hierarchy:
- babel-loader-8.3.0.tgz (Root Library)
- :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/neinteractiveliterature/intercode/commit/da0c9c84fdbc82b3b8e2221482a86225136e26be">da0c9c84fdbc82b3b8e2221482a86225136e26be</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the url variable in interpolateName.js.
<p>Publish Date: 2022-10-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37603>CVE-2022-37603</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-14</p>
<p>Fix Resolution (loader-utils): 2.0.1</p>
<p>Direct dependency fix Resolution (babel-loader): 9.1.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-37599</summary>
### Vulnerable Library - <b>loader-utils-2.0.0.tgz</b></p>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz</a></p>
<p>
Dependency Hierarchy:
- babel-loader-8.3.0.tgz (Root Library)
- :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/neinteractiveliterature/intercode/commit/da0c9c84fdbc82b3b8e2221482a86225136e26be">da0c9c84fdbc82b3b8e2221482a86225136e26be</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37599>CVE-2022-37599</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
babel-loader-8.3.0.tgz: 2 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>babel-loader-8.3.0.tgz</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/neinteractiveliterature/intercode/commit/da0c9c84fdbc82b3b8e2221482a86225136e26be">da0c9c84fdbc82b3b8e2221482a86225136e26be</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (babel-loader version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-37603](https://www.mend.io/vulnerability-database/CVE-2022-37603) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | loader-utils-2.0.0.tgz | Transitive | 9.1.0 | ❌ |
| [CVE-2022-37599](https://www.mend.io/vulnerability-database/CVE-2022-37599) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | loader-utils-2.0.0.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-37603</summary>
### Vulnerable Library - <b>loader-utils-2.0.0.tgz</b></p>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz</a></p>
<p>
Dependency Hierarchy:
- babel-loader-8.3.0.tgz (Root Library)
- :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/neinteractiveliterature/intercode/commit/da0c9c84fdbc82b3b8e2221482a86225136e26be">da0c9c84fdbc82b3b8e2221482a86225136e26be</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the url variable in interpolateName.js.
<p>Publish Date: 2022-10-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37603>CVE-2022-37603</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-14</p>
<p>Fix Resolution (loader-utils): 2.0.1</p>
<p>Direct dependency fix Resolution (babel-loader): 9.1.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-37599</summary>
### Vulnerable Library - <b>loader-utils-2.0.0.tgz</b>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz</a></p>
<p>
Dependency Hierarchy:
- babel-loader-8.3.0.tgz (Root Library)
- :x: **loader-utils-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/neinteractiveliterature/intercode/commit/da0c9c84fdbc82b3b8e2221482a86225136e26be">da0c9c84fdbc82b3b8e2221482a86225136e26be</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js.
<p>Publish Date: 2022-10-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37599>CVE-2022-37599</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_process
|
babel loader tgz vulnerabilities highest severity is autoclosed vulnerable library babel loader tgz found in head commit a href vulnerabilities cve severity cvss dependency type fixed in babel loader version remediation available high loader utils tgz transitive high loader utils tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library loader utils tgz utils for webpack loaders library home page a href dependency hierarchy babel loader tgz root library x loader utils tgz vulnerable library found in head commit a href found in base branch main vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the url variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution babel loader step up your open source security game with mend cve vulnerable library loader utils tgz utils for webpack loaders library home page a href dependency hierarchy babel loader tgz root library x loader utils tgz vulnerable library found in head commit a href found in base branch main vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the resourcepath variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack 
complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
| 0
|
499,314
| 14,444,785,395
|
IssuesEvent
|
2020-12-07 21:48:46
|
ansible/awx
|
https://api.github.com/repos/ansible/awx
|
closed
|
ansible.tower.tower_group module is having an erratic behavior.
|
component:awx_collection priority:medium state:needs_info
|
##### ISSUE TYPE
- Bug Report
##### SUMMARY
<!-- Briefly describe the problem. -->
Running the tower_group module erratically removes the existing hosts from the existing groups instead of preserving them. As far as I can see, the group IDs themselves are preserved (the groups are not removed)
##### ENVIRONMENT
* AWX version: 3.7.2
* AWX install method: Bundle Standalone
* Ansible version: 2.9.1
* Operating System: RHEL 8.2
* Web Browser: Chrome 86.0.4240.111
##### STEPS TO REPRODUCE
I created several hosts using the following template. It creates new hosts and assigns them to 2 different groups depending on some variables. In this example, I launched 5 executions with different hostnames simultaneously.
```
#Add Host Linux to inventory LINUX_POSTINSTALL
- name: Add tower host (Linux)
  ansible.tower.tower_host:
    name: "{{ vm_hostname }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxx"
    validate_certs: no
    description: "{{ vm_hostname }}"
    inventory: "LINUX_POSTINSTALL"
    state: present
  when: machine_os_fact == 'RHEL'

#Add group Linux version to Host Linux
- name: Add tower group (RHEL version)
  ansible.tower.tower_group:
    name: "{{ rhel_version_fact }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxxxxxxxxxx"
    validate_certs: no
    inventory: "LINUX_POSTINSTALL"
    state: present
    hosts: "{{ vm_hostname }}"
  when: machine_os_fact == 'RHEL'

#Add group Linux localisation to Host Linux
- name: Add tower group (RHEL localisation)
  ansible.tower.tower_group:
    name: "{{ rhel_localisation_fact }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxxxxxxx"
    validate_certs: no
    inventory: "LINUX_POSTINSTALL"
    state: present
    hosts: "{{ vm_hostname }}"
  when: machine_os_fact == 'RHEL'
```
As you can see in the image below, the module apparently keeps in each group only the host from the last execution.

In the job output, we can see the module reporting that the tasks completed OK. In this case, it reports that "hostname3" was added to the groups.
```
ansible-playbook 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/awx/venv/ansible/lib/python2.7/site-packages/ansible
executable location = /var/lib/awx/venv/ansible/bin/ansible-playbook
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /tmp/awx_624_1dt9fhbh/tmphd1xuum6 as it did not pass its verify_file() method
Parsed /tmp/awx_624_1dt9fhbh/tmphd1xuum6 inventory source with script plugin
PLAYBOOK: create_group.yml *****************************************************
1 plays in create_group.yml
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414 `" && echo ansible-tmp-1605200082.3-263112399318414="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414 `" ) && sleep 0'
Using module file /var/lib/awx/venv/ansible/lib/python2.7/site-packages/ansible/modules/system/setup.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpZKaMGV TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/AnsiballZ_setup.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [Add tower host (Linux)] **************************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:31
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133 `" && echo ansible-tmp-1605200090.21-2584096818133="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133 `" ) && sleep 0'
Using module file /tmp/awx_624_1dt9fhbh/requirements_collections/ansible_collections/ansible/tower/plugins/modules/tower_host.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpP1yxME TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/AnsiballZ_tower_host.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/AnsiballZ_tower_host.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/AnsiballZ_tower_host.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "id": 15,
    "invocation": {
        "module_args": {
            "description": "hostname3",
            "enabled": true,
            "inventory": "LINUX_POSTINSTALL",
            "name": "hostname3",
            "new_name": null,
            "state": "present",
            "tower_config_file": null,
            "tower_host": null,
            "tower_oauthtoken": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "tower_password": null,
            "tower_username": null,
            "validate_certs": false
        }
    },
…
TASK [Add tower group (RHEL version)] ******************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:56
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155 `" && echo ansible-tmp-1605200097.5-228059388741155="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155 `" ) && sleep 0'
Using module file /tmp/awx_624_1dt9fhbh/requirements_collections/ansible_collections/ansible/tower/plugins/modules/tower_group.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpM6SQM0 TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/AnsiballZ_tower_group.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "id": 7,
    "invocation": {
        "module_args": {
            "children": null,
            "description": null,
            "hosts": [
                "hostname3"
            ],
            "inventory": "LINUX_POSTINSTALL",
            "name": "8.2",
            "new_name": null,
            "tower_config_file": null,
            "tower_host": null,
            "tower_oauthtoken": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "tower_password": null,
            "tower_username": null,
            "validate_certs": false,
            "variables": null
        }
    }
}
TASK [Add tower group (RHEL localisation)] *************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:67
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292 `" && echo ansible-tmp-1605200104.56-170507897592292="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292 `" ) && sleep 0'
Using module file /tmp/awx_624_1dt9fhbh/requirements_collections/ansible_collections/ansible/tower/plugins/modules/tower_group.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpENU7Vr TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/AnsiballZ_tower_group.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "id": 9,
    "invocation": {
        "module_args": {
            "children": null,
            "description": null,
            "hosts": [
                "hostname3"
            ],
            "inventory": "LINUX_POSTINSTALL",
            "name": "location",
            "new_name": null,
            "tower_config_file": null,
            "tower_host": null,
            "tower_oauthtoken": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "tower_password": null,
            "tower_username": null,
            "validate_certs": false,
            "variables": null
        }
    }
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************
localhost : ok=3 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
If I run the job template again using an existing host, it removes every host from the existing groups and adds only the current hostname.

<!-- Please describe exactly how to reproduce the problem. -->
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
The current group assignment should be preserved when state = present.
##### ACTUAL RESULTS
The current group assignment is erratically removed.
<!-- What actually happened? -->
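A possible workaround sketch, assuming a collection release whose group module supports a `preserve_existing_hosts` option (this parameter may not exist in the ansible.tower collection version shipped with AWX 3.7 — verify against your installed version's module documentation):

```yaml
- name: Add tower group (RHEL version) without dropping existing members
  ansible.tower.tower_group:
    name: "{{ rhel_version_fact }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxx"
    validate_certs: no
    inventory: "LINUX_POSTINSTALL"
    state: present
    hosts: "{{ vm_hostname }}"
    preserve_existing_hosts: true   # assumed option; keeps current group members
  when: machine_os_fact == 'RHEL'
```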
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
|
1.0
|
ansible.tower.tower_group module is having an erratic behavior. - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
<!-- Briefly describe the problem. -->
Running the tower_group module erratically removes the existing hosts from the existing groups instead of preserving them. As far as I can see, the group IDs themselves are preserved (the groups are not removed)
##### ENVIRONMENT
* AWX version: 3.7.2
* AWX install method: Bundle Standalone
* Ansible version: 2.9.1
* Operating System: RHEL 8.2
* Web Browser: Chrome 86.0.4240.111
##### STEPS TO REPRODUCE
I created several hosts using the following template. It creates new hosts and assigns them to 2 different groups depending on some variables. In this example, I launched 5 executions with different hostnames simultaneously.
```
#Add Host Linux to inventory LINUX_POSTINSTALL
- name: Add tower host (Linux)
  ansible.tower.tower_host:
    name: "{{ vm_hostname }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxx"
    validate_certs: no
    description: "{{ vm_hostname }}"
    inventory: "LINUX_POSTINSTALL"
    state: present
  when: machine_os_fact == 'RHEL'

#Add group Linux version to Host Linux
- name: Add tower group (RHEL version)
  ansible.tower.tower_group:
    name: "{{ rhel_version_fact }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxxxxxxxxxx"
    validate_certs: no
    inventory: "LINUX_POSTINSTALL"
    state: present
    hosts: "{{ vm_hostname }}"
  when: machine_os_fact == 'RHEL'

#Add group Linux localisation to Host Linux
- name: Add tower group (RHEL localisation)
  ansible.tower.tower_group:
    name: "{{ rhel_localisation_fact }}"
    tower_oauthtoken: "xxxxxxxxxxxxxxxxxxxxx"
    validate_certs: no
    inventory: "LINUX_POSTINSTALL"
    state: present
    hosts: "{{ vm_hostname }}"
  when: machine_os_fact == 'RHEL'
```
As you can see in the image below, the module apparently keeps in each group only the host from the last execution.

In the job output, we can see the module reporting that the tasks completed OK. In this case, it reports that "hostname3" was added to the groups.
```
ansible-playbook 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/awx/venv/ansible/lib/python2.7/site-packages/ansible
executable location = /var/lib/awx/venv/ansible/bin/ansible-playbook
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /tmp/awx_624_1dt9fhbh/tmphd1xuum6 as it did not pass its verify_file() method
Parsed /tmp/awx_624_1dt9fhbh/tmphd1xuum6 inventory source with script plugin
PLAYBOOK: create_group.yml *****************************************************
1 plays in create_group.yml
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414 `" && echo ansible-tmp-1605200082.3-263112399318414="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414 `" ) && sleep 0'
Using module file /var/lib/awx/venv/ansible/lib/python2.7/site-packages/ansible/modules/system/setup.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpZKaMGV TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/AnsiballZ_setup.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200082.3-263112399318414/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [Add tower host (Linux)] **************************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:31
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133 `" && echo ansible-tmp-1605200090.21-2584096818133="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133 `" ) && sleep 0'
Using module file /tmp/awx_624_1dt9fhbh/requirements_collections/ansible_collections/ansible/tower/plugins/modules/tower_host.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpP1yxME TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/AnsiballZ_tower_host.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/AnsiballZ_tower_host.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/AnsiballZ_tower_host.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200090.21-2584096818133/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "id": 15,
    "invocation": {
        "module_args": {
            "description": "hostname3",
            "enabled": true,
            "inventory": "LINUX_POSTINSTALL",
            "name": "hostname3",
            "new_name": null,
            "state": "present",
            "tower_config_file": null,
            "tower_host": null,
            "tower_oauthtoken": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "tower_password": null,
            "tower_username": null,
            "validate_certs": false
        }
    },
…
TASK [Add tower group (RHEL version)] ******************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:56
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155 `" && echo ansible-tmp-1605200097.5-228059388741155="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155 `" ) && sleep 0'
Using module file /tmp/awx_624_1dt9fhbh/requirements_collections/ansible_collections/ansible/tower/plugins/modules/tower_group.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpM6SQM0 TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/AnsiballZ_tower_group.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200097.5-228059388741155/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "id": 7,
    "invocation": {
        "module_args": {
            "children": null,
            "description": null,
            "hosts": [
                "hostname3"
            ],
            "inventory": "LINUX_POSTINSTALL",
            "name": "8.2",
            "new_name": null,
            "tower_config_file": null,
            "tower_host": null,
            "tower_oauthtoken": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "tower_password": null,
            "tower_username": null,
            "validate_certs": false,
            "variables": null
        }
    }
}
TASK [Add tower group (RHEL localisation)] *************************************
task path: /tmp/awx_624_1dt9fhbh/project/create_group.yml:67
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: awx
<localhost> EXEC /bin/sh -c 'echo ~awx && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292 `" && echo ansible-tmp-1605200104.56-170507897592292="` echo /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292 `" ) && sleep 0'
Using module file /tmp/awx_624_1dt9fhbh/requirements_collections/ansible_collections/ansible/tower/plugins/modules/tower_group.py
<localhost> PUT /var/lib/awx/.ansible/tmp/ansible-local-3hG7UUy/tmpENU7Vr TO /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/AnsiballZ_tower_group.py
<localhost> EXEC /bin/sh -c 'chmod u+x /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/ /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c '/var/lib/awx/venv/ansible/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/AnsiballZ_tower_group.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /var/lib/awx/.ansible/tmp/ansible-tmp-1605200104.56-170507897592292/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "id": 9,
    "invocation": {
        "module_args": {
            "children": null,
            "description": null,
            "hosts": [
                "hostname3"
            ],
            "inventory": "LINUX_POSTINSTALL",
            "name": "location",
            "new_name": null,
            "tower_config_file": null,
            "tower_host": null,
            "tower_oauthtoken": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "tower_password": null,
            "tower_username": null,
            "validate_certs": false,
            "variables": null
        }
    }
}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************
localhost : ok=3 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
If I run the job template again using an existing host, it removes every host from the existing groups and adds only the current hostname.

<!-- Please describe exactly how to reproduce the problem. -->
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
The current group assignment should be preserved when state = present.
##### ACTUAL RESULTS
The current group assignment is erratically removed.
<!-- What actually happened? -->
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
|
non_process
|
ansible tower tower group module is having an erratic behavior issue type bug report summary running the tower group module it is erratically removing the existing hosts from the existing groups instead of preserve them there as i can see the group ids are preserved it is not removing them environment awx version awx install method bundle standalone ansible version operating system rhel web browser chrome steps to reproduce i created several hosts using the following template it creates new hosts and assign them to different groups depending on some variables in this example i launched executions with different hostnames simultaneously add host linux to inventory linux postinstall name add tower host linux ansible tower tower host name vm hostname tower oauthtoken xxxxxxxxxxxxxxxx validate certs no description vm hostname inventory linux postinstall state present when machine os fact rhel add group linux version to host linux name add tower group rhel version ansible tower tower group name rhel version fact tower oauthtoken xxxxxxxxxxxxxxxxxxxxxxxx validate certs no inventory linux postinstall state present hosts vm hostname when machine os fact rhel add group linux localisation to host linux name add tower group rhel localisation ansible tower tower group name rhel localisation fact tower oauthtoken xxxxxxxxxxxxxxxxxxxxx validate certs no inventory linux postinstall state present hosts vm hostname when machine os fact rhel as you can see in the image below apparently the module is adding to the group only the last execution in the job output we can see how the module is returning that the task are done ok in this case it is returning that is added to the groups ansible playbook config file etc ansible ansible cfg configured module search path ansible python module location var lib awx venv ansible lib site packages ansible executable location var lib awx venv ansible bin ansible playbook python version default apr using etc ansible ansible cfg as config file host 
list declined parsing tmp awx as it did not pass its verify file method parsed tmp awx inventory source with script plugin playbook create group yml plays in create group yml play task task path tmp awx project create group yml establish local connection for user awx exec bin sh c echo awx sleep exec bin sh c umask mkdir p echo var lib awx ansible tmp ansible tmp echo ansible tmp echo var lib awx ansible tmp ansible tmp sleep using module file var lib awx venv ansible lib site packages ansible modules system setup py put var lib awx ansible tmp ansible local tmpzkamgv to var lib awx ansible tmp ansible tmp ansiballz setup py exec bin sh c chmod u x var lib awx ansible tmp ansible tmp var lib awx ansible tmp ansible tmp ansiballz setup py sleep exec bin sh c var lib awx venv ansible bin python var lib awx ansible tmp ansible tmp ansiballz setup py sleep exec bin sh c rm f r var lib awx ansible tmp ansible tmp dev null sleep ok meta ran handlers task task path tmp awx project create group yml establish local connection for user awx exec bin sh c echo awx sleep exec bin sh c umask mkdir p echo var lib awx ansible tmp ansible tmp echo ansible tmp echo var lib awx ansible tmp ansible tmp sleep using module file tmp awx requirements collections ansible collections ansible tower plugins modules tower host py put var lib awx ansible tmp ansible local to var lib awx ansible tmp ansible tmp ansiballz tower host py exec bin sh c chmod u x var lib awx ansible tmp ansible tmp var lib awx ansible tmp ansible tmp ansiballz tower host py sleep exec bin sh c var lib awx venv ansible bin python var lib awx ansible tmp ansible tmp ansiballz tower host py sleep exec bin sh c rm f r var lib awx ansible tmp ansible tmp dev null sleep changed changed true id invocation module args description enabled true inventory linux postinstall name new name null state present tower config file null tower host null tower oauthtoken value specified in no log parameter tower password null tower 
username null validate certs false … task task path tmp awx project create group yml establish local connection for user awx exec bin sh c echo awx sleep exec bin sh c umask mkdir p echo var lib awx ansible tmp ansible tmp echo ansible tmp echo var lib awx ansible tmp ansible tmp sleep using module file tmp awx requirements collections ansible collections ansible tower plugins modules tower group py put var lib awx ansible tmp ansible local to var lib awx ansible tmp ansible tmp ansiballz tower group py exec bin sh c chmod u x var lib awx ansible tmp ansible tmp var lib awx ansible tmp ansible tmp ansiballz tower group py sleep exec bin sh c var lib awx venv ansible bin python var lib awx ansible tmp ansible tmp ansiballz tower group py sleep exec bin sh c rm f r var lib awx ansible tmp ansible tmp dev null sleep changed changed true id invocation module args children null description null hosts inventory linux postinstall name new name null tower config file null tower host null tower oauthtoken value specified in no log parameter tower password null tower username null validate certs false variables null task task path tmp awx project create group yml establish local connection for user awx exec bin sh c echo awx sleep exec bin sh c umask mkdir p echo var lib awx ansible tmp ansible tmp echo ansible tmp echo var lib awx ansible tmp ansible tmp sleep using module file tmp awx requirements collections ansible collections ansible tower plugins modules tower group py put var lib awx ansible tmp ansible local to var lib awx ansible tmp ansible tmp ansiballz tower group py exec bin sh c chmod u x var lib awx ansible tmp ansible tmp var lib awx ansible tmp ansible tmp ansiballz tower group py sleep exec bin sh c var lib awx venv ansible bin python var lib awx ansible tmp ansible tmp ansiballz tower group py sleep exec bin sh c rm f r var lib awx ansible tmp ansible tmp dev null sleep changed changed true id invocation module args children null description null hosts 
inventory linux postinstall name location new name null tower config file null tower host null tower oauthtoken value specified in no log parameter tower password null tower username null validate certs false variables null meta ran handlers meta ran handlers play recap localhost ok changed unreachable failed skipped rescued ignored if i run the job template again using an existing host it removes every hosts from the existing groups and add it only the current hostname expected results the current group assignation should be preserved if state present actual results the current group assignation is removed erratically additional information include any links to sosreport database dumps screenshots or other information
| 0
|
12,829
| 15,211,953,375
|
IssuesEvent
|
2021-02-17 09:46:45
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Using supported types in `Unsupported` fields yields unnecessary migrations
|
engines/migration engine process/candidate team/migrations topic: migrate
|
Reproduction on postgres/mysql.
```prisma
model Cat {
id Int @id
data Unsupported("TEXT")
}
```
We have to make type diffing specifically aware of this scenario to avoid migrating in that case.
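A hypothetical sketch of the needed awareness (not the migration engine's actual code): strip the `Unsupported(...)` wrapper before comparing type names, so a column introspected as `TEXT` is not diffed against `Unsupported("TEXT")`:

```javascript
// Hypothetical sketch (not Prisma's migration engine): normalize a type name
// by unwrapping Unsupported("...") before comparison, so equivalent types
// do not produce a spurious migration step.
function normalizeType(t) {
  const m = /^Unsupported\("(.+)"\)$/.exec(t);
  return (m ? m[1] : t).toUpperCase();
}

function typesDiffer(fromDb, fromSchema) {
  return normalizeType(fromDb) !== normalizeType(fromSchema);
}

console.log(typesDiffer("TEXT", 'Unsupported("TEXT")')); // false → no migration
```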
|
1.0
|
Using supported types in `Unsupported` fields yields unnecessary migrations - Reproduction on postgres/mysql.
```prisma
model Cat {
id Int @id
data Unsupported("TEXT")
}
```
We have to make type diffing specifically aware of this scenario to avoid migrating in that case.
|
process
|
using supported types in unsupported fields yields unnecessary migrations reproduction on postgres mysql prisma model cat id int id data unsupported text we have to make type diffing specifically aware of this scenario to avoid migrating in that case
| 1
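The diffing gap in the Prisma row above can be sketched in a few lines. The `normalize` helper and the comparison are illustrative only, not the migration engine's real code:

```python
# Sketch: naive column-type diffing flags Unsupported("TEXT") against a
# database column of native type TEXT as a change, even though nothing
# differs. Unwrapping the Unsupported(...) marker before comparing avoids
# the spurious migration.

import re

def normalize(schema_type):
    """Unwrap Unsupported("X") to the raw native type X."""
    m = re.fullmatch(r'Unsupported\("(.+)"\)', schema_type)
    return m.group(1) if m else schema_type

def needs_migration(schema_type, db_type):
    return normalize(schema_type) != db_type

naive_diff = 'Unsupported("TEXT")' != "TEXT"                 # spurious change
aware_diff = needs_migration('Unsupported("TEXT")', "TEXT")  # correct no-op
```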
|
59,373
| 14,379,615,717
|
IssuesEvent
|
2020-12-02 00:46:48
|
gate5/react-16.0.0
|
https://api.github.com/repos/gate5/react-16.0.0
|
opened
|
CVE-2018-0734 (Medium) detected in io.jsv5.11.1
|
security vulnerability
|
## CVE-2018-0734 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>io.jsv5.11.1</b></p></summary>
<p>
<p>Node.js JavaScript runtime :sparkles::turtle::rocket::sparkles:</p>
<p>Library home page: <a href=https://github.com/iojs/io.js.git>https://github.com/iojs/io.js.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/gate5/react-16.0.0/commit/2a806761a8d27ad65d559febfdab96cc7efdbee5">2a806761a8d27ad65d559febfdab96cc7efdbee5</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>react-16.0.0/scripts/bench/node_modules/nodegit/vendor/openssl/openssl/crypto/dsa/dsa_ossl.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>react-16.0.0/scripts/bench/node_modules/nodegit/vendor/openssl/openssl/crypto/dsa/dsa_ossl.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The OpenSSL DSA signature algorithm has been shown to be vulnerable to a timing side channel attack. An attacker could use variations in the signing algorithm to recover the private key. Fixed in OpenSSL 1.1.1a (Affected 1.1.1). Fixed in OpenSSL 1.1.0j (Affected 1.1.0-1.1.0i). Fixed in OpenSSL 1.0.2q (Affected 1.0.2-1.0.2p).
<p>Publish Date: 2018-10-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-0734>CVE-2018-0734</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-0734">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-0734</a></p>
<p>Release Date: 2018-10-30</p>
<p>Fix Resolution: 1.0.2q,1.1.0j,1.1.1a</p>
</p>
</details>
<p></p>
|
True
|
CVE-2018-0734 (Medium) detected in io.jsv5.11.1 - ## CVE-2018-0734 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>io.jsv5.11.1</b></p></summary>
<p>
<p>Node.js JavaScript runtime :sparkles::turtle::rocket::sparkles:</p>
<p>Library home page: <a href=https://github.com/iojs/io.js.git>https://github.com/iojs/io.js.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/gate5/react-16.0.0/commit/2a806761a8d27ad65d559febfdab96cc7efdbee5">2a806761a8d27ad65d559febfdab96cc7efdbee5</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>react-16.0.0/scripts/bench/node_modules/nodegit/vendor/openssl/openssl/crypto/dsa/dsa_ossl.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>react-16.0.0/scripts/bench/node_modules/nodegit/vendor/openssl/openssl/crypto/dsa/dsa_ossl.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The OpenSSL DSA signature algorithm has been shown to be vulnerable to a timing side channel attack. An attacker could use variations in the signing algorithm to recover the private key. Fixed in OpenSSL 1.1.1a (Affected 1.1.1). Fixed in OpenSSL 1.1.0j (Affected 1.1.0-1.1.0i). Fixed in OpenSSL 1.0.2q (Affected 1.0.2-1.0.2p).
<p>Publish Date: 2018-10-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-0734>CVE-2018-0734</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-0734">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-0734</a></p>
<p>Release Date: 2018-10-30</p>
<p>Fix Resolution: 1.0.2q,1.1.0j,1.1.1a</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in io cve medium severity vulnerability vulnerable library io node js javascript runtime sparkles turtle rocket sparkles library home page a href found in head commit a href vulnerable source files react scripts bench node modules nodegit vendor openssl openssl crypto dsa dsa ossl c react scripts bench node modules nodegit vendor openssl openssl crypto dsa dsa ossl c vulnerability details the openssl dsa signature algorithm has been shown to be vulnerable to a timing side channel attack an attacker could use variations in the signing algorithm to recover the private key fixed in openssl affected fixed in openssl affected fixed in openssl affected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
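The fix resolutions listed in the advisory above (1.0.2q, 1.1.0j, 1.1.1a) can be turned into a version check. The branch-to-fix mapping is taken from the advisory text; the parsing is a simplification (single trailing letter only) and not how any real scanner works:

```python
# Sketch: check a classic OpenSSL version string (e.g. "1.0.2p") against the
# fixed releases for CVE-2018-0734. A version is vulnerable if its branch is
# affected and its patch letter sorts below the fixed letter.

import re

FIXED = {"1.0.2": "q", "1.1.0": "j", "1.1.1": "a"}

def is_vulnerable(version):
    m = re.fullmatch(r"(\d+\.\d+\.\d+)([a-z]*)", version)
    if not m:
        raise ValueError(f"unrecognized version: {version}")
    branch, letter = m.groups()
    fix = FIXED.get(branch)
    if fix is None:            # branch not listed as affected
        return False
    return letter < fix        # lexicographic: "" < "a" < ... < "q"

vuln = is_vulnerable("1.0.2p")   # last affected release of the 1.0.2 branch
safe = is_vulnerable("1.1.0j")   # exactly the fixed release
```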
|
19,468
| 25,763,307,908
|
IssuesEvent
|
2022-12-08 22:40:23
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Terminal killed
|
info-needed terminal-process new release
|
Type: <b>Bug</b>
If I open a new terminal, it is auto-killed and closed
VS Code version: Code 1.74.0 (Universal) (5235c6bb189b60b01b1f49062f4ffa42384f8c91, 2022-12-05T16:43:37.594Z)
OS version: Darwin x64 22.1.0
Modes:
Sandboxed: No
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 x 2300)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|2, 3, 3|
|Memory (System)|32.00GB (3.70GB free)|
|Process Argv|--crash-reporter-id 92794c09-d9db-4efe-87e9-01ad85a95fcf|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (27)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|2.2.6
python-environment-manager|don|1.0.4
python-extension-pack|don|1.7.0
es7-react-js-snippets|dsz|4.4.3
gitlens|eam|13.1.1
prettier-vscode|esb|9.10.3
styled-components-snippets|jon|0.10.0
vscode-colorize|kam|0.11.1
vsc-python-indent|Kev|1.18.0
isort|ms-|2022.8.0
python|ms-|2022.20.0
vscode-pylance|ms-|2022.12.20
jupyter|ms-|2022.11.1003412109
jupyter-keymap|ms-|1.0.0
jupyter-renderers|ms-|1.0.12
vscode-jupyter-cell-tags|ms-|0.1.6
vscode-jupyter-slideshow|ms-|0.1.5
vscode-typescript-tslint-plugin|ms-|1.3.4
autodocstring|njp|0.6.1
js-jsx-snippets|sky|11.0.0
code-spell-checker|str|2.12.0
vscode-styled-components|sty|1.7.5
intellicode-api-usage-examples|Vis|0.2.6
vscodeintellicode|Vis|1.2.29
gitblame|wad|10.0.0
jinja|who|0.0.8
html-css-class-completion|Zig|1.20.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383cf:30185419
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vslsvsres303:30308271
pythonvspyl392:30443607
vserr242:30382549
pythontb:30283811
vsjup518:30340749
pythonptprofiler:30281270
vsdfh931:30280409
vshan820:30294714
vstes263:30335439
vscorecescf:30445987
pythondataviewer:30285071
vscod805:30301674
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
cmake_vspar411:30581797
vsaa593cf:30376535
pythonvs932:30410667
cppdebug:30492333
vsclangdf:30486550
c4g48928:30535728
dsvsc012cf:30540253
azure-dev_surveyone:30548225
vsccc:30610678
pyindex848cf:30577861
nodejswelcome1cf:30587006
3biah626:30602489
gswce1:30612156
iaj6b796:30613358
dbltrim-noruby:30604474
89544117:30613380
fim-prod:30623723
ejf25101:30620729
```
</details>
<!-- generated by issue reporter -->
|
1.0
|
Terminal killed -
Type: <b>Bug</b>
If I open a new terminal, it is auto-killed and closed
VS Code version: Code 1.74.0 (Universal) (5235c6bb189b60b01b1f49062f4ffa42384f8c91, 2022-12-05T16:43:37.594Z)
OS version: Darwin x64 22.1.0
Modes:
Sandboxed: No
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 x 2300)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|2, 3, 3|
|Memory (System)|32.00GB (3.70GB free)|
|Process Argv|--crash-reporter-id 92794c09-d9db-4efe-87e9-01ad85a95fcf|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (27)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|2.2.6
python-environment-manager|don|1.0.4
python-extension-pack|don|1.7.0
es7-react-js-snippets|dsz|4.4.3
gitlens|eam|13.1.1
prettier-vscode|esb|9.10.3
styled-components-snippets|jon|0.10.0
vscode-colorize|kam|0.11.1
vsc-python-indent|Kev|1.18.0
isort|ms-|2022.8.0
python|ms-|2022.20.0
vscode-pylance|ms-|2022.12.20
jupyter|ms-|2022.11.1003412109
jupyter-keymap|ms-|1.0.0
jupyter-renderers|ms-|1.0.12
vscode-jupyter-cell-tags|ms-|0.1.6
vscode-jupyter-slideshow|ms-|0.1.5
vscode-typescript-tslint-plugin|ms-|1.3.4
autodocstring|njp|0.6.1
js-jsx-snippets|sky|11.0.0
code-spell-checker|str|2.12.0
vscode-styled-components|sty|1.7.5
intellicode-api-usage-examples|Vis|0.2.6
vscodeintellicode|Vis|1.2.29
gitblame|wad|10.0.0
jinja|who|0.0.8
html-css-class-completion|Zig|1.20.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383cf:30185419
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vslsvsres303:30308271
pythonvspyl392:30443607
vserr242:30382549
pythontb:30283811
vsjup518:30340749
pythonptprofiler:30281270
vsdfh931:30280409
vshan820:30294714
vstes263:30335439
vscorecescf:30445987
pythondataviewer:30285071
vscod805:30301674
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
cmake_vspar411:30581797
vsaa593cf:30376535
pythonvs932:30410667
cppdebug:30492333
vsclangdf:30486550
c4g48928:30535728
dsvsc012cf:30540253
azure-dev_surveyone:30548225
vsccc:30610678
pyindex848cf:30577861
nodejswelcome1cf:30587006
3biah626:30602489
gswce1:30612156
iaj6b796:30613358
dbltrim-noruby:30604474
89544117:30613380
fim-prod:30623723
ejf25101:30620729
```
</details>
<!-- generated by issue reporter -->
|
process
|
terminal killed type bug if i open a new terminal is auto killed and closed vs code version code universal os version darwin modes sandboxed no system info item value cpus intel r core tm cpu x gpu status canvas enabled canvas oop rasterization disabled off direct rendering display compositor disabled off ok gpu compositing enabled metal disabled off multiple raster threads enabled on opengl enabled on rasterization enabled raw draw disabled off ok skia renderer enabled on video decode enabled video encode enabled vulkan disabled off webgl enabled enabled webgpu disabled off load avg memory system free process argv crash reporter id screen reader no vm extensions extension author truncated version vscode eslint dba python environment manager don python extension pack don react js snippets dsz gitlens eam prettier vscode esb styled components snippets jon vscode colorize kam vsc python indent kev isort ms python ms vscode pylance ms jupyter ms jupyter keymap ms jupyter renderers ms vscode jupyter cell tags ms vscode jupyter slideshow ms vscode typescript tslint plugin ms autodocstring njp js jsx snippets sky code spell checker str vscode styled components sty intellicode api usage examples vis vscodeintellicode vis gitblame wad jinja who html css class completion zig a b experiments pythontb pythonptprofiler vscorecescf pythondataviewer cmake cppdebug vsclangdf azure dev surveyone vsccc dbltrim noruby fim prod
| 1
|
127,393
| 12,321,882,204
|
IssuesEvent
|
2020-05-13 09:25:47
|
Chocobozzz/PeerTube
|
https://api.github.com/repos/Chocobozzz/PeerTube
|
closed
|
Documentation contains broken links
|
Component: Documentation :books: Type: Bug :bug: good first issue :beginner:
|
<!-- If you have a question, please read the FAQ.md first -->
<!-- If you report a security issue, please refrain from filling an issue and refer to SECURITY.md for the disclosure procedure. -->
<!-- If you report a bug, please fill the form -->
**What happened?**
When clicking some links (e.g. `/support/doc/api`) it leads to a 404 page.
It seems like these were not intended to be actual links.

**What do you expect to happen instead?**
I expect links I click on to lead somewhere (preferably somewhere useful). When I clicked on `/support/doc/api` I expected to be taken to the API support docs.
If these were not intended to be links, then I expect them not to be links.
**Steps to reproduce:**
1. Visit PeerTube docs "Getting Started" page. (https://docs.joinpeertube.org/#/contribute-getting-started?id=write-documentation)
2. Click on a link starting with /
**Additional information**
* Browser name/version: Google Chrome Version 75.0.3770.100 (no extensions)
|
1.0
|
Documentation contains broken links - <!-- If you have a question, please read the FAQ.md first -->
<!-- If you report a security issue, please refrain from filling an issue and refer to SECURITY.md for the disclosure procedure. -->
<!-- If you report a bug, please fill the form -->
**What happened?**
When clicking some links (e.g. `/support/doc/api`) it leads to a 404 page.
It seems like these were not intended to be actual links.

**What do you expect to happen instead?**
I expect links I click on to lead somewhere (preferably somewhere useful). When I clicked on `/support/doc/api` I expected to be taken to the API support docs.
If these were not intended to be links, then I expect them not to be links.
**Steps to reproduce:**
1. Visit PeerTube docs "Getting Started" page. (https://docs.joinpeertube.org/#/contribute-getting-started?id=write-documentation)
2. Click on a link starting with /
**Additional information**
* Browser name/version: Google Chrome Version 75.0.3770.100 (no extensions)
|
non_process
|
documentation contains broken links what happened when clicking some links e g support doc api it leads to a page it seems like these were not intended to be actual links what do you expect to happen instead i expect links i click on to lead somewhere preferably somewhere useful when i clicked on support doc api i expected to be taken to the api support docs if these were not intended to be links then i expect them not to be links steps to reproduce visit peertube docs getting started page click on a link starting with additional information browser name version google chrome version no extensions
| 0
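The class of broken link in the PeerTube report above (root-relative targets that 404 on a docs site serving fragment routes) is easy to detect mechanically. The regex below is a simplification and will not handle every markdown edge case:

```python
# Sketch: find markdown links whose target is root-relative (starts with "/"),
# the pattern that produced dead links like /support/doc/api in the docs.

import re

LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def root_relative_links(markdown):
    return [target for _, target in LINK.findall(markdown)
            if target.startswith("/")]

doc = "See [the API docs](/support/doc/api) and [the FAQ](https://example.org/faq)."
broken = root_relative_links(doc)
```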
|
22,037
| 30,553,933,099
|
IssuesEvent
|
2023-07-20 10:21:11
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
MakeBibliography.pm typo?
|
bug postprocessing bibliography
|
https://github.com/brucemiller/LaTeXML/blob/master/lib/LaTeXML/Post/MakeBibliography.pm, line 785 says
`ltx:title`
while the other bibliographic types use `ltx:bib-title` .
Please check if this is by design or a typo.
|
1.0
|
MakeBibliography.pm typo? - https://github.com/brucemiller/LaTeXML/blob/master/lib/LaTeXML/Post/MakeBibliography.pm, line 785 says
`ltx:title`
while the other bibliographic types use `ltx:bib-title` .
Please check if this is by design or a typo.
|
process
|
makebibliography pm typo line says ltx title while the other bibliographic types use ltx bib title please check if this is by design or a typo
| 1
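The suspected typo above (a lone `ltx:title` among `ltx:bib-*` element names) is the kind of inconsistency a naming-convention check can surface. The entry data below is made up for illustration, not taken from MakeBibliography.pm:

```python
# Sketch: flag element names that deviate from the dominant prefix convention
# among sibling entries -- e.g. a lone ltx:title among ltx:bib-* tags.

from collections import Counter

def odd_ones_out(tags):
    """Return tags whose local-name prefix differs from the majority."""
    prefix = lambda t: t.rsplit(":", 1)[-1].split("-")[0]
    majority, _ = Counter(prefix(t) for t in tags).most_common(1)[0]
    return [t for t in tags if prefix(t) != majority]

tags = ["ltx:bib-title", "ltx:bib-author", "ltx:bib-date", "ltx:title"]
suspects = odd_ones_out(tags)
```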
|
218,083
| 7,330,384,065
|
IssuesEvent
|
2018-03-05 09:45:10
|
NCEAS/metacat
|
https://api.github.com/repos/NCEAS/metacat
|
closed
|
Install of data-registry requires cvs checkout
|
Category: metacat Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Bug
|
---
Author Name: **Saurabh Garg** (Saurabh Garg)
Original Redmine Issue: 1755, https://projects.ecoinformatics.org/ecoinfo/issues/1755
Original Date: 2004-11-03
Original Assignee: Saurabh Garg
---
Install of data-registry does a cvs checkout from cvs.nceas.ucsb.edu. For
someone outside NCEAS, this requires getting a cvs username info. As a
standalone install, these files should be part of the metacat install.
(Chin from Taiwan faced this problem)
|
1.0
|
Install of data-registry requires cvs checkout - ---
Author Name: **Saurabh Garg** (Saurabh Garg)
Original Redmine Issue: 1755, https://projects.ecoinformatics.org/ecoinfo/issues/1755
Original Date: 2004-11-03
Original Assignee: Saurabh Garg
---
Install of data-registry does a cvs checkout from cvs.nceas.ucsb.edu. For
someone outside NCEAS, this requires getting a cvs username info. As a
standalone install, these files should be part of the metacat install.
(Chin from Taiwan faced this problem)
|
non_process
|
install of data registry requires cvs checkout author name saurabh garg saurabh garg original redmine issue original date original assignee saurabh garg install of data registry does a cvs checkout from cvs nceas ucsb edu for someone outside nceas this requires getting a cvs username info as a standalone install these files should be part of the metacat install chin from taiwan faced this problem
| 0
|
336,776
| 30,221,140,980
|
IssuesEvent
|
2023-07-05 19:30:37
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
ccl/changefeedccl: TestWebhookSink failed
|
C-test-failure O-robot A-cdc T-cdc branch-release-23.1
|
ccl/changefeedccl.TestWebhookSink [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9599673?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9599673?buildTab=artifacts#/) on release-23.1 @ [ad16885ca3b4567ed5eb34646fe8281fd2d740e3](https://github.com/cockroachdb/cockroach/commits/ad16885ca3b4567ed5eb34646fe8281fd2d740e3):
```
=== RUN TestWebhookSink
I230414 07:04:25.252910 1115825 (gostd) server.go:3230 [-] 5885 http: TLS handshake error from 127.0.0.1:46648: remote error: tls: bad certificate
I230414 07:04:25.269727 1115938 (gostd) server.go:3230 [-] 5886 http: TLS handshake error from 127.0.0.1:46654: remote error: tls: bad certificate
I230414 07:04:25.291354 1115760 (gostd) server.go:3230 [-] 5887 http: TLS handshake error from 127.0.0.1:46670: remote error: tls: bad certificate
I230414 07:04:25.326171 1115680 (gostd) server.go:3230 [-] 5888 http: TLS handshake error from 127.0.0.1:46674: remote error: tls: bad certificate
I230414 07:04:32.061236 1115961 (gostd) server.go:3230 [-] 5889 http: TLS handshake error from 127.0.0.1:41586: remote error: tls: bad certificate
I230414 07:04:32.077404 1116004 (gostd) server.go:3230 [-] 5890 http: TLS handshake error from 127.0.0.1:41600: remote error: tls: bad certificate
I230414 07:04:32.098742 1115962 (gostd) server.go:3230 [-] 5891 http: TLS handshake error from 127.0.0.1:41604: remote error: tls: bad certificate
I230414 07:04:32.130158 1115992 (gostd) server.go:3230 [-] 5892 http: TLS handshake error from 127.0.0.1:41618: remote error: tls: bad certificate
I230414 07:04:45.014515 1116027 (gostd) server.go:3230 [-] 5893 http: TLS handshake error from 127.0.0.1:59414: remote error: tls: bad certificate
I230414 07:04:45.029888 1116055 (gostd) server.go:3230 [-] 5894 http: TLS handshake error from 127.0.0.1:59418: remote error: tls: bad certificate
I230414 07:04:45.051987 1116028 (gostd) server.go:3230 [-] 5895 http: TLS handshake error from 127.0.0.1:59428: remote error: tls: bad certificate
I230414 07:04:45.081834 1116084 (gostd) server.go:3230 [-] 5896 http: TLS handshake error from 127.0.0.1:59440: remote error: tls: bad certificate
I230414 07:04:50.700523 1116125 (gostd) server.go:3230 [-] 5897 http: TLS handshake error from 127.0.0.1:45046: remote error: tls: bad certificate
I230414 07:04:50.718159 1116143 (gostd) server.go:3230 [-] 5898 http: TLS handshake error from 127.0.0.1:45056: remote error: tls: bad certificate
I230414 07:04:50.740669 1116127 (gostd) server.go:3230 [-] 5899 http: TLS handshake error from 127.0.0.1:45068: remote error: tls: bad certificate
I230414 07:04:50.773563 1116078 (gostd) server.go:3230 [-] 5900 http: TLS handshake error from 127.0.0.1:45078: remote error: tls: bad certificate
sink_webhook_test.go:108:
Error Trace: github.com/cockroachdb/cockroach/pkg/ccl/changefeedccl/sink_webhook_test.go:108
github.com/cockroachdb/cockroach/pkg/ccl/changefeedccl/sink_webhook_test.go:164
github.com/cockroachdb/cockroach/pkg/ccl/changefeedccl/sink_webhook_test.go:252
Error: Not equal:
expected: int(0)
actual : int64(1)
Test: TestWebhookSink
--- FAIL: TestWebhookSink (28.63s)
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestWebhookSink.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-26988
|
1.0
|
ccl/changefeedccl: TestWebhookSink failed - ccl/changefeedccl.TestWebhookSink [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9599673?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9599673?buildTab=artifacts#/) on release-23.1 @ [ad16885ca3b4567ed5eb34646fe8281fd2d740e3](https://github.com/cockroachdb/cockroach/commits/ad16885ca3b4567ed5eb34646fe8281fd2d740e3):
```
=== RUN TestWebhookSink
I230414 07:04:25.252910 1115825 (gostd) server.go:3230 [-] 5885 http: TLS handshake error from 127.0.0.1:46648: remote error: tls: bad certificate
I230414 07:04:25.269727 1115938 (gostd) server.go:3230 [-] 5886 http: TLS handshake error from 127.0.0.1:46654: remote error: tls: bad certificate
I230414 07:04:25.291354 1115760 (gostd) server.go:3230 [-] 5887 http: TLS handshake error from 127.0.0.1:46670: remote error: tls: bad certificate
I230414 07:04:25.326171 1115680 (gostd) server.go:3230 [-] 5888 http: TLS handshake error from 127.0.0.1:46674: remote error: tls: bad certificate
I230414 07:04:32.061236 1115961 (gostd) server.go:3230 [-] 5889 http: TLS handshake error from 127.0.0.1:41586: remote error: tls: bad certificate
I230414 07:04:32.077404 1116004 (gostd) server.go:3230 [-] 5890 http: TLS handshake error from 127.0.0.1:41600: remote error: tls: bad certificate
I230414 07:04:32.098742 1115962 (gostd) server.go:3230 [-] 5891 http: TLS handshake error from 127.0.0.1:41604: remote error: tls: bad certificate
I230414 07:04:32.130158 1115992 (gostd) server.go:3230 [-] 5892 http: TLS handshake error from 127.0.0.1:41618: remote error: tls: bad certificate
I230414 07:04:45.014515 1116027 (gostd) server.go:3230 [-] 5893 http: TLS handshake error from 127.0.0.1:59414: remote error: tls: bad certificate
I230414 07:04:45.029888 1116055 (gostd) server.go:3230 [-] 5894 http: TLS handshake error from 127.0.0.1:59418: remote error: tls: bad certificate
I230414 07:04:45.051987 1116028 (gostd) server.go:3230 [-] 5895 http: TLS handshake error from 127.0.0.1:59428: remote error: tls: bad certificate
I230414 07:04:45.081834 1116084 (gostd) server.go:3230 [-] 5896 http: TLS handshake error from 127.0.0.1:59440: remote error: tls: bad certificate
I230414 07:04:50.700523 1116125 (gostd) server.go:3230 [-] 5897 http: TLS handshake error from 127.0.0.1:45046: remote error: tls: bad certificate
I230414 07:04:50.718159 1116143 (gostd) server.go:3230 [-] 5898 http: TLS handshake error from 127.0.0.1:45056: remote error: tls: bad certificate
I230414 07:04:50.740669 1116127 (gostd) server.go:3230 [-] 5899 http: TLS handshake error from 127.0.0.1:45068: remote error: tls: bad certificate
I230414 07:04:50.773563 1116078 (gostd) server.go:3230 [-] 5900 http: TLS handshake error from 127.0.0.1:45078: remote error: tls: bad certificate
sink_webhook_test.go:108:
Error Trace: github.com/cockroachdb/cockroach/pkg/ccl/changefeedccl/sink_webhook_test.go:108
github.com/cockroachdb/cockroach/pkg/ccl/changefeedccl/sink_webhook_test.go:164
github.com/cockroachdb/cockroach/pkg/ccl/changefeedccl/sink_webhook_test.go:252
Error: Not equal:
expected: int(0)
actual : int64(1)
Test: TestWebhookSink
--- FAIL: TestWebhookSink (28.63s)
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestWebhookSink.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-26988
|
non_process
|
ccl changefeedccl testwebhooksink failed ccl changefeedccl testwebhooksink with on release run testwebhooksink gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate gostd server go http tls handshake error from remote error tls bad certificate sink webhook test go error trace github com cockroachdb cockroach pkg ccl changefeedccl sink webhook test go github com cockroachdb cockroach pkg ccl changefeedccl sink webhook test go github com cockroachdb cockroach pkg ccl changefeedccl sink webhook test go error not equal expected int actual test testwebhooksink fail testwebhooksink parameters tags bazel gss race help see also cc cockroachdb cdc jira issue crdb
| 0
|
21,064
| 28,012,482,563
|
IssuesEvent
|
2023-03-27 19:44:12
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
PowerPC comparisons returned or stored in variables decompile poorly
|
Type: Bug Feature: Processor/PowerPC
|
**Describe the bug**
PowerPC comparisons returned or stored in variables decompile poorly, as they use `countLeadingZeros(v) >> 5` as a way to check if `v` is 0 (as if so, `v` has 32 bits set to 0, and `32 >> 5 == 1`).
**To Reproduce**
Decompile the attached binaries, using `PowerPC:BE:32:default:default` as the language. For instance, `equals3_32` in `cntlz_O1.o` looks like this:
```C
uint equals3_32(uint param_1)
{
uint uVar1;
uVar1 = countLeadingZeros(param_1 ^ 3);
return uVar1 >> 5;
}
```
The 64-bit version looks sillier, producing this (note that I had to manually set the data type and set custom storage to r3 and r4 for this; by default ghidra detects it as 2 `uint` parameters and with just `longlong` it tries to place it on the stack):
```C
uint equals3_64(longlong param_1)
{
uint uVar1;
uVar1 = countLeadingZeros((uint)((ulonglong)param_1 >> 0x20) | (uint)param_1 ^ 3);
return uVar1 >> 5;
}
```
The unoptimized versions have various sign-extensions and masks that don't show up when optimisations are turned on:
```C
uint equals3_8(char param_1)
{
uint uVar1;
uVar1 = countLeadingZeros((int)param_1 ^ 3);
return uVar1 >> 5 & 0xff;
}
```
**Expected behavior**
The decompiled code should look closer to the original code, ideally generating `param_1 == 3` in all cases (but something like `param_1 ^ 3 == 0` would still be a major improvement).
**Attachments**
[cntlz.zip](https://github.com/NationalSecurityAgency/ghidra/files/6055991/cntlz.zip) — contains `.o` files at `-O0` through `-O3` with and without `-g` (though `-O1`, `-O2`, and `-O3` are all the same without `-g`) compiled using `powerpc-eabi-gcc` provided via [devkitPPC](https://wiibrew.org/wiki/DevkitPPC). (I release this code under CC0.)
```C
#include <stdbool.h>
#include <stdint.h>
bool equals3_8 (int8_t param) { return param == 3; }
bool equals3_16(int16_t param) { return param == 3; }
bool equals3_32(int32_t param) { return param == 3; }
bool equals3_64(int64_t param) { return param == 3; }
```
**Environment (please complete the following information):**
- OS: Windows 10, insider build 19042
- Java Version: 11.0.3
- Ghidra Version: 9.2.2
- Ghidra Origin: official ghidra-sre.org distro
**Additional context**
I have also seen `variable - 3` instead of `variable ^ 3`, but this doesn't seem to happen with GCC.
This relates to #2121 (though I don't think adding a PcodeOp for `countLeadingZeros` would fix it on its own, nor am I sure that a fix would actually require adding a PcodeOp).
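The `countLeadingZeros(v) >> 5` idiom this report describes can be verified directly. The following is a plain-Python model of the 32-bit instruction, not Ghidra or PowerPC code:

```python
# Sketch: PowerPC's cntlzw counts leading zero bits of a 32-bit value; only
# v == 0 yields 32, and 32 >> 5 == 1, so (cntlzw(v) >> 5) is 1 exactly when
# v is 0. A comparison param == 3 therefore compiles to cntlzw(param ^ 3) >> 5.

def cntlzw(v):
    """Count leading zeros of a 32-bit value (model of the PPC instruction)."""
    v &= 0xFFFFFFFF
    for i in range(32):
        if v & (0x80000000 >> i):
            return i
    return 32

def equals3(param):
    return cntlzw(param ^ 3) >> 5

hit = equals3(3)    # param == 3, so param ^ 3 == 0
miss = equals3(7)   # param != 3
```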
|
1.0
|
PowerPC comparisons returned or stored in variables decompile poorly - **Describe the bug**
PowerPC comparisons returned or stored in variables decompile poorly, as they use `countLeadingZeros(v) >> 5` as a way to check if `v` is 0 (as if so, `v` has 32 bits set to 0, and `32 >> 5 == 1`).
**To Reproduce**
Decompile the attached binaries, using `PowerPC:BE:32:default:default` as the language. For instance, `equals3_32` in `cntlz_O1.o` looks like this:
```C
uint equals3_32(uint param_1)
{
uint uVar1;
uVar1 = countLeadingZeros(param_1 ^ 3);
return uVar1 >> 5;
}
```
The 64-bit version looks sillier, producing this (note that I had to manually set the data type and set custom storage to r3 and r4 for this; by default ghidra detects it as 2 `uint` parameters and with just `longlong` it tries to place it on the stack):
```C
uint equals3_64(longlong param_1)
{
uint uVar1;
uVar1 = countLeadingZeros((uint)((ulonglong)param_1 >> 0x20) | (uint)param_1 ^ 3);
return uVar1 >> 5;
}
```
The unoptimized versions have various sign-extensions and masks that don't show up when optimisations are turned on:
```C
uint equals3_8(char param_1)
{
uint uVar1;
uVar1 = countLeadingZeros((int)param_1 ^ 3);
return uVar1 >> 5 & 0xff;
}
```
**Expected behavior**
The decompiled code should look closer to the original code, ideally generating `param_1 == 3` in all cases (but something like `param_1 ^ 3 == 0` would still be a major improvement).
**Attachments**
[cntlz.zip](https://github.com/NationalSecurityAgency/ghidra/files/6055991/cntlz.zip) — contains `.o` files at `-O0` through `-O3` with and without `-g` (though `-O1`, `-O2`, and `-O3` are all the same without `-g`) compiled using `powerpc-eabi-gcc` provided via [devkitPPC](https://wiibrew.org/wiki/DevkitPPC). (I release this code under CC0.)
```C
#include <stdbool.h>
#include <stdint.h>
bool equals3_8 (int8_t param) { return param == 3; }
bool equals3_16(int16_t param) { return param == 3; }
bool equals3_32(int32_t param) { return param == 3; }
bool equals3_64(int64_t param) { return param == 3; }
```
**Environment (please complete the following information):**
- OS: Windows 10, insider build 19042
- Java Version: 11.0.3
- Ghidra Version: 9.2.2
- Ghidra Origin: official ghidra-sre.org distro
**Additional context**
I have also seen `variable - 3` instead of `variable ^ 3`, but this doesn't seem to happen with GCC.
This relates to #2121 (though I don't think adding a PcodeOp for `countLeadingZeros` would fix it on its own, nor am I sure that a fix would actually require adding a PcodeOp).
|
process
|
powerpc comparisons returned or stored in variables decompile poorly describe the bug powerpc comparisons returned or stored in variables decompile poorly as they use countleadingzeros v as a way to check if v is as if so v has bits set to and to reproduce decompile the attached binaries using powerpc be default default as the language for instance in cntlz o looks like this c uint uint param uint countleadingzeros param return the bit version looks sillier producing this note that i had to manually set the data type and set custom storage to and for this by default ghidra detects it as uint parameters and with just longlong it tries to place it on the stack c uint longlong param uint countleadingzeros uint ulonglong param uint param return the unoptimized versions have various sign extensions and masks that don t show up when optimisations are turned on c uint char param uint countleadingzeros int param return expected behavior the decompiled code should look closer to the original code ideally generating param in all cases but something like param would still be a major improvement attachments mdash contains o files at through with and without g though and are all the same without g compiled using powerpc eabi gcc provided via i release this code under c include include bool t param return param bool t param return param bool t param return param bool t param return param environment please complete the following information os windows insider build java version ghidra version ghidra origin official ghidra sre org distro additional context i have also seen variable instead of variable but this doesn t seem to happen with gcc this relates to though i don t think adding a pcodeop for countleadingzeros would fix it on its own nor am i sure that a fix would actually require adding a pcodeop
| 1
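The `countLeadingZeros(v) >> 5` idiom described in this record is easy to verify in host C. A minimal sketch, assuming a portable `clz32` helper that, like PowerPC's `cntlzw` (and unlike GCC's `__builtin_clz`), is defined for a zero input:

```c
#include <assert.h>
#include <stdint.h>

/* Portable count-leading-zeros for 32-bit values. PowerPC's cntlzw
 * returns 32 for an input of 0, so this helper does too (GCC's
 * __builtin_clz is undefined for 0). */
static unsigned clz32(uint32_t v) {
    unsigned n = 0;
    for (uint32_t mask = 0x80000000u; mask != 0 && (v & mask) == 0; mask >>= 1)
        n++;
    return n;
}

/* The compiler's trick: v == 0 exactly when all 32 bits are clear,
 * i.e. clz32(v) == 32, and 32 >> 5 == 1. */
static unsigned is_zero(uint32_t v) { return clz32(v) >> 5; }

/* So `param == 3` is emitted as clz32(param ^ 3) >> 5. */
static unsigned equals3(uint32_t param) { return is_zero(param ^ 3u); }
```

Because `clz32(v)` is 32 exactly when `v == 0`, `equals3(x)` agrees with `x == 3` for every input, which is why recovering the comparison in the decompiler is a pure pattern match.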
|
17,425
| 23,246,365,495
|
IssuesEvent
|
2022-08-03 20:36:25
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] breitbart.com
|
whitelisting process
|
**Domains or links**
www.breitbart.com
**More Information**
How did you discover your web site or domain was listed here?
didn't load in browser
**Have you requested removal from other sources?**
no, no other blocklists I use contain this domain.
**Additional context**
This is a news website. Even though its political stance is not mine, I don't think that blocklists should be motivated by political arguments.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
1.0
|
[FALSE-POSITIVE?] breitbart.com - **Domains or links**
www.breitbart.com
**More Information**
How did you discover your web site or domain was listed here?
didn't load in browser
**Have you requested removal from other sources?**
no, no other blocklists I use contain this domain.
**Additional context**
This is a news website. Even though its political stance is not mine, I don't think that blocklists should be motivated by political arguments.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
process
|
breitbart com domains or links more information how did you discover your web site or domain was listed here didn t load in browser have you requested removal from other sources no no other blocklists i use contain this domain additional context this is a news website even though its political stance is not mine i don t think that blocklists should be motivated by political arguments exclamation we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process
| 1
|
245,371
| 7,885,550,695
|
IssuesEvent
|
2018-06-27 12:49:47
|
linterhub/schema
|
https://api.github.com/repos/linterhub/schema
|
closed
|
Support multiple urls
|
Priority: Medium Status: In Progress Type: Feature
|
There are a lot of cases when packages describe more than one url: homepage, issues, repository, etc. Add the ability to specify **one or more** urls for `package` (backward compatible) and its type. Default is `homepage`.
|
1.0
|
Support multiple urls - There are a lot of cases when packages describe more than one url: homepage, issues, repository, etc. Add the ability to specify **one or more** urls for `package` (backward compatible) and its type. Default is `homepage`.
|
non_process
|
support multiple urls there are a lot of cases when packages described more than one url homepage issues repository etc add ability to specify one or more urls for package backward compatible and it s type default is homepage
| 0
|
66,678
| 16,674,184,975
|
IssuesEvent
|
2021-06-07 14:23:31
|
adventuregamestudio/ags
|
https://api.github.com/repos/adventuregamestudio/ags
|
closed
|
Tool: translation compiler
|
ags3 context: game building type: enhancement
|
As a step of decoupling game compilation procedure from the Editor, we need a stand-alone tool that is run from the command-line, parses a translation source (TRS) file and writes a compiled translation (TRA) file in binary format.
It should be written along the same lines as the existing tools, in C++ (#1262, #1264, #1269).
Name suggestion: `trac` (TRAnslation Compiler).
_NOTE:_ this task consists exclusively of writing this tool; adjusting the Editor is a separate task, so no need to concern yourself with that if you are doing this. In fact it's best to assume that the Editor will not be present, to ensure the tool's output has no reliance on it.
**Input:**
* a TRS file (AGS translation source);
* path to output file;
**Output**
* a TRA file (AGS compiled translation)
---
### Details
The algorithm is relatively simple, most of TRS is just pairs of text lines that form a key/value map, with a few exceptions (comments, options).
Current implementation of reading and writing TRS in the Editor may be found here:
https://github.com/adventuregamestudio/ags/blob/master/Editor/AGS.Types/Translation.cs
Implementation of the TRA writer in the Editor may be found here: https://github.com/adventuregamestudio/ags/blob/master/Editor/AGS.Editor/Components/TranslationsComponent.cs#L92
User documentation of TRS format:
https://github.com/adventuregamestudio/ags-manual/wiki/Translations
The existing Tool code in the current master branch:
https://github.com/adventuregamestudio/ags/tree/master/Tools
MSVS solution for standalone tools:
https://github.com/adventuregamestudio/ags/blob/master/Solutions/Tools.sln
|
1.0
|
Tool: translation compiler - As a step of decoupling game compilation procedure from the Editor, we need a stand-alone tool that is run from the command-line, parses a translation source (TRS) file and writes a compiled translation (TRA) file in binary format.
It should be written along the same lines as the existing tools, in C++ (#1262, #1264, #1269).
Name suggestion: `trac` (TRAnslation Compiler).
_NOTE:_ this task consists exclusively of writing this tool; adjusting the Editor is a separate task, so no need to concern yourself with that if you are doing this. In fact it's best to assume that the Editor will not be present, to ensure the tool's output has no reliance on it.
**Input:**
* a TRS file (AGS translation source);
* path to output file;
**Output**
* a TRA file (AGS compiled translation)
---
### Details
The algorithm is relatively simple, most of TRS is just pairs of text lines that form a key/value map, with a few exceptions (comments, options).
Current implementation of reading and writing TRS in the Editor may be found here:
https://github.com/adventuregamestudio/ags/blob/master/Editor/AGS.Types/Translation.cs
Implementation of the TRA writer in the Editor may be found here: https://github.com/adventuregamestudio/ags/blob/master/Editor/AGS.Editor/Components/TranslationsComponent.cs#L92
User documentation of TRS format:
https://github.com/adventuregamestudio/ags-manual/wiki/Translations
The existing Tool code in the current master branch:
https://github.com/adventuregamestudio/ags/tree/master/Tools
MSVS solution for standalone tools:
https://github.com/adventuregamestudio/ags/blob/master/Solutions/Tools.sln
|
non_process
|
tool translation compiler as a step of decoupling game compilation procedure from the editor we need a stand alone tool that is run from the command line parses a translation source trs file and writes a compiled translation tra file in binary format should be written in the similar line with the existing tools in c name suggestion trac translation compiler note this task is exclusively in writing this tool adjusting editor is a separate task so no need to concern yourself with that if you are doing this in fact it s best to assume that editor will not be present to ensure tool s work result has no reliance on it input a trs file ags translation source path to output file output a tra file ags compiled translation details the algorithm is relatively simple most of trs is just pairs of text lines that form a key value map with a few exceptions comments options current implementation of reading and writing trs in the editor may be found here implementation of the tra writer in the editor may be found here user documentation of trs format the existing tool code in the current master branch msvs solution for standalone tools
| 0
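At its core, the TRS reader described above is a loop over pairs of text lines. A minimal, hypothetical C sketch covering only the basics (alternating source/translation lines, with `//`-prefixed comment/option lines skipped; the real format handled by the linked `Translation.cs` has more options):

```c
#include <assert.h>
#include <string.h>

#define TRA_MAX_LINE 1024

typedef struct {
    char key[TRA_MAX_LINE];
    char value[TRA_MAX_LINE];
} TraPair;

/* Parse TRS-style text: non-comment lines alternate between the source
 * string (key) and its translation (value). Lines starting with "//"
 * are comments or options and are skipped. Returns the number of pairs
 * written to `out`. Hypothetical minimal reader, not the full format. */
static int parse_trs(const char *text, TraPair *out, int max_pairs) {
    int count = 0;
    int have_key = 0;
    char line[TRA_MAX_LINE];

    const char *p = text;
    while (*p != '\0' && count < max_pairs) {
        size_t n = strcspn(p, "\n");
        if (n >= TRA_MAX_LINE) n = TRA_MAX_LINE - 1;  /* truncate long lines */
        memcpy(line, p, n);
        line[n] = '\0';
        p += n;
        if (*p == '\n') p++;

        if (strncmp(line, "//", 2) == 0) continue;  /* comment / option */

        if (!have_key) {
            strcpy(out[count].key, line);
            have_key = 1;
        } else {
            strcpy(out[count].value, line);
            count++;
            have_key = 0;
        }
    }
    return count;
}
```

A stand-alone `trac` would then serialize the resulting key/value map into the binary TRA layout that the engine expects.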
|
67,262
| 14,860,779,924
|
IssuesEvent
|
2021-01-18 21:15:30
|
kadirselcuk/electron-webpack-quick-start
|
https://api.github.com/repos/kadirselcuk/electron-webpack-quick-start
|
opened
|
CVE-2020-4075 (High) detected in electron-8.2.0.tgz
|
security vulnerability
|
## CVE-2020-4075 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>electron-8.2.0.tgz</b></p></summary>
<p>Build cross platform desktop apps with JavaScript, HTML, and CSS</p>
<p>Library home page: <a href="https://registry.npmjs.org/electron/-/electron-8.2.0.tgz">https://registry.npmjs.org/electron/-/electron-8.2.0.tgz</a></p>
<p>Path to dependency file: electron-webpack-quick-start/package.json</p>
<p>Path to vulnerable library: electron-webpack-quick-start/node_modules/electron/package.json</p>
<p>
Dependency Hierarchy:
- :x: **electron-8.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kadirselcuk/electron-webpack-quick-start/commit/5304aff4a959f5816e06796abb476c3823cd141c">5304aff4a959f5816e06796abb476c3823cd141c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Electron before versions 7.2.4, 8.2.4, and 9.0.0-beta21, arbitrary local file read is possible by defining unsafe window options on a child window opened via window.open. As a workaround, ensure you are calling `event.preventDefault()` on all new-window events where the `url` or `options` is not something you expect. This is fixed in versions 9.0.0-beta.21, 8.2.4 and 7.2.4.
<p>Publish Date: 2020-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-4075>CVE-2020-4075</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/electron/electron/security/advisories/GHSA-f9mq-jph6-9mhm">https://github.com/electron/electron/security/advisories/GHSA-f9mq-jph6-9mhm</a></p>
<p>Release Date: 2020-07-07</p>
<p>Fix Resolution: 7.2.4,8.2.4,9.0.0-beta.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-4075 (High) detected in electron-8.2.0.tgz - ## CVE-2020-4075 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>electron-8.2.0.tgz</b></p></summary>
<p>Build cross platform desktop apps with JavaScript, HTML, and CSS</p>
<p>Library home page: <a href="https://registry.npmjs.org/electron/-/electron-8.2.0.tgz">https://registry.npmjs.org/electron/-/electron-8.2.0.tgz</a></p>
<p>Path to dependency file: electron-webpack-quick-start/package.json</p>
<p>Path to vulnerable library: electron-webpack-quick-start/node_modules/electron/package.json</p>
<p>
Dependency Hierarchy:
- :x: **electron-8.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kadirselcuk/electron-webpack-quick-start/commit/5304aff4a959f5816e06796abb476c3823cd141c">5304aff4a959f5816e06796abb476c3823cd141c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Electron before versions 7.2.4, 8.2.4, and 9.0.0-beta21, arbitrary local file read is possible by defining unsafe window options on a child window opened via window.open. As a workaround, ensure you are calling `event.preventDefault()` on all new-window events where the `url` or `options` is not something you expect. This is fixed in versions 9.0.0-beta.21, 8.2.4 and 7.2.4.
<p>Publish Date: 2020-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-4075>CVE-2020-4075</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/electron/electron/security/advisories/GHSA-f9mq-jph6-9mhm">https://github.com/electron/electron/security/advisories/GHSA-f9mq-jph6-9mhm</a></p>
<p>Release Date: 2020-07-07</p>
<p>Fix Resolution: 7.2.4,8.2.4,9.0.0-beta.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in electron tgz cve high severity vulnerability vulnerable library electron tgz build cross platform desktop apps with javascript html and css library home page a href path to dependency file electron webpack quick start package json path to vulnerable library electron webpack quick start node modules electron package json dependency hierarchy x electron tgz vulnerable library found in head commit a href found in base branch master vulnerability details in electron before versions and arbitrary local file read is possible by defining unsafe window options on a child window opened via window open as a workaround ensure you are calling event preventdefault on all new window events where the url or options is not something you expect this is fixed in versions beta and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution beta step up your open source security game with whitesource
| 0
|
5,737
| 8,391,993,949
|
IssuesEvent
|
2018-10-09 16:21:10
|
bull313/Musika
|
https://api.github.com/repos/bull313/Musika
|
opened
|
Requirement 8: Music Dynamics
|
compiler requirement
|
The Musika language shall support different dynamic levels (at least the basics: ppp, pp, p, mp, mf, f, ff, fff).
|
1.0
|
Requirement 8: Music Dynamics - The Musika language shall support different dynamic levels (at least the basics: ppp, pp, p, mp, mf, f, ff, fff).
|
non_process
|
requirement music dynamics the musika language shall support different dynamic levels at least the basics ppp pp p mp mf f ff fff
| 0
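Supporting the eight basic marks usually means mapping them to loudness values during code generation. A hypothetical C sketch using MIDI-style velocities (the table values are illustrative assumptions, not Musika's actual mapping):

```c
#include <assert.h>

/* The eight basic dynamic marks, quietest to loudest. */
typedef enum { PPP, PP, P, MP, MF, F, FF, FFF, DYNAMIC_COUNT } Dynamic;

/* Illustrative MIDI velocities (0-127), roughly evenly spread;
 * a real compiler would pick its own loudness curve. */
static int velocity_for(Dynamic d) {
    static const int table[DYNAMIC_COUNT] = {16, 32, 48, 64, 80, 96, 112, 127};
    return table[d];
}
```

Keeping the marks in an ordered enum makes the "at least the basics" requirement cheap to extend later (e.g. inserting intermediate levels).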
|
16,637
| 21,707,261,437
|
IssuesEvent
|
2022-05-10 10:45:59
|
sjmog/smartflix
|
https://api.github.com/repos/sjmog/smartflix
|
opened
|
Enriching show data synchronously
|
04-background-processing Ruby/HTTP Ruby/JSON
|
In the previous ticket, we started enriching shows with data from the OMDb API. The enriched data is fetched when the user views a single show.
But right now, we're just dumping this rich data into the view as JSON. Let's store some of this useful data in the database and display it in a more useful way.
In this ticket, you'll update show fields dynamically based on the API response. Then, you'll render the enriched show to the view, displaying selected fields in a neater way.
```
As a user,
So I can judge if a show is worth watching,
I want to see the show's critical rating.
```
## To complete this challenge, you will have to:
- [ ] Write a test that expects to see a show's critical rating when viewing a single show.
- [ ] Pass the test by adding a new database field and updating it from the API whenever a show is viewed.
## Tips
- Make sure the show is updated before rendering the view!
- You could add additional attributes to the show model, such as `year`, `released`, `runtime`, `plot`, `poster` and `imdbRating`. Remember to use proper data types!
- `poster` is particularly fun to add, as movie posters can make the homepage much more exciting. However, you need to be a Patron of OMDb to fetch poster data.
|
1.0
|
Enriching show data synchronously - In the previous ticket, we started enriching shows with data from the OMDb API. The enriched data is fetched when the user views a single show.
But right now, we're just dumping this rich data into the view as JSON. Let's store some of this useful data in the database and display it in a more useful way.
In this ticket, you'll update show fields dynamically based on the API response. Then, you'll render the enriched show to the view, displaying selected fields in a neater way.
```
As a user,
So I can judge if a show is worth watching,
I want to see the show's critical rating.
```
## To complete this challenge, you will have to:
- [ ] Write a test that expects to see a show's critical rating when viewing a single show.
- [ ] Pass the test by adding a new database field and updating it from the API whenever a show is viewed.
## Tips
- Make sure the show is updated before rendering the view!
- You could add additional attributes to the show model, such as `year`, `released`, `runtime`, `plot`, `poster` and `imdbRating`. Remember to use proper data types!
- `poster` is particularly fun to add, as movie posters can make the homepage much more exciting. However, you need to be a Patron of OMDb to fetch poster data.
|
process
|
enriching show data synchronously in the previous ticket we started enriching shows with data from the omdb api the enriched data is fetched when the user views a single show but right now we re just dumping this rich data into the view as json let s store some of this useful data in the database and display it in a more useful way in this ticket you ll update show fields dynamically based on the api response then you ll render the enriched show to the view displaying selected fields in a neater way as a user so i can judge if a show is worth watching i want to see the show s critical rating to complete this challenge you will have to write a test that expects to see a show s critical rating when viewing a single show pass the test by adding a new database field and updating it from the api whenever a show is viewed tips make sure the show is updated before rendering the view you could add additional attributes to the show model such as year released runtime plot poster and imdbrating remember to use proper data types poster is particularly fun to add as movie posters can make the homepage much more exciting however you need to be a patron of omdb to fetch poster data
| 1
|
7,846
| 11,015,563,872
|
IssuesEvent
|
2019-12-05 02:04:52
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
Run all tests not just a subset
|
testing type: process
|
I noticed on a few of my PRs, it was only running a subset of the tests and not all of them.
While I've run them all locally, there could be something I have set up correctly here but not for everyone else, which would be good to catch.
|
1.0
|
Run all tests not just a subset - I noticed on a few of my PRs, it was only running a subset of the tests and not all of them.
While I've run them all locally, there could be something I have set up correctly here but not for everyone else, which would be good to catch.
|
process
|
run all tests not just a subset i noticed on a few of my prs it was only running a subset of the tests and not all of them while i ve run them all locally there could be something i have set correctly here but not for everyone else that would be good to catch
| 1
|
626
| 3,091,957,920
|
IssuesEvent
|
2015-08-26 15:32:21
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
opened
|
Ternopil Oblast State Administration (ODA) - Provision of urban planning conditions and restrictions for the development of a land plot
|
in process of creating
|
Description:
https://drive.google.com/file/d/0B-TXzbaEvbw9dWFIbnVBb0ZYblU/view?usp=sharing
Customer: Iryna Kelner
|
1.0
|
Ternopil Oblast State Administration (ODA) - Provision of urban planning conditions and restrictions for the development of a land plot - Description:
https://drive.google.com/file/d/0B-TXzbaEvbw9dWFIbnVBb0ZYblU/view?usp=sharing
Customer: Iryna Kelner
|
process
|
тернополь ода надання містобудівних умов та обмежень забудови земельної ділянки описание заказчик ирина кельнер
| 1
|
20,088
| 26,599,153,278
|
IssuesEvent
|
2023-01-23 14:37:59
|
NixOS/nix
|
https://api.github.com/repos/NixOS/nix
|
opened
|
Release notes automation
|
developer-experience process
|
**Is your feature request related to a problem? Please describe.**
Goals
- Automate more of the release process
- Avoid merge conflicts in the release notes
- Improve the release notes of patch releases
By automating more of the release process, we move towards a situation where a release can be triggered by any maintainer, freeing up Eelco's time to work on important things.
Release notes automation can merge multiple files into a single release notes file, thereby avoiding the ubiquitous conflicts that would otherwise disincentivize the writing of release notes in the PR that introduces the change. Arguably the PR should document itself, and the pull request template already asks for this.
Currently, we don't document what's in our patch releases, but this is quite relevant to users. I suppose doing this wasn't considered worthwhile, but it would help users quite a bit; especially those experiencing the bugs or doing the packaging in Nixpkgs.
**Describe the solution you'd like**
- contributors add a new file to a specific directory in each pr
- at release time,
- for a patch release, the release script appends the files in that directory to the existing rl-${version} file
- for a non-patch release, the release script creates a new file with all the items from those files
- after the concatenation / write step, it removes the original files, as their contents have been processed. This is helpful because it makes the patch release automation work.
**Describe alternatives you've considered**
Keep burdening Eelco (minor releases) and users (patch releases) with the task of digging through commit logs.
We might want to do a final review of release notes, but moving this work forward as suggested above should be more productive anyway.
**Additional context**
Add any other context or screenshots about the feature request here.
**Priorities**
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
|
1.0
|
Release notes automation - **Is your feature request related to a problem? Please describe.**
Goals
- Automate more of the release process
- Avoid merge conflicts in the release notes
- Improve the release notes of patch releases
By automating more of the release process, we move towards a situation where a release can be triggered by any maintainer, freeing up Eelco's time to work on important things.
Release notes automation can merge multiple files into a single release notes file, thereby avoiding the ubiquitous conflicts that would otherwise disincentivize the writing of release notes in the PR that introduces the change. Arguably the PR should document itself, and the pull request template already asks for this.
Currently, we don't document what's in our patch releases, but this is quite relevant to users. I suppose doing this wasn't considered worthwhile, but it would help users quite a bit; especially those experiencing the bugs or doing the packaging in Nixpkgs.
**Describe the solution you'd like**
- contributors add a new file to a specific directory in each pr
- at release time,
- for a patch release, the release script appends the files in that directory to the existing rl-${version} file
- for a non-patch release, the release script creates a new file with all the items from those files
- after the concatenation / write step, it removes the original files, as their contents have been processed. This is helpful because it makes the patch release automation work.
**Describe alternatives you've considered**
Keep burdening Eelco (minor releases) and users (patch releases) with the task of digging through commit logs.
We might want to do a final review of release notes, but moving this work forward as suggested above should be more productive anyway.
**Additional context**
Add any other context or screenshots about the feature request here.
**Priorities**
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
|
process
|
release notes automation is your feature request related to a problem please describe goals automate more of the release process avoid merge conflicts in the release notes improve the release notes of patch releases by automating more of the release process we move towards a situation where a release can be triggered by any maintainer freeing up eelco s time to work on important things release notes automation can merge multiple files into a single release notes file thereby avoiding the ubiquitous conflicts that would otherwise disincentivize the writing of release notes in the pr that introduces the change arguably the pr should document itself and the pull request template already asks for this currently we don t document what s in our patch releases but this is quite relevant to users i suppose doing this wasn t considered worthwhile but it would help users quite a bit especially those experiencing the bugs or doing the packaging in nixpkgs describe the solution you d like contributors add a new file to a specific directory in each pr at release time for a patch release the release script appends the files in that directory to the existing rl version file for a non patch release the release script creates a new file with all the items from those files after the concatenation write step it removes the original files as their contents have been processed this is helpful because it makes the patch release automation work describe alternatives you ve considered keep burdening eelco minor releases and users patch releases with the task of digging through commit logs we might want to do a final review of release notes but moving this work forward as suggested above should be more productive anyway additional context add any other context or screenshots about the feature request here priorities add to
| 1
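The merge step described in the release-notes issue above — append each per-PR fragment file to the release notes, then delete the fragments so they are only processed once — could be sketched as below. This is a hypothetical helper, not Nix's actual release script; the fragment directory layout and `*.md` naming are assumptions.

```python
from pathlib import Path

def collect_release_notes(fragment_dir: str, target: str) -> list[str]:
    """Append every fragment file in fragment_dir to the target notes
    file, then remove the fragments so they are only processed once."""
    fragments = sorted(Path(fragment_dir).glob("*.md"))
    with open(target, "a", encoding="utf-8") as out:
        for frag in fragments:
            out.write(frag.read_text(encoding="utf-8").rstrip() + "\n\n")
    for frag in fragments:
        frag.unlink()  # contents have been merged; drop the originals
    return [f.name for f in fragments]
```

For a patch release the target would be the existing per-version notes file; for a minor release the script would first create a fresh file, as the issue proposes.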
|
17,072
| 22,550,313,293
|
IssuesEvent
|
2022-06-27 04:23:07
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Feature Request: On Timer events add scheduling at specific time
|
kind/feature scope/broker blocker/stakeholder team/process-automation area/bpmn-support
|
Hi,
we are looking at automating all our user workflows using Zeebe-IO. We are an e-commerce analytics company and we are supposed to deliver reports at US time. We would like to have the capability of configuring and running the report analytic workflows at UTC 12:30 AM. Kindly consider adding timer events at a specific time as well. We use Zeebe for repeated timer events.
|
1.0
|
Feature Request: On Timer events add scheduling at specific time - Hi,
we are looking at automating all our user workflows using Zeebe-IO. We are an e-commerce analytics company and we are supposed to deliver reports at US time. We would like to have the capability of configuring and running the report analytic workflows at UTC 12:30 AM. Kindly consider adding timer events at a specific time as well. We use Zeebe for repeated timer events.
|
process
|
feature request on timer events add scheduling at specific time hi we are looking at automating all our user workflows using zeebe io we are a e commerce analytics company and we are supposed to deliver reports at us time we would like to have the capability of configuring and running the report analytic workflows at utc am kindly consider adding timer events on specific time as well we use zee be for repeated timer events
| 1
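The scheduling need in the feature request above — fire once per day at a fixed UTC wall-clock time such as 12:30 AM — reduces to computing the next occurrence of that time of day. A minimal sketch, independent of Zeebe (whose BPMN timer events are configured declaratively, not with application code like this):

```python
from datetime import datetime, time, timedelta, timezone

def next_run(now: datetime, at: time) -> datetime:
    """Next moment strictly after `now` matching the given UTC time of day."""
    candidate = datetime.combine(now.date(), at, tzinfo=timezone.utc)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate
```

A scheduler would sleep until `next_run(...)`, trigger the report workflow, and repeat.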
|
95,374
| 3,946,683,300
|
IssuesEvent
|
2016-04-28 06:20:03
|
Captianrock/android_PV
|
https://api.github.com/repos/Captianrock/android_PV
|
closed
|
Need GUI to ask user to enter source path if not src/main/java
|
High Priority New Feature
|
Some source code, such as AlarmClock, and most decompiled APKs don't have src/main/java defined. We need the true source path defined so we don't analyze extra classes (such as Test classes) that will ultimately break the module.
|
1.0
|
Need GUI to ask user to enter source path if not src/main/java - Some source code, such as AlarmClock, and most decompiled APKs don't have src/main/java defined. We need the true source path defined so we don't analyze extra classes (such as Test classes) that will ultimately break the module.
|
non_process
|
need gui to ask user to enter source path if not src main java some source code such as alarmclock and most decompiled apks don t have src main java defined we need the true source path defined to we don t analyze extra classes such as test classes that will ultimately break the module
| 0
|
221,778
| 17,026,917,471
|
IssuesEvent
|
2021-07-03 18:21:11
|
MysteryBlokHed/databind
|
https://api.github.com/repos/MysteryBlokHed/databind
|
closed
|
Use tabs (or 4 spaces) in functions/loops
|
documentation enhancement
|
Right now, a function might look like this:
```databind
func example
say Hello, World!
end
```
It would be easier to tell what's where if tabbing in lines inside functions was recommended, like this:
```databind
func example
    say Hello, World!
end
```
Or this:
```databind
func example
    var i := 10
    while tvar i matches 1..
        var i -= 1
    end
end
```
Documentation examples and test `.databind` files should also be changed.
|
1.0
|
Use tabs (or 4 spaces) in functions/loops - Right now, a function might look like this:
```databind
func example
say Hello, World!
end
```
It would be easier to tell what's where if tabbing in lines inside functions was recommended, like this:
```databind
func example
    say Hello, World!
end
```
Or this:
```databind
func example
    var i := 10
    while tvar i matches 1..
        var i -= 1
    end
end
```
Documentation examples and test `.databind` files should also be changed.
|
non_process
|
use tabs or spaces in functions loops right now a function might look like this databind func example say hello world end it would be easier to tell what s where if tabbing in lines inside functions was recommended like this databind func example say hello world end or this databind func example var i while tvar i matches var i end end documentation examples and test databind files should also be changed
| 0
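The indentation style recommended in the issue above could even be applied mechanically. A hypothetical re-indenting helper — not part of Databind itself — that adds one level inside `func`/`while` blocks and closes it at `end`:

```python
def reindent(source: str, indent: str = "    ") -> str:
    """Re-indent Databind-style source: one level inside func/while blocks."""
    out, depth = [], 0
    for raw in source.splitlines():
        line = raw.strip()
        if line == "end":
            depth = max(0, depth - 1)  # close the current block
        out.append(indent * depth + line if line else "")
        if line.startswith(("func ", "while ")):
            depth += 1  # open a new block
    return "\n".join(out)
```

Running it over the flat `func example` snippet yields the tabbed form the issue recommends.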
|
18,692
| 24,595,215,613
|
IssuesEvent
|
2022-10-14 07:43:36
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Admins > Search bar > Placeholder text >Text change
|
Bug P2 Participant manager Process: Fixed Process: Tested dev
|
Admins > Search bar > Placeholder text >Text change > 'user' should be changed to 'admin'

|
2.0
|
[PM] Admins > Search bar > Placeholder text >Text change - Admins > Search bar > Placeholder text >Text change > 'user' should be changed to 'admin'

|
process
|
admins search bar placeholder text text change admins search bar placeholder text text change user should be changed to admin
| 1
|
317,624
| 9,667,051,936
|
IssuesEvent
|
2019-05-21 12:23:53
|
telerik/kendo-ui-core
|
https://api.github.com/repos/telerik/kendo-ui-core
|
closed
|
Decimal values are not displayed when negative percentage value is filled in a cell
|
Bug C: Spreadsheet FP: Completed Kendo2 Next LIB Priority 2 SEV: Low
|
### Bug report
Decimal values are not displayed when negative percentage value is filled in a cell
Related to #2963
### Reproduction of the problem
Use the Kendo Spreadsheet to fill "-0.046%" and set its format to %. Sample here: [https://dojo.telerik.com/ACAGeTEL/2](https://dojo.telerik.com/ACAGeTEL/2)
### Current behavior
-.05% is displayed
### Expected/desired behavior
-0.05% should be displayed
### Environment
* **Kendo UI version:** 2019.1.220
* **Browser:** [all]
|
1.0
|
Decimal values are not displayed when negative percentage value is filled in a cell - ### Bug report
Decimal values are not displayed when negative percentage value is filled in a cell
Related to #2963
### Reproduction of the problem
Use the Kendo Spreadsheet to fill "-0.046%" and set its format to %. Sample here: [https://dojo.telerik.com/ACAGeTEL/2](https://dojo.telerik.com/ACAGeTEL/2)
### Current behavior
-.05% is displayed
### Expected/desired behavior
-0.05% should be displayed
### Environment
* **Kendo UI version:** 2019.1.220
* **Browser:** [all]
|
non_process
|
decimal values are not displayed when negative percentage value is filled in a cell bug report decimal values are not displayed when negative percentage value is filled in a cell related to reproduction of the problem use the kendo spreadsheet to fill and set its format to sample here current behavior is displayed expected desired behavior should be displayed environment kendo ui version browser
| 0
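The expected behavior in the Kendo report above — keeping the leading zero when a small negative fraction is rendered as a percentage — matches what common formatters do. For instance, Python's `%` presentation type (shown only as a point of comparison, not as Kendo's implementation):

```python
# -0.046% stored as a fraction; the percent format multiplies by 100,
# rounds to two decimals, and keeps the leading zero before the point.
value = -0.00046
formatted = f"{value:.2%}"
print(formatted)  # -0.05%
```

i.e. `-0.05%` rather than the `-.05%` the bug report describes.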
|
634,506
| 20,363,690,967
|
IssuesEvent
|
2022-02-21 01:16:35
|
NCC-CNC/whattemplatemaker
|
https://api.github.com/repos/NCC-CNC/whattemplatemaker
|
closed
|
Action IDs must be unique - should we provide guidance?
|
bug high priority
|
e.g., if they have two separate types of invasive species management? I think people can get around this by manually entering invasive spp management for second instance. Should we make that clearer or is that something for the manual?
|
1.0
|
Action IDs must be unique - should we provide guidance? - e.g., if they have two separate types of invasive species management? I think people can get around this by manually entering invasive spp management for second instance. Should we make that clearer or is that something for the manual?
|
non_process
|
action ids must be unique should we provide guidance e g if they have two separate types of invasive species management i think people can get around this by manually entering invasive spp management for second instance should we make that clearer or is that something for the manual
| 0
|
109,354
| 16,843,681,014
|
IssuesEvent
|
2021-06-19 02:49:25
|
bharathirajatut/fitbit-api-example-java2
|
https://api.github.com/repos/bharathirajatut/fitbit-api-example-java2
|
opened
|
CVE-2019-11358 (Medium) detected in jquery-2.1.1.jar
|
security vulnerability
|
## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.1.jar</b></p></summary>
<p>WebJar for jQuery</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: fitbit-api-example-java2/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/webjars/jquery/2.1.1/jquery-2.1.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/bharathirajatut/fitbit-api-example-java2/commits/8c153ad064e8f07a4ddade35ac13a9b485ca3dac">8c153ad064e8f07a4ddade35ac13a9b485ca3dac</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-11358 (Medium) detected in jquery-2.1.1.jar - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.1.jar</b></p></summary>
<p>WebJar for jQuery</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: fitbit-api-example-java2/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/webjars/jquery/2.1.1/jquery-2.1.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/bharathirajatut/fitbit-api-example-java2/commits/8c153ad064e8f07a4ddade35ac13a9b485ca3dac">8c153ad064e8f07a4ddade35ac13a9b485ca3dac</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery jar cve medium severity vulnerability vulnerable library jquery jar webjar for jquery library home page a href path to dependency file fitbit api example pom xml path to vulnerable library canner repository org webjars jquery jquery jar dependency hierarchy x jquery jar vulnerable library found in head commit a href found in base branch master vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
395,530
| 27,073,466,242
|
IssuesEvent
|
2023-02-14 08:57:04
|
microsoftgraph/msgraph-developer-proxy
|
https://api.github.com/repos/microsoftgraph/msgraph-developer-proxy
|
closed
|
[How-to guide]: Update my application code to use Microsoft Graph JavaScript SDK
|
documentation
|
We should move the [Move to the Graph JS SDK](https://github.com/microsoftgraph/msgraph-developer-proxy/blob/main/msgraph-developer-proxy/Move-to-JS-SDK.md) guidance to our wiki so that it's in the same place as the rest of our docs and we can update it more easily.
|
1.0
|
[How-to guide]: Update my application code to use Microsoft Graph JavaScript SDK - We should move the [Move to the Graph JS SDK](https://github.com/microsoftgraph/msgraph-developer-proxy/blob/main/msgraph-developer-proxy/Move-to-JS-SDK.md) guidance to our wiki so that it's in the same place as the rest of our docs and we can update it more easily.
|
non_process
|
update my application code to use microsoft graph javascript sdk we should move the guidance to our wiki so that it s in the same place as the rest of our docs and we can update it more easily
| 0
|
299,529
| 25,909,683,114
|
IssuesEvent
|
2022-12-15 13:03:28
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: sqlsmith/setup=seed-multi-region/setting=multi-region failed
|
C-test-failure O-robot O-roachtest branch-release-22.1
|
roachtest.sqlsmith/setup=seed-multi-region/setting=multi-region [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7969273&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7969273&tab=artifacts#/sqlsmith/setup=seed-multi-region/setting=multi-region) on release-22.1 @ [7ce174a7b4295b860d6648a0e05bc77eab27e6ac](https://github.com/cockroachdb/cockroach/commits/7ce174a7b4295b860d6648a0e05bc77eab27e6ac):
```
The test failed on branch=release-22.1, cloud=gce:
test artifacts and logs in: /artifacts/sqlsmith/setup=seed-multi-region/setting=multi-region/run_1
sqlsmith.go:265,sqlsmith.go:324,test_runner.go:883: error: pq: internal error: comparison overload not found (is, void, unknown)
stmt:
SELECT
*
FROM
(SELECT COALESCE(NULL::FLOAT4[], NULL::FLOAT4[]) AS col_6905, NULL::VOID AS col_6906) AS tab_3279
ORDER BY
col_6905 ASC NULLS FIRST, col_6906 NULLS LAST;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #90060 roachtest: sqlsmith/setup=seed-multi-region/setting=multi-region failed [C-test-failure O-roachtest O-robot T-sql-queries T-sql-schema branch-release-22.2.0]
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=seed-multi-region/setting=multi-region.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: sqlsmith/setup=seed-multi-region/setting=multi-region failed - roachtest.sqlsmith/setup=seed-multi-region/setting=multi-region [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7969273&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7969273&tab=artifacts#/sqlsmith/setup=seed-multi-region/setting=multi-region) on release-22.1 @ [7ce174a7b4295b860d6648a0e05bc77eab27e6ac](https://github.com/cockroachdb/cockroach/commits/7ce174a7b4295b860d6648a0e05bc77eab27e6ac):
```
The test failed on branch=release-22.1, cloud=gce:
test artifacts and logs in: /artifacts/sqlsmith/setup=seed-multi-region/setting=multi-region/run_1
sqlsmith.go:265,sqlsmith.go:324,test_runner.go:883: error: pq: internal error: comparison overload not found (is, void, unknown)
stmt:
SELECT
*
FROM
(SELECT COALESCE(NULL::FLOAT4[], NULL::FLOAT4[]) AS col_6905, NULL::VOID AS col_6906) AS tab_3279
ORDER BY
col_6905 ASC NULLS FIRST, col_6906 NULLS LAST;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #90060 roachtest: sqlsmith/setup=seed-multi-region/setting=multi-region failed [C-test-failure O-roachtest O-robot T-sql-queries T-sql-schema branch-release-22.2.0]
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=seed-multi-region/setting=multi-region.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_process
|
roachtest sqlsmith setup seed multi region setting multi region failed roachtest sqlsmith setup seed multi region setting multi region with on release the test failed on branch release cloud gce test artifacts and logs in artifacts sqlsmith setup seed multi region setting multi region run sqlsmith go sqlsmith go test runner go error pq internal error comparison overload not found is void unknown stmt select from select coalesce null null as col null void as col as tab order by col asc nulls first col nulls last help see see same failure on other branches roachtest sqlsmith setup seed multi region setting multi region failed cc cockroachdb sql queries
| 0
|
11,518
| 2,653,053,394
|
IssuesEvent
|
2015-03-16 20:51:16
|
portah/biowardrobe
|
https://api.github.com/repos/portah/biowardrobe
|
closed
|
For some genes RPKMs don't correlate with log change
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. DEseq
2. Filter on significant genes with either removing or not non coding
What is the expected output? What do you see instead?
NM_001198868,NM_001198869,NM_005186,NR_040008 CAPN1 chr11 64949304 64979477 + 16
8.8881362 168.8581147 -0.585938255 0.004820842 0.026719021
What version of the product are you using? On what operating system?
PC
Please provide any additional information below.
note that RPKMs are essentially the same (168.8) but log change is -0.5 with
high significance
```
Original issue reported on code.google.com by `mroch...@gmail.com` on 22 Jan 2014 at 7:47
Attachments:
* [RNA-seq biopsies Joe my DEseq analysis no non coding.xlsx](https://storage.googleapis.com/google-code-attachments/genome-tools/issue-7/comment-0/RNA-seq biopsies Joe my DEseq analysis no non coding.xlsx)
|
1.0
|
For some genes RPKMs don't correlate with log change - ```
What steps will reproduce the problem?
1. DEseq
2. Filter on significant genes with either removing or not non coding
What is the expected output? What do you see instead?
NM_001198868,NM_001198869,NM_005186,NR_040008 CAPN1 chr11 64949304 64979477 + 16
8.8881362 168.8581147 -0.585938255 0.004820842 0.026719021
What version of the product are you using? On what operating system?
PC
Please provide any additional information below.
note that RPKMs are essentially the same (168.8) but log change is -0.5 with
high significance
```
Original issue reported on code.google.com by `mroch...@gmail.com` on 22 Jan 2014 at 7:47
Attachments:
* [RNA-seq biopsies Joe my DEseq analysis no non coding.xlsx](https://storage.googleapis.com/google-code-attachments/genome-tools/issue-7/comment-0/RNA-seq biopsies Joe my DEseq analysis no non coding.xlsx)
|
non_process
|
for some genes rpkms don t correlate with log change what steps will reproduce the problem deseq filter on significant genes with either removing or not non coding what is the expected output what do you see instead nm nm nm nr what version of the product are you using on what operating system pc please provide any additional information below note that rpkms are essentially the same but log chnage is with high significance original issue reported on code google com by mroch gmail com on jan at attachments biopsies joe my deseq analysis no non coding xlsx
| 0
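A quick sanity check of the numbers quoted in the report above: the two RPKM values (168.888 vs 168.858) imply a log2 ratio of essentially zero, nowhere near the reported change of about -0.586 — which is exactly the inconsistency the reporter is pointing at (DESeq derives fold changes from normalized counts, not from these RPKM columns).

```python
import math

rpkm_a = 168.8881362  # first condition, value quoted in the report
rpkm_b = 168.8581147  # second condition, value quoted in the report
log2_ratio = math.log2(rpkm_b / rpkm_a)
print(round(log2_ratio, 4))  # ~ -0.0003, vs. the reported -0.586
```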
|
13,144
| 15,569,437,663
|
IssuesEvent
|
2021-03-17 00:09:33
|
tokio-rs/tokio
|
https://api.github.com/repos/tokio-rs/tokio
|
closed
|
Memory leak in tokio runtime.
|
A-tokio C-bug M-process M-signal
|
**Version**
└── tokio v1.2.0
└── tokio-macros v1.1.0 (proc-macro)
**Platform**
Linux TukuZaZa-D1 5.11.1-zen1-1-zen-uksm #1 ZEN SMP PREEMPT Wed, 24 Feb 2021 07:02:49 +0000 x86_64 GNU/Linux
**Description**
Tokio always leaks memory after dropping a `tokio::runtime::Runtime`.
````rust
fn main() {
for _ in 0..1000 {
let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap();
}
}
````
If you increase the number of loop iterations, the leaked memory reported by valgrind also increases.
Valgrind report:
````console
$ valgrind --leak-check=full --show-leak-kinds=all target/debug/test
==3645212== Memcheck, a memory error detector
==3645212== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==3645212== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info
==3645212== Command: target/debug/test
==3645212==
==3645212==
==3645212== HEAP SUMMARY:
==3645212== in use at exit: 221,180 bytes in 2,043 blocks
==3645212== total heap usage: 40,067 allocs, 38,024 frees, 26,859,837 bytes allocated
==3645212==
==3645212== 32 bytes in 1 blocks are possibly lost in loss record 1 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x17A1BB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x17A279: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x17CAB9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x17A11C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x165627: alloc::sync::Arc<T>::new (sync.rs:330)
==3645212== by 0x16A8D2: <alloc::sync::Arc<T> as core::convert::From<T>>::from (sync.rs:2178)
==3645212== by 0x1346C1: signal_hook_registry::register_unchecked_impl (lib.rs:572)
==3645212== by 0x134527: signal_hook_registry::register_sigaction_impl (lib.rs:527)
==3645212== by 0x135090: signal_hook_registry::register (lib.rs:498)
==3645212== by 0x133CF8: tokio::signal::unix::signal_enable::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:244)
==3645212== by 0x19ED6B: std::sync::once::Once::call_once::{{closure}} (once.rs:261)
==3645212==
==3645212== 40 bytes in 1 blocks are still reachable in loss record 2 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x21938C: alloc (alloc.rs:86)
==3645212== by 0x21938C: alloc_impl (alloc.rs:166)
==3645212== by 0x21938C: allocate (alloc.rs:226)
==3645212== by 0x21938C: exchange_malloc (alloc.rs:316)
==3645212== by 0x21938C: new<std::sys::unix::mutex::Mutex> (boxed.rs:186)
==3645212== by 0x21938C: from<std::sys::unix::mutex::Mutex> (boxed.rs:1015)
==3645212== by 0x21938C: std::sys_common::mutex::MovableMutex::new (library/std/src/sys_common/mutex.rs:64)
==3645212== by 0x1C6BB6: std::sync::mutex::Mutex<T>::new (mutex.rs:217)
==3645212== by 0x1CAAD5: signal_hook_registry::half_lock::HalfLock<T>::new (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/half_lock.rs:121)
==3645212== by 0x1CC8C1: signal_hook_registry::GlobalData::ensure::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/lib.rs:286)
==3645212== by 0x1C8F5B: std::sync::once::Once::call_once::{{closure}} (once.rs:261)
==3645212== by 0x218701: std::sync::once::Once::call_inner (library/std/src/sync/once.rs:420)
==3645212== by 0x1C8EDC: std::sync::once::Once::call_once (once.rs:261)
==3645212== by 0x1CC833: signal_hook_registry::GlobalData::ensure (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/lib.rs:284)
==3645212== by 0x134687: signal_hook_registry::register_unchecked_impl (lib.rs:571)
==3645212== by 0x134527: signal_hook_registry::register_sigaction_impl (lib.rs:527)
==3645212== by 0x135090: signal_hook_registry::register (lib.rs:498)
==3645212==
==3645212== 40 bytes in 1 blocks are still reachable in loss record 3 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x21938C: alloc (alloc.rs:86)
==3645212== by 0x21938C: alloc_impl (alloc.rs:166)
==3645212== by 0x21938C: allocate (alloc.rs:226)
==3645212== by 0x21938C: exchange_malloc (alloc.rs:316)
==3645212== by 0x21938C: new<std::sys::unix::mutex::Mutex> (boxed.rs:186)
==3645212== by 0x21938C: from<std::sys::unix::mutex::Mutex> (boxed.rs:1015)
==3645212== by 0x21938C: std::sys_common::mutex::MovableMutex::new (library/std/src/sys_common/mutex.rs:64)
==3645212== by 0x1C6BB6: std::sync::mutex::Mutex<T>::new (mutex.rs:217)
==3645212== by 0x1CAC80: signal_hook_registry::half_lock::HalfLock<T>::new (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/half_lock.rs:121)
==3645212== by 0x1CC8E2: signal_hook_registry::GlobalData::ensure::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/lib.rs:290)
==3645212== by 0x1C8F5B: std::sync::once::Once::call_once::{{closure}} (once.rs:261)
==3645212== by 0x218701: std::sync::once::Once::call_inner (library/std/src/sync/once.rs:420)
==3645212== by 0x1C8EDC: std::sync::once::Once::call_once (once.rs:261)
==3645212== by 0x1CC833: signal_hook_registry::GlobalData::ensure (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/lib.rs:284)
==3645212== by 0x134687: signal_hook_registry::register_unchecked_impl (lib.rs:571)
==3645212== by 0x134527: signal_hook_registry::register_sigaction_impl (lib.rs:527)
==3645212== by 0x135090: signal_hook_registry::register (lib.rs:498)
==3645212==
==3645212== 56 bytes in 1 blocks are still reachable in loss record 4 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x17A1BB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x17A279: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x17CAB9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x17A11C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x177908: pin<tokio::signal::registry::Globals> (boxed.rs:242)
==3645212== by 0x177908: tokio::signal::registry::globals::GLOBALS::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/registry.rs:169)
==3645212== by 0x135AFD: core::ops::function::FnOnce::call_once (function.rs:227)
==3645212== by 0x135D5A: core::ops::function::FnOnce::call_once (function.rs:227)
==3645212== by 0x1B8828: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:1023)
==3645212== by 0x1B891E: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:845)
==3645212== by 0x1A1B08: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:93)
==3645212== by 0x1E3BAB: once_cell::imp::initialize_inner (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.6.0/src/imp_std.rs:168)
==3645212==
==3645212== 64 bytes in 1 blocks are still reachable in loss record 5 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x1C2CDB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x1C2D99: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x1C33E9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x1C2C3C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x1CA90C: new<signal_hook_registry::SignalData> (boxed.rs:186)
==3645212== by 0x1CA90C: signal_hook_registry::half_lock::WriteGuard<T>::store (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/signal-hook-registry-1.3.0/src/half_lock.rs:75)
==3645212== by 0x134DFA: signal_hook_registry::register_unchecked_impl (lib.rs:610)
==3645212== by 0x134527: signal_hook_registry::register_sigaction_impl (lib.rs:527)
==3645212== by 0x135090: signal_hook_registry::register (lib.rs:498)
==3645212== by 0x133CF8: tokio::signal::unix::signal_enable::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:244)
==3645212== by 0x19ED6B: std::sync::once::Once::call_once::{{closure}} (once.rs:261)
==3645212== by 0x218701: std::sync::once::Once::call_inner (library/std/src/sync/once.rs:420)
==3645212==
==3645212== 168 bytes in 1 blocks are still reachable in loss record 6 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x17A1BB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x17A279: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x17CAB9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x17A11C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x16E88C: new<core::option::Option<signal_hook_registry::Prev>> (boxed.rs:186)
==3645212== by 0x16E88C: signal_hook_registry::half_lock::WriteGuard<T>::store (half_lock.rs:75)
==3645212== by 0x134A85: signal_hook_registry::register_unchecked_impl (lib.rs:599)
==3645212== by 0x134527: signal_hook_registry::register_sigaction_impl (lib.rs:527)
==3645212== by 0x135090: signal_hook_registry::register (lib.rs:498)
==3645212== by 0x133CF8: tokio::signal::unix::signal_enable::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:244)
==3645212== by 0x19ED6B: std::sync::once::Once::call_once::{{closure}} (once.rs:261)
==3645212== by 0x218701: std::sync::once::Once::call_inner (library/std/src/sync/once.rs:420)
==3645212==
==3645212== 368 bytes in 1 blocks are possibly lost in loss record 7 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x1C2CDB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x1C2D99: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x1C33E9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x1C2C3C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x1BA980: new<alloc::collections::btree::node::LeafNode<signal_hook_registry::ActionId, alloc::sync::Arc<Fn<(&libc::unix::linux_like::linux::gnu::b64::x86_64::siginfo_t)>>>> (boxed.rs:186)
==3645212== by 0x1BA980: alloc::collections::btree::node::NodeRef<alloc::collections::btree::node::marker::Owned,K,V,alloc::collections::btree::node::marker::Leaf>::new_leaf (node.rs:136)
==3645212== by 0x1BB619: alloc::collections::btree::node::NodeRef<alloc::collections::btree::node::marker::Owned,K,V,alloc::collections::btree::node::marker::LeafOrInternal>::new (node.rs:130)
==3645212== by 0x1CE569: core::ops::function::FnOnce::call_once (function.rs:227)
==3645212== by 0x1C4D4F: core::option::Option<T>::get_or_insert_with (option.rs:869)
==3645212== by 0x1CFFEE: alloc::collections::btree::map::BTreeMap<K,V>::ensure_is_owned (map.rs:2164)
==3645212== by 0x1A00AD: alloc::collections::btree::map::BTreeMap<K,V>::entry (map.rs:1050)
==3645212== by 0x1A036D: alloc::collections::btree::map::BTreeMap<K,V>::insert (map.rs:796)
==3645212==
==3645212== 788 bytes in 1 blocks are possibly lost in loss record 8 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x1C2CDB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x1C0526: hashbrown::raw::RawTable<T>::new_uninitialized (mod.rs:411)
==3645212== by 0x19A8E6: hashbrown::raw::RawTable<T>::fallible_with_capacity (mod.rs:440)
==3645212== by 0x19B7A2: hashbrown::raw::RawTable<T>::resize (mod.rs:873)
==3645212== by 0x198693: hashbrown::raw::RawTable<T>::reserve_rehash (mod.rs:754)
==3645212== by 0x19C401: hashbrown::raw::RawTable<T>::reserve (mod.rs:707)
==3645212== by 0x1B8551: hashbrown::map::HashMap<K,V,S>::reserve (map.rs:670)
==3645212== by 0x1B7E18: hashbrown::rustc_entry::<impl hashbrown::map::HashMap<K,V,S>>::rustc_entry (rustc_entry.rs:45)
==3645212== by 0x182277: std::collections::hash::map::HashMap<K,V,S>::entry (map.rs:705)
==3645212== by 0x1347E4: signal_hook_registry::register_unchecked_impl (lib.rs:580)
==3645212== by 0x134527: signal_hook_registry::register_sigaction_impl (lib.rs:527)
==3645212==
==3645212== 1,320 bytes in 33 blocks are still reachable in loss record 9 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x21938C: alloc (alloc.rs:86)
==3645212== by 0x21938C: alloc_impl (alloc.rs:166)
==3645212== by 0x21938C: allocate (alloc.rs:226)
==3645212== by 0x21938C: exchange_malloc (alloc.rs:316)
==3645212== by 0x21938C: new<std::sys::unix::mutex::Mutex> (boxed.rs:186)
==3645212== by 0x21938C: from<std::sys::unix::mutex::Mutex> (boxed.rs:1015)
==3645212== by 0x21938C: std::sys_common::mutex::MovableMutex::new (library/std/src/sys_common/mutex.rs:64)
==3645212== by 0x19EEDB: std::sync::mutex::Mutex<T>::new (mutex.rs:217)
==3645212== by 0x1A092D: <std::sync::mutex::Mutex<T> as core::default::Default>::default (mutex.rs:417)
==3645212== by 0x17799B: <tokio::signal::registry::EventInfo as core::default::Default>::default (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/registry.rs:20)
==3645212== by 0x1332F0: <tokio::signal::unix::SignalInfo as core::default::Default>::default (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:196)
==3645212== by 0x13315A: tokio::signal::unix::<impl tokio::signal::registry::Init for alloc::vec::Vec<tokio::signal::unix::SignalInfo>>::init::{{closure}} (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:31)
==3645212== by 0x13C534: core::iter::adapters::map::map_fold::{{closure}} (map.rs:80)
==3645212== by 0x12BEEB: core::iter::traits::iterator::Iterator::fold (iterator.rs:2023)
==3645212== by 0x134109: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold (map.rs:120)
==3645212== by 0x13C1B8: core::iter::traits::iterator::Iterator::for_each (iterator.rs:678)
==3645212== by 0x192B88: <alloc::vec::Vec<T,A> as alloc::vec::SpecExtend<T,I>>::spec_extend (vec.rs:2568)
==3645212==
==3645212== 2,112 bytes in 1 blocks are still reachable in loss record 10 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x17A1BB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x17A279: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x17CAB9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x1B0EDF: alloc::raw_vec::RawVec<T,A>::allocate_in (raw_vec.rs:188)
==3645212== by 0x1B552C: alloc::raw_vec::RawVec<T,A>::with_capacity_in (raw_vec.rs:129)
==3645212== by 0x18FF3E: alloc::vec::Vec<T,A>::with_capacity_in (vec.rs:498)
==3645212== by 0x18E975: alloc::vec::Vec<T>::with_capacity (vec.rs:364)
==3645212== by 0x192F8E: <alloc::vec::Vec<T> as alloc::vec::SpecFromIterNested<T,I>>::from_iter (vec.rs:2331)
==3645212== by 0x19277A: <alloc::vec::Vec<T> as alloc::vec::SpecFromIter<T,I>>::from_iter (vec.rs:2346)
==3645212== by 0x1939E5: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter (vec.rs:2181)
==3645212== by 0x13C06A: core::iter::traits::iterator::Iterator::collect (iterator.rs:1670)
==3645212==
==3645212== 8,192 bytes in 1 blocks are still reachable in loss record 11 of 13
==3645212== at 0x4840D7B: realloc (vg_replace_malloc.c:834)
==3645212== by 0x1D2B4C: alloc::alloc::realloc (alloc.rs:122)
==3645212== by 0x1D282C: alloc::alloc::Global::grow_impl (alloc.rs:198)
==3645212== by 0x1D3103: <alloc::alloc::Global as core::alloc::Allocator>::grow (alloc.rs:251)
==3645212== by 0x1D455C: alloc::raw_vec::finish_grow (raw_vec.rs:487)
==3645212== by 0x1B53AF: alloc::raw_vec::RawVec<T,A>::grow_amortized (raw_vec.rs:422)
==3645212== by 0x1B1723: alloc::raw_vec::RawVec<T,A>::try_reserve (raw_vec.rs:311)
==3645212== by 0x1B6772: alloc::raw_vec::RawVec<T,A>::reserve (raw_vec.rs:305)
==3645212== by 0x1912B8: alloc::vec::Vec<T,A>::reserve (vec.rs:697)
==3645212== by 0x190264: alloc::vec::Vec<T,A>::push (vec.rs:1409)
==3645212== by 0x1772DA: tokio::signal::registry::Registry<S>::register_listener (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/registry.rs:71)
==3645212== by 0x17779D: tokio::signal::registry::Globals::register_listener (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/registry.rs:141)
==3645212==
==3645212== 32,000 bytes in 1,000 blocks are still reachable in loss record 12 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x17A1BB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x17A279: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x17CAB9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x17A11C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x1ADEA4: new<tokio::sync::mpsc::block::Block<()>> (boxed.rs:186)
==3645212== by 0x1ADEA4: tokio::sync::mpsc::list::channel (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/sync/mpsc/list.rs:35)
==3645212== by 0x177A04: tokio::sync::mpsc::chan::channel (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/sync/mpsc/chan.rs:105)
==3645212== by 0x1604CF: tokio::sync::mpsc::bounded::channel (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/sync/mpsc/bounded.rs:93)
==3645212== by 0x133E95: tokio::signal::unix::signal_with_handle (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:365)
==3645212== by 0x189A0B: tokio::process::imp::driver::Driver::new (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/process/unix/driver.rs:59)
==3645212== by 0x176EFC: tokio::runtime::driver::create_process_driver (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/runtime/driver.rs:84)
==3645212== by 0x17676A: tokio::runtime::driver::create_io_stack (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/runtime/driver.rs:26)
==3645212==
==3645212== 176,000 bytes in 1,000 blocks are still reachable in loss record 13 of 13
==3645212== at 0x483E77F: malloc (vg_replace_malloc.c:307)
==3645212== by 0x17A1BB: alloc::alloc::alloc (alloc.rs:86)
==3645212== by 0x17A279: alloc::alloc::Global::alloc_impl (alloc.rs:166)
==3645212== by 0x17CAB9: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc.rs:226)
==3645212== by 0x17A11C: alloc::alloc::exchange_malloc (alloc.rs:316)
==3645212== by 0x1658CD: alloc::sync::Arc<T>::new (sync.rs:330)
==3645212== by 0x177BDD: tokio::sync::mpsc::chan::channel (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/sync/mpsc/chan.rs:107)
==3645212== by 0x1604CF: tokio::sync::mpsc::bounded::channel (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/sync/mpsc/bounded.rs:93)
==3645212== by 0x133E95: tokio::signal::unix::signal_with_handle (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/signal/unix.rs:365)
==3645212== by 0x189A0B: tokio::process::imp::driver::Driver::new (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/process/unix/driver.rs:59)
==3645212== by 0x176EFC: tokio::runtime::driver::create_process_driver (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/runtime/driver.rs:84)
==3645212== by 0x17676A: tokio::runtime::driver::create_io_stack (/home/tuxzz/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/runtime/driver.rs:26)
==3645212==
==3645212== LEAK SUMMARY:
==3645212== definitely lost: 0 bytes in 0 blocks
==3645212== indirectly lost: 0 bytes in 0 blocks
==3645212== possibly lost: 1,188 bytes in 3 blocks
==3645212== still reachable: 219,992 bytes in 2,040 blocks
==3645212== suppressed: 0 bytes in 0 blocks
==3645212==
==3645212== For lists of detected and suppressed errors, rerun with: -s
==3645212== ERROR SUMMARY: 3 errors from 3 contexts (suppressed: 0 from 0)
````
Memory leak in tokio runtime.

**Version**
└── tokio v1.2.0
└── tokio-macros v1.1.0 (proc-macro)
**Platform**
Linux TukuZaZa-D1 5.11.1-zen1-1-zen-uksm #1 ZEN SMP PREEMPT Wed, 24 Feb 2021 07:02:49 +0000 x86_64 GNU/Linux
**Description**
Tokio always leaks memory after a `tokio::runtime::Runtime` is dropped.
````rust
fn main() {
    for _ in 0..1000 {
        // Build a runtime with all drivers enabled; it is dropped at the
        // end of each iteration.
        let _rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap();
    }
}
````
If you increase the number of loop iterations, the amount of leaked memory reported by Valgrind also increases.
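Many of the "still reachable" records above bottom out in `std::sync::Once::call_once` inside `signal_hook_registry`: the signal registry is process-global state that is allocated on first use and intentionally never freed, so Valgrind classifies those blocks as "still reachable" rather than "definitely lost". A minimal std-only sketch of that pattern (illustrative only, not tokio's actual code; the `ensure_registry` name and sizes are made up):

````rust
use std::sync::Once;

static INIT: Once = Once::new();
static mut REGISTRY: *const Vec<u8> = std::ptr::null();

/// Returns process-global state, allocating it on first use and never freeing it.
fn ensure_registry() -> &'static Vec<u8> {
    unsafe {
        INIT.call_once(|| {
            // Allocated exactly once for the lifetime of the process; Valgrind
            // reports these blocks as "still reachable", not "definitely lost".
            REGISTRY = Box::into_raw(Box::new(vec![0u8; 64]));
        });
        &*REGISTRY
    }
}

fn main() {
    // Like creating many runtimes: repeated calls reuse the one-time allocation.
    for _ in 0..1000 {
        assert_eq!(ensure_registry().len(), 64);
    }
}
````

Run under `valgrind --leak-check=full --show-leak-kinds=all`, this sketch produces a fixed number of "still reachable" blocks regardless of the iteration count; note, however, that records 12 and 13 above (1,000 blocks for 1,000 iterations) do grow with the loop, so not all of the reported memory is one-time global state.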
btreemap insert map rs bytes in blocks are possibly lost in loss record of at malloc vg replace malloc c by alloc alloc alloc alloc rs by hashbrown raw rawtable new uninitialized mod rs by hashbrown raw rawtable fallible with capacity mod rs by hashbrown raw rawtable resize mod rs by hashbrown raw rawtable reserve rehash mod rs by hashbrown raw rawtable reserve mod rs by hashbrown map hashmap reserve map rs by hashbrown rustc entry rustc entry rustc entry rs by std collections hash map hashmap entry map rs by signal hook registry register unchecked impl lib rs by signal hook registry register sigaction impl lib rs bytes in blocks are still reachable in loss record of at malloc vg replace malloc c by alloc alloc rs by alloc impl alloc rs by allocate alloc rs by exchange malloc alloc rs by new boxed rs by from boxed rs by std sys common mutex movablemutex new library std src sys common mutex rs by std sync mutex mutex new mutex rs by as core default default default mutex rs by default home tuxzz cargo registry src github com tokio src signal registry rs by default home tuxzz cargo registry src github com tokio src signal unix rs by tokio signal unix init closure home tuxzz cargo registry src github com tokio src signal unix rs by core iter adapters map map fold closure map rs by core iter traits iterator iterator fold iterator rs by as core iter traits iterator iterator fold map rs by core iter traits iterator iterator for each iterator rs by as alloc vec specextend spec extend vec rs bytes in blocks are still reachable in loss record of at malloc vg replace malloc c by alloc alloc alloc alloc rs by alloc alloc global alloc impl alloc rs by allocate alloc rs by alloc raw vec rawvec allocate in raw vec rs by alloc raw vec rawvec with capacity in raw vec rs by alloc vec vec with capacity in vec rs by alloc vec vec with capacity vec rs by as alloc vec specfromiternested from iter vec rs by as alloc vec specfromiter from iter vec rs by as core iter traits collect 
fromiterator from iter vec rs by core iter traits iterator iterator collect iterator rs bytes in blocks are still reachable in loss record of at realloc vg replace malloc c by alloc alloc realloc alloc rs by alloc alloc global grow impl alloc rs by grow alloc rs by alloc raw vec finish grow raw vec rs by alloc raw vec rawvec grow amortized raw vec rs by alloc raw vec rawvec try reserve raw vec rs by alloc raw vec rawvec reserve raw vec rs by alloc vec vec reserve vec rs by alloc vec vec push vec rs by tokio signal registry registry register listener home tuxzz cargo registry src github com tokio src signal registry rs by tokio signal registry globals register listener home tuxzz cargo registry src github com tokio src signal registry rs bytes in blocks are still reachable in loss record of at malloc vg replace malloc c by alloc alloc alloc alloc rs by alloc alloc global alloc impl alloc rs by allocate alloc rs by alloc alloc exchange malloc alloc rs by new boxed rs by tokio sync mpsc list channel home tuxzz cargo registry src github com tokio src sync mpsc list rs by tokio sync mpsc chan channel home tuxzz cargo registry src github com tokio src sync mpsc chan rs by tokio sync mpsc bounded channel home tuxzz cargo registry src github com tokio src sync mpsc bounded rs by tokio signal unix signal with handle home tuxzz cargo registry src github com tokio src signal unix rs by tokio process imp driver driver new home tuxzz cargo registry src github com tokio src process unix driver rs by tokio runtime driver create process driver home tuxzz cargo registry src github com tokio src runtime driver rs by tokio runtime driver create io stack home tuxzz cargo registry src github com tokio src runtime driver rs bytes in blocks are still reachable in loss record of at malloc vg replace malloc c by alloc alloc alloc alloc rs by alloc alloc global alloc impl alloc rs by allocate alloc rs by alloc alloc exchange malloc alloc rs by alloc sync arc new sync rs by tokio sync mpsc 
chan channel home tuxzz cargo registry src github com tokio src sync mpsc chan rs by tokio sync mpsc bounded channel home tuxzz cargo registry src github com tokio src sync mpsc bounded rs by tokio signal unix signal with handle home tuxzz cargo registry src github com tokio src signal unix rs by tokio process imp driver driver new home tuxzz cargo registry src github com tokio src process unix driver rs by tokio runtime driver create process driver home tuxzz cargo registry src github com tokio src runtime driver rs by tokio runtime driver create io stack home tuxzz cargo registry src github com tokio src runtime driver rs leak summary definitely lost bytes in blocks indirectly lost bytes in blocks possibly lost bytes in blocks still reachable bytes in blocks suppressed bytes in blocks for lists of detected and suppressed errors rerun with s error summary errors from contexts suppressed from
| 1
|
17,017
| 3,353,840,346
|
IssuesEvent
|
2015-11-18 09:02:35
|
hollyjoke/33HU6POQKFJUS6K4M6BXQISV
|
https://api.github.com/repos/hollyjoke/33HU6POQKFJUS6K4M6BXQISV
|
closed
|
fdPfWf9+736SSTOcIIL4M2tW4rWlSVeS5zWvfVgXL8vRQKhSZnYCPFO04kP/YlI3MM8sAvHEEZ21HxOlsaoQKWI5XigQyYuvO3+YIv63EGm9SIYQvW9OuIl+yRUfr7zhoV/ZdX/UFge2acmPiuDIJc2NRS6csKiMuQ7pESyHMR0=
|
design
|
DPlZI5wK0CgLi4uTSHdfeIvFy8T9ENf3sDMt4KemT/szs9c7i+hfPAuX31gO4qy9nvMxGLC2Zef401uTsEjdlU1zfVinmjkxqP/Rvi4W3YdqsYQ3z/n4ehwkFy8JXa3gXyWerg90nGAQneU2mmi+62wIfYVQDPT/vMYFml3Bl/HsPWN+f7xYtK9ikn2DuAvqux/ECYkXdzMM82yWK9PNWhocBgHLkaGWpNppMHLdWG1HhK8CoOT1w9NzoPrZIEqMnyzgzODjbyMQj8OMSzvuJU7VXIaocAa4TzCDxT3YUJzDwlb4Xt13tZXyWwR6n87KZXfWZ8pXSp7DJDpWFwbD6a3yQIfAg+ZNHKFxyK3osyyLILQdV2rHetL5QygNgpiVkXNXB/QpvmDpKAnTj6qczR2mMuKc6mAcquHtu5y2iBLK7pkw2JBnj6FkH+VOpztg+QgJNjfb4lgbnSzlv5ldzIFVON9o1hopShXOgbSFHCMdNpzACIpLxgmUJT7I6e6AT0q/vfGQ+UFyvsG693YPl7RqgR0w2K5MsmheR9sw6fodNpzACIpLxgmUJT7I6e6AT0q/vfGQ+UFyvsG693YPl21ENBx76CWUYjCKLG095Z/X2hCwQGZU2lzcuxsBQc+Y1hxtZlwB0OSzKOyPmjdKUzH8ObdDTMJVm8KhGmV0G/sM3I3cjDu76JFqAfBDkXuv84NQKDz3088RLFeWPB9lwHtbCGctSRZsxCqAeHSMiUSuWWnO6MeXCGJH+ldsKw7hK9cx3IgJ5X65Lxjhs80UONFDlcNVVRo2+Gc3zRtK/mJqMcbYVS+66k92M5CHPwiyTw5HcN+IpBwzb7Dw9otDVNGrRxUN29aGCtWUfLhvpKjX11ARNSVexyuu/CoZ+IlxhxmDz0aFEQbF7YB9JaFPJz+yl42ZmXflLz8u+ytsGD1NNLlp+OXkNQGjiUByNhYJ
|
1.0
|
fdPfWf9+736SSTOcIIL4M2tW4rWlSVeS5zWvfVgXL8vRQKhSZnYCPFO04kP/YlI3MM8sAvHEEZ21HxOlsaoQKWI5XigQyYuvO3+YIv63EGm9SIYQvW9OuIl+yRUfr7zhoV/ZdX/UFge2acmPiuDIJc2NRS6csKiMuQ7pESyHMR0= - DPlZI5wK0CgLi4uTSHdfeIvFy8T9ENf3sDMt4KemT/szs9c7i+hfPAuX31gO4qy9nvMxGLC2Zef401uTsEjdlU1zfVinmjkxqP/Rvi4W3YdqsYQ3z/n4ehwkFy8JXa3gXyWerg90nGAQneU2mmi+62wIfYVQDPT/vMYFml3Bl/HsPWN+f7xYtK9ikn2DuAvqux/ECYkXdzMM82yWK9PNWhocBgHLkaGWpNppMHLdWG1HhK8CoOT1w9NzoPrZIEqMnyzgzODjbyMQj8OMSzvuJU7VXIaocAa4TzCDxT3YUJzDwlb4Xt13tZXyWwR6n87KZXfWZ8pXSp7DJDpWFwbD6a3yQIfAg+ZNHKFxyK3osyyLILQdV2rHetL5QygNgpiVkXNXB/QpvmDpKAnTj6qczR2mMuKc6mAcquHtu5y2iBLK7pkw2JBnj6FkH+VOpztg+QgJNjfb4lgbnSzlv5ldzIFVON9o1hopShXOgbSFHCMdNpzACIpLxgmUJT7I6e6AT0q/vfGQ+UFyvsG693YPl7RqgR0w2K5MsmheR9sw6fodNpzACIpLxgmUJT7I6e6AT0q/vfGQ+UFyvsG693YPl21ENBx76CWUYjCKLG095Z/X2hCwQGZU2lzcuxsBQc+Y1hxtZlwB0OSzKOyPmjdKUzH8ObdDTMJVm8KhGmV0G/sM3I3cjDu76JFqAfBDkXuv84NQKDz3088RLFeWPB9lwHtbCGctSRZsxCqAeHSMiUSuWWnO6MeXCGJH+ldsKw7hK9cx3IgJ5X65Lxjhs80UONFDlcNVVRo2+Gc3zRtK/mJqMcbYVS+66k92M5CHPwiyTw5HcN+IpBwzb7Dw9otDVNGrRxUN29aGCtWUfLhvpKjX11ARNSVexyuu/CoZ+IlxhxmDz0aFEQbF7YB9JaFPJz+yl42ZmXflLz8u+ytsGD1NNLlp+OXkNQGjiUByNhYJ
|
non_process
|
zdx hspwn vopztg vfgq vfgq mjqmcbyvs coz oxknqgjiubynhyj
| 0
|
55,611
| 6,910,506,038
|
IssuesEvent
|
2017-11-28 02:42:25
|
cs340tabyu/cs340Fall2017
|
https://api.github.com/repos/cs340tabyu/cs340Fall2017
|
closed
|
Pressing the back button in the lobby places user in blank activity
|
P4: Aesthetic or Design Flaw Team 7
|
When the user is in the game lobby, pressing the back button sends them to a blank activity, of which they cannot leave and have to restart the app.
|
1.0
|
Pressing the back button in the lobby places user in blank activity - When the user is in the game lobby, pressing the back button sends them to a blank activity, of which they cannot leave and have to restart the app.
|
non_process
|
pressing the back button in the lobby places user in blank activity when the user is in the game lobby pressing the back button sends them to a blank activity of which they cannot leave and have to restart the app
| 0
|
1,124
| 3,603,634,967
|
IssuesEvent
|
2016-02-03 19:47:27
|
mkdocs/mkdocs
|
https://api.github.com/repos/mkdocs/mkdocs
|
opened
|
Collecting a list of MkDocs themes
|
Process
|
With 0.15 out the door, I am starting to hear about new themes - much faster than I expected. I know of three already.
- [Cinder](https://github.com/chrissimpkins/cinder)
- [Alabaster](https://github.com/iamale/mkdocs-alabaster)
- http://protobluff.org/getting-started/ (I don't think this has a name and isn't on PyPI yet, but this is an example usage)
To aid discovery and help promote the great work done by the community I would like to create a gallery for themes. Where should we do it?
- In the MkDocs documentation? We would have the issue of keeping this relevant and up to date
- In the MkDocs wiki? https://github.com/mkdocs/mkdocs/wiki/MkDocs-Themes - this is already stale since I first started it and I bet it is usually missed.
|
1.0
|
Collecting a list of MkDocs themes - With 0.15 out the door, I am starting to hear about new themes - much faster than I expected. I know of three already.
- [Cinder](https://github.com/chrissimpkins/cinder)
- [Alabaster](https://github.com/iamale/mkdocs-alabaster)
- http://protobluff.org/getting-started/ (I don't think this has a name and isn't on PyPI yet, but this is an example usage)
To aid discovery and help promote the great work done by the community I would like to create a gallery for themes. Where should we do it?
- In the MkDocs documentation? We would have the issue of keeping this relevant and up to date
- In the MkDocs wiki? https://github.com/mkdocs/mkdocs/wiki/MkDocs-Themes - this is already stale since I first started it and I bet it is usually missed.
|
process
|
collecting a list of mkdocs themes with out the door i am starting to hear about new themes much faster than i expected i know of three already i don t think this has a name and isn t on pypi yet but this is an example usage to aid discovery and help promote the great work done by the community i would like to create a gallery for themes where should we do it in the mkdocs documentation we would have the issue of keeping this relevant and up to date in the mkdocs wiki this is already stale since i first started it and i bet it is usually missed
| 1
|
8,778
| 11,900,468,755
|
IssuesEvent
|
2020-03-30 10:42:49
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
Job status endpoint isn't returning XML response
|
BUG :bug: EPIC - Auto Batch Process :oncoming_automobile:
|
**Describe the bug**
Job status endpoint isn't returning XML response - only JSON. Reported by Accenture team whilst performing SIT.
**To Reproduce**
Make XML request to job status endpoint.
```
curl "http://localhost:8000/jobs/cd14ea09-7fa5-4df1-a051-9cdd2aef2770" \
-H 'Accept: application/xml' \
-H 'Content-Type: application/xml' \
-u 'username:password'
```
**Expected behavior**
Response should be in xml format, not JSON.
**Screenshots**
N/A
**Additional context**
Assuming that XML filter that was added to other endpoints wasn't added to the job status endpoint.
|
1.0
|
Job status endpoint isn't returning XML response - **Describe the bug**
Job status endpoint isn't returning XML response - only JSON. Reported by Accenture team whilst performing SIT.
**To Reproduce**
Make XML request to job status endpoint.
```
curl "http://localhost:8000/jobs/cd14ea09-7fa5-4df1-a051-9cdd2aef2770" \
-H 'Accept: application/xml' \
-H 'Content-Type: application/xml' \
-u 'username:password'
```
**Expected behavior**
Response should be in xml format, not JSON.
**Screenshots**
N/A
**Additional context**
Assuming that XML filter that was added to other endpoints wasn't added to the job status endpoint.
|
process
|
job status endpoint isn t returning xml response describe the bug job status endpoint isn t returning xml response only json reported by accenture team whilst performing sit to reproduce make xml request to job status endpoint curl h accept application xml h content type application xml u username password expected behavior response should be in xml format not json screenshots n a additional context assuming that xml filter that was added to other endpoints wasn t added to the job status endpoint
| 1
|
541,928
| 15,836,312,408
|
IssuesEvent
|
2021-04-06 19:09:45
|
GoogleChrome/lighthouse
|
https://api.github.com/repos/GoogleChrome/lighthouse
|
closed
|
Pagespeed Insights results differ from Chrome to Firefox
|
needs-priority pending-close
|
<!-- We would love to hear anything on your mind about Lighthouse -->
**Summary**
When I run the check on Firefox everything is Groovy. 92% The same test with Chrome is 77%
What in my site is making Chrome hang?
Our site is: https://habitatgtr.org/
|
1.0
|
Pagespeed Insights results differ from Chrome to Firefox - <!-- We would love to hear anything on your mind about Lighthouse -->
**Summary**
When I run the check on Firefox everything is Groovy. 92% The same test with Chrome is 77%
What in my site is making Chrome hang?
Our site is: https://habitatgtr.org/
|
non_process
|
pagespeed insights results differ from chrome to firefox summary when i run the check on firefox everything is groovy the same test with chrome is what in my site is making chrome hang our site is
| 0
|
2,152
| 4,999,057,955
|
IssuesEvent
|
2016-12-09 21:56:37
|
codeforamerica/jail-dashboard
|
https://api.github.com/repos/codeforamerica/jail-dashboard
|
closed
|
User can see bar chart for population status
|
in progress processing_status
|
_From @hartsick on November 30, 2016 21:53_
Derived from `status` field in bookings, bounded by booking date & release date
Use existing Louisville site for reference
**N.B.**: Mob on this to set up example for future bar charts.
_Copied from original issue: codeforamerica/jail-dashboard-project#130_
|
1.0
|
User can see bar chart for population status - _From @hartsick on November 30, 2016 21:53_
Derived from `status` field in bookings, bounded by booking date & release date
Use existing Louisville site for reference
**N.B.**: Mob on this to set up example for future bar charts.
_Copied from original issue: codeforamerica/jail-dashboard-project#130_
|
process
|
user can see bar chart for population status from hartsick on november derived from status field in bookings bounded by booking date release date use existing louisville site for reference n b mob on this to set up example for future bar charts copied from original issue codeforamerica jail dashboard project
| 1
|
95,550
| 3,953,524,712
|
IssuesEvent
|
2016-04-29 13:49:29
|
opencaching/opencaching-pl
|
https://api.github.com/repos/opencaching/opencaching-pl
|
opened
|
Improve password strength in registration and password change forms
|
Component_Core General_Discussion Priority_High Type_Enhancement
|
When I tested my new solution for activating account by link I noticed problem about password strength in OC web-service.
At this moment:
- my password have to be longer then **two** characters !
- I can't use any special characters in my password !!!
Now I can create account with those passwords:
- qaz
- 123
etc.
But I **can't** create account with password like:
- !qa
- 12^
So... We permit to use very very week passwords at oc service. For me it is very big issue in security.
What limitation should we use for passwords?
My proposition:
Password:
- must contain at least one digit
- must contains at least one lowercase characters
- must contains at least one uppercase characters
- must contain at least one special symbol
- be longer then 8 characters
Any suggestions ?
|
1.0
|
Improve password strength in registration and password change forms - When I tested my new solution for activating account by link I noticed problem about password strength in OC web-service.
At this moment:
- my password have to be longer then **two** characters !
- I can't use any special characters in my password !!!
Now I can create account with those passwords:
- qaz
- 123
etc.
But I **can't** create account with password like:
- !qa
- 12^
So... We permit to use very very week passwords at oc service. For me it is very big issue in security.
What limitation should we use for passwords?
My proposition:
Password:
- must contain at least one digit
- must contains at least one lowercase characters
- must contains at least one uppercase characters
- must contain at least one special symbol
- be longer then 8 characters
Any suggestions ?
|
non_process
|
improve password strength in registration and password change forms when i tested my new solution for activating account by link i noticed problem about password strength in oc web service at this moment my password have to be longer then two characters i can t use any special characters in my password now i can create account with those passwords qaz etc but i can t create account with password like qa so we permit to use very very week passwords at oc service for me it is very big issue in security what limitation should we use for passwords my proposition password must contain at least one digit must contains at least one lowercase characters must contains at least one uppercase characters must contain at least one special symbol be longer then characters any suggestions
| 0
|
128
| 2,564,632,410
|
IssuesEvent
|
2015-02-06 21:14:29
|
MozillaFoundation/plan
|
https://api.github.com/repos/MozillaFoundation/plan
|
opened
|
Improve demo calls
|
process
|
People should feel engaged and inspired by seeing each other's work at demos. It should be fun! Let's come up with some ways to make it even more so than it already is.
Basic improvement ideas on the table:
* Time limit for each demo, circa 5min
* Rotate facilitation
* keep longform discussion/ Q&A to other forums to minimize the "lull" in presentations
Some other possibilities:
* Could try different format styles, a la ignite
* Potential training around how to do great presentations
* More agreed upon 'rules' like no multi-tasking during demos =)
* Recognition? for best demo? best emcee?
|
1.0
|
Improve demo calls - People should feel engaged and inspired by seeing each other's work at demos. It should be fun! Let's come up with some ways to make it even more so than it already is.
Basic improvement ideas on the table:
* Time limit for each demo, circa 5min
* Rotate facilitation
* keep longform discussion/ Q&A to other forums to minimize the "lull" in presentations
Some other possibilities:
* Could try different format styles, a la ignite
* Potential training around how to do great presentations
* More agreed upon 'rules' like no multi-tasking during demos =)
* Recognition? for best demo? best emcee?
|
process
|
improve demo calls people should feel engaged and inspired by seeing each other s work at demos it should be fun let s come up with some ways to make it even more so than it already is basic improvement ideas on the table time limit for each demo circa rotate facilitation keep longform discussion q a to other forums to minimize the lull in presentations some other possibilities could try different format styles a la ignite potential training around how to do great presentations more agreed upon rules like no multi tasking during demos recognition for best demo best emcee
| 1
|
16,353
| 21,012,714,132
|
IssuesEvent
|
2022-03-30 08:14:00
|
threefoldfoundation/tft
|
https://api.github.com/repos/threefoldfoundation/tft
|
closed
|
No suitable peers
|
priority_critical process_duplicate
|
```
ERROR[03-30|07:42:48.054] Error occured while minting: context deadline exceeded
INFO [03-30|07:42:49.043] Imported new block headers count=1 elapsed=902.003µs number=16503891 hash=67e57a…2e91aa
INFO [03-30|07:42:51.964] Imported new block headers count=1 elapsed=929.935µs number=16503892 hash=63b5bc…3bb237
INFO [03-30|07:42:54.318] Imported new block headers count=1 elapsed=1.438ms number=16503893 hash=aa5742…584dfd
WARN [03-30|07:42:57.824] Served eth_call reqid=120574 t=1m59.800879714s err="no suitable peers available" X-Forwarded-For=nil
INFO [03-30|07:42:58.055] Minting receiver=0548b01c168b79c6b2b24aacea6d8363bebd523f txID=7a0e64f73d65b40e812976eb887baf7f9ff246afaa58a056fb155659a608d977
```
|
1.0
|
No suitable peers - ```
ERROR[03-30|07:42:48.054] Error occured while minting: context deadline exceeded
INFO [03-30|07:42:49.043] Imported new block headers count=1 elapsed=902.003µs number=16503891 hash=67e57a…2e91aa
INFO [03-30|07:42:51.964] Imported new block headers count=1 elapsed=929.935µs number=16503892 hash=63b5bc…3bb237
INFO [03-30|07:42:54.318] Imported new block headers count=1 elapsed=1.438ms number=16503893 hash=aa5742…584dfd
WARN [03-30|07:42:57.824] Served eth_call reqid=120574 t=1m59.800879714s err="no suitable peers available" X-Forwarded-For=nil
INFO [03-30|07:42:58.055] Minting receiver=0548b01c168b79c6b2b24aacea6d8363bebd523f txID=7a0e64f73d65b40e812976eb887baf7f9ff246afaa58a056fb155659a608d977
```
|
process
|
no suitable peers error error occured while minting context deadline exceeded info imported new block headers count elapsed number hash … info imported new block headers count elapsed number hash … info imported new block headers count elapsed number hash … warn served eth call reqid t err no suitable peers available x forwarded for nil info minting receiver txid
| 1
|
18,838
| 24,744,337,319
|
IssuesEvent
|
2022-10-21 08:28:25
|
fadeoutsoftware/WASDI
|
https://api.github.com/repos/fadeoutsoftware/WASDI
|
closed
|
Docker > Jupyter / Traefik > Implement a native Jinja engine / stop to use the sysadmin tool
|
bug P2 app / processor
|
For the Jupyter containers and for Traefik, we need to render a Jinja template.
For the need of my presentation meeting, I wrote a script myself. This tool was more:
- a proof of concept
- a tool for sysadmin needs
To use this tool is dangerous:
- the tool uses my Toolbox engine: if I have to update the Toolbox engine, potentially I break a business feature
- my opinion - it is not logic to have a call to an external tool when it is possible to embed a native library
Example:
wrappersnap/launcher/src/main/java/wasdi/processors/JupyterNotebookProcessorEngine.java
```
oSB.append("/usr/bin/python3 \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" -B \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" /opt/companyExploitation/common/tool/toolbox/sysadmin/code/toolbox.py \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --configuration-directory /opt/companyExploitation/common/tool/toolbox/sysadmin/configuration/ \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --module jinja2 \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --submodule renderTemplate \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --template /data/wasdi/container/volume/traefik-notebook/etc_traefik/template/conf.d_notebook.yml.j2 \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --rendered-file /data/wasdi/container/volume/traefik-notebook/etc_traefik/conf.d/nb_" + sJupyterNotebookCode + ".yml \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --json-inline '{\"wasdiNotebookId\": \"" + sJupyterNotebookCode + "\"}' \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --strict");
```
Objective of this ticket:
- remove these calls
- replace with a native engine
I set the label "bug" because I consider it is a bug: I asked many times during the development to remove this call and it was delivered as it. Maybe we can remove this label if you disagree with me.
|
1.0
|
Docker > Jupyter / Traefik > Implement a native Jinja engine / stop to use the sysadmin tool - For the Jupyter containers and for Traefik, we need to render a Jinja template.
For the need of my presentation meeting, I wrote a script myself. This tool was more:
- a proof of concept
- a tool for sysadmin needs
To use this tool is dangerous:
- the tool uses my Toolbox engine: if I have to update the Toolbox engine, potentially I break a business feature
- my opinion - it is not logic to have a call to an external tool when it is possible to embed a native library
Example:
wrappersnap/launcher/src/main/java/wasdi/processors/JupyterNotebookProcessorEngine.java
```
oSB.append("/usr/bin/python3 \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" -B \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" /opt/companyExploitation/common/tool/toolbox/sysadmin/code/toolbox.py \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --configuration-directory /opt/companyExploitation/common/tool/toolbox/sysadmin/configuration/ \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --module jinja2 \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --submodule renderTemplate \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --template /data/wasdi/container/volume/traefik-notebook/etc_traefik/template/conf.d_notebook.yml.j2 \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --rendered-file /data/wasdi/container/volume/traefik-notebook/etc_traefik/conf.d/nb_" + sJupyterNotebookCode + ".yml \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --json-inline '{\"wasdiNotebookId\": \"" + sJupyterNotebookCode + "\"}' \\");
oSB.append(LINE_SEPARATOR);
oSB.append(" --strict");
```
Objective of this ticket:
- remove these calls
- replace with a native engine
I set the label "bug" because I consider it is a bug: I asked many times during the development to remove this call and it was delivered as it. Maybe we can remove this label if you disagree with me.
|
process
|
docker jupyter traefik implement a native jinja engine stop to use the sysadmin tool for the jupyter containers and for traefik we need to render a jinja template for the need of my presentation meeting i wrote a script myself this tool was more a proof of concept a tool for sysadmin needs to use this tool is dangerous the tool uses my toolbox engine if i have to update the toolbox engine potentially i break a business feature my opinion it is not logic to have a call to an external tool when it is possible to embed a native library example wrappersnap launcher src main java wasdi processors jupyternotebookprocessorengine java osb append usr bin osb append line separator osb append b osb append line separator osb append opt companyexploitation common tool toolbox sysadmin code toolbox py osb append line separator osb append configuration directory opt companyexploitation common tool toolbox sysadmin configuration osb append line separator osb append module osb append line separator osb append submodule rendertemplate osb append line separator osb append template data wasdi container volume traefik notebook etc traefik template conf d notebook yml osb append line separator osb append rendered file data wasdi container volume traefik notebook etc traefik conf d nb sjupyternotebookcode yml osb append line separator osb append json inline wasdinotebookid sjupyternotebookcode osb append line separator osb append strict objective of this ticket remove these calls replace with a native engine i set the label bug because i consider it is a bug i asked many times during the development to remove this call and it was delivered as it maybe we can remove this label if you disagree with me
| 1
|
19,551
| 25,870,440,506
|
IssuesEvent
|
2022-12-14 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Wed, 14 Dec 22
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation
- **Authors:** Chaolong Yang, Yuyao Yan, Weiguang Zhao, Jianan Ye, Xi Yang, Amir Hussain, Kaizhu Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06682
- **Pdf link:** https://arxiv.org/pdf/2212.06682
- **Abstract**
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Albeit the success, it still remains elusive how to fuse and process the cross-dimensional features from these two distinct spaces. Existing state-of-the-art usually exploit bidirectional projection methods to align the cross-dimensional features and realize both 2D & 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetrical 2D-3D network structure, thus limiting the network's flexibility. Meanwhile, such dual-task settings may distract the network easily and lead to over-fitting in the 3D segmentation task. As limited by the network's inflexibility, fused features can only pass through a decoder network, which affects model performance due to insufficient depth. To alleviate these drawbacks, in this paper, we argue that despite its simplicity, projecting unidirectionally multi-view 2D deep semantic features into the 3D space aligned with 3D deep semantic features could lead to better feature fusion. On the one hand, the unidirectional projection enforces our model focused more on the core task, i.e., 3D segmentation; on the other hand, unlocking the bidirectional to unidirectional projection enables a deeper cross-domain semantic alignment and enjoys the flexibility to fuse better and complicated features from very different spaces. In joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes
- **Authors:** Sergey Zakharov, Rares Ambrus, Katherine Liu, Adrien Gaidon
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2212.06193
- **Pdf link:** https://arxiv.org/pdf/2212.06193
- **Abstract**
Compact and accurate representations of 3D shapes are central to many perception and robotics tasks. State-of-the-art learning-based methods can reconstruct single objects but scale poorly to large datasets. We present a novel recursive implicit representation to efficiently and accurately encode large datasets of complex 3D shapes by recursively traversing an implicit octree in latent space. Our implicit Recursive Octree Auto-Decoder (ROAD) learns a hierarchically structured latent space enabling state-of-the-art reconstruction results at a compression ratio above 99%. We also propose an efficient curriculum learning scheme that naturally exploits the coarse-to-fine properties of the underlying octree spatial representation. We explore the scaling law relating latent space dimension, dataset size, and reconstruction accuracy, showing that increasing the latent space dimension is enough to scale to large shape datasets. Finally, we show that our learned latent space encodes a coarse-to-fine hierarchical structure yielding reusable latents across different levels of details, and we provide qualitative evidence of generalization to novel shapes outside the training set.
## Keyword: RAW
### PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion
- **Authors:** Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06244
- **Pdf link:** https://arxiv.org/pdf/2212.06244
- **Abstract**
Fusing camera with LiDAR is a promising technique to improve the accuracy of 3D detection due to the complementary physical properties. While most existing methods focus on fusing camera features directly with raw LiDAR point clouds or shallow 3D features, it is observed that direct deep 3D feature fusion achieves inferior accuracy due to feature misalignment. The misalignment that originates from the feature aggregation across large receptive fields becomes increasingly severe for deep network stages. In this paper, we propose PathFusion to enable path-consistent LiDAR-camera deep feature fusion. PathFusion introduces a path consistency loss between shallow and deep features, which encourages the 2D backbone and its fusion path to transform 2D features in a way that is semantically aligned with the transform of the 3D backbone. We apply PathFusion to the prior-art fusion baseline, Focals Conv, and observe more than 1.2\% mAP improvements on the nuScenes test split consistently with and without testing-time augmentations. Moreover, PathFusion also improves KITTI AP3D (R11) by more than 0.6% on moderate level.
### You Only Need a Good Embeddings Extractor to Fix Spurious Correlations
- **Authors:** Raghav Mehta, Vítor Albiero, Li Chen, Ivan Evtimov, Tamar Glaser, Zhiheng Li, Tal Hassner
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.06254
- **Pdf link:** https://arxiv.org/pdf/2212.06254
- **Abstract**
Spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background, so it would do poorly on a cow on a sandy background. A standard dataset for measuring state-of-the-art on methods mitigating this problem is Waterbirds. The best method (Group Distributionally Robust Optimization - GroupDRO) currently achieves 89\% worst group accuracy and standard training from scratch on raw images only gets 72\%. GroupDRO requires training a model in an end-to-end manner with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any sub-group information in the training set by simply using embeddings from a large pre-trained vision model extractor and training a linear classifier on top of it. With experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-training model and the size of the pre-training dataset matters. Our experiments reveal that high capacity vision transformers perform better compared to high capacity convolutional neural networks, and larger pre-training dataset leads to better worst-group accuracy on the spurious correlation dataset.
### Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation
- **Authors:** Chaolong Yang, Yuyao Yan, Weiguang Zhao, Jianan Ye, Xi Yang, Amir Hussain, Kaizhu Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06682
- **Pdf link:** https://arxiv.org/pdf/2212.06682
- **Abstract**
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Albeit the success, it still remains elusive how to fuse and process the cross-dimensional features from these two distinct spaces. Existing state-of-the-art usually exploit bidirectional projection methods to align the cross-dimensional features and realize both 2D & 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetrical 2D-3D network structure, thus limiting the network's flexibility. Meanwhile, such dual-task settings may distract the network easily and lead to over-fitting in the 3D segmentation task. As limited by the network's inflexibility, fused features can only pass through a decoder network, which affects model performance due to insufficient depth. To alleviate these drawbacks, in this paper, we argue that despite its simplicity, projecting unidirectionally multi-view 2D deep semantic features into the 3D space aligned with 3D deep semantic features could lead to better feature fusion. On the one hand, the unidirectional projection enforces our model focused more on the core task, i.e., 3D segmentation; on the other hand, unlocking the bidirectional to unidirectional projection enables a deeper cross-domain semantic alignment and enjoys the flexibility to fuse better and complicated features from very different spaces. In joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
## Keyword: raw image
### You Only Need a Good Embeddings Extractor to Fix Spurious Correlations
- **Authors:** Raghav Mehta, Vítor Albiero, Li Chen, Ivan Evtimov, Tamar Glaser, Zhiheng Li, Tal Hassner
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.06254
- **Pdf link:** https://arxiv.org/pdf/2212.06254
- **Abstract**
Spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background, so it would do poorly on a cow on a sandy background. A standard dataset for measuring state-of-the-art on methods mitigating this problem is Waterbirds. The best method (Group Distributionally Robust Optimization - GroupDRO) currently achieves 89\% worst group accuracy and standard training from scratch on raw images only gets 72\%. GroupDRO requires training a model in an end-to-end manner with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any sub-group information in the training set by simply using embeddings from a large pre-trained vision model extractor and training a linear classifier on top of it. With experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-training model and the size of the pre-training dataset matters. Our experiments reveal that high capacity vision transformers perform better compared to high capacity convolutional neural networks, and larger pre-training dataset leads to better worst-group accuracy on the spurious correlation dataset.
|
2.0
|
New submissions for Wed, 14 Dec 22 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation
- **Authors:** Chaolong Yang, Yuyao Yan, Weiguang Zhao, Jianan Ye, Xi Yang, Amir Hussain, Kaizhu Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06682
- **Pdf link:** https://arxiv.org/pdf/2212.06682
- **Abstract**
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Albeit the success, it still remains elusive how to fuse and process the cross-dimensional features from these two distinct spaces. Existing state-of-the-art usually exploit bidirectional projection methods to align the cross-dimensional features and realize both 2D & 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetrical 2D-3D network structure, thus limiting the network's flexibility. Meanwhile, such dual-task settings may distract the network easily and lead to over-fitting in the 3D segmentation task. As limited by the network's inflexibility, fused features can only pass through a decoder network, which affects model performance due to insufficient depth. To alleviate these drawbacks, in this paper, we argue that despite its simplicity, projecting unidirectionally multi-view 2D deep semantic features into the 3D space aligned with 3D deep semantic features could lead to better feature fusion. On the one hand, the unidirectional projection enforces our model focused more on the core task, i.e., 3D segmentation; on the other hand, unlocking the bidirectional to unidirectional projection enables a deeper cross-domain semantic alignment and enjoys the flexibility to fuse better and complicated features from very different spaces. In joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes
- **Authors:** Sergey Zakharov, Rares Ambrus, Katherine Liu, Adrien Gaidon
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2212.06193
- **Pdf link:** https://arxiv.org/pdf/2212.06193
- **Abstract**
Compact and accurate representations of 3D shapes are central to many perception and robotics tasks. State-of-the-art learning-based methods can reconstruct single objects but scale poorly to large datasets. We present a novel recursive implicit representation to efficiently and accurately encode large datasets of complex 3D shapes by recursively traversing an implicit octree in latent space. Our implicit Recursive Octree Auto-Decoder (ROAD) learns a hierarchically structured latent space enabling state-of-the-art reconstruction results at a compression ratio above 99%. We also propose an efficient curriculum learning scheme that naturally exploits the coarse-to-fine properties of the underlying octree spatial representation. We explore the scaling law relating latent space dimension, dataset size, and reconstruction accuracy, showing that increasing the latent space dimension is enough to scale to large shape datasets. Finally, we show that our learned latent space encodes a coarse-to-fine hierarchical structure yielding reusable latents across different levels of details, and we provide qualitative evidence of generalization to novel shapes outside the training set.
## Keyword: RAW
### PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion
- **Authors:** Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06244
- **Pdf link:** https://arxiv.org/pdf/2212.06244
- **Abstract**
Fusing camera with LiDAR is a promising technique to improve the accuracy of 3D detection due to the complementary physical properties. While most existing methods focus on fusing camera features directly with raw LiDAR point clouds or shallow 3D features, it is observed that direct deep 3D feature fusion achieves inferior accuracy due to feature misalignment. The misalignment that originates from the feature aggregation across large receptive fields becomes increasingly severe for deep network stages. In this paper, we propose PathFusion to enable path-consistent LiDAR-camera deep feature fusion. PathFusion introduces a path consistency loss between shallow and deep features, which encourages the 2D backbone and its fusion path to transform 2D features in a way that is semantically aligned with the transform of the 3D backbone. We apply PathFusion to the prior-art fusion baseline, Focals Conv, and observe more than 1.2\% mAP improvements on the nuScenes test split consistently with and without testing-time augmentations. Moreover, PathFusion also improves KITTI AP3D (R11) by more than 0.6% on moderate level.
### You Only Need a Good Embeddings Extractor to Fix Spurious Correlations
- **Authors:** Raghav Mehta, Vítor Albiero, Li Chen, Ivan Evtimov, Tamar Glaser, Zhiheng Li, Tal Hassner
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.06254
- **Pdf link:** https://arxiv.org/pdf/2212.06254
- **Abstract**
Spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background, so it would do poorly on a cow on a sandy background. A standard dataset for measuring state-of-the-art on methods mitigating this problem is Waterbirds. The best method (Group Distributionally Robust Optimization - GroupDRO) currently achieves 89\% worst group accuracy and standard training from scratch on raw images only gets 72\%. GroupDRO requires training a model in an end-to-end manner with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any sub-group information in the training set by simply using embeddings from a large pre-trained vision model extractor and training a linear classifier on top of it. With experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-training model and the size of the pre-training dataset matters. Our experiments reveal that high capacity vision transformers perform better compared to high capacity convolutional neural networks, and larger pre-training dataset leads to better worst-group accuracy on the spurious correlation dataset.
### Towards Deeper and Better Multi-view Feature Fusion for 3D Semantic Segmentation
- **Authors:** Chaolong Yang, Yuyao Yan, Weiguang Zhao, Jianan Ye, Xi Yang, Amir Hussain, Kaizhu Huang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06682
- **Pdf link:** https://arxiv.org/pdf/2212.06682
- **Abstract**
3D point clouds are rich in geometric structure information, while 2D images contain important and continuous texture information. Combining 2D information to achieve better 3D semantic segmentation has become mainstream in 3D scene understanding. Albeit the success, it still remains elusive how to fuse and process the cross-dimensional features from these two distinct spaces. Existing state-of-the-art usually exploit bidirectional projection methods to align the cross-dimensional features and realize both 2D & 3D semantic segmentation tasks. However, to enable bidirectional mapping, this framework often requires a symmetrical 2D-3D network structure, thus limiting the network's flexibility. Meanwhile, such dual-task settings may distract the network easily and lead to over-fitting in the 3D segmentation task. As limited by the network's inflexibility, fused features can only pass through a decoder network, which affects model performance due to insufficient depth. To alleviate these drawbacks, in this paper, we argue that despite its simplicity, projecting unidirectionally multi-view 2D deep semantic features into the 3D space aligned with 3D deep semantic features could lead to better feature fusion. On the one hand, the unidirectional projection enforces our model focused more on the core task, i.e., 3D segmentation; on the other hand, unlocking the bidirectional to unidirectional projection enables a deeper cross-domain semantic alignment and enjoys the flexibility to fuse better and complicated features from very different spaces. In joint 2D-3D approaches, our proposed method achieves superior performance on the ScanNetv2 benchmark for 3D semantic segmentation.
## Keyword: raw image
### You Only Need a Good Embeddings Extractor to Fix Spurious Correlations
- **Authors:** Raghav Mehta, Vítor Albiero, Li Chen, Ivan Evtimov, Tamar Glaser, Zhiheng Li, Tal Hassner
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.06254
- **Pdf link:** https://arxiv.org/pdf/2212.06254
- **Abstract**
Spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background, so it would do poorly on a cow on a sandy background. A standard dataset for measuring state-of-the-art on methods mitigating this problem is Waterbirds. The best method (Group Distributionally Robust Optimization - GroupDRO) currently achieves 89\% worst group accuracy and standard training from scratch on raw images only gets 72\%. GroupDRO requires training a model in an end-to-end manner with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any sub-group information in the training set by simply using embeddings from a large pre-trained vision model extractor and training a linear classifier on top of it. With experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-training model and the size of the pre-training dataset matters. Our experiments reveal that high capacity vision transformers perform better compared to high capacity convolutional neural networks, and larger pre-training dataset leads to better worst-group accuracy on the spurious correlation dataset.
|
process
|
new submissions for wed dec keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb towards deeper and better multi view feature fusion for semantic segmentation authors chaolong yang yuyao yan weiguang zhao jianan ye xi yang amir hussain kaizhu huang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract point clouds are rich in geometric structure information while images contain important and continuous texture information combining information to achieve better semantic segmentation has become mainstream in scene understanding albeit the success it still remains elusive how to fuse and process the cross dimensional features from these two distinct spaces existing state of the art usually exploit bidirectional projection methods to align the cross dimensional features and realize both semantic segmentation tasks however to enable bidirectional mapping this framework often requires a symmetrical network structure thus limiting the network s flexibility meanwhile such dual task settings may distract the network easily and lead to over fitting in the segmentation task as limited by the network s inflexibility fused features can only pass through a decoder network which affects model performance due to insufficient depth to alleviate these drawbacks in this paper we argue that despite its simplicity projecting unidirectionally multi view deep semantic features into the space aligned with deep semantic features could lead to better feature fusion on the one hand the unidirectional projection enforces our model focused more on the core task i e segmentation on the other hand unlocking the bidirectional to unidirectional projection enables a deeper cross domain semantic alignment and enjoys the flexibility to fuse better and complicated features from very different spaces in joint 
approaches our proposed method achieves superior performance on the benchmark for semantic segmentation keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression road learning an implicit recursive octree auto decoder to efficiently encode shapes authors sergey zakharov rares ambrus katherine liu adrien gaidon subjects computer vision and pattern recognition cs cv graphics cs gr robotics cs ro arxiv link pdf link abstract compact and accurate representations of shapes are central to many perception and robotics tasks state of the art learning based methods can reconstruct single objects but scale poorly to large datasets we present a novel recursive implicit representation to efficiently and accurately encode large datasets of complex shapes by recursively traversing an implicit octree in latent space our implicit recursive octree auto decoder road learns a hierarchically structured latent space enabling state of the art reconstruction results at a compression ratio above we also propose an efficient curriculum learning scheme that naturally exploits the coarse to fine properties of the underlying octree spatial representation we explore the scaling law relating latent space dimension dataset size and reconstruction accuracy showing that increasing the latent space dimension is enough to scale to large shape datasets finally we show that our learned latent space encodes a coarse to fine hierarchical structure yielding reusable latents across different levels of details and we provide qualitative evidence of generalization to novel shapes outside the training set keyword raw pathfusion path consistent lidar camera deep feature fusion authors lemeng wu dilin wang meng li yunyang xiong raghuraman krishnamoorthi qiang liu vikas chandra subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fusing camera with lidar is a promising technique to improve 
the accuracy of detection due to the complementary physical properties while most existing methods focus on fusing camera features directly with raw lidar point clouds or shallow features it is observed that direct deep feature fusion achieves inferior accuracy due to feature misalignment the misalignment that originates from the feature aggregation across large receptive fields becomes increasingly severe for deep network stages in this paper we propose pathfusion to enable path consistent lidar camera deep feature fusion pathfusion introduces a path consistency loss between shallow and deep features which encourages the backbone and its fusion path to transform features in a way that is semantically aligned with the transform of the backbone we apply pathfusion to the prior art fusion baseline focals conv and observe more than map improvements on the nuscenes test split consistently with and without testing time augmentations moreover pathfusion also improves kitti by more than on moderate level you only need a good embeddings extractor to fix spurious correlations authors raghav mehta vítor albiero li chen ivan evtimov tamar glaser zhiheng li tal hassner subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts for example when predicting whether an object is a cow a model might learn to rely on its green background so it would do poorly on a cow on a sandy background a standard dataset for measuring state of the art on methods mitigating this problem is waterbirds the best method group distributionally robust optimization groupdro currently achieves worst group accuracy and standard training from scratch on raw images only gets groupdro requires training a model in an end to end manner with subgroup labels in this paper we show that we can achieve up to accuracy without using any sub group 
information in the training set by simply using embeddings from a large pre trained vision model extractor and training a linear classifier on top of it with experiments on a wide range of pre trained models and pre training datasets we show that the capacity of the pre training model and the size of the pre training dataset matters our experiments reveal that high capacity vision transformers perform better compared to high capacity convolutional neural networks and larger pre training dataset leads to better worst group accuracy on the spurious correlation dataset towards deeper and better multi view feature fusion for semantic segmentation authors chaolong yang yuyao yan weiguang zhao jianan ye xi yang amir hussain kaizhu huang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract point clouds are rich in geometric structure information while images contain important and continuous texture information combining information to achieve better semantic segmentation has become mainstream in scene understanding albeit the success it still remains elusive how to fuse and process the cross dimensional features from these two distinct spaces existing state of the art usually exploit bidirectional projection methods to align the cross dimensional features and realize both semantic segmentation tasks however to enable bidirectional mapping this framework often requires a symmetrical network structure thus limiting the network s flexibility meanwhile such dual task settings may distract the network easily and lead to over fitting in the segmentation task as limited by the network s inflexibility fused features can only pass through a decoder network which affects model performance due to insufficient depth to alleviate these drawbacks in this paper we argue that despite its simplicity projecting unidirectionally multi view deep semantic features into the space aligned with deep semantic features could lead to better feature fusion on the one 
hand the unidirectional projection enforces our model focused more on the core task i e segmentation on the other hand unlocking the bidirectional to unidirectional projection enables a deeper cross domain semantic alignment and enjoys the flexibility to fuse better and complicated features from very different spaces in joint approaches our proposed method achieves superior performance on the benchmark for semantic segmentation keyword raw image you only need a good embeddings extractor to fix spurious correlations authors raghav mehta vítor albiero li chen ivan evtimov tamar glaser zhiheng li tal hassner subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts for example when predicting whether an object is a cow a model might learn to rely on its green background so it would do poorly on a cow on a sandy background a standard dataset for measuring state of the art on methods mitigating this problem is waterbirds the best method group distributionally robust optimization groupdro currently achieves worst group accuracy and standard training from scratch on raw images only gets groupdro requires training a model in an end to end manner with subgroup labels in this paper we show that we can achieve up to accuracy without using any sub group information in the training set by simply using embeddings from a large pre trained vision model extractor and training a linear classifier on top of it with experiments on a wide range of pre trained models and pre training datasets we show that the capacity of the pre training model and the size of the pre training dataset matters our experiments reveal that high capacity vision transformers perform better compared to high capacity convolutional neural networks and larger pre training dataset leads to better worst group accuracy on the spurious correlation dataset
| 1
|
1,609
| 3,808,042,474
|
IssuesEvent
|
2016-03-25 12:52:14
|
ngageoint/hootenanny
|
https://api.github.com/repos/ngageoint/hootenanny
|
opened
|
"memory leak" messages seen sometimes when starting tomcat with hoot deployed to it
|
Category: Services Priority: Medium Status: Defined Type: Bug
|
I see this in certain situations when starting up tomcat after hoot has been deployed to it.
"SEVERE: The web application [/hoot-services] appears to have started a thread named [Thread-4] but has failed to stop it. This is very likely to create a memory leak."
* Reproduce the error message
* Determine if its legit
* If so, make changes to prevent it from happening
|
1.0
|
"memory leak" messages seen sometimes when starting tomcat with hoot deployed to it - I see this in certain situations when starting up tomcat after hoot has been deployed to it.
"SEVERE: The web application [/hoot-services] appears to have started a thread named [Thread-4] but has failed to stop it. This is very likely to create a memory leak."
* Reproduce the error message
* Determine if its legit
* If so, make changes to prevent it from happening
|
non_process
|
memory leak messages seen sometimes when starting tomcat with hoot deployed to it i see this in certain situations when starting up tomcat after hoot has been deployed to it severe the web application appears to have started a thread named but has failed to stop it this is very likely to create a memory leak reproduce the error message determine if its legit if so make changes to prevent it from happening
| 0
|
22,649
| 31,895,827,418
|
IssuesEvent
|
2023-09-18 01:31:57
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - group
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_group
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): group
* Term label (English, not normative): Group
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the lithostratigraphic group from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Bathurst, Lower Wealden
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - group - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_group
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): group
* Term label (English, not normative): Group
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the lithostratigraphic group from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Bathurst, Lower Wealden
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term group term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes group term label english not normative group organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the lithostratigraphic group from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative bathurst lower wealden refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
1,286
| 3,822,801,783
|
IssuesEvent
|
2016-03-30 03:46:58
|
mapbox/mapbox-gl-js
|
https://api.github.com/repos/mapbox/mapbox-gl-js
|
opened
|
Speeding up render tests
|
testing & release process
|
Render tests take a big portion of our test time (~10 min), especially after we switched on test coverage with `istanbul`. We could try to significantly speed this up:
1. Run it in two stages — the first generates all actual images, the second runs pixelmatch-powered diffing on all generated images. This allows us to run `istanbul` only on the first stage, speeding up the second one significantly.
2. Run both image generation and diffing in parallel with separate node processes (`child_process.fork`), similar to how `tile-reduce` parallelizes work. Since tests are independent, we can get a huge boost this way.
cc @lucaswoj @jfirebaugh
|
1.0
|
Speeding up render tests - Render tests take a big portion of our test time (~10 min), especially after we switched on test coverage with `istanbul`. We could try to significantly speed this up:
1. Run it in two stages — the first generates all actual images, the second runs pixelmatch-powered diffing on all generated images. This allows us to run `istanbul` only on the first stage, speeding up the second one significantly.
2. Run both image generation and diffing in parallel with separate node processes (`child_process.fork`), similar to how `tile-reduce` parallelizes work. Since tests are independent, we can get a huge boost this way.
cc @lucaswoj @jfirebaugh
|
process
|
speeding up render tests render tests take a big portion of our test time min especially after we switched on test coverage with istanbul we could try to significantly speed this up run it in two stages — the first generates all actual images the second runs pixelmatch powered diffing on all generated images this allows us to run istanbul only on the first stage speeding up the second one significantly run both image generation and diffing in parallel with separate node processes child process fork similar to how tile reduce parallelizes work since tests are independent we can get a huge boost this way cc lucaswoj jfirebaugh
| 1
|
21,129
| 28,101,560,216
|
IssuesEvent
|
2023-03-30 19:57:15
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Random Extract duplicates features
|
Processing Bug
|
### What is the bug or the crash?
I've got a vector layer with 2705 features from which I want to randonmly extract 1000 of them. Using the `Random Extract` tool, it returns a set of 1000 features but some of them are duplicated (up to four times). I've checked the `id` attribute and it is correctly set.
For being on the safe side, I've also used the `Random selection` tool to verify if it returns a correct set of features and it has worked well.
### Steps to reproduce the issue
1. Go to Toolbox
2. Click on `Random extract`
3. Select vector layer
4. Set 1000 features
### Versions
Versión de QGIS
3.22.16-Białowieża
Revisión del código de QGIS
6f08e4d7
Versión Qt
5.15.3
Versión de Python
3.9.5
Versión de GDAL/OGR
3.6.2
Versión de PROJ
9.1.1
Versión del registro de base de datos EPSG
v10.076 (2022-08-31)
Versión GEOS
3.11.1-CAPI-1.17.1
Versión de SQLite
3.39.4
Versión de PDAL
2.4.3
Versión del cliente de PostgreSQL
14.3
Versión de SpatiaLite
5.0.1
Versión de QWT
6.1.6
Versión de QScintilla2
2.13.1
Versión del SO
Windows 10 Version 2009
Complementos activos de Python
active_fire
0.3
DataPlotly
3.9.2
DEMto3D
3.51
HCMGIS
23.2.1
latlontools
3.6.7
OSMDownloader
1.0.3
pg_raster_import
3.1.0
pointsamplingtool
0.5.4
ProjectPackager
0.5.1
qdraw
3.0.2
qgisnetworklogger
0.2.0
QuickOSM
2.1.1
quick_map_services
0.19.32
sigpac_downloader
0.3
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Random Extract duplicates features - ### What is the bug or the crash?
I've got a vector layer with 2705 features from which I want to randonmly extract 1000 of them. Using the `Random Extract` tool, it returns a set of 1000 features but some of them are duplicated (up to four times). I've checked the `id` attribute and it is correctly set.
For being on the safe side, I've also used the `Random selection` tool to verify if it returns a correct set of features and it has worked well.
### Steps to reproduce the issue
1. Go to Toolbox
2. Click on `Random extract`
3. Select vector layer
4. Set 1000 features
### Versions
Versión de QGIS
3.22.16-Białowieża
Revisión del código de QGIS
6f08e4d7
Versión Qt
5.15.3
Versión de Python
3.9.5
Versión de GDAL/OGR
3.6.2
Versión de PROJ
9.1.1
Versión del registro de base de datos EPSG
v10.076 (2022-08-31)
Versión GEOS
3.11.1-CAPI-1.17.1
Versión de SQLite
3.39.4
Versión de PDAL
2.4.3
Versión del cliente de PostgreSQL
14.3
Versión de SpatiaLite
5.0.1
Versión de QWT
6.1.6
Versión de QScintilla2
2.13.1
Versión del SO
Windows 10 Version 2009
Complementos activos de Python
active_fire
0.3
DataPlotly
3.9.2
DEMto3D
3.51
HCMGIS
23.2.1
latlontools
3.6.7
OSMDownloader
1.0.3
pg_raster_import
3.1.0
pointsamplingtool
0.5.4
ProjectPackager
0.5.1
qdraw
3.0.2
qgisnetworklogger
0.2.0
QuickOSM
2.1.1
quick_map_services
0.19.32
sigpac_downloader
0.3
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
random extract duplicates features what is the bug or the crash i ve got a vector layer with features from which i want to randonmly extract of them using the random extract tool it returns a set of features but some of them are duplicated up to four times i ve checked the id attribute and it is correctly set for being on the safe side i ve also used the random selection tool to verify if it returns a correct set of features and it has worked well steps to reproduce the issue go to toolbox click on random extract select vector layer set features versions versión de qgis białowieża revisión del código de qgis versión qt versión de python versión de gdal ogr versión de proj versión del registro de base de datos epsg versión geos capi versión de sqlite versión de pdal versión del cliente de postgresql versión de spatialite versión de qwt versión de versión del so windows version complementos activos de python active fire dataplotly hcmgis latlontools osmdownloader pg raster import pointsamplingtool projectpackager qdraw qgisnetworklogger quickosm quick map services sigpac downloader db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
492,973
| 14,223,687,088
|
IssuesEvent
|
2020-11-17 18:33:40
|
longhorn/longhorn
|
https://api.github.com/repos/longhorn/longhorn
|
opened
|
[BUG] Longhorn didn't choose to rebuild on an existing replica if there is a scheduling failed replica after
|
area/manager enhancement priority/3
|
**Describe the bug**
Longhorn didn't choose to rebuild on an existing replica if there is a scheduling failed replica.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a volume with 3 replicas on a three-node cluster.
2. Shutdown one of the replica node.
3. Wait for more than `Replica Replenishment Wait Interval` which is by default 600 seconds.
4. See volume scheduled another replica to replace the down replica, but unable to trigger rebuild since it's not scheduable.
5. Bring back the node.
6. See rebuild was started on scheduled replica instead of the existing replica.
**Expected behavior**
Ideally, the rebuild should start on the existing replica to save space in this case.
**Log**
N/A.
**Environment:**
- Longhorn version: master
- Kubernetes version: v1.19.3
- Node OS type and version: Ubuntu 20.04
**Additional context**
N/A
|
1.0
|
[BUG] Longhorn didn't choose to rebuild on an existing replica if there is a scheduling failed replica after - **Describe the bug**
Longhorn didn't choose to rebuild on an existing replica if there is a scheduling failed replica.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a volume with 3 replicas on a three-node cluster.
2. Shutdown one of the replica node.
3. Wait for more than `Replica Replenishment Wait Interval` which is by default 600 seconds.
4. See volume scheduled another replica to replace the down replica, but unable to trigger rebuild since it's not scheduable.
5. Bring back the node.
6. See rebuild was started on scheduled replica instead of the existing replica.
**Expected behavior**
Ideally, the rebuild should start on the existing replica to save space in this case.
**Log**
N/A.
**Environment:**
- Longhorn version: master
- Kubernetes version: v1.19.3
- Node OS type and version: Ubuntu 20.04
**Additional context**
N/A
|
non_process
|
longhorn didn t choose to rebuild on an existing replica if there is a scheduling failed replica after describe the bug longhorn didn t choose to rebuild on an existing replica if there is a scheduling failed replica to reproduce steps to reproduce the behavior create a volume with replicas on a three node cluster shutdown one of the replica node wait for more than replica replenishment wait interval which is by default seconds see volume scheduled another replica to replace the down replica but unable to trigger rebuild since it s not scheduable bring back the node see rebuild was started on scheduled replica instead of the existing replica expected behavior ideally the rebuild should start on the existing replica to save space in this case log n a environment longhorn version master kubernetes version node os type and version ubuntu additional context n a
| 0
|
5,028
| 7,850,468,099
|
IssuesEvent
|
2018-06-20 08:39:42
|
rivine/recordchain
|
https://api.github.com/repos/rivine/recordchain
|
closed
|
Implement keep allive
|
process_wontfix
|
each node will send messages ( http calls with jwt and ip of node ) to the orderbook.
When the orderbook doesn’t receive these messages from the node, the orders will be deleted.
A node will always send his orders when a connection with the orderbook is created.
|
1.0
|
Implement keep allive - each node will send messages ( http calls with jwt and ip of node ) to the orderbook.
When the orderbook doesn’t receive these messages from the node, the orders will be deleted.
A node will always send his orders when a connection with the orderbook is created.
|
process
|
implement keep allive each node will send messages http calls with jwt and ip of node to the orderbook when the orderbook doesn’t receive these messages from the node the orders will be deleted a node will always send his orders when a connection with the orderbook is created
| 1
|
571,512
| 17,023,315,745
|
IssuesEvent
|
2021-07-03 01:23:29
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Tag:service=parking aisle needs rendered
|
Component: mapnik Priority: major Resolution: duplicate Type: enhancement
|
**[Submitted to the original trac issue database at 11.10pm, Friday, 24th October 2008]**
http://wiki.openstreetmap.org/index.php/Tag:service%3Dparking_aisle
service=parking has yet to be rendered corectly.
|
1.0
|
Tag:service=parking aisle needs rendered - **[Submitted to the original trac issue database at 11.10pm, Friday, 24th October 2008]**
http://wiki.openstreetmap.org/index.php/Tag:service%3Dparking_aisle
service=parking has yet to be rendered corectly.
|
non_process
|
tag service parking aisle needs rendered service parking has yet to be rendered corectly
| 0
|
14,692
| 17,850,765,845
|
IssuesEvent
|
2021-09-04 02:39:21
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Coding problems when running algorithms
|
Feedback stale Processing Bug
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS developers alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
Dear Ones,
In Brazil we use UTF-8 encoding. In the list of Qgis 3.16.8-Hannover compiled 5.15.2 (QGIS version code 8c50902) there is UTF-8 appears twice.But the biggest problem is that when running any algorithm that saves a new shp layer, the UTF-8 encoding is preserved, but the new layer opens with Windows-1252 encoding. If we change the encoding to UTF-8, the data is correct in the attribute table. Otherwise, the characters appear with problems. This forces you to correct the encoding manually each time a layer resulting from processing is opened. This becomes very problematic for building models with the modeler. We have tried to correct the encoding at the end with an algorithm to "set the encoding", but it has no output, so it is not possible to start a process in UTF-8 and end it in the same encoding.
If you have any questions, please feel free to contact me.
Thank you very much.
**How to Reproduce**
1. Open a shp saved in UTF-8 encoding that contains special characters such as ô ã like the names of Brazilian cities: "São Paulo, Florianópolis, São Gonçalo";
2. Use the "correct geometry" algorithm from the process box;
3. Check the encoding of the resulting layer will be windows-1252 instead of UTF-8). Also, the attribute table will be misconfigured;
4. Adjust the encoding of the layer by right-clicking on the vector layer, information, change to UTF-8 encoding (note that it appears twice in the list, but either one works);
5. Check the attribute table again. It will be correct, but for the layer to actually be in UTF-8 encoding you need to right click on the vector layer and "export layer in UTF-8 encoding;
6. Open the exported layer, which will now open in UTF-8 and the table will be correct. But there is no way to run this process in batch, at least I didn't find it.
**QGIS and OS versions**
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
Versão do QGIS
3.16.8-Hannover
Código da versão do QGIS
8c50902e
Compilado sobre Qt
5.15.2
Rodando sobre Qt
5.15.2
Compilado sobre GDAL/OGR
3.3.0
Rodando sobre GDAL/OGR
3.3.0
Compilado sobre GEOS
3.9.1-CAPI-1.14.2
Rodando sobre GEOS
3.9.1-CAPI-1.14.2
Compilado no SQLite
3.35.2
Executando contra SQLite
3.35.2
Versão do cliente PostgreSQL
13.0
Versão SpatiaLite
5.0.1
Versão do QWT
6.1.3
Versão QScintilla2
2.11.5
Compilado com PROJ
8.0.1
Em execução com PROJ
Rel. 8.0.1, March 5th, 2021
Versão SO
Windows 10 Version 2009
Ativar complementos python
dzetsaka;
FreehandRasterGeoreferencer;
ImportPhotos;
latlontools;
mmqgis;
quick_map_services;
shapetools;
MetaSearch;
processing
Windows 10 Home Single Language
20H2
12/03/2021
19042.1110
Windows Feature Experience Pack 120.2212.3530.0
**Additional context**
<!-- Add any other context about the problem here. -->
|
1.0
|
Coding problems when running algorithms - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS developers alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
Dear Ones,
In Brazil we use UTF-8 encoding. In the list of Qgis 3.16.8-Hannover compiled 5.15.2 (QGIS version code 8c50902) there is UTF-8 appears twice.But the biggest problem is that when running any algorithm that saves a new shp layer, the UTF-8 encoding is preserved, but the new layer opens with Windows-1252 encoding. If we change the encoding to UTF-8, the data is correct in the attribute table. Otherwise, the characters appear with problems. This forces you to correct the encoding manually each time a layer resulting from processing is opened. This becomes very problematic for building models with the modeler. We have tried to correct the encoding at the end with an algorithm to "set the encoding", but it has no output, so it is not possible to start a process in UTF-8 and end it in the same encoding.
If you have any questions, please feel free to contact me.
Thank you very much.
**How to Reproduce**
1. Open a shp saved in UTF-8 encoding that contains special characters such as ô ã like the names of Brazilian cities: "São Paulo, Florianópolis, São Gonçalo";
2. Use the "correct geometry" algorithm from the process box;
3. Check the encoding of the resulting layer will be windows-1252 instead of UTF-8). Also, the attribute table will be misconfigured;
4. Adjust the encoding of the layer by right-clicking on the vector layer, information, change to UTF-8 encoding (note that it appears twice in the list, but either one works);
5. Check the attribute table again. It will be correct, but for the layer to actually be in UTF-8 encoding you need to right click on the vector layer and "export layer in UTF-8 encoding;
6. Open the exported layer, which will now open in UTF-8 and the table will be correct. But there is no way to run this process in batch, at least I didn't find it.
**QGIS and OS versions**
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
Versão do QGIS
3.16.8-Hannover
Código da versão do QGIS
8c50902e
Compilado sobre Qt
5.15.2
Rodando sobre Qt
5.15.2
Compilado sobre GDAL/OGR
3.3.0
Rodando sobre GDAL/OGR
3.3.0
Compilado sobre GEOS
3.9.1-CAPI-1.14.2
Rodando sobre GEOS
3.9.1-CAPI-1.14.2
Compilado no SQLite
3.35.2
Executando contra SQLite
3.35.2
Versão do cliente PostgreSQL
13.0
Versão SpatiaLite
5.0.1
Versão do QWT
6.1.3
Versão QScintilla2
2.11.5
Compilado com PROJ
8.0.1
Em execução com PROJ
Rel. 8.0.1, March 5th, 2021
Versão SO
Windows 10 Version 2009
Ativar complementos python
dzetsaka;
FreehandRasterGeoreferencer;
ImportPhotos;
latlontools;
mmqgis;
quick_map_services;
shapetools;
MetaSearch;
processing
Windows 10 Home Single Language
20H2
12/03/2021
19042.1110
Windows Feature Experience Pack 120.2212.3530.0
**Additional context**
<!-- Add any other context about the problem here. -->
|
process
|
coding problems when running algorithms bug fixing and feature development is a community responsibility and not the responsibility of the qgis developers alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug dear ones in brazil we use utf encoding in the list of qgis hannover compiled qgis version code there is utf appears twice but the biggest problem is that when running any algorithm that saves a new shp layer the utf encoding is preserved but the new layer opens with windows encoding if we change the encoding to utf the data is correct in the attribute table otherwise the characters appear with problems this forces you to correct the encoding manually each time a layer resulting from processing is opened this becomes very problematic for building models with the modeler we have tried to correct the encoding at the end with an algorithm to set the encoding but it has no output so it is not possible to start a process in utf and end it in the same encoding if you have any questions please feel free to contact me thank you very much how to reproduce open a shp saved in utf encoding that contains special characters such as ô ã like the names of brazilian cities são paulo florianópolis são gonçalo use the correct geometry algorithm from the process box check the encoding of the resulting layer will be windows instead of utf also the attribute table will be misconfigured adjust the encoding of the layer by right clicking on the vector layer information change to utf encoding note that it appears twice in the list but either one works check the attribute table again it will be correct but for the layer to actually be in utf encoding you need to right click on the vector layer and export layer in utf encoding open the exported layer which will now open in utf and the table will be correct but there is no way to run this process in batch at least i didn t find it qgis and os versions about click in the table ctrl a and then ctrl c finally paste here versão do qgis hannover código da versão do qgis compilado sobre qt rodando sobre qt compilado sobre gdal ogr rodando sobre gdal ogr compilado sobre geos capi rodando sobre geos capi compilado no sqlite executando contra sqlite versão do cliente postgresql versão spatialite versão do qwt versão compilado com proj em execução com proj rel march versão so windows version ativar complementos python dzetsaka freehandrastergeoreferencer importphotos latlontools mmqgis quick map services shapetools metasearch processing windows home single language windows feature experience pack additional context
| 1
|
8,492
| 10,516,234,434
|
IssuesEvent
|
2019-09-28 16:03:33
|
OpenXRay/xray-16
|
https://api.github.com/repos/OpenXRay/xray-16
|
opened
|
Teach renderer to work with and without HQ Geometry fix
|
Compatibility Modmaker Experience Player Experience Render
|
Check out this commit: https://github.com/OpenXRay/xray-16/commit/2a91825eeb30b3cf8af6b91201a25bd4a90cd01e
This introduced support for high quality models along with the loss of compatibility with the original `skin.h` shader.
Renderer should dynamically detect if shaders has installed HQ geometry fix, and work correctly no matter if it's installed or not.
|
True
|
Teach renderer to work with and without HQ Geometry fix - Check out this commit: https://github.com/OpenXRay/xray-16/commit/2a91825eeb30b3cf8af6b91201a25bd4a90cd01e
This introduced support for high quality models along with the loss of compatibility with the original `skin.h` shader.
Renderer should dynamically detect if shaders has installed HQ geometry fix, and work correctly no matter if it's installed or not.
|
non_process
|
teach renderer to work with and without hq geometry fix check out this commit this introduced support for high quality models along with the loss of compatibility with the original skin h shader renderer should dynamically detect if shaders has installed hq geometry fix and work correctly no matter if it s installed or not
| 0
|
4,639
| 7,482,346,906
|
IssuesEvent
|
2018-04-05 00:46:48
|
UnbFeelings/unb-feelings-GQA
|
https://api.github.com/repos/UnbFeelings/unb-feelings-GQA
|
closed
|
Reorganização da parte da wiki de processo
|
process wiki
|
Foi colocado pela professora: difícil distinção entre a definição do processo de trabalho(GQA, o processo geral) e o processo do auditor, existem subprocessos que não cabem em todas as auditorias
Logo, ao documentar na wiki explicitar que o subprocesso "Programar auditoria" ocorrerá apenas uma vez, isto é, as outras equipes que assumirão o papel de GQA no futuro não terão de realizá-la.
|
1.0
|
Reorganização da parte da wiki de processo - Foi colocado pela professora: difícil distinção entre a definição do processo de trabalho(GQA, o processo geral) e o processo do auditor, existem subprocessos que não cabem em todas as auditorias
Logo, ao documentar na wiki explicitar que o subprocesso "Programar auditoria" ocorrerá apenas uma vez, isto é, as outras equipes que assumirão o papel de GQA no futuro não terão de realizá-la.
|
process
|
reorganização da parte da wiki de processo foi colocado pela professora difícil distinção entre a definição do processo de trabalho gqa o processo geral e o processo do auditor existem subprocessos que não cabem em todas as auditorias logo ao documentar na wiki explicitar que o subprocesso programar auditoria ocorrerá apenas uma vez isto é as outras equipes que assumirão o papel de gqa no futuro não terão de realizá la
| 1
|
19,227
| 25,376,834,477
|
IssuesEvent
|
2022-11-21 14:42:18
|
ResqDiver1317/ThirdPeril_PerilousSkies
|
https://api.github.com/repos/ResqDiver1317/ThirdPeril_PerilousSkies
|
closed
|
Figure out why the damn CF launcher won't connect to the freaking server......
|
bug COMPLETE/RESOLVED In Process
|
Self Explanatory in the title.
|
1.0
|
Figure out why the damn CF launcher won't connect to the freaking server...... - Self Explanatory in the title.
|
process
|
figure out why the damn cf launcher won t connect to the freaking server self explanatory in the title
| 1
|
6,767
| 9,905,579,756
|
IssuesEvent
|
2019-06-27 11:58:09
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
Preprocessor feature request: Common mask for multiple datasets
|
enhancement preprocessor
|
For multiple datasets it is necessary to produce a common mask and apply it to all of them if you want to compare respective results.
As far as I know, there is currently no preprocessor for producing and applying such a common mask.
Is there a work-around? Is this a needed/wanted feature? If I'm the only one, I can focus on doing this in the diagnostic.
|
1.0
|
Preprocessor feature request: Common mask for multiple datasets - For multiple datasets it is necessary to produce a common mask and apply it to all of them if you want to compare respective results.
As far as I know, there is currently no preprocessor for producing and applying such a common mask.
Is there a work-around? Is this a needed/wanted feature? If I'm the only one, I can focus on doing this in the diagnostic.
|
process
|
preprocessor feature request common mask for multiple datasets for multiple datasets it is necessary to produce a common mask and apply it to all of them if you want to compare respective results as far as i know there is currently no preprocessor for producing and applying such a common mask is there a work around is this a needed wanted feature if i m the only one i can focus on doing this in the diagnostic
| 1
|
21,355
| 29,188,351,583
|
IssuesEvent
|
2023-05-19 17:26:42
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Mongodb won’t work if version does fall into the “semantic-version-gte” pattern (Percona)
|
Type:Bug Priority:P1 Database/Mongo .Regression .Team/QueryProcessor :hammer_and_wrench: .Escalation
|
**Describe the bug**
After upgrading to metabase version 0.46, I can't access to any mongodb databases.
From what I've seen, in metabase database, table `metabase_database`, the dbms_version is wrong, because mongodb doest not follow semantic version.
What i get from the `dbms_version` => {"version":"5.0.14-12","semantic-version":[5,0,15,-100]}
**Logs**
```
2023-03-30 10:07:39,751 INFO driver.impl :: Initializing driver :mongo...
2023-03-30 10:07:39,753 INFO plugins.classloader :: Added URL file:/plugins/mongo.metabase-driver.jar to classpath
2023-03-30 10:07:39,755 DEBUG plugins.init-steps :: Loading plugin namespace metabase.driver.mongo...
WARNING: random-uuid already refers to: #'clojure.core/random-uuid in namespace: monger.util, being replaced by: #'monger.util/random-uuid
2023-03-30 10:07:41,857 INFO driver.impl :: Registered driver :mongo 🚚
2023-03-30 10:07:41,897 INFO metabase.util :: Load lazy loading driver :mongo took 2.1 s
2023-03-30 10:07:41,916 ERROR metabase.task :: Error initializing task :metabase.task.persist-refresh/PersistRefresh
clojure.lang.ExceptionInfo: Input to semantic-version-gte does not match schema:
[(named [nil nil nil (not ("Integer greater than or equal to zero" -100))] xv) nil]
{:type :schema.core/error, :schema [#schema.core.One{:schema [(constrained Int "Integer greater than or equal to zero")], :optional? false, :name xv} #schema.core.One{:schema [(constrained Int "Integer greater than or equal to zero")], :optional? false, :name yv}], :value [[5 0 15 -100] [4 2]], :error [(named [nil nil nil (not ("Integer greater than or equal to zero" -100))] xv) nil], :toucan2/context-trace [["execute SQL with class com.mchange.v2.c3p0.impl.NewProxyConnection" {:toucan2.jdbc.query/sql-args ["SELECT * FROM \"metabase_database\""]}] ["resolve connection" {:toucan2.connection/connectable metabase.db.connection.ApplicationDB}] ["resolve connection" {:toucan2.connection/connectable :default}] ["resolve connection" {:toucan2.connection/connectable nil}] {:toucan2.pipeline/rf #object[clojure.core$map$fn__5931$fn__5932 0x658e0f88 "clojure.core$map$fn__5931$fn__5932@658e0f88"]} ["with compiled query" {:toucan2.pipeline/compiled-query ["SELECT * FROM \"metabase_database\""]}] ["with built query" {:toucan2.pipeline/built-query {:select [:*], :from [[:metabase_database]]}}] ["with resolved query" {:toucan2.pipeline/resolved-query {}}] ["with parsed args" {:toucan2.pipeline/query-type :toucan.query-type/select.instances, :toucan2.pipeline/parsed-args {:queryable {}}}] ["with model" {:toucan2.pipeline/model :metabase.models.database/Database}] ["with unparsed args" {:toucan2.pipeline/query-type :toucan.query-type/select.instances, :toucan2.pipeline/unparsed-args (:metabase.models.database/Database)}]]}
at metabase.driver.util$fn__46893$semantic_version_gte__46898.invoke(util.clj:209)
at metabase.driver.mongo$fn__120574.invokeStatic(mongo.clj:295)
at metabase.driver.mongo$fn__120574.invoke(mongo.clj:290)
at clojure.lang.MultiFn.invoke(MultiFn.java:239)
at metabase.driver.util$features$iter__46856__46860$fn__46861.invoke(util.clj:199)
```
**To Reproduce**
**Expected behavior**
We should have a semantic version correct in `metabase_database` table
```
{"version":"5.0.14-12","semantic-version":[5,0,14]}
```
**Information about your Metabase Installation:**
You can get this information by going to Admin -> Troubleshooting, or simply post the JSON you see in that page.
{
"browser-info": {
"language": "en-US",
"platform": "Linux x86_64",
"userAgent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0",
"vendor": ""
}
}
- Your databases: MongoDB
- Metabase version: 0.46
- Metabase hosting environment: Kubernetes / Docker
- Metabase internal database: postgre
**Severity**
Critical
**Additional context**
Add any other context about the problem here.
|
1.0
|
Mongodb won’t work if version does fall into the “semantic-version-gte” pattern (Percona) - **Describe the bug**
After upgrading to metabase version 0.46, I can't access to any mongodb databases.
From what I've seen, in metabase database, table `metabase_database`, the dbms_version is wrong, because mongodb doest not follow semantic version.
What i get from the `dbms_version` => {"version":"5.0.14-12","semantic-version":[5,0,15,-100]}
**Logs**
```
2023-03-30 10:07:39,751 INFO driver.impl :: Initializing driver :mongo...
2023-03-30 10:07:39,753 INFO plugins.classloader :: Added URL file:/plugins/mongo.metabase-driver.jar to classpath
2023-03-30 10:07:39,755 DEBUG plugins.init-steps :: Loading plugin namespace metabase.driver.mongo...
WARNING: random-uuid already refers to: #'clojure.core/random-uuid in namespace: monger.util, being replaced by: #'monger.util/random-uuid
2023-03-30 10:07:41,857 INFO driver.impl :: Registered driver :mongo 🚚
2023-03-30 10:07:41,897 INFO metabase.util :: Load lazy loading driver :mongo took 2.1 s
2023-03-30 10:07:41,916 ERROR metabase.task :: Error initializing task :metabase.task.persist-refresh/PersistRefresh
clojure.lang.ExceptionInfo: Input to semantic-version-gte does not match schema:
[(named [nil nil nil (not ("Integer greater than or equal to zero" -100))] xv) nil]
{:type :schema.core/error, :schema [#schema.core.One{:schema [(constrained Int "Integer greater than or equal to zero")], :optional? false, :name xv} #schema.core.One{:schema [(constrained Int "Integer greater than or equal to zero")], :optional? false, :name yv}], :value [[5 0 15 -100] [4 2]], :error [(named [nil nil nil (not ("Integer greater than or equal to zero" -100))] xv) nil], :toucan2/context-trace [["execute SQL with class com.mchange.v2.c3p0.impl.NewProxyConnection" {:toucan2.jdbc.query/sql-args ["SELECT * FROM \"metabase_database\""]}] ["resolve connection" {:toucan2.connection/connectable metabase.db.connection.ApplicationDB}] ["resolve connection" {:toucan2.connection/connectable :default}] ["resolve connection" {:toucan2.connection/connectable nil}] {:toucan2.pipeline/rf #object[clojure.core$map$fn__5931$fn__5932 0x658e0f88 "clojure.core$map$fn__5931$fn__5932@658e0f88"]} ["with compiled query" {:toucan2.pipeline/compiled-query ["SELECT * FROM \"metabase_database\""]}] ["with built query" {:toucan2.pipeline/built-query {:select [:*], :from [[:metabase_database]]}}] ["with resolved query" {:toucan2.pipeline/resolved-query {}}] ["with parsed args" {:toucan2.pipeline/query-type :toucan.query-type/select.instances, :toucan2.pipeline/parsed-args {:queryable {}}}] ["with model" {:toucan2.pipeline/model :metabase.models.database/Database}] ["with unparsed args" {:toucan2.pipeline/query-type :toucan.query-type/select.instances, :toucan2.pipeline/unparsed-args (:metabase.models.database/Database)}]]}
at metabase.driver.util$fn__46893$semantic_version_gte__46898.invoke(util.clj:209)
at metabase.driver.mongo$fn__120574.invokeStatic(mongo.clj:295)
at metabase.driver.mongo$fn__120574.invoke(mongo.clj:290)
at clojure.lang.MultiFn.invoke(MultiFn.java:239)
at metabase.driver.util$features$iter__46856__46860$fn__46861.invoke(util.clj:199)
```
**To Reproduce**
**Expected behavior**
We should have a semantic version correct in `metabase_database` table
```
{"version":"5.0.14-12","semantic-version":[5,0,14]}
```
**Information about your Metabase Installation:**
You can get this information by going to Admin -> Troubleshooting, or simply post the JSON you see in that page.
{
"browser-info": {
"language": "en-US",
"platform": "Linux x86_64",
"userAgent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0",
"vendor": ""
}
}
- Your databases: MongoDB
- Metabase version: 0.46
- Metabase hosting environment: Kubernetes / Docker
- Metabase internal database: postgre
**Severity**
Critical
**Additional context**
Add any other context about the problem here.
|
process
|
mongodb won’t work if version does fall into the “semantic version gte” pattern percona describe the bug after upgrading to metabase version i can t access to any mongodb databases from what i ve seen in metabase database table metabase database the dbms version is wrong because mongodb doest not follow semantic version what i get from the dbms version version semantic version logs info driver impl initializing driver mongo info plugins classloader added url file plugins mongo metabase driver jar to classpath debug plugins init steps loading plugin namespace metabase driver mongo warning random uuid already refers to clojure core random uuid in namespace monger util being replaced by monger util random uuid info driver impl registered driver mongo 🚚 info metabase util load lazy loading driver mongo took s error metabase task error initializing task metabase task persist refresh persistrefresh clojure lang exceptioninfo input to semantic version gte does not match schema xv nil type schema core error schema optional false name xv schema core one schema optional false name yv value error xv nil context trace pipeline rf object from at metabase driver util fn semantic version gte invoke util clj at metabase driver mongo fn invokestatic mongo clj at metabase driver mongo fn invoke mongo clj at clojure lang multifn invoke multifn java at metabase driver util features iter fn invoke util clj to reproduce expected behavior we should have a semantic version correct in metabase database table version semantic version information about your metabase installation you can get this information by going to admin troubleshooting or simply post the json you see in that page browser info language en us platform linux useragent mozilla ubuntu linux rv gecko firefox vendor your databases mongodb metabase version metabase hosting environment kubernetes docker metabase internal database postgre severity critical additional context add any other context about the problem here
| 1
|
1,341
| 3,900,990,741
|
IssuesEvent
|
2016-04-18 09:00:15
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Богодухов (Харьковская обл.) - раскрыть "Звернення до міського голови"
|
In process of testing in work
|
инфо от координатора:
Чуть больше недели назад мы проводили презентацию для Богодухова, там согласились на внедрение услуги "Звернення до голови" для трех голов.
Вот контакты исполнителей этих услуг для Богодухова:
ИВАХ Алина (Представник районної державної адміністрації)
093-962-27-29
a.ivakh@bogodukhivrda.gov.ua
МИЩЕНКО Ольга (Представник міської ради)
050-174-08-68
БАБЕНКО Оксана (Представник районної ради)
066-723-34-49
bog_rr@ukr.net
https://www.facebook.com/valery.stavitsky
координатор
|
1.0
|
Богодухов (Харьковская обл.) - раскрыть "Звернення до міського голови" - инфо от координатора:
Чуть больше недели назад мы проводили презентацию для Богодухова, там согласились на внедрение услуги "Звернення до голови" для трех голов.
Вот контакты исполнителей этих услуг для Богодухова:
ИВАХ Алина (Представник районної державної адміністрації)
093-962-27-29
a.ivakh@bogodukhivrda.gov.ua
МИЩЕНКО Ольга (Представник міської ради)
050-174-08-68
БАБЕНКО Оксана (Представник районної ради)
066-723-34-49
bog_rr@ukr.net
https://www.facebook.com/valery.stavitsky
координатор
|
process
|
богодухов харьковская обл раскрыть звернення до міського голови инфо от координатора чуть больше недели назад мы проводили презентацию для богодухова там согласились на внедрение услуги звернення до голови для трех голов вот контакты исполнителей этих услуг для богодухова ивах алина представник районної державної адміністрації a ivakh bogodukhivrda gov ua мищенко ольга представник міської ради бабенко оксана представник районної ради bog rr ukr net координатор
| 1
|
308,521
| 23,252,073,373
|
IssuesEvent
|
2022-08-04 05:27:41
|
UoaWDCC/NZCSA-Frontend
|
https://api.github.com/repos/UoaWDCC/NZCSA-Frontend
|
opened
|
[Documentation] SponsorsLogoLayout and SponsorGrid
|
Type: Documentation
|
**Describe the task that needs to be done.**
Document the SponsorsLogoLayout and SponsorGrid using jsdoc and comment any methods.
In the js doc, must include the use of the file, and what is in the input props if its applicable.
**Describe how a solution to your proposed task might look like (and any alternatives considered).**
*fill in this please*
**Notes**
|
1.0
|
[Documentation] SponsorsLogoLayout and SponsorGrid - **Describe the task that needs to be done.**
Document the SponsorsLogoLayout and SponsorGrid using jsdoc and comment any methods.
In the js doc, must include the use of the file, and what is in the input props if its applicable.
**Describe how a solution to your proposed task might look like (and any alternatives considered).**
*fill in this please*
**Notes**
|
non_process
|
sponsorslogolayout and sponsorgrid describe the task that needs to be done document the sponsorslogolayout and sponsorgrid using jsdoc and comment any methods in the js doc must include the use of the file and what is in the input props if its applicable describe how a solution to your proposed task might look like and any alternatives considered fill in this please notes
| 0
|
270,027
| 28,960,382,432
|
IssuesEvent
|
2023-05-10 01:37:20
|
Nivaskumark/kernel_v4.19.72_old
|
https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old
|
reopened
|
CVE-2020-25645 (High) detected in linuxlinux-4.19.83
|
Mend: dependency security vulnerability
|
## CVE-2020-25645 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/geneve.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/geneve.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel in versions before 5.9-rc7. Traffic between two Geneve endpoints may be unencrypted when IPsec is configured to encrypt traffic for the specific UDP port used by the GENEVE tunnel allowing anyone between the two endpoints to read the traffic unencrypted. The main threat from this vulnerability is to data confidentiality.
<p>Publish Date: 2020-10-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-25645>CVE-2020-25645</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1883988">https://bugzilla.redhat.com/show_bug.cgi?id=1883988</a></p>
<p>Release Date: 2020-10-13</p>
<p>Fix Resolution: v4.14.200,v4.19.148,v5.4.68,v5.8.12,v5.9-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-25645 (High) detected in linuxlinux-4.19.83 - ## CVE-2020-25645 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/geneve.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/geneve.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel in versions before 5.9-rc7. Traffic between two Geneve endpoints may be unencrypted when IPsec is configured to encrypt traffic for the specific UDP port used by the GENEVE tunnel allowing anyone between the two endpoints to read the traffic unencrypted. The main threat from this vulnerability is to data confidentiality.
<p>Publish Date: 2020-10-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-25645>CVE-2020-25645</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1883988">https://bugzilla.redhat.com/show_bug.cgi?id=1883988</a></p>
<p>Release Date: 2020-10-13</p>
<p>Fix Resolution: v4.14.200,v4.19.148,v5.4.68,v5.8.12,v5.9-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files drivers net geneve c drivers net geneve c vulnerability details a flaw was found in the linux kernel in versions before traffic between two geneve endpoints may be unencrypted when ipsec is configured to encrypt traffic for the specific udp port used by the geneve tunnel allowing anyone between the two endpoints to read the traffic unencrypted the main threat from this vulnerability is to data confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
489,757
| 14,111,992,725
|
IssuesEvent
|
2020-11-07 02:41:32
|
chingu-voyages/v25-geckos-team-01
|
https://api.github.com/repos/chingu-voyages/v25-geckos-team-01
|
opened
|
Create application mockup
|
UserStory priority:must_have
|
**User Story Description**
As a Developer
I want to have a mockup of the app screens
So I can implement a UI & UX that ties functions to screens to meet the needs of all users.
**Steps to Follow (optional)**
- [ ] Create a paper or digital mockup of app screens, the elements in them, actions, and navigation
- [ ] Additional steps as necessary
**Additional Considerations**
Any supplemental information including unresolved questions, links to external resources, screenshots, etc.
|
1.0
|
Create application mockup - **User Story Description**
As a Developer
I want to have a mockup of the app screens
So I can implement a UI & UX that ties functions to screens to meet the needs of all users.
**Steps to Follow (optional)**
- [ ] Create a paper or digital mockup of app screens, the elements in them, actions, and navigation
- [ ] Additional steps as necessary
**Additional Considerations**
Any supplemental information including unresolved questions, links to external resources, screenshots, etc.
|
non_process
|
create application mockup user story description as a developer i want to have a mockup of the app screens so i can implement a ui ux that ties functions to screens to meet the needs of all users steps to follow optional create a paper or digital mockup of app screens the elements in them actions and navigation additional steps as necessary additional considerations any supplemental information including unresolved questions links to external resources screenshots etc
| 0
|
22,716
| 32,039,639,073
|
IssuesEvent
|
2023-09-22 18:10:04
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
procpath 1.8.1 has 2 GuardDog issues
|
guarddog exec-base64 silent-process-execution
|
https://pypi.org/project/procpath
https://inspector.pypi.io/project/procpath
```{
"dependency": "procpath",
"version": "1.8.1",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "Procpath-1.8.1/procpath/test.py:2386",
"code": " p = subprocess.Popen(\n ['timeout', '0.25', 'tail', '---disable-inotify', '-f', f'{f.name}', f.name],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subp... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
],
"exec-base64": [
{
"location": "Procpath-1.8.1/procpath/utility.py:27",
"code": " env = subprocess.check_output('\\n'.join(script), shell=True, encoding='utf-8')",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
]
},
"path": "/tmp/tmp0fn3pw6h/procpath"
}
}```
|
1.0
|
procpath 1.8.1 has 2 GuardDog issues - https://pypi.org/project/procpath
https://inspector.pypi.io/project/procpath
```{
"dependency": "procpath",
"version": "1.8.1",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "Procpath-1.8.1/procpath/test.py:2386",
"code": " p = subprocess.Popen(\n ['timeout', '0.25', 'tail', '---disable-inotify', '-f', f'{f.name}', f.name],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subp... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
],
"exec-base64": [
{
"location": "Procpath-1.8.1/procpath/utility.py:27",
"code": " env = subprocess.check_output('\\n'.join(script), shell=True, encoding='utf-8')",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
]
},
"path": "/tmp/tmp0fn3pw6h/procpath"
}
}```
|
process
|
procpath has guarddog issues dependency procpath version result issues errors results silent process execution location procpath procpath test py code p subprocess popen n n stdin subprocess devnull n stdout subprocess devnull n stderr subp message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null exec location procpath procpath utility py code env subprocess check output n join script shell true encoding utf message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n path tmp procpath
| 1
|
577,279
| 17,107,111,899
|
IssuesEvent
|
2021-07-09 19:43:51
|
adirh3/Fluent-Search
|
https://api.github.com/repos/adirh3/Fluent-Search
|
closed
|
The Unpin action doesn't highlight
|
Low Priority bug
|
**Describe the bug**
The Unpin in the home screen doesn't high like the other shortcuts.
**To Reproduce**
Steps to reproduce the behavior:
1. Pin a result to the home screen
2. Right-click on the result
3. Hover mouse over the Unpin action
4. See error
**Expected behavior**
Unpin should highlight like the others
**Screenshots**
https://user-images.githubusercontent.com/85425543/123670273-a8779e80-d85a-11eb-8625-e3ed14be3177.mp4
**Desktop (please complete the following information):**
- Windows 10 Version: 21H1
- Fluent Search Version: 0.9.88.1
|
1.0
|
The Unpin action doesn't highlight - **Describe the bug**
The Unpin in the home screen doesn't high like the other shortcuts.
**To Reproduce**
Steps to reproduce the behavior:
1. Pin a result to the home screen
2. Right-click on the result
3. Hover mouse over the Unpin action
4. See error
**Expected behavior**
Unpin should highlight like the others
**Screenshots**
https://user-images.githubusercontent.com/85425543/123670273-a8779e80-d85a-11eb-8625-e3ed14be3177.mp4
**Desktop (please complete the following information):**
- Windows 10 Version: 21H1
- Fluent Search Version: 0.9.88.1
|
non_process
|
the unpin action doesn t highlight describe the bug the unpin in the home screen doesn t high like the other shortcuts to reproduce steps to reproduce the behavior pin a result to the home screen right click on the result hover mouse over the unpin action see error expected behavior unpin should highlight like the others screenshots desktop please complete the following information windows version fluent search version
| 0
|
279,252
| 30,702,485,040
|
IssuesEvent
|
2023-07-27 01:34:09
|
maddyCode23/linux-4.1.15
|
https://api.github.com/repos/maddyCode23/linux-4.1.15
|
closed
|
CVE-2019-15292 (Medium) detected in multiple libraries - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2019-15292 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.0.9. There is a use-after-free in atalk_proc_exit, related to net/appletalk/atalk_proc.c, net/appletalk/ddp.c, and net/appletalk/sysctl_net_atalk.c.
<p>Publish Date: 2019-08-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-15292>CVE-2019-15292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15292">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15292</a></p>
<p>Release Date: 2019-09-03</p>
<p>Fix Resolution: v5.1-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-15292 (Medium) detected in multiple libraries - autoclosed - ## CVE-2019-15292 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b>, <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.0.9. There is a use-after-free in atalk_proc_exit, related to net/appletalk/atalk_proc.c, net/appletalk/ddp.c, and net/appletalk/sysctl_net_atalk.c.
<p>Publish Date: 2019-08-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-15292>CVE-2019-15292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15292">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15292</a></p>
<p>Release Date: 2019-09-03</p>
<p>Fix Resolution: v5.1-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries linux stable linux stable linux stable linux stable vulnerability details an issue was discovered in the linux kernel before there is a use after free in atalk proc exit related to net appletalk atalk proc c net appletalk ddp c and net appletalk sysctl net atalk c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
3,113
| 6,143,199,432
|
IssuesEvent
|
2017-06-27 04:21:46
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Not all attributes defined on a DITA Composite are copied in the preprocessing stage
|
preprocess
|
The callback `org.dita.dost.reader.MergeTopicParser.startElement(String, String, String, Attributes)` by default copies all attributes defined on the element in the string buffer. But for a `<dita>` element it makes a fast return and does not copy the attributes defined on the <dita> element to the string buffer.
Why is this a problem?
1. The attributes xtrf and xtrc are not passed further.
2. The `<dita>` element (at least the one which is XML Schema based) could have extra proxy-namespace declarations which are not copied and the XML would become namespace not-wellformed.
By the way, maybe in the entire dost.jar StringBuffer could be replaced with StringBuilder which is faster.
|
1.0
|
Not all attributes defined on a DITA Composite are copied in the preprocessing stage - The callback `org.dita.dost.reader.MergeTopicParser.startElement(String, String, String, Attributes)` by default copies all attributes defined on the element in the string buffer. But for a `<dita>` element it makes a fast return and does not copy the attributes defined on the <dita> element to the string buffer.
Why is this a problem?
1. The attributes xtrf and xtrc are not passed further.
2. The `<dita>` element (at least the one which is XML Schema based) could have extra proxy-namespace declarations which are not copied and the XML would become namespace not-wellformed.
By the way, maybe in the entire dost.jar StringBuffer could be replaced with StringBuilder which is faster.
|
process
|
not all attributes defined on a dita composite are copied in the preprocessing stage the callback org dita dost reader mergetopicparser startelement string string string attributes by default copies all attributes defined on the element in the string buffer but for a element it makes a fast return and does not copy the attributes defined on the element to the string buffer why is this a problem the attributes xtrf and xtrc are not passed further the element at least the one which is xml schema based could have extra proxy namespace declarations which are not copied and the xml would become namespace not wellformed by the way maybe in the entire dost jar stringbuffer could be replaced with stringbuilder which is faster
| 1
|
8,853
| 11,955,294,299
|
IssuesEvent
|
2020-04-04 03:47:44
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
discuss: minikube wait for default service accont ?
|
kind/process priority/backlog triage/discuss
|
as part of this PR https://github.com/kubernetes/minikube/pull/6999 to close https://github.com/kubernetes/minikube/issues/6997
I added wait for default service account to be created before the integeration tests apply yaml files.
at first I wanted to add that to minikube itself but that added 30 seconds to minikube start on my machine. and this would add very little value to 90% of the people.
even in our high stress integeration test it is a flake behavior, that sometimes we get this errr described here https://github.com/kubernetes/minikube/issues/6997
we could do one these, so neither the kabab be burnt nor the skewer. (persian slang) :
#### option 1 switch default behaviour
we can make the current wait=true behaviour to be called wait=false (do current wait behaviour even if user dosnt wanna wait)
and make wait=false the default behavior
and then for wait=true we add waiting for default service account.
that would make minikube more usable in testing environments.
#### option 2 add a new flag, get list of componenets to wait for
we add another flag, that gets a list of things that user wants us to wait for
--wait-for: [comp1, comp2, comp3]
like
--wait-for=apiserver,default-service-account
--wait-for=all
#### option 3 change the current flag from boolean to list
--wait
with a default value of the current things we wait for
apiserver,.systempods...
and we should handle logic that if user provided true or false, we should translate that for backward comaptibility
but if user wanted to they could add more to the list
--wait=all
ot
--wait=apiserver,systempods,default_sa ...
```
This seems to be covering up an actual flaw that users can face. Shouldn't the start command wait for this?
```
_Originally posted by @tstromberg in https://github.com/kubernetes/minikube/pull/6999/files_
|
1.0
|
discuss: minikube wait for default service accont ? - as part of this PR https://github.com/kubernetes/minikube/pull/6999 to close https://github.com/kubernetes/minikube/issues/6997
I added wait for default service account to be created before the integeration tests apply yaml files.
at first I wanted to add that to minikube itself but that added 30 seconds to minikube start on my machine. and this would add very little value to 90% of the people.
even in our high stress integeration test it is a flake behavior, that sometimes we get this errr described here https://github.com/kubernetes/minikube/issues/6997
we could do one these, so neither the kabab be burnt nor the skewer. (persian slang) :
#### option 1 switch default behaviour
we can make the current wait=true behaviour to be called wait=false (do current wait behaviour even if user dosnt wanna wait)
and make wait=false the default behavior
and then for wait=true we add waiting for default service account.
that would make minikube more usable in testing environments.
#### option 2 add a new flag, get list of componenets to wait for
we add another flag, that gets a list of things that user wants us to wait for
--wait-for: [comp1, comp2, comp3]
like
--wait-for=apiserver,default-service-account
--wait-for=all
#### option 3 change the current flag from boolean to list
--wait
with a default value of the current things we wait for
apiserver,.systempods...
and we should handle logic that if user provided true or false, we should translate that for backward comaptibility
but if user wanted to they could add more to the list
--wait=all
ot
--wait=apiserver,systempods,default_sa ...
```
This seems to be covering up an actual flaw that users can face. Shouldn't the start command wait for this?
```
_Originally posted by @tstromberg in https://github.com/kubernetes/minikube/pull/6999/files_
|
process
|
discuss minikube wait for default service accont as part of this pr to close i added wait for default service account to be created before the integeration tests apply yaml files at first i wanted to add that to minikube itself but that added seconds to minikube start on my machine and this would add very little value to of the people even in our high stress integeration test it is a flake behavior that sometimes we get this errr described here we could do one these so neither the kabab be burnt nor the skewer persian slang option switch default behaviour we can make the current wait true behaviour to be called wait false do current wait behaviour even if user dosnt wanna wait and make wait false the default behavior and then for wait true we add waiting for default service account that would make minikube more usable in testing environments option add a new flag get list of componenets to wait for we add another flag that gets a list of things that user wants us to wait for wait for like wait for apiserver default service account wait for all option change the current flag from boolean to list wait with a default value of the current things we wait for apiserver systempods and we should handle logic that if user provided true or false we should translate that for backward comaptibility but if user wanted to they could add more to the list wait all ot wait apiserver systempods default sa this seems to be covering up an actual flaw that users can face shouldn t the start command wait for this originally posted by tstromberg in
| 1
|
6,067
| 8,902,731,508
|
IssuesEvent
|
2019-01-17 08:33:26
|
Juris-M/citeproc-js
|
https://api.github.com/repos/Juris-M/citeproc-js
|
closed
|
page-range-delimiter for page ranges with suffixes
|
fix in process
|
via https://forums.zotero.org/discussion/comment/322133/#Comment_322133
the page range 162a-165d does *not* convert the delimiter to an en-dash. Given that 162a and 165d individually would be treated as `is-numeric=true` and that there are no obvious downside, I think the page-range delimiter should extend to such ranges.
|
1.0
|
page-range-delimiter for page ranges with suffixes - via https://forums.zotero.org/discussion/comment/322133/#Comment_322133
the page range 162a-165d does *not* convert the delimiter to an en-dash. Given that 162a and 165d individually would be treated as `is-numeric=true` and that there are no obvious downside, I think the page-range delimiter should extend to such ranges.
|
process
|
page range delimiter for page ranges with suffixes via the page range does not convert the delimiter to an en dash given that and individually would be treated as is numeric true and that there are no obvious downside i think the page range delimiter should extend to such ranges
| 1
|
30,522
| 4,628,300,374
|
IssuesEvent
|
2016-09-28 03:30:20
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
closed
|
Test: panels in sidebar
|
testplan-item
|
- [x] Any os @bpasero (would like to test it)
- [x] Any os @seanmcbreen
Complexity: 2
We are trying to make panels more discoverable and more connected between them, more details: #12277
This feature is currently experimental, thus enable it via setting `workbench.panels.showInSidebar`. Verify:
* All panels are shown as collapsed in the sidebar when the panel is hidden. If the panel is visible all panels are visible in the sidebar
* Problems view notifications are shown in the sidebar (even if the panel is hidden)
* All interactions via the sidebar makes sense and makes for a smooth ui (focus passed, active panel highlighted, automatic expansion / collapse based on panel visibility)
Current icons are temporary and all icon feedback should be sent to @bgashler1
@bpasero
|
1.0
|
Test: panels in sidebar - - [x] Any os @bpasero (would like to test it)
- [x] Any os @seanmcbreen
Complexity: 2
We are trying to make panels more discoverable and more connected between them, more details: #12277
This feature is currently experimental, thus enable it via setting `workbench.panels.showInSidebar`. Verify:
* All panels are shown as collapsed in the sidebar when the panel is hidden. If the panel is visible all panels are visible in the sidebar
* Problems view notifications are shown in the sidebar (even if the panel is hidden)
* All interactions via the sidebar makes sense and makes for a smooth ui (focus passed, active panel highlighted, automatic expansion / collapse based on panel visibility)
Current icons are temporary and all icon feedback should be sent to @bgashler1
@bpasero
|
non_process
|
test panels in sidebar any os bpasero would like to test it any os seanmcbreen complexity we are trying to make panels more discoverable and more connected between them more details this feature is currently experimental thus enable it via setting workbench panels showinsidebar verify all panels are shown as collapsed in the sidebar when the panel is hidden if the panel is visible all panels are visible in the sidebar problems view notifications are shown in the sidebar even if the panel is hidden all interactions via the sidebar makes sense and makes for a smooth ui focus passed active panel highlighted automatic expansion collapse based on panel visibility current icons are temporary and all icon feedback should be sent to bpasero
| 0
|
45,008
| 13,100,482,453
|
IssuesEvent
|
2020-08-04 00:40:57
|
AOSC-Dev/aosc-os-abbs
|
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
|
opened
|
xrdp: CVE-2020-4044
|
security to-stable
|
<!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2020-4044
**Other security advisory IDs:** DSA-4737-1
**Description:**
Ashley Newson discovered that the XRDP sessions manager was susceptible
to denial of service. A local attacker can further take advantage of
this flaw to impersonate the XRDP sessions manager and capture any user
credentials that are submitted to XRDP, approve or reject arbitrary
login credentials or to hijack existing sessions for xorgxrdp sessions.
**Patches:** from Debian
**PoC(s):** N/A
**Architectural progress (Mainline):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
**Architectural progress (Retro):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] ARMv5t+ `armel`
- [ ] ARMv7 `armhf`
- [ ] i486 `i486`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
True
|
xrdp: CVE-2020-4044 - <!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2020-4044
**Other security advisory IDs:** DSA-4737-1
**Description:**
Ashley Newson discovered that the XRDP sessions manager was susceptible
to denial of service. A local attacker can further take advantage of
this flaw to impersonate the XRDP sessions manager and capture any user
credentials that are submitted to XRDP, approve or reject arbitrary
login credentials or to hijack existing sessions for xorgxrdp sessions.
**Patches:** from Debian
**PoC(s):** N/A
**Architectural progress (Mainline):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
**Architectural progress (Retro):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [ ] ARMv5t+ `armel`
- [ ] ARMv7 `armhf`
- [ ] i486 `i486`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
non_process
|
xrdp cve cve ids cve other security advisory ids dsa description ashley newson discovered that the xrdp sessions manager was susceptible to denial of service a local attacker can further take advantage of this flaw to impersonate the xrdp sessions manager and capture any user credentials that are submitted to xrdp approve or reject arbitrary login credentials or to hijack existing sessions for xorgxrdp sessions patches from debian poc s n a architectural progress mainline bit optional environment architectural progress retro armel armhf
| 0
|
92,446
| 10,743,155,912
|
IssuesEvent
|
2019-10-30 01:03:04
|
randombit/botan
|
https://api.github.com/repos/randombit/botan
|
closed
|
[Question] Choices of algorithms
|
documentation usage question
|
Just wanna know among all those algorithms supported by botan for now, which is **considered practically**:
the **most secure public key encryption** algorithm;
the **most secure symmetric encryption** algorithm;
the **most secure signature** algorithm;
i personally dont have any knowledge about cryptography (all i know right now is basics learned from the web). and i wanna employ strong cryptography in my next project.
I do **expect (greatly) increased performance cost** when choosing more secure algorithms and greater key lengths. and **currently have no thoughts on defending against quantum computation power**.
i did do lots of Google myself before asking here, but almost all related articles i found didnt do comparison between algorithms or are seriously outdated.
|
1.0
|
[Question] Choices of algorithms - Just wanna know among all those algorithms supported by botan for now, which is **considered practically**:
the **most secure public key encryption** algorithm;
the **most secure symmetric encryption** algorithm;
the **most secure signature** algorithm;
i personally dont have any knowledge about cryptography (all i know right now is basics learned from the web). and i wanna employ strong cryptography in my next project.
I do **expect (greatly) increased performance cost** when choosing more secure algorithms and greater key lengths. and **currently have no thoughts on defending against quantum computation power**.
i did do lots of Google myself before asking here, but almost all related articles i found didnt do comparison between algorithms or are seriously outdated.
|
non_process
|
choices of algorithms just wanna know among all those algorithms supported by botan for now which is considered practically the most secure public key encryption algorithm the most secure symmetric encryption algorithm the most secure signature algorithm i personally dont have any knowledge about cryptography all i know right now is basics learned from the web and i wanna employ strong cryptography in my next project i do expect greatly increased performance cost when choosing more secure algorithms and greater key lengths and currently have no thoughts on defending against quantum computation power i did do lots of google myself before asking here but almost all related articles i found didnt do comparison between algorithms or are seriously outdated
| 0
|
357
| 2,524,590,407
|
IssuesEvent
|
2015-01-20 18:47:33
|
SemanticMediaWiki/SemanticMediaWiki
|
https://api.github.com/repos/SemanticMediaWiki/SemanticMediaWiki
|
opened
|
CacheableResultCollector::findPropertyTableByType uses undefined field
|
code quality
|
It accessed a $store field, which is not defined in the class itself. It implicitly relies on the deriving classes to define one.
|
1.0
|
CacheableResultCollector::findPropertyTableByType uses undefined field - It accessed a $store field, which is not defined in the class itself. It implicitly relies on the deriving classes to define one.
|
non_process
|
cacheableresultcollector findpropertytablebytype uses undefined field it accessed a store field which is not defined in the class itself it implicitly relies on the deriving classes to define one
| 0
|
313,274
| 9,559,170,564
|
IssuesEvent
|
2019-05-03 15:57:19
|
richelbilderbeek/djog_unos_2018
|
https://api.github.com/repos/richelbilderbeek/djog_unos_2018
|
closed
|
test_agent() has a test that fails
|
medium priority
|
**Is your feature request related to a problem? Please describe.**
In test_agent() is a test that fails, but I don't now which one.
**Describe the solution you'd like**
Find the test and fix it.
**Don't forget to uncomment test_agent() in main.cpp!**
|
1.0
|
test_agent() has a test that fails - **Is your feature request related to a problem? Please describe.**
In test_agent() is a test that fails, but I don't now which one.
**Describe the solution you'd like**
Find the test and fix it.
**Don't forget to uncomment test_agent() in main.cpp!**
|
non_process
|
test agent has a test that fails is your feature request related to a problem please describe in test agent is a test that fails but i don t now which one describe the solution you d like find the test and fix it don t forget to uncomment test agent in main cpp
| 0
|
309,893
| 23,310,752,165
|
IssuesEvent
|
2022-08-08 08:02:48
|
ProbablyManuel/requiem
|
https://api.github.com/repos/ProbablyManuel/requiem
|
opened
|
Consolidate changelog
|
documentation
|
For historical reasons the changelog is split into 3 files
- a plain text file up to Requiem 1.6
- a pdf file from Requiem 1.7 to 1.9
- a markdown file since Requiem 2.0
### Expected Outcome
- The changelog is consolidated into a single markdown file, using the layout defined in #44
|
1.0
|
Consolidate changelog - For historical reasons the changelog is split into 3 files
- a plain text file up to Requiem 1.6
- a pdf file from Requiem 1.7 to 1.9
- a markdown file since Requiem 2.0
### Expected Outcome
- The changelog is consolidated into a single markdown file, using the layout defined in #44
|
non_process
|
consolidate changelog for historical reasons the changelog is split into files a plain text file up to requiem a pdf file from requiem to a markdown file since requiem expected outcome the changelog is consolidated into a single markdown file using the layout defined in
| 0
|
1,239
| 3,777,612,224
|
IssuesEvent
|
2016-03-17 20:42:38
|
sci-visus/visus-issues
|
https://api.github.com/repos/sci-visus/visus-issues
|
closed
|
unexpected data relocation when crop is applied
|
Bug Processing ViSUS
|
This uses the newly exposed "crop" functionality from https://github.com/sci-visus/nvisusio/commit/a0aed87582edcae71644bfbc3b41d2625651e84a
Using the following script in the processing node for a volume render of 2kbit1, we see that when a subsection of the volume is requested by the query, and the query is rotated, that the projected slice is not correctly transformed.
https://www.dropbox.com/s/l15llydv4p15aa4/crop_slice_not_aligned_with_volume.mov?dl=0
It's not an urgent issue, but we want to keep track of it.
|
1.0
|
unexpected data relocation when crop is applied - This uses the newly exposed "crop" functionality from https://github.com/sci-visus/nvisusio/commit/a0aed87582edcae71644bfbc3b41d2625651e84a
Using the following script in the processing node for a volume render of 2kbit1, we see that when a subsection of the volume is requested by the query, and the query is rotated, that the projected slice is not correctly transformed.
https://www.dropbox.com/s/l15llydv4p15aa4/crop_slice_not_aligned_with_volume.mov?dl=0
It's not an urgent issue, but we want to keep track of it.
|
process
|
unexpected data relocation when crop is applied this uses the newly exposed crop functionality from using the following script in the processing node for a volume render of we see that when a subsection of the volume is requested by the query and the query is rotated that the projected slice is not correctly transformed it s not an urgent issue but we want to keep track of it
| 1
|
1,181
| 3,682,103,565
|
IssuesEvent
|
2016-02-24 08:04:55
|
yamamonsatoshi/lab
|
https://api.github.com/repos/yamamonsatoshi/lab
|
closed
|
processingで光学式のbluetooth環境で実験する
|
processing 重要度高
|
- 気圧式とのセンサ特性の比較のため、単純反応課題を用いて、光学式のもので実験を行う。
- 実際に吸ってみる条件もprocessingで同様に行う。
|
1.0
|
processingで光学式のbluetooth環境で実験する - - 気圧式とのセンサ特性の比較のため、単純反応課題を用いて、光学式のもので実験を行う。
- 実際に吸ってみる条件もprocessingで同様に行う。
|
process
|
processingで光学式のbluetooth環境で実験する 気圧式とのセンサ特性の比較のため、単純反応課題を用いて、光学式のもので実験を行う。 実際に吸ってみる条件もprocessingで同様に行う。
| 1
|
6,514
| 9,604,450,230
|
IssuesEvent
|
2019-05-10 20:02:43
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
M-p breaks command prompt in inferior ESS for Julia
|
bug lang:julia process
|
M-p (comint-previous-input) breaks command prompt in inferior ESS for Julia.
Steps to reproduce:
1. Launch an inferior ESS terminal for Julia
2. Evaluate some expression (e.g. type `1` and hit `RET`)
3. Press `M-p` (instead of getting `julia> 1` I get `1julia>`)
|
1.0
|
M-p breaks command prompt in inferior ESS for Julia - M-p (comint-previous-input) breaks command prompt in inferior ESS for Julia.
Steps to reproduce:
1. Launch an inferior ESS terminal for Julia
2. Evaluate some expression (e.g. type `1` and hit `RET`)
3. Press `M-p` (instead of getting `julia> 1` I get `1julia>`)
|
process
|
m p breaks command prompt in inferior ess for julia m p comint previous input breaks command prompt in inferior ess for julia steps to reproduce launch an inferior ess terminal for julia evaluate some expression e g type and hit ret press m p instead of getting julia i get
| 1
|
7,345
| 10,482,105,633
|
IssuesEvent
|
2019-09-24 11:11:07
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
inheritance doesnt work when opening a project from discussion
|
Fixed Meetings Process bug Projects
|
open a new discussion
assign different users as partners
go to projects tab
click on the new meeting
create a new project
the partners dont transfer to the new project
|
1.0
|
inheritance doesnt work when opening a project from discussion - open a new discussion
assign different users as partners
go to projects tab
click on the new meeting
create a new project
the partners dont transfer to the new project
|
process
|
inheritance doesnt work when opening a project from discussion open a new discussion assign different users as partners go to projects tab click on the new meeting create a new project the partners dont transfer to the new project
| 1
|
101,598
| 21,723,760,223
|
IssuesEvent
|
2022-05-11 04:57:01
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
opened
|
Missing create function code action for check-expr
|
Type/Bug Team/LanguageServer Area/CodeAction
|
**Description:**
Missing create function code action for `check-expr`, `checkpanic-expr`, and `panic-stmt`
**Steps to reproduce:**
```
function test() {
int intCheck = check getCheck();
}
```
```
function test() {
int intCheckPanic = checkpanic getCheckPanic();
}
```
```
function test() {
panic getPanic();
}
```
**Affected Versions:**
2201.1.0
|
1.0
|
Missing create function code action for check-expr - **Description:**
Missing create function code action for `check-expr`, `checkpanic-expr`, and `panic-stmt`
**Steps to reproduce:**
```
function test() {
int intCheck = check getCheck();
}
```
```
function test() {
int intCheckPanic = checkpanic getCheckPanic();
}
```
```
function test() {
panic getPanic();
}
```
**Affected Versions:**
2201.1.0
|
non_process
|
missing create function code action for check expr description missing create function code action for check expr checkpanic expr and panic stmt steps to reproduce function test int intcheck check getcheck function test int intcheckpanic checkpanic getcheckpanic function test panic getpanic affected versions
| 0
|
387,602
| 11,463,402,947
|
IssuesEvent
|
2020-02-07 15:56:41
|
canonical-web-and-design/tutorials.ubuntu.com
|
https://api.github.com/repos/canonical-web-and-design/tutorials.ubuntu.com
|
closed
|
snap a qt application
|
Priority: Medium Tutorials Content Type: Tutorial Request
|
I would like a tutorial that explains the qt5 remote part, how to use desktop launch and the plugs needed to work nicely in all linux distros.
|
1.0
|
snap a qt application - I would like a tutorial that explains the qt5 remote part, how to use desktop launch and the plugs needed to work nicely in all linux distros.
|
non_process
|
snap a qt application i would like a tutorial that explains the remote part how to use desktop launch and the plugs needed to work nicely in all linux distros
| 0
|
12,715
| 15,089,768,108
|
IssuesEvent
|
2021-02-06 07:34:26
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
closed
|
Change LinuxIssueFilePlugin to support symbolic link to /etc/issue
|
enhancement preprocessing
|
While testing with CoreOS image LinuxIssueFilePlugin fails since /etc/issue is a symbolic link
Change LinuxIssueFilePlugin to support this case
|
1.0
|
Change LinuxIssueFilePlugin to support symbolic link to /etc/issue - While testing with CoreOS image LinuxIssueFilePlugin fails since /etc/issue is a symbolic link
Change LinuxIssueFilePlugin to support this case
|
process
|
change linuxissuefileplugin to support symbolic link to etc issue while testing with coreos image linuxissuefileplugin fails since etc issue is a symbolic link change linuxissuefileplugin to support this case
| 1
|
27,953
| 8,055,689,810
|
IssuesEvent
|
2018-08-02 10:04:33
|
openshiftio/openshift.io
|
https://api.github.com/repos/openshiftio/openshift.io
|
closed
|
Build URL not set correctly on GitHub commit
|
SEV2-high team/build-cd type/bug
|

The tick mark sign should take me to build logs. It does not because the url set is wrong.
I see a URL something like this http://www.unconfigured-jenkins-location.com/job/kishansagathiya/job/app-test-1/job/master/9/display/redirect
http://www.unconfigured-jenkins-location.com should be changed to https://jenkins.openshift.io
Again in general I see logs that are publicly available and here we have to login, so I don't know how well the whole thing would work.
It is clearly a bug, but it is not something that would halt developers in anyway. However, this is really desirable from users' perspective.
|
1.0
|
Build URL not set correctly on GitHub commit - 
The tick mark sign should take me to build logs. It does not because the url set is wrong.
I see a URL something like this http://www.unconfigured-jenkins-location.com/job/kishansagathiya/job/app-test-1/job/master/9/display/redirect
http://www.unconfigured-jenkins-location.com should be changed to https://jenkins.openshift.io
Again in general I see logs that are publicly available and here we have to login, so I don't know how well the whole thing would work.
It is clearly a bug, but it is not something that would halt developers in anyway. However, this is really desirable from users' perspective.
|
non_process
|
build url not set correctly on github commit the tick mark sign should take me to build logs it does not because the url set is wrong i see a url something like this should be changed to again in general i see logs that are publicly available and here we have to login so i don t know how well the whole thing would work it is clearly a bug but it is not something that would halt developers in anyway however this is really desirable from users perspective
| 0
|
148,157
| 13,227,641,321
|
IssuesEvent
|
2020-08-18 03:49:53
|
aleksanderbrymora/firebase-hooks-react
|
https://api.github.com/repos/aleksanderbrymora/firebase-hooks-react
|
opened
|
No documentation
|
documentation enhancement
|
After some progress and getting the API pretty much set, we need a some docs ready
|
1.0
|
No documentation - After some progress and getting the API pretty much set, we need a some docs ready
|
non_process
|
no documentation after some progress and getting the api pretty much set we need a some docs ready
| 0
|
18,385
| 24,515,227,049
|
IssuesEvent
|
2022-10-11 03:59:36
|
f5devcentral/container-egress-service
|
https://api.github.com/repos/f5devcentral/container-egress-service
|
closed
|
CVE-2022-29526
|
processing
|
[CVE-2022-29526](https://nvd.nist.gov/vuln/detail/CVE-2022-29526) Published: June 23, 2022; 1:15:12 PM -0400 V3.1: 5.3 MEDIUM V2.0: 5.0 MEDIUM
> Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.
**To Reproduce**
https://github.com/f5devcentral/container-egress-service/blob/3e8f64bb9249ae60325fa7cd71e77b078abcfef2/go.sum#L502
```
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007 h1:gG67DSER+11cZvqIMb8S8bt0vZtiN6xWYARwirrOSfE=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 h1:SrN+KX8Art/Sf4HNj6Zcz06G7VEz+7w9tdXTPOZ7+l4=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
```
**Expected behavior**
golang.org/x/sys 0.0.0-20220412211240-33da011f77ad or newer
**Additional context**
golang: syscall: faccessat checks wrong group
|
1.0
|
CVE-2022-29526 - [CVE-2022-29526](https://nvd.nist.gov/vuln/detail/CVE-2022-29526) Published: June 23, 2022; 1:15:12 PM -0400 V3.1: 5.3 MEDIUM V2.0: 5.0 MEDIUM
> Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.
**To Reproduce**
https://github.com/f5devcentral/container-egress-service/blob/3e8f64bb9249ae60325fa7cd71e77b078abcfef2/go.sum#L502
```
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007 h1:gG67DSER+11cZvqIMb8S8bt0vZtiN6xWYARwirrOSfE=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 h1:SrN+KX8Art/Sf4HNj6Zcz06G7VEz+7w9tdXTPOZ7+l4=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
```
**Expected behavior**
golang.org/x/sys 0.0.0-20220412211240-33da011f77ad or newer
**Additional context**
golang: syscall: faccessat checks wrong group
|
process
|
cve published june pm medium medium go before and x before has incorrect privilege assignment when called with a non zero flags parameter the faccessat function could incorrectly report that a file is accessible to reproduce golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys go mod golang org x sys golang org x sys go mod golang org x sys srn golang org x sys go mod expected behavior golang org x sys or newer additional context golang syscall faccessat checks wrong group
| 1
|
12,478
| 14,946,047,425
|
IssuesEvent
|
2021-01-26 05:50:55
|
GoogleCloudPlatform/openmrs-fhir-analytics
|
https://api.github.com/repos/GoogleCloudPlatform/openmrs-fhir-analytics
|
closed
|
Set up a GCP project for test purposes
|
P1:must process
|
This is to give non-Googler contributors an easy way to test their changes and also to create a continuous integration test environment to be able to do end-to-end tests.
|
1.0
|
Set up a GCP project for test purposes - This is to give non-Googler contributors an easy way to test their changes and also to create a continuous integration test environment to be able to do end-to-end tests.
|
process
|
set up a gcp project for test purposes this is to give non googler contributors an easy way to test their changes and also to create a continuous integration test environment to be able to do end to end tests
| 1
|
14,205
| 17,102,798,214
|
IssuesEvent
|
2021-07-09 13:39:23
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Status of Bazel 5.0.0-pre.20210510.2
|
P1 release team-XProduct type: process
|
- Expected release date: Last week of May / First week of June
Task list:
- [x] Pick release baseline: 8a42645ec500874b0440475763ab680d5efc1e6a with cherrypicks e3c78c4eeaf4e8db3c22aa71c6c1578cb48c8dcc 1f52e9a58dd814f203797c5fbab44d9f4d53a43c
- [x] Create release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210510.2rc1
- [x] Check downstream projects: https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/2052
- [x] Push the release:
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Status of Bazel 5.0.0-pre.20210510.2 - - Expected release date: Last week of May / First week of June
Task list:
- [x] Pick release baseline: 8a42645ec500874b0440475763ab680d5efc1e6a with cherrypicks e3c78c4eeaf4e8db3c22aa71c6c1578cb48c8dcc 1f52e9a58dd814f203797c5fbab44d9f4d53a43c
- [x] Create release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210510.2rc1
- [x] Check downstream projects: https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/2052
- [x] Push the release:
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
status of bazel pre expected release date last week of may first week of june task list pick release baseline with cherrypicks create release check downstream projects push the release update the
| 1
|
181,238
| 14,011,492,920
|
IssuesEvent
|
2020-10-29 07:28:15
|
WordPress/twentytwentyone
|
https://api.github.com/repos/WordPress/twentytwentyone
|
closed
|
Block editor: Social icons aren’t correctly aligned
|
Low priority Needs testing [Component] Default blocks [Type] Bug
|
**Describe the bug**
When trying to create a list with 2 social icons using the Social Icons block, the icons aren’t aligned inside the editor.
**To Reproduce**
Inside the code editor, paste the following code:
```
<!-- wp:social-links -->
<ul class="wp-block-social-links"><!-- wp:social-link {"url":"#","service":"wordpress"} /-->
<!-- wp:social-link {"url":"#","service":"github"} /--></ul>
<!-- /wp:social-links -->
```
**Screenshots**
Block not focused:

Block focused:

|
1.0
|
Block editor: Social icons aren’t correctly aligned - **Describe the bug**
When trying to create a list with 2 social icons using the Social Icons block, the icons aren’t aligned inside the editor.
**To Reproduce**
Inside the code editor, paste the following code:
```
<!-- wp:social-links -->
<ul class="wp-block-social-links"><!-- wp:social-link {"url":"#","service":"wordpress"} /-->
<!-- wp:social-link {"url":"#","service":"github"} /--></ul>
<!-- /wp:social-links -->
```
**Screenshots**
Block not focused:

Block focused:

|
non_process
|
block editor social icons aren’t correctly aligned describe the bug when trying to create a list with social icons using the social icons block the icons aren’t aligned inside the editor to reproduce inside the code editor paste the following code screenshots block not focused block focused
| 0
|
56,954
| 6,535,018,850
|
IssuesEvent
|
2017-08-31 13:14:17
|
DataTables/DataTables
|
https://api.github.com/repos/DataTables/DataTables
|
closed
|
Datatable destroy() (v1.10) and fnDestroy(v1.9) generates Detached Nodes and memory leaks
|
Needs test case
|
Destroying and recreating tables in succession, generates memory leaks.
This is caused by In https://github.com/DataTables/DataTables/blob/master/media/js/jquery.dataTables.js#L9326 (v1.10)
missing a for cycle to set "null" all settings attributes.
for (var sKey in settings) {
settings[sKey] = null;
}
This problem there is in fnDestroy (v1.9) too.
An exhaustive problem's analysis: http://www.mozartrocks.ro/dt-test/patch-1.html.
I hope you'll fix this quickly.
Thanks
|
1.0
|
Datatable destroy() (v1.10) and fnDestroy(v1.9) generates Detached Nodes and memory leaks - Destroying and recreating tables in succession, generates memory leaks.
This is caused by In https://github.com/DataTables/DataTables/blob/master/media/js/jquery.dataTables.js#L9326 (v1.10)
missing a for cycle to set "null" all settings attributes.
for (var sKey in settings) {
settings[sKey] = null;
}
This problem there is in fnDestroy (v1.9) too.
An exhaustive problem's analysis: http://www.mozartrocks.ro/dt-test/patch-1.html.
I hope you'll fix this quickly.
Thanks
|
non_process
|
datatable destroy and fndestroy generates detached nodes and memory leaks destroying and recreating tables in succession generates memory leaks this is caused by in missing a for cycle to set null all settings attributes for var skey in settings settings null this problem there is in fndestroy too an exhaustive problem s analysis i hope you ll fix this quickly thanks
| 0
|
15,174
| 18,948,108,690
|
IssuesEvent
|
2021-11-18 12:27:51
|
Daviad0/Dark-Cave
|
https://api.github.com/repos/Daviad0/Dark-Cave
|
closed
|
(In Beta version) "Colorama" package isn't defined on most compilers
|
Bug Processing Issue
|
import colorama is not supported on most compilers (I've tried 3 and only one has it)
|
1.0
|
(In Beta version) "Colorama" package isn't defined on most compilers - import colorama is not supported on most compilers (I've tried 3 and only one has it)
|
process
|
in beta version colorama package isn t defined on most compilers import colorama is not supported on most compilers i ve tried and only one has it
| 1
|
639,618
| 20,759,987,925
|
IssuesEvent
|
2022-03-15 15:23:20
|
radareorg/radare2
|
https://api.github.com/repos/radareorg/radare2
|
closed
|
Reuse the XML library everywhere
|
refactor good first issue XML high-priority
|
Like it was done for JSON parsing and generation.
Currently XML is being parsed in:
- XNU kernelcache https://github.com/radare/radare2/blob/master/libr/bin/format/xnu/yxml.c
- GDB remote protocol https://github.com/radare/radare2/blob/master/shlr/gdb/src/gdbclient/xml.c
We can reduce the amount of code and bugs by using the same library everywhere.
Thus we can also add:
- [x] [Get symbols from AndroidManifest for APK files](https://github.com/radare/radare2/issues/5863)
|
1.0
|
Reuse the XML library everywhere - Like it was done for JSON parsing and generation.
Currently XML is being parsed in:
- XNU kernelcache https://github.com/radare/radare2/blob/master/libr/bin/format/xnu/yxml.c
- GDB remote protocol https://github.com/radare/radare2/blob/master/shlr/gdb/src/gdbclient/xml.c
We can reduce the amount of code and bugs by using the same library everywhere.
Thus we can also add:
- [x] [Get symbols from AndroidManifest for APK files](https://github.com/radare/radare2/issues/5863)
|
non_process
|
reuse the xml library everywhere like it was done for json parsing and generation currently xml is being parsed in xnu kernelcache gdb remote protocol we can reduce the amount of code and bugs by using the same library everywhere thus we can also add
| 0
|
205,407
| 15,613,530,991
|
IssuesEvent
|
2021-03-19 16:34:33
|
ValveSoftware/portal2
|
https://api.github.com/repos/ValveSoftware/portal2
|
closed
|
Portal 2 crash on loading second test chamber
|
Need Retest Reviewed
|
While playing portal 2, the game crashes when in the first elevator and trying to load the next section. This is immediately following the "Box and button" testing room (First test room).
Here is the error log:
https://gist.github.com/douglasjacobsen/2429adcc1edead37ef65
Here is my hardware information log:
https://gist.github.com/douglasjacobsen/e22f52d43126d464611c
For reference, I previously reported this [here](https://github.com/ValveSoftware/steam-for-linux/issues/3182#issuecomment-36583006) but was asked to move it to the portal2 tracker.
This happens every time I try to pass this level, so I haven't been able to test if it happens on any other levels.
I read through the https://github.com/ValveSoftware/portal2/issues/4 and https://github.com/ValveSoftware/portal2/issues/54 bug reports, and they sound similar but they aren't quite the same. If someone decides these are the same however, feel free to make this as a duplicate and close it.
It seemed like both of the other two didn't happen all the time and the players were able to make it through at least 2 levels before their game crashed while I can't make it past the first one.
|
1.0
|
Portal 2 crash on loading second test chamber - While playing portal 2, the game crashes when in the first elevator and trying to load the next section. This is immediately following the "Box and button" testing room (First test room).
Here is the error log:
https://gist.github.com/douglasjacobsen/2429adcc1edead37ef65
Here is my hardware information log:
https://gist.github.com/douglasjacobsen/e22f52d43126d464611c
For reference, I previously reported this [here](https://github.com/ValveSoftware/steam-for-linux/issues/3182#issuecomment-36583006) but was asked to move it to the portal2 tracker.
This happens every time I try to pass this level, so I haven't been able to test if it happens on any other levels.
I read through the https://github.com/ValveSoftware/portal2/issues/4 and https://github.com/ValveSoftware/portal2/issues/54 bug reports, and they sound similar but they aren't quite the same. If someone decides these are the same however, feel free to make this as a duplicate and close it.
It seemed like both of the other two didn't happen all the time and the players were able to make it through at least 2 levels before their game crashed while I can't make it past the first one.
|
non_process
|
portal crash on loading second test chamber while playing portal the game crashes when in the first elevator and trying to load the next section this is immediately following the box and button testing room first test room here is the error log here is my hardware information log for reference i previously reported this but was asked to move it to the tracker this happens every time i try to pass this level so i haven t been able to test if it happens on any other levels i read through the and bug reports and they sound similar but they aren t quite the same if someone decides these are the same however feel free to make this as a duplicate and close it it seemed like both of the other two didn t happen all the time and the players were able to make it through at least levels before their game crashed while i can t make it past the first one
| 0
|
20,405
| 27,064,332,941
|
IssuesEvent
|
2023-02-13 22:35:14
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/filter] Ensure OTTL configuration provides same features as existing configuration
|
enhancement priority:p2 processor/filter
|
### Component(s)
processor/filter
### Is your feature request related to a problem? Please describe.
The OTTL package is being added to the filterprocessor to support generic filtering of spans, span events, metrics, data points, and log records. To simplify the filterprocessor and its config, I propose that only the OTTL is needed, but before any existing configuration can be deprecated and removed the existing feature set of the processor must be reviewed to ensure that OTTL can handle all existing user situations. Any missing features must be added to the OTTL before this issue can be closed.
### Describe the solution you'd like
Filterprocessor feature set is reviewed and it is confirmed that OTTL can handle all existing features.
Feature gaps:
- [x] Support for `include` statements
- [x] Regex matching where the input is converted to a string if it is a bool, int, or double. https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16434
- [x] OTTL metric conditions support dropping the metric if any of the metrics datapoints have a specific attribute or have a specific attribute with a specific value.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
1.0
|
[processor/filter] Ensure OTTL configuration provides same features as existing configuration - ### Component(s)
processor/filter
### Is your feature request related to a problem? Please describe.
The OTTL package is being added to the filterprocessor to support generic filtering of spans, span events, metrics, data points, and log records. To simplify the filterprocessor and its config, I propose that only the OTTL is needed, but before any existing configuration can be deprecated and removed the existing feature set of the processor must be reviewed to ensure that OTTL can handle all existing user situations. Any missing features must be added to the OTTL before this issue can be closed.
### Describe the solution you'd like
Filterprocessor feature set is reviewed and it is confirmed that OTTL can handle all existing features.
Feature gaps:
- [x] Support for `include` statements
- [x] Regex matching where the input is converted to a string if it is a bool, int, or double. https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16434
- [x] OTTL metric conditions support dropping the metric if any of the metrics datapoints have a specific attribute or have a specific attribute with a specific value.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
process
|
ensure ottl configuration provides same features as existing configuration component s processor filter is your feature request related to a problem please describe the ottl package is being added to the filterprocessor to support generic filtering of spans span events metrics data points and log records to simplify the filterprocessor and its config i propose that only the ottl is needed but before any existing configuration can be deprecated and removed the existing feature set of the processor must be reviewed to ensure that ottl can handle all existing user situations any missing features must be added to the ottl before this issue can be closed describe the solution you d like filterprocessor feature set is reviewed and it is confirmed that ottl can handle all existing features feature gaps support for include statements regex matching where the input is converted to a string if it is a bool int or double ottl metric conditions support dropping the metric if any of the metrics datapoints have a specific attribute or have a specific attribute with a specific value describe alternatives you ve considered no response additional context no response
| 1
|
470,241
| 13,534,400,266
|
IssuesEvent
|
2020-09-16 05:36:27
|
moibit/tracy-mobile-app
|
https://api.github.com/repos/moibit/tracy-mobile-app
|
closed
|
SIGNUP FORM - Name field is not being fully validated, accepting numbers.
|
PRIORITY-3 bug
|
Please ensure that numbers are not allowed in the Name field of Sign-up screen
|
1.0
|
SIGNUP FORM - Name field is not being fully validated, accepting numbers. - Please ensure that numbers are not allowed in the Name field of Sign-up screen
|
non_process
|
signup form name field is not being fully validated accepting numbers please ensure that numbers are not allowed in the name field of sign up screen
| 0
|
20,242
| 26,860,165,347
|
IssuesEvent
|
2023-02-03 17:38:56
|
srophe/caesarea-data
|
https://api.github.com/repos/srophe/caesarea-data
|
closed
|
Update to TEI langUsage
|
enhancement Data Update Testimonia data template post-processor
|
Please update template to allow two drop down boxes for langUsage/language on form and in the template.
- Original Language
- Language represented in this edition
Please use the "ana" attribute and point to a taxonomy.
|
1.0
|
Update to TEI langUsage - Please update template to allow two drop down boxes for langUsage/language on form and in the template.
- Original Language
- Language represented in this edition
Please use the "ana" attribute and point to a taxonomy.
|
process
|
update to tei langusage please update template to allow two drop down boxes for langusage language on form and in the template original language language represented in this edition please use the ana attribute and point to a taxonomy
| 1
|