| column | dtype | values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 distinct value (`IssuesEvent`) |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 distinct values |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 distinct values |
| text_combine | string | length 96 – 211k |
| label | string | 2 distinct values (`process`, `non_process`) |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
2,509
| 5,283,778,078
|
IssuesEvent
|
2017-02-07 22:15:56
|
addok/addok
|
https://api.github.com/repos/addok/addok
|
closed
|
extract_address is called before synonym matching
|
bug needs investigation string processing
|
So an address like "PARC D ACTIVITE DE SAUMATY 26 AV ANDRE ROUSSIN 13016 MARSEILLE 16" will not be extracted (av => avenue).
|
1.0
|
extract_address is called before synonym matching - So an address like "PARC D ACTIVITE DE SAUMATY 26 AV ANDRE ROUSSIN 13016 MARSEILLE 16" will not be extracted (av => avenue).
|
process
|
extract address is called before synonym matching so an adresse like parc d activite de saumaty av andre roussin marseille will no be extracted av avenue
| 1
|
107,789
| 16,762,308,654
|
IssuesEvent
|
2021-06-14 01:35:07
|
dbankws-test/clumsy-bird
|
https://api.github.com/repos/dbankws-test/clumsy-bird
|
opened
|
CVE-2017-16116 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2017-16116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>underscore.string-2.2.1.tgz</b>, <b>underscore.string-2.4.0.tgz</b>, <b>underscore.string-2.3.3.tgz</b></summary>
<p>
<details><summary><b>underscore.string-2.2.1.tgz</b></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.2.1.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.2.1.tgz</a></p>
<p>Path to dependency file: clumsy-bird/package.json</p>
<p>Path to vulnerable library: clumsy-bird/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- :x: **underscore.string-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>underscore.string-2.4.0.tgz</b></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.4.0.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.4.0.tgz</a></p>
<p>Path to dependency file: clumsy-bird/package.json</p>
<p>Path to vulnerable library: clumsy-bird/node_modules/argparse/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- js-yaml-2.0.5.tgz
- argparse-0.1.16.tgz
- :x: **underscore.string-2.4.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>underscore.string-2.3.3.tgz</b></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.3.3.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.3.3.tgz</a></p>
<p>Path to dependency file: clumsy-bird/package.json</p>
<p>Path to vulnerable library: clumsy-bird/node_modules/grunt-legacy-log/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- grunt-legacy-log-0.1.3.tgz
- :x: **underscore.string-2.3.3.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The string module is a module that provides extra string operations. The string module is vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapeHTML methods.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16116>CVE-2017-16116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/745">https://www.npmjs.com/advisories/745</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 3.3.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore.string","packageVersion":"2.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt:0.4.5;underscore.string:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.3.5"},{"packageType":"javascript/Node.js","packageName":"underscore.string","packageVersion":"2.4.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt:0.4.5;js-yaml:2.0.5;argparse:0.1.16;underscore.string:2.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.3.5"},{"packageType":"javascript/Node.js","packageName":"underscore.string","packageVersion":"2.3.3","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt:0.4.5;grunt-legacy-log:0.1.3;underscore.string:2.3.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.3.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16116","vulnerabilityDetails":"The string module is a module that provides extra string operations. The string module is vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapeHTML methods.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16116","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-16116 (High) detected in multiple libraries - ## CVE-2017-16116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>underscore.string-2.2.1.tgz</b>, <b>underscore.string-2.4.0.tgz</b>, <b>underscore.string-2.3.3.tgz</b></summary>
<p>
<details><summary><b>underscore.string-2.2.1.tgz</b></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.2.1.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.2.1.tgz</a></p>
<p>Path to dependency file: clumsy-bird/package.json</p>
<p>Path to vulnerable library: clumsy-bird/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- :x: **underscore.string-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>underscore.string-2.4.0.tgz</b></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.4.0.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.4.0.tgz</a></p>
<p>Path to dependency file: clumsy-bird/package.json</p>
<p>Path to vulnerable library: clumsy-bird/node_modules/argparse/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- js-yaml-2.0.5.tgz
- argparse-0.1.16.tgz
- :x: **underscore.string-2.4.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>underscore.string-2.3.3.tgz</b></summary>
<p>String manipulation extensions for Underscore.js javascript library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore.string/-/underscore.string-2.3.3.tgz">https://registry.npmjs.org/underscore.string/-/underscore.string-2.3.3.tgz</a></p>
<p>Path to dependency file: clumsy-bird/package.json</p>
<p>Path to vulnerable library: clumsy-bird/node_modules/grunt-legacy-log/node_modules/underscore.string/package.json</p>
<p>
Dependency Hierarchy:
- grunt-0.4.5.tgz (Root Library)
- grunt-legacy-log-0.1.3.tgz
- :x: **underscore.string-2.3.3.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The string module is a module that provides extra string operations. The string module is vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapeHTML methods.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16116>CVE-2017-16116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/745">https://www.npmjs.com/advisories/745</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 3.3.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore.string","packageVersion":"2.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt:0.4.5;underscore.string:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.3.5"},{"packageType":"javascript/Node.js","packageName":"underscore.string","packageVersion":"2.4.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt:0.4.5;js-yaml:2.0.5;argparse:0.1.16;underscore.string:2.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.3.5"},{"packageType":"javascript/Node.js","packageName":"underscore.string","packageVersion":"2.3.3","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt:0.4.5;grunt-legacy-log:0.1.3;underscore.string:2.3.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.3.5"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16116","vulnerabilityDetails":"The string module is a module that provides extra string operations. The string module is vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapeHTML methods.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16116","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries underscore string tgz underscore string tgz underscore string tgz underscore string tgz string manipulation extensions for underscore js javascript library library home page a href path to dependency file clumsy bird package json path to vulnerable library clumsy bird node modules underscore string package json dependency hierarchy grunt tgz root library x underscore string tgz vulnerable library underscore string tgz string manipulation extensions for underscore js javascript library library home page a href path to dependency file clumsy bird package json path to vulnerable library clumsy bird node modules argparse node modules underscore string package json dependency hierarchy grunt tgz root library js yaml tgz argparse tgz x underscore string tgz vulnerable library underscore string tgz string manipulation extensions for underscore js javascript library library home page a href path to dependency file clumsy bird package json path to vulnerable library clumsy bird node modules grunt legacy log node modules underscore string package json dependency hierarchy grunt tgz root library grunt legacy log tgz x underscore string tgz vulnerable library found in base branch master vulnerability details the string module is a module that provides extra string operations the string module is vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapehtml methods publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased 
true isdefaultbranch true packages istransitivedependency true dependencytree grunt underscore string isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename underscore string packageversion packagefilepaths istransitivedependency true dependencytree grunt js yaml argparse underscore string isminimumfixversionavailable true minimumfixversion packagetype javascript node js packagename underscore string packageversion packagefilepaths istransitivedependency true dependencytree grunt grunt legacy log underscore string isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the string module is a module that provides extra string operations the string module is vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapehtml methods vulnerabilityurl
| 0
|
17,385
| 23,202,560,922
|
IssuesEvent
|
2022-08-01 23:37:29
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Windows Server 2022 - Setting affinity in .NET 6 application errors
|
area-System.Diagnostics.Process
|
Hi,
I have a Windows Server 2022 machine with 40 cores and 80 logical processors. Whenever I try and run my application on this machine and attempt to set its affinity in code or via Task Manager, I get errors.
The error I get in code is:

`Win32Exception (87): The parameter is incorrect`
When I try and set it via Task Manager (as an admin with UAC disabled) I get:

`Unable to access or set process affinity`
Also, when I try and get the affinity mask for the process in code, I get `0` back.
After reading the docs:
https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getprocessaffinitymask
https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-setprocessaffinitymask
https://docs.microsoft.com/en-us/windows/win32/procthread/processor-groups
It is clear that on Windows 11 and Server 2022 that threads are not bound to a single processor group any more if there are more than 64 processors and can run across multiple processor groups.
My theory is that I am unable to set the affinity for my application because it is performing actions on threads across multiple processor groups.
I have tried setting the affinity mask at startup in my application using:
```csharp
Process proc = Process.GetCurrentProcess();
long affinityMask = (long)proc.ProcessorAffinity;
affinityMask &= 15; // First 4 processors
proc.ProcessorAffinity = (IntPtr)affinityMask;
```
But I just get the `Win32Exception (87): The parameter is incorrect`
And when I try to get the affinity mask at startup it is set to `0`, meaning it was unable to get it because there are threads in multiple groups.
**So for me, is it possible to set a .NET 6 application to ONLY run in a single processor group on Windows Server 2022 before the application runs, so I can then set its affinity mask?**
|
1.0
|
Windows Server 2022 - Setting affinity in .NET 6 application errors - Hi,
I have a Windows Server 2022 machine with 40 cores and 80 logical processors. Whenever I try and run my application on this machine and attempt to set its affinity in code or via Task Manager, I get errors.
The error I get in code is:

`Win32Exception (87): The parameter is incorrect`
When I try and set it via Task Manager (as an admin with UAC disabled) I get:

`Unable to access or set process affinity`
Also, when I try and get the affinity mask for the process in code, I get `0` back.
After reading the docs:
https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getprocessaffinitymask
https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-setprocessaffinitymask
https://docs.microsoft.com/en-us/windows/win32/procthread/processor-groups
It is clear that on Windows 11 and Server 2022 that threads are not bound to a single processor group any more if there are more than 64 processors and can run across multiple processor groups.
My theory is that I am unable to set the affinity for my application because it is performing actions on threads across multiple processor groups.
I have tried setting the affinity mask at startup in my application using:
```csharp
Process proc = Process.GetCurrentProcess();
long affinityMask = (long)proc.ProcessorAffinity;
affinityMask &= 15; // First 4 processors
proc.ProcessorAffinity = (IntPtr)affinityMask;
```
But I just get the `Win32Exception (87): The parameter is incorrect`
And when I try to get the affinity mask at startup it is set to `0`, meaning it was unable to get it because there are threads in multiple groups.
**So for me, is it possible to set a .NET 6 application to ONLY run in a single processor group on Windows Server 2022 before the application runs, so I can then set its affinity mask?**
|
process
|
windows server setting affinity in net application errors hi i have a windows server machine with cores and logical processors whenever i try and run my application on this machine and attempt to set it s affinity in code or via task manager i get errors the error i get in code is the parameter is incorrect when i try and set it via task manager as an admin with uac disabled i get unable to access or set process affinity also when i try and get the affinity mask for the process in code i get back after reading the docs it is clear that on windows and server that threads are not bound to a single processor group any more if there are more than processors and can run across multiple processor groups my theory is that i am unable to set the affinity for my application because it is performing actions on threads across multiple processor groups i have tried setting the affinity mask at startup in my application using process proc process getcurrentprocess long affinitymask long proc processoraffinity affinitymask first processors proc processoraffinity intptr affinitymask but i just get the the parameter is incorrect and when i try to get the affinity mask at start up it is set to meaning it was unable to get it because there are threads in multiple groups so for me is it possible to set a net application to only run in a single processor group on windows server before the application runs so i can then set its affinity mask
| 1
|
701,645
| 24,102,228,311
|
IssuesEvent
|
2022-09-20 02:37:49
|
matrixorigin/matrixone
|
https://api.github.com/repos/matrixorigin/matrixone
|
closed
|
[Feature Request]: metadata management on cn and dn
|
priority/p0 kind/feature component/distributed tae
|
### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
_No response_
### Describe the feature you'd like
Metadata management
metadata is organized into a two-tier structure. Directory tree and files.
> DN
- Producer
- Has the latest metadata for the shard to which it belongs
- Persist metadata to remote object storage according to a certain strategy
- Checkpoint and compact
- Tail + Main
- Fast delta calculation (Vx-Vy) = Vx-y and Marshal|Unmarshal
- Replay
> CN
- All dn's metadata is visible
- LRU cache (nice to have)
- Eval Vx-y + Vy = Vx
- Local storage
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_
|
1.0
|
[Feature Request]: metadata management on cn and dn - ### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
_No response_
### Describe the feature you'd like
Metadata management
metadata is organized into a two-tier structure. Directory tree and files.
> DN
- Producer
- Has the latest metadata for the shard to which it belongs
- Persist metadata to remote object storage according to a certain strategy
- Checkpoint and compact
- Tail + Main
- Fast delta calculation (Vx-Vy) = Vx-y and Marshal|Unmarshal
- Replay
> CN
- All dn's metadata is visible
- LRU cache (nice to have)
- Eval Vx-y + Vy = Vx
- Local storage
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_
|
non_process
|
metadata management on cn and dn is there an existing issue for the same feature request i have checked the existing issues is your feature request related to a problem no response describe the feature you d like metadata management metadata is organized into a two tier structure directory tree and files dn producer has the latest metadata for the shard to which it belongs persist metadata to remote object storage according to a certain strategy checkpoint and compact tail main fast delta calculation vx vy vx y and marshal unmarshal replay cn all dn s metadata is visible lru cache nice to have eval vx y vy vx local storage describe implementation you ve considered no response documentation adoption use case migration strategy no response additional information no response
| 0
|
202,606
| 7,050,259,473
|
IssuesEvent
|
2018-01-03 04:31:51
|
squizlabs/PHP_CodeSniffer
|
https://api.github.com/repos/squizlabs/PHP_CodeSniffer
|
closed
|
CodeSniffer.conf packaged in PHAR is ignored
|
Enhancement Low Priority
|
Starting with release 2.3.1 (a90531b), a CodeSniffer.conf placed inside the PHAR is not considered when reading configuration data. The file is instead looked for in the same directory as the *.phar file itself.
The original behavior makes sense when PHP_CodeSniffer is packaged as part of another application inside its own PHAR and the package ships with its own default settings (say I want the default standard to be PSR2 instead of PEAR).
A possible solution is to consider two files: the external file overrides the settings defined in the internal file. All changes are saved to the external file.
|
1.0
|
CodeSniffer.conf packaged in PHAR is ignored - Starting with release 2.3.1 (a90531b), a CodeSniffer.conf placed inside the PHAR is not considered when reading configuration data. The file is instead looked for in the same directory as the *.phar file itself.
The original behavior makes sense when PHP_CodeSniffer is packaged as part of another application inside its own PHAR and the package ships with its own default settings (say I want the default standard to be PSR2 instead of PEAR).
A possible solution is to consider two files: the external file overrides the settings defined in the internal file. All changes are saved to the external file.
|
non_process
|
codesniffer conf packaged in phar is ignored starting release codesniffer conf placed inside phar is not considered when reading configuration data the file instead is looked for in the same directory as the phar file itself the original behavior makes sense in cases when php codesniffer is packaged as part of another application inside its own phar and the package contains with the default settings say i want the default standard to be instead of pear a possible solution is to consider two files the external file overrides the settings defined in internal file all changes are saved to the external file
| 0
|
288,336
| 31,861,284,050
|
IssuesEvent
|
2023-09-15 11:06:32
|
nidhi7598/linux-v4.19.72_CVE-2022-3564
|
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
|
opened
|
CVE-2023-3269 (High) detected in multiple libraries
|
Mend: dependency security vulnerability
|
## CVE-2023-3269 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.294</b>, <b>linuxlinux-4.19.294</b>, <b>linuxlinux-4.19.294</b></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in the memory management subsystem of the Linux kernel. The lock handling for accessing and updating virtual memory areas (VMAs) is incorrect, leading to use-after-free problems. This issue can be successfully exploited to execute arbitrary kernel code, escalate containers, and gain root privileges.
<p>Publish Date: 2023-07-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3269>CVE-2023-3269</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3269">https://www.linuxkernelcves.com/cves/CVE-2023-3269</a></p>
<p>Release Date: 2023-07-11</p>
<p>Fix Resolution: v6.1.37,v6.3.11,v6.5-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-3269 (High) detected in multiple libraries - ## CVE-2023-3269 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.294</b>, <b>linuxlinux-4.19.294</b>, <b>linuxlinux-4.19.294</b></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in the memory management subsystem of the Linux kernel. The lock handling for accessing and updating virtual memory areas (VMAs) is incorrect, leading to use-after-free problems. This issue can be successfully exploited to execute arbitrary kernel code, escalate containers, and gain root privileges.
<p>Publish Date: 2023-07-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3269>CVE-2023-3269</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3269">https://www.linuxkernelcves.com/cves/CVE-2023-3269</a></p>
<p>Release Date: 2023-07-11</p>
<p>Fix Resolution: v6.1.37,v6.3.11,v6.5-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries linuxlinux linuxlinux linuxlinux vulnerability details a vulnerability exists in the memory management subsystem of the linux kernel the lock handling for accessing and updating virtual memory areas vmas is incorrect leading to use after free problems this issue can be successfully exploited to execute arbitrary kernel code escalate containers and gain root privileges publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
3,788
| 6,774,535,605
|
IssuesEvent
|
2017-10-27 10:45:47
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
ntr: cellular detoxification of fluoride
|
cellular processes in progress PomBase
|
_From @ValWood on October 20, 2017 9:40_
standard def
Any process carried out at the cellular level that reduces or removes the toxicity of a fluoride. These may include transport of fluoride away from sensitive areas and to compartments or complexes whose purpose is sequestration of the toxic substance.
does this also include "export"? this isn't clear from the standard def?
"Therefore, fluoride is toxic for S. pombe even at low concentrations, and specific channels that export fluoride ions are required for normal growth."
_Copied from original issue: geneontology/go-annotation#1665_
|
1.0
|
ntr: cellular detoxification of fluoride - _From @ValWood on October 20, 2017 9:40_
standard def
Any process carried out at the cellular level that reduces or removes the toxicity of a fluoride. These may include transport of fluoride away from sensitive areas and to compartments or complexes whose purpose is sequestration of the toxic substance.
does this also include "export"? this isn't clear from the standard def?
"Therefore, fluoride is toxic for S. pombe even at low concentrations, and specific channels that export fluoride ions are required for normal growth."
_Copied from original issue: geneontology/go-annotation#1665_
|
process
|
ntr cellular detoxification of fluoride from valwood on october standard def any process carried out at the cellular level that reduces or removes the toxicity of a fluoride these may include transport of fluoride away from sensitive areas and to compartments or complexes whose purpose is sequestration of the toxic substance does this also include export this isn t clear from the standard def therefore fluoride is toxic for s pombe even at low concentrations and specific channels that export fluoride ions are required for normal growth copied from original issue geneontology go annotation
| 1
|
270,840
| 8,471,061,067
|
IssuesEvent
|
2018-10-24 07:19:27
|
goldami1/GraduationProject
|
https://api.github.com/repos/goldami1/GraduationProject
|
closed
|
Filter page graphical fixes
|
Medium Priority
|
1. Change text color to dark blue, make it bigger and change style to bold
2. On folder selection - change background to light blue
3. Add "back" sign to the left of the search bar
|
1.0
|
Filter page graphical fixes - 1. Change text color to dark blue, make it bigger and change style to bold
2. On folder selection - change background to light blue
3. Add "back" sign to the left of the search bar
|
non_process
|
filter page graphical fixes change text color to dark blue make it bigger and change style to bold on folder selection change background to light blue add back sign to the left of the search bar
| 0
|
84,503
| 3,667,498,276
|
IssuesEvent
|
2016-02-20 01:21:31
|
EsotericSoftware/kryo
|
https://api.github.com/repos/EsotericSoftware/kryo
|
closed
|
Patch to get KryoNet working with Kryo 2.21
|
bug imported Priority-Medium
|
_From [robin.ki...@gmail.com](https://code.google.com/u/114916895304570456707/) on August 18, 2013 18:32:47_
Kryonet fails tests if you try to use kryo 2.21 with it. Attached is a patch to apply to both kryo and kryonet to fix the issues.
Patch summary:
* Fix ByteBufferInputStream#read to not return -1 (indicating EOF) if length is zero.
* New ThrowableSerializer, because FieldSerializer won't work on Throwables as of Java 1.7
* KryoSerialization now supports references (and requires them for some RMI tests)
**Attachment:** [kryo_kryonet_patch.txt](http://code.google.com/p/kryo/issues/detail?id=125)
_Original issue: http://code.google.com/p/kryo/issues/detail?id=125_
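The zero-length-read contract in the first patch bullet can be illustrated with a short sketch (a hypothetical stand-in class, not KryoNet's actual code): a read of length zero must return an empty result, never the end-of-stream signal.

```python
class BufferBackedStream:
    """Toy stand-in for a buffer-backed input stream (hypothetical)."""

    def __init__(self, data: bytes):
        self._buf = data
        self._pos = 0

    def read(self, size: int) -> bytes:
        # The patched contract: a zero-length request yields an empty
        # result rather than the end-of-stream signal. (In the Java
        # patch, read() wrongly returned -1 -- the EOF sentinel -- when
        # the requested length was zero, even though the stream was not
        # exhausted.)
        if size == 0:
            return b""
        chunk = self._buf[self._pos:self._pos + size]
        self._pos += len(chunk)
        return chunk  # b"" here really does mean end of stream

stream = BufferBackedStream(b"kryo")
assert stream.read(0) == b""    # zero-length read is not EOF
assert stream.read(2) == b"kr"  # the stream still has data
```

Java's `InputStream.read(byte[], int, int)` makes the same distinction: 0 for a zero-length request, -1 only at end of stream.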
|
1.0
|
Patch to get KryoNet working with Kryo 2.21 - _From [robin.ki...@gmail.com](https://code.google.com/u/114916895304570456707/) on August 18, 2013 18:32:47_
Kryonet fails tests if you try to use kryo 2.21 with it. Attached is a patch to apply to both kryo and kryonet to fix the issues.
Patch summary:
* Fix ByteBufferInputStream#read to not return -1 (indicating EOF) if length is zero.
* New ThrowableSerializer, because FieldSerializer won't work on Throwables as of Java 1.7
* KryoSerialization now supports references (and requires them for some RMI tests)
**Attachment:** [kryo_kryonet_patch.txt](http://code.google.com/p/kryo/issues/detail?id=125)
_Original issue: http://code.google.com/p/kryo/issues/detail?id=125_
|
non_process
|
patch to get kryonet working with kryo from on august kryonet fails tests if you try to use kryo with it attached is a patch to apply to both kryo and kryonet to fix the issues patch summary fix bytebufferinputstream read to not return indicating eof if length is zero new throwableserializer because fieldserializer won t work on throwables as of java kryoserialization now supports references and requires them for some rmi tests attachment original issue
| 0
|
16,729
| 21,891,347,367
|
IssuesEvent
|
2022-05-20 02:14:40
|
carbon-design-system/ibm-cloud-cognitive
|
https://api.github.com/repos/carbon-design-system/ibm-cloud-cognitive
|
opened
|
Upgrade yarn version
|
type: enhancement dependencies type: process improvement
|
## What will this achieve?
This will enable us to more easily upgrade `@carbon/ibm-products` to Carbon 11 while keeping Security and CDAI on C10.
## How will success be measured?
No broken builds or scripts
## Additional information
- Designs
- Existing code
- etc
|
1.0
|
Upgrade yarn version - ## What will this achieve?
This will enable us to more easily upgrade `@carbon/ibm-products` to Carbon 11 while keeping Security and CDAI on C10.
## How will success be measured?
No broken builds or scripts
## Additional information
- Designs
- Existing code
- etc
|
process
|
upgrade yarn version what will this achieve this will enable us to more easily upgrade carbon ibm products to carbon while keeping security and cdai on how will success be measured no broken builds or scripts additional information designs existing code etc
| 1
|
11,766
| 14,597,368,492
|
IssuesEvent
|
2020-12-20 19:49:18
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Transaction stuck, can't open mediation because "deposit transaction" is missing
|
in:trade-process in:wallet
|
<!--
SUPPORT REQUESTS: This is for reporting bugs in the Bisq app.
If you have a support request, please join #support on Bisq's
Keybase team at https://keybase.io/team/Bisq
-->
### Description
I have a transaction that has been stuck since November 7, and when I try to open a support ticket, this is the error message it shows (See attached image below).
Also note, the trade period is stuck at 4 days, it doesn't count down.
#### Version
<!-- commit id or version number -->
1.4.2 when the error occurred.
Currently running 1.5.0
### Steps to reproduce
<!--if you can reliably reproduce the bug, list the steps here -->
I don't know how to reproduce this, but this is what happened:
1) When I initially took the offer to buy, BISQ crashed suddenly. Afterwards, an error message prevented me from initiating the application.
2) Upon research, I realized my wallet got corrupted, so I deleted the wallet file, reopened BISQ, and finally used the seed to restore.
3) My wallet balance now showed 0 BTC (I had a balance since I transferred BTC for the deposit).
4) I tried to do an SPV re-sync, but after leaving it on for days, I gave up. (My processor was also running very hot/ max capacity)
5) I imported the wallet seed into Electrum, and was able to salvage the BTC I had transferred. But the transaction on BISQ has remained stuck ever since, and the seller still has his BTC locked in there.
### Expected behaviour
<!--description of the expected behavior -->
### Actual behaviour
<!-- explain what happened instead of the expected behaviour -->
### Screenshots
<!--Screenshots if gui related, drag and drop to add to the issue -->

#### Device or machine
<!-- device/machine used, operating system -->
HP-Envy-15 2040nr (2012)
Windows 7
#### Additional info
<!-- Additional information useful for debugging (e.g. logs) -->
|
1.0
|
Transaction stuck, can't open mediation because "deposit transaction" is missing - <!--
SUPPORT REQUESTS: This is for reporting bugs in the Bisq app.
If you have a support request, please join #support on Bisq's
Keybase team at https://keybase.io/team/Bisq
-->
### Description
I have a transaction that has been stuck since November 7, and when I try to open a support ticket, this is the error message it shows (See attached image below).
Also note, the trade period is stuck at 4 days, it doesn't count down.
#### Version
<!-- commit id or version number -->
1.4.2 when the error occurred.
Currently running 1.5.0
### Steps to reproduce
<!--if you can reliably reproduce the bug, list the steps here -->
I don't know how to reproduce this, but this is what happened:
1) When I initially took the offer to buy, BISQ crashed suddenly. Afterwards, an error message prevented me from initiating the application.
2) Upon research, I realized my wallet got corrupted, so I deleted the wallet file, reopened BISQ, and finally used the seed to restore.
3) My wallet balance now showed 0 BTC (I had a balance since I transferred BTC for the deposit).
4) I tried to do an SPV re-sync, but after leaving it on for days, I gave up. (My processor was also running very hot/ max capacity)
5) I imported the wallet seed into Electrum, and was able to salvage the BTC I had transferred. But the transaction on BISQ has remained stuck ever since, and the seller still has his BTC locked in there.
### Expected behaviour
<!--description of the expected behavior -->
### Actual behaviour
<!-- explain what happened instead of the expected behaviour -->
### Screenshots
<!--Screenshots if gui related, drag and drop to add to the issue -->

#### Device or machine
<!-- device/machine used, operating system -->
HP-Envy-15 2040nr (2012)
Windows 7
#### Additional info
<!-- Additional information useful for debugging (e.g. logs) -->
|
process
|
transaction stuck can t open mediation because deposit transaction is missing support requests this is for reporting bugs in the bisq app if you have a support request please join support on bisq s keybase team at description i have a transaction that has been stuck since november and when i try to open a support ticket this is the error message it shows see attached image below also note the trade period is stuck at days it doesn t count down version when the error occurred currently running steps to reproduce i don t know how to reproduce this but this is what happened when i initially took the offer to buy bisq crashed suddenly afterwards an error message prevented me from initiating the application upon research i realized my wallet got corrupted so i deleted the wallet file reopened bisq and finally used the seed to restore my wallet balance now showed btc i had a balance since i transferred btc for the deposit i tried to do an spv re sync but after leaving it on for days i gave up my processor was also running very hot max capacity i imported the wallet seed into electrum and was able to salvage the btc i had transferred but the transaction on bisq has remained stuck ever since and the seller still has his btc locked in there expected behaviour actual behaviour screenshots device or machine hp envy windows additional info
| 1
|
11,350
| 14,171,140,947
|
IssuesEvent
|
2020-11-12 15:21:58
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Automation Account Variable Encryption Recommendation in Azure Security Center
|
Pri2 automation/svc cxp process-automation/subsvc product-question triaged
|
Having enabled the VM Start / Stop solution in Log Analytics / Automation and deployed it successfully, Security Center issues recommendations for each of the variables as a "High" issue saying that they should be encrypted (See https://docs.microsoft.com/en-us/azure/security-center/recommendations-reference#recs-computeapp)
What I don't know is whether recreating each of the 17 variables myself (variables can only be encrypted at creation) will break the solution, or whether encryption can be built in at the deployment stage.
If it is possible to correct this manually, could guidance be provided in the instructions?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Azure Automation Start/Stop VMs during off-hours overview](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Automation Account Variable Encryption Recommendation in Azure Security Center - Having enabled the VM Start / Stop solution in Log Analytics / Automation and deployed it successfully, Security Center issues recommendations for each of the variables as a "High" issue saying that they should be encrypted (See https://docs.microsoft.com/en-us/azure/security-center/recommendations-reference#recs-computeapp)
What I don't know is whether recreating each of the 17 variables myself (variables can only be encrypted at creation) will break the solution, or whether encryption can be built in at the deployment stage.
If it is possible to correct this manually, could guidance be provided in the instructions?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Azure Automation Start/Stop VMs during off-hours overview](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
automation account variable encryption recommendation in azure security center having enabled the vm start stop solution in log analytics automation and deployed it successfully security center issues recommendations for each of the variables as a high issue saying that they should be encrypted see what i don t know is if i recreate each of the variables myself variables can only be encrypted at creation is whether i will break the solution or can encryption be build in at deployment stage if it is possible to correct manually could guidance be provided in the instructions document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
4,743
| 2,747,234,921
|
IssuesEvent
|
2015-04-23 00:11:02
|
azavea/nyc-trees
|
https://api.github.com/repos/azavea/nyc-trees
|
opened
|
Password reset button is unstyled
|
design
|
This is the form after you click the password reset link

|
1.0
|
Password reset button is unstyled - This is the form after you click the password reset link

|
non_process
|
password reset button is unstyled this is the form after you click the password reset link
| 0
|
12,008
| 3,562,007,992
|
IssuesEvent
|
2016-01-24 06:10:10
|
CollaboratingPlatypus/PetaPoco
|
https://api.github.com/repos/CollaboratingPlatypus/PetaPoco
|
closed
|
Building and Development - Doc review
|
documentation review
|
Doc review request - [Building and development](https://github.com/CollaboratingPlatypus/PetaPoco/wiki/Building-and-Development)
@CollaboratingPlatypus/petapoco-documentation
Community input welcome!
|
1.0
|
Building and Development - Doc review - Doc review request - [Building and development](https://github.com/CollaboratingPlatypus/PetaPoco/wiki/Building-and-Development)
@CollaboratingPlatypus/petapoco-documentation
Community input welcome!
|
non_process
|
building and development doc review doc review request collaboratingplatypus petapoco documentation community input welcome
| 0
|
8,130
| 11,309,078,275
|
IssuesEvent
|
2020-01-19 10:40:22
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
Not working when using "postcss-import" with "postcss-next".
|
:bug: Bug :pray: Help Wanted CSS Preprocessing
|
<!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
🐛 bug report
<!--- Provide a general summary of the issue in the title above -->
### 🎛 Configuration (.babelrc, package.json, cli command)
<!--- If describing a bug, tell us what your babel configuration looks like -->
.postcssrc
```json
{
"plugins": {
"postcss-import": {},
"postcss-cssnext": {}
}
}
```
CLI
```
$ parcel build app.js
```
app.js
```js
import './app.css';
```
app.css
```css
@import "./css/root.css";
@import "./css/body.css";
```
root.css
```css
:root {
--font: 'Helvetica';
}
```
body.css
```css
body {
font-family: var(--font);
}
```
You can make this problem with this attachment. You can do `$ yarn run start`
[worked.zip](https://github.com/parcel-bundler/parcel/files/1570849/worked.zip)
### 🤔 Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Postcss
```css
:root {
--font: 'Helvetica';
}
body {
font-family: var(--font);
}
```
Exported CSS
```css
body {
font-family: 'Helvetica';
}
```
### 😯 Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If you are seeing an error, please include the full error message and stack trace -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Exported CSS
```css
/* no root pseudo-class */
body {
font-family: var(--font);
}
```
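As an aside, the variable substitution that cssnext is expected to perform on these two files can be modeled in a few lines of Python (a toy model of the transformation, not the plugin's real implementation):

```python
import re

# The two input files from the reproduction above, inlined as strings.
root_css = ":root {\n  --font: 'Helvetica';\n}"
body_css = "body {\n  font-family: var(--font);\n}"

# Collect the custom properties declared under :root ...
variables = dict(re.findall(r"(--[\w-]+)\s*:\s*([^;]+);", root_css))

# ... and inline every var() reference, which is exactly the step the
# "Expected Behavior" output shows and the "Current Behavior" output lacks.
resolved = re.sub(r"var\((--[\w-]+)\)",
                  lambda m: variables[m.group(1)],
                  body_css)

assert resolved == "body {\n  font-family: 'Helvetica';\n}"
```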
### 💁 Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
This setting is working great when using webpack + postcss-loader.
### 🔦 Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I tried changing the syntax of postcss setting file though there is no hope.
### 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s)
| ---------------- | ----------
| Parcel | 1.2.0
| Node | 9.3.0
| npm/Yarn | Yarn 1.3.2
| Operating System | Arch Linux (macOS High Sierra 10.13.2)
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
1.0
|
Not working when using "postcss-import" with "postcss-next". - <!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
🐛 bug report
<!--- Provide a general summary of the issue in the title above -->
### 🎛 Configuration (.babelrc, package.json, cli command)
<!--- If describing a bug, tell us what your babel configuration looks like -->
.postcssrc
```json
{
"plugins": {
"postcss-import": {},
"postcss-cssnext": {}
}
}
```
CLI
```
$ parcel build app.js
```
app.js
```js
import './app.css';
```
app.css
```css
@import "./css/root.css";
@import "./css/body.css";
```
root.css
```css
:root {
--font: 'Helvetica';
}
```
body.css
```css
body {
font-family: var(--font);
}
```
You can make this problem with this attachment. You can do `$ yarn run start`
[worked.zip](https://github.com/parcel-bundler/parcel/files/1570849/worked.zip)
### 🤔 Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Postcss
```css
:root {
--font: 'Helvetica';
}
body {
font-family: var(--font);
}
```
Exported CSS
```css
body {
font-family: 'Helvetica';
}
```
### 😯 Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If you are seeing an error, please include the full error message and stack trace -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Exported CSS
```css
/* no root pseudo-class */
body {
font-family: var(--font);
}
```
### 💁 Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
This setting is working great when using webpack + postcss-loader.
### 🔦 Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I tried changing the syntax of postcss setting file though there is no hope.
### 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s)
| ---------------- | ----------
| Parcel | 1.2.0
| Node | 9.3.0
| npm/Yarn | Yarn 1.3.2
| Operating System | Arch Linux (macOS High Sierra 10.13.2)
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
process
|
not working when using postcss import with postcss next thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🐛 bug report 🎛 configuration babelrc package json cli command postcssrc json plugins postcss import postcss cssnext cli parcel build app js app js js import app css app css css import css root css import css body css root css css root font helvetica body css css body font family var font you can make this problem with this attachment you can do yarn run start 🤔 expected behavior postcss css root font helvetica body font family var font exported css css body font family helvetica 😯 current behavior exported css css no root pseudo class body font family var font 💁 possible solution this setting is working great when using webpack postcss loader 🔦 context i tried changing the syntax of postcss setting file though there is no hope 🌍 your environment software version s parcel node npm yarn yarn operating system arch linux macos high sierra love parcel please consider supporting our collective 👉
| 1
|
13,843
| 16,602,758,238
|
IssuesEvent
|
2021-06-01 22:02:05
|
ivanbukhtiyarov/elevators
|
https://api.github.com/repos/ivanbukhtiyarov/elevators
|
closed
|
Account for the elevator number in commands and forward calls to the operator.
|
in process
|
1. Commands currently do not accept the elevator car number, although they should.
Add an extra elevator_id parameter to the actions:
- openDoors;
- closeDoors;
- moveToFloor;
- setWeight and the other setters;
- possibly intercomRequest and respond;
- getReadings;
- getCurrentParams
Don't forget to update the docs.
After that, it will become possible to send commands to the elevator from the GUI in the format <source> <action> [<value>] [<elevator_id>].
2. Send requests to the elevator through operator.py rather than directly to Elevator.
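The command format quoted above, `<source> <action> [<value>] [<elevator_id>]`, can be handled by a small parser; a minimal sketch (the function name and returned keys are assumptions, not the project's actual API):

```python
def parse_command(line: str) -> dict:
    """Parse '<source> <action> [<value>] [<elevator_id>]'.

    A toy parser for the command format quoted above; the positional
    layout and dict keys are assumptions, not the project's real API.
    """
    parts = line.split()
    if len(parts) < 2:
        raise ValueError("need at least <source> and <action>")
    return {
        "source": parts[0],
        "action": parts[1],
        "value": parts[2] if len(parts) > 2 else None,
        "elevator_id": parts[3] if len(parts) > 3 else None,
    }

# The GUI sends the cabin number as the trailing optional field.
assert parse_command("gui moveToFloor 5 2") == {
    "source": "gui", "action": "moveToFloor",
    "value": "5", "elevator_id": "2",
}
assert parse_command("gui openDoors")["elevator_id"] is None
```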
|
1.0
|
Account for the elevator number in commands and forward calls to the operator. - 1. Commands currently do not accept the elevator car number, although they should.
Add an extra elevator_id parameter to the actions:
- openDoors;
- closeDoors;
- moveToFloor;
- setWeight and the other setters;
- possibly intercomRequest and respond;
- getReadings;
- getCurrentParams
Don't forget to update the docs.
After that, it will become possible to send commands to the elevator from the GUI in the format <source> <action> [<value>] [<elevator_id>].
2. Send requests to the elevator through operator.py rather than directly to Elevator.
|
process
|
account for the elevator number in commands and forward calls to the operator commands currently do not accept the elevator car number although they should add an extra elevator id parameter to the actions opendoors closedoors movetofloor setweight and the other setters possibly intercomrequest and respond getreadings getcurrentparams don t forget to update the docs after that it will become possible to send commands to the elevator from the gui in the format send requests to the elevator through operator py rather than directly to elevator
| 1
|
151,896
| 12,064,573,489
|
IssuesEvent
|
2020-04-16 08:31:20
|
RedHatInsights/insights-results-aggregator
|
https://api.github.com/repos/RedHatInsights/insights-results-aggregator
|
closed
|
Filtering by total risk works as expected
|
UI tests
|
# Filtering by total risk works as expected:
## Design:
https://marvelapp.com/852jaj9/screen/66423927 (click on the "moderate" button in the summary box to see how it works)
## Checks:
* filter is added to the list of filters
* "clear filters" link is added at the end of the filter list and it clears the filter when clicked
* only rule results that have their total risk equal to what was selected are displayed
* the same is applied to all levels of total risk (we iterate the test through all levels)
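The equality filter the third and fourth checks describe can be sketched in a few lines (the result shape here is a hypothetical simplification of the aggregator's payload):

```python
# Hypothetical shape of the rule results; the real aggregator's payload
# is richer, this only mirrors the checks listed above.
RESULTS = [
    {"rule": "rule_a", "total_risk": 2},
    {"rule": "rule_b", "total_risk": 3},
    {"rule": "rule_c", "total_risk": 2},
]

def filter_by_total_risk(results, level):
    # Third check: only results whose total risk equals the selection survive.
    return [r for r in results if r["total_risk"] == level]

# Fourth check: iterate the same assertion through every risk level.
for level in range(1, 5):
    assert all(r["total_risk"] == level
               for r in filter_by_total_risk(RESULTS, level))

assert [r["rule"] for r in filter_by_total_risk(RESULTS, 2)] == ["rule_a", "rule_c"]
```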
|
1.0
|
Filtering by total risk works as expected - # Filtering by total risk works as expected:
## Design:
https://marvelapp.com/852jaj9/screen/66423927 (click on the "moderate" button in the summary box to see how it works)
## Checks:
* filter is added to the list of filters
* "clear filters" link is added at the end of the filter list and it clears the filter when clicked
* only rule results that have their total risk equal to what was selected are displayed
* the same is applied to all levels of total risk (we iterate the test through all levels)
|
non_process
|
filtering by total risk works as expected filtering by total risk works as expected design click on the moderate button in the summary box to see how it works checks filter is added to the list of filters clear filters link is added at the end of the filter list and it clears the filter when clicked only rule results that have their total risk equal to what was selected are displayed the same is applied to all levels of total risk we iterate the test through all levels
| 0
|
19,818
| 26,208,162,578
|
IssuesEvent
|
2023-01-04 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Wed, 4 Jan 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### One-shot domain adaptation in video-based assessment of surgical skills
- **Authors:** Erim Yanik, Steven Schwaitzberg, Gene Yang, Xavier Intes, Suvranu De
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2301.00812
- **Pdf link:** https://arxiv.org/pdf/2301.00812
- **Abstract**
Deep Learning (DL) has achieved automatic and objective assessment of surgical skills. However, DL models are data-hungry and restricted to their training domain. This prevents them from transitioning to new tasks where data is limited. Hence, domain adaptation is crucial to implement DL in real life. Here, we propose a meta-learning model, A-VBANet, that can deliver domain-agnostic surgical skill classification via one-shot learning. We develop the A-VBANet on five laparoscopic and robotic surgical simulators. Additionally, we test it on operating room (OR) videos of laparoscopic cholecystectomy. Our model successfully adapts with accuracies up to 99.5% in one-shot and 99.9% in few-shot settings for simulated tasks and 89.7% for laparoscopic cholecystectomy. For the first time, we provide a domain-agnostic procedure for video-based assessment of surgical skills. A significant implication of this approach is that it allows the use of data from surgical simulators to assess performance in the operating room.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Dissecting Continual Learning a Structural and Data Analysis
- **Authors:** Francesco Pelosin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.01033
- **Pdf link:** https://arxiv.org/pdf/2301.01033
- **Abstract**
Continual Learning (CL) is a field dedicated to devise algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models and that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the data modeled does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drop very quickly. Overcoming this limitation is fundamental as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the new updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor over the quality of the data. Secondly, we propose one of the early works of incremental learning on ViTs architectures, comparing functional, weight and attention regularization approaches and propose effective novel a novel asymmetric loss. At the end we conclude with a study on pretraining and how it affects the performance in Continual Learning, raising some questions about the effective progression of the field. We then conclude with some future directions and closing remarks.
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Dissecting Continual Learning a Structural and Data Analysis
- **Authors:** Francesco Pelosin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.01033
- **Pdf link:** https://arxiv.org/pdf/2301.01033
- **Abstract**
index: 2.0
New submissions for Wed, 4 Jan 23 - ## Keyword: events
### One-shot domain adaptation in video-based assessment of surgical skills
- **Authors:** Erim Yanik, Steven Schwaitzberg, Gene Yang, Xavier Intes, Suvranu De
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2301.00812
- **Pdf link:** https://arxiv.org/pdf/2301.00812
- **Abstract**
Deep Learning (DL) has achieved automatic and objective assessment of surgical skills. However, DL models are data-hungry and restricted to their training domain. This prevents them from transitioning to new tasks where data is limited. Hence, domain adaptation is crucial to implement DL in real life. Here, we propose a meta-learning model, A-VBANet, that can deliver domain-agnostic surgical skill classification via one-shot learning. We develop the A-VBANet on five laparoscopic and robotic surgical simulators. Additionally, we test it on operating room (OR) videos of laparoscopic cholecystectomy. Our model successfully adapts with accuracies up to 99.5% in one-shot and 99.9% in few-shot settings for simulated tasks and 89.7% for laparoscopic cholecystectomy. For the first time, we provide a domain-agnostic procedure for video-based assessment of surgical skills. A significant implication of this approach is that it allows the use of data from surgical simulators to assess performance in the operating room.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Dissecting Continual Learning a Structural and Data Analysis
- **Authors:** Francesco Pelosin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.01033
- **Pdf link:** https://arxiv.org/pdf/2301.01033
- **Abstract**
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the new updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we propose one of the early works on incremental learning for ViT architectures, comparing functional, weight and attention regularization approaches, and propose an effective novel asymmetric loss. At the end we conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. We then conclude with some future directions and closing remarks.
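The rehearsal-based techniques discussed in the abstract above maintain a fixed-size memory buffer of past examples that is replayed during later learning sessions. The thesis abstract does not name a specific buffer policy; the sketch below uses reservoir sampling, a common choice in this literature, purely for illustration:

```python
import random


class RehearsalBuffer:
    """Fixed-size memory buffer filled by reservoir sampling.

    Illustrative sketch only -- reservoir sampling keeps each example
    ever offered with equal probability capacity / seen, regardless of
    how long the stream runs.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of examples offered so far

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            # Fill phase: keep everything until the buffer is full.
            self.data.append(example)
        else:
            # Replacement phase: overwrite a random slot with
            # probability capacity / seen, else discard the example.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw a replay mini-batch from the stored examples.
        return random.sample(self.data, min(k, len(self.data)))


buf = RehearsalBuffer(capacity=100)
for x in range(1000):
    buf.add(x)
```

After the loop the buffer holds exactly 100 of the 1000 streamed examples, a uniform subsample that later training steps can mix into each batch.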
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Dissecting Continual Learning a Structural and Data Analysis
- **Authors:** Francesco Pelosin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.01033
- **Pdf link:** https://arxiv.org/pdf/2301.01033
- **Abstract**
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the knowledge disruption of previously acquired concepts, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data does not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental, as it would allow us to build truly intelligent systems showing stability and plasticity. Secondly, it would allow us to overcome the onerous limitation of retraining these architectures from scratch with the new updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we propose one of the early works on incremental learning for ViT architectures, comparing functional, weight and attention regularization approaches, and propose an effective novel asymmetric loss. At the end we conclude with a study on pretraining and how it affects performance in Continual Learning, raising some questions about the effective progression of the field. We then conclude with some future directions and closing remarks.
### BS3D: Building-scale 3D Reconstruction from RGB-D Images
- **Authors:** Janne Mustaniemi, Juho Kannala, Esa Rahtu, Li Liu, Janne Heikkilä
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.01057
- **Pdf link:** https://arxiv.org/pdf/2301.01057
- **Abstract**
Various datasets have been proposed for simultaneous localization and mapping (SLAM) and related problems. Existing datasets often include small environments, have incomplete ground truth, or lack important sensor data, such as depth and infrared images. We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera. Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms. Compared to similar systems, we utilize raw depth maps for odometry computation and loop closure refinement which results in better reconstructions. We acquire a building-scale 3D dataset (BS3D) and demonstrate its value by training an improved monocular depth estimation model. As a unique experiment, we benchmark visual-inertial odometry methods using both color and active infrared images.
### DGNet: Distribution Guided Efficient Learning for Oil Spill Image Segmentation
- **Authors:** Fang Chen, Heiko Balzter, Feixiang Zhou, Peng Ren, Huiyu Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.01202
- **Pdf link:** https://arxiv.org/pdf/2301.01202
- **Abstract**
Successful implementation of oil spill segmentation in Synthetic Aperture Radar (SAR) images is vital for marine environmental protection. In this paper, we develop an effective segmentation framework named DGNet, which performs oil spill segmentation by incorporating the intrinsic distribution of backscatter values in SAR images. Specifically, our proposed segmentation network is constructed with two deep neural modules running in an interactive manner, where one is the inference module to achieve latent feature variable inference from SAR images, and the other is the generative module to produce oil spill segmentation maps by drawing the latent feature variables as inputs. Thus, to yield accurate segmentation, we take into account the intrinsic distribution of backscatter values in SAR images and embed it in our segmentation model. The intrinsic distribution originates from SAR imagery, describing the physical characteristics of oil spills. In the training process, the formulated intrinsic distribution guides efficient learning of optimal latent feature variable inference for oil spill segmentation. The efficient learning enables the training of our proposed DGNet with a small amount of image data. This is economically beneficial to oil spill segmentation where the availability of oil spill SAR image data is limited in practice. Additionally, benefiting from optimal latent feature variable inference, our proposed DGNet performs accurate oil spill segmentation. We evaluate the segmentation performance of our proposed DGNet with different metrics, and experimental evaluations demonstrate its effective segmentations.
## Keyword: raw image
There is no result
label: process
binary_label: 1
Unnamed: 0: 166,768
id: 20,725,516,177
type: IssuesEvent
created_at: 2022-03-14 01:02:59
repo: cherryrm/ytmdesktop
repo_url: https://api.github.com/repos/cherryrm/ytmdesktop
action: opened
title: CVE-2021-37712 (High) detected in tar-2.2.1.tgz
labels: security vulnerability
## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npx/node_modules/npm/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- npx-10.2.2.tgz (Root Library)
  - npm-5.1.0.tgz
    - :x: **tar-2.2.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
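The core of the advisory above is that two distinct path strings can Unicode-normalize to the same filesystem entry, defeating a directory cache keyed on the raw strings. This is an illustrative Python sketch of that collision, not node-tar's actual code:

```python
import unicodedata

# Two different byte sequences for the visually identical name "café":
# one uses the precomposed character U+00E9, the other uses "e"
# followed by the combining acute accent U+0301.
precomposed = "caf\u00e9"
decomposed = "cafe\u0301"

# A cache keyed on the raw string sees two distinct paths...
assert precomposed != decomposed

# ...but after Unicode normalization -- which some filesystems apply
# to file names -- both strings denote the same entry, so a symlink
# check that trusted the raw-string cache could be bypassed.
assert unicodedata.normalize("NFC", decomposed) == precomposed
```

The fixed node-tar releases account for this by not treating raw string inequality as proof that two paths are different entries.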
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18,5.0.10,6.1.9</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"2.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"npx:10.2.2;npm:5.1.0;tar:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 4.4.18,5.0.10,6.1.9","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-37712","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 \"short path\" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. 
These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712","cvss3Severity":"high","cvss3Score":"8.6","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"Required","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
index: True
|
non_process
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file package json path to vulnerable library node modules npx node modules npm node modules tar package json dependency hierarchy npx tgz root library npm tgz x tar tgz vulnerable library found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend 
you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree npx npm tar isminimumfixversionavailable true minimumfixversion tar isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus possible to bypass 
node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa vulnerabilityurl
| 0
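The node-tar advisory quoted in the record above centres on extracted entries escaping the target directory via symlinks and normalized names. As a rough, hypothetical sketch of the general defense (not node-tar's actual implementation), an extractor can resolve every destination path and refuse anything that lands outside the target:

```python
import os

def is_safe_destination(target_dir: str, member_name: str) -> bool:
    """Return True only if member_name, resolved against target_dir,
    stays inside target_dir - rejecting ../ traversal, absolute paths,
    and paths that resolve through existing symlinks to elsewhere."""
    target = os.path.realpath(target_dir)
    dest = os.path.realpath(os.path.join(target, member_name))
    return dest == target or dest.startswith(target + os.sep)
```

Note that the fixed node-tar releases additionally compare unicode-normalized entry names and invalidate cached directory entries when an entry is removed or replaced mid-extraction; this sketch does not attempt either.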
|
338,424
| 24,585,315,394
|
IssuesEvent
|
2022-10-13 19:09:03
|
mozilla-mobile/android-components
|
https://api.github.com/repos/mozilla-mobile/android-components
|
closed
|
Write documentation about how to work with site permissions
|
📖 documentation
|
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-3547)
|
1.0
|
Write documentation about how to work with site permissions -
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-3547)
|
non_process
|
write documentation about how to work with site permissions ┆issue is synchronized with this
| 0
|
6,036
| 8,850,100,809
|
IssuesEvent
|
2019-01-08 12:15:17
|
tig-nl/postnl-magento2
|
https://api.github.com/repos/tig-nl/postnl-magento2
|
closed
|
Compatibility with Magento 2.3
|
in process on backlog
|
### Submitting issues through Github
## Please follow the guide below
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like.
- We may ask some questions or ask you to provide additional information after you have placed your request.
---
### Make sure you are using the *latest* version: https://tig.nl/postnl-magento-extensies/
Issues with outdated version will be rejected.
- [ X ] I've **verified** and **I assure** that I'm running the latest version of the TIG PostNL Magento 2 extension.
---
### What is the purpose of your *issue*?
- [ X ] Feature request (request for a new functionality)
- [ ] Bug report (encountered problems with the TIG PostNL Magento 2 extension)
- [ ] Extension support request (request for adding support for a new extension)
- [ ] Other
---
### If the purpose of your issue is a *feature request*
- As a merchant I would like to be able to install PostNL on the upcoming Magento 2.3 version in order to stay up-to-date with the latest security patches.
### If the purpose of your issue is about anything else, please describe your issue here
- Magento 2.3 has a lot of changes, especially in the frontend. Magento version 2.3 will most likely break the extension in its current form. Changes will be required to make the extension compatible with the latest version.
|
1.0
|
Compatibility with Magento 2.3 - ### Submitting issues through Github
## Please follow the guide below
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like.
- We may ask some questions or ask you to provide additional information after you have placed your request.
---
### Make sure you are using the *latest* version: https://tig.nl/postnl-magento-extensies/
Issues with outdated version will be rejected.
- [ X ] I've **verified** and **I assure** that I'm running the latest version of the TIG PostNL Magento 2 extension.
---
### What is the purpose of your *issue*?
- [ X ] Feature request (request for a new functionality)
- [ ] Bug report (encountered problems with the TIG PostNL Magento 2 extension)
- [ ] Extension support request (request for adding support for a new extension)
- [ ] Other
---
### If the purpose of your issue is a *feature request*
- As a merchant I would like to be able to install PostNL on the upcoming Magento 2.3 version in order to stay up-to-date with the latest security patches.
### If the purpose of your issue is about anything else, please describe your issue here
- Magento 2.3 has a lot of changes, especially in the frontend. Magento version 2.3 will most likely break the extension in its current form. Changes will be required to make the extension compatible with the latest version.
|
process
|
compatibility with magento submitting issues through github please follow the guide below put an x into all the boxes relevant to your issue like this use the preview tab to see what your issue will actually look like we may ask some questions or ask you to provide addition information after you placed your request make sure you are using the latest version issues with outdated version will be rejected i ve verified and i assure that i m running the latest version of the tig buckaroo magento extension what is the purpose of your issue feature request request for a new functionality bug report encountered problems with the tig postnl magento extension extension support request request for adding support for a new extension other if the purpose of your issue is a feature request as a merchant i would like to be able to install postnl on the upcoming magento version in order to stay up to date with the latest security patches if the purpose of your issue is about anything else please describe your issue here magento has a lot of changes especially in the frontend magento version will most likely break the extension in its current form changes will be required to make the extension compatible with the latest version
| 1
|
1,228
| 3,768,311,663
|
IssuesEvent
|
2016-03-16 03:45:40
|
spootTheLousy/saguaro
|
https://api.github.com/repos/spootTheLousy/saguaro
|
opened
|
Post form as ajax request
|
Discussion Post/text processing
|
just an idea, as long as flood control does its job, why not have the form submit post data via ajax ahead of reloading the page. We could make regist return post errors as json objects and display them like 4chan's quick-reply error alerts, instead of a hard redirect to an error page.
Ideally, this saves bandwidth as a page doesn't have to be sent, a user won't lose their post text/images if they get an error, and it'd let us add things like clientside form item storage, clientside limiting and other neat features.
Downsides:
- Would it make submitting forms more exploitable/ is this a security risk of some sort?
- post form becomes dependent on a `.js` file
- ???
|
1.0
|
Post form as ajax request - just an idea, as long as flood control does its job, why not have the form submit post data via ajax ahead of reloading the page. We could make regist return post errors as json objects and display them like 4chan's quick-reply error alerts, instead of a hard redirect to an error page.
Ideally, this saves bandwidth as a page doesn't have to be sent, a user won't lose their post text/images if they get an error, and it'd let us add things like clientside form item storage, clientside limiting and other neat features.
Downsides:
- Would it make submitting forms more exploitable/ is this a security risk of some sort?
- post form becomes dependent on a `.js` file
- ???
|
process
|
post form as ajax request just an idea as long as flood control does its job why not have the form submit post data via ajax ahead of reloading the page we could make regist return post errors as json objects and display them like quick reply error alerts instead of a hard redirect to an error page ideally this saves bandwith as a page doesnt have to be sent a user won t lose their post text images if they get an error and it d let us add things like clientside form item storage clientside limiting and other neat features downsides would it make submitting forms more exploitable is this a security risk of some sort post form becomes dependent on a js file
| 1
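The record above proposes returning post errors as JSON objects rather than redirecting to an error page. As a rough, hypothetical server-side sketch of that idea (the field names and limits are illustrative; this is not the project's actual `regist` code), validation can produce a JSON-serializable result the client renders inline:

```python
def validate_post(fields: dict) -> dict:
    """Validate an imageboard post server-side and return a
    JSON-serializable result instead of redirecting to an error
    page: {'ok': True, 'errors': []} or {'ok': False, 'errors': [...]}."""
    errors = []
    body = fields.get("body", "")
    if not body.strip() and not fields.get("image"):
        errors.append("Post must contain text or an image")
    if len(body) > 2000:
        errors.append("Post text is too long")
    return {"ok": not errors, "errors": errors}
```

The client side would then submit the form via XMLHttpRequest/fetch and, on `ok: false`, display `errors` next to the form instead of following a redirect — which is what preserves the user's typed text and selected images on failure.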
|
16,633
| 21,705,059,806
|
IssuesEvent
|
2022-05-10 08:52:12
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Extract index template management from ElasticsearchClient
|
kind/toil team/process-automation area/maintainability
|
**Description**
See https://github.com/camunda/zeebe/issues/9320 for context and motivation. The goal here is to create a new component, `TemplateReader`, which takes care of reading templates from the resources and configuring them.
This will make unit testing templates (which are part of our public API) much easier. Although templates are not tested at the moment, the goal here isn't to increase coverage dramatically - these single tests can be added later.
|
1.0
|
Extract index template management from ElasticsearchClient - **Description**
See https://github.com/camunda/zeebe/issues/9320 for context and motivation. The goal here is to create a new component, `TemplateReader`, which takes care of reading templates from the resources and configuring them.
This will make unit testing templates (which are part of our public API) much easier. Although templates are not tested at the moment, the goal here isn't to increase coverage dramatically - these single tests can be added later.
|
process
|
extract index template management from elasticsearchclient description see for context and motivation the goal here is to create a new component templatereader which takes care of reading templates from the resources and configuring them this will make unit testing templates which are part of our public api much more easily although as templates are not tested at the moment the goal here isn t to increase coverage dramatically these single tests can be added later
| 1
|
35,780
| 9,660,013,699
|
IssuesEvent
|
2019-05-20 14:38:20
|
syndesisio/syndesis
|
https://api.github.com/repos/syndesisio/syndesis
|
opened
|
Migrate to the React code base
|
cat/build cat/process group/ui
|
## This is a...
<pre><code>
[x ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Documentation issue or request
</code></pre>
## Description
We need to move the code from https://github.com/syndesisio/syndesis-react to this repo.
The plan is to:
1. rename `ui` to `ui-angular`
2. making all fail and keep track of all the intervention point
3. move react code to `ui-react`
4. fix all the pointers to go to `ui-react`
|
1.0
|
Migrate to the React code base - ## This is a...
<pre><code>
[x ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Documentation issue or request
</code></pre>
## Description
We need to move the code from https://github.com/syndesisio/syndesis-react to this repo.
The plan is to:
1. rename `ui` to `ui-angular`
2. making all fail and keep track of all the intervention point
3. move react code to `ui-react`
4. fix all the pointers to go to `ui-react`
|
non_process
|
migrate to the react code base this is a feature request regression a behavior that used to work and stopped working in a new release bug report documentation issue or request description we need to move the code from to this repo the plan is to rename ui to ui angular making all fail and keep track of all the intervention point move react code to ui react fix all the pointers to go to ui react
| 0
|
44,355
| 13,055,260,866
|
IssuesEvent
|
2020-07-30 01:07:41
|
BrianMcDonaldWS/Ignite
|
https://api.github.com/repos/BrianMcDonaldWS/Ignite
|
opened
|
CVE-2020-11023 (Medium) detected in jquery-1.9.1.js
|
security vulnerability
|
## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/Ignite/node_modules/tinycolor2/index.html</p>
<p>Path to vulnerable library: /Ignite/node_modules/tinycolor2/demo/jquery-1.9.1.js,/Ignite/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-11023 (Medium) detected in jquery-1.9.1.js - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/Ignite/node_modules/tinycolor2/index.html</p>
<p>Path to vulnerable library: /Ignite/node_modules/tinycolor2/demo/jquery-1.9.1.js,/Ignite/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.9.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jquery - 3.5.0"}],"vulnerabilityIdentifier":"CVE-2020-11023","vulnerabilityDetails":"In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing \u003coption\u003e elements from untrusted sources - even after sanitizing it - to one of jQuery\u0027s DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jquery js cve medium severity vulnerability vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file tmp ws scm ignite node modules index html path to vulnerable library ignite node modules demo jquery js ignite node modules test demo jquery js dependency hierarchy x jquery js vulnerable library vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery vulnerabilityurl
| 0
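The jQuery advisory in the record above concerns passing markup containing `<option>` elements to `.html()`-style DOM manipulation methods. As a language-agnostic illustration of the safe default — treat untrusted input as text, not markup, analogous to preferring `.text()` over `.html()` — here is a hypothetical rendering helper (the function and class names are illustrative):

```python
from html import escape

def render_user_comment(untrusted: str) -> str:
    """Embed untrusted input into markup as text: <, >, &, and quotes
    are escaped, so an <option> or <style> payload can never become a
    live element when the resulting string is parsed as HTML."""
    return '<div class="comment">' + escape(untrusted, quote=True) + "</div>"
```

Escaping at the insertion point sidesteps the class of bug the CVE describes, where even sanitized HTML strings could execute code once handed to a DOM manipulation method.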
|
20,795
| 3,634,767,493
|
IssuesEvent
|
2016-02-11 19:14:35
|
JMurk/Valve_Hydrant_Mobile_Issues
|
https://api.github.com/repos/JMurk/Valve_Hydrant_Mobile_Issues
|
opened
|
Hydrant - History - Add Details
|
design moderate priority
|
**Update Description:** Along with the date, please add the Inspector Organization and Status to the rows in the historical inspection Lists

Should look something like:

|
1.0
|
Hydrant - History - Add Details - **Update Description:** Along with the date, please add the Inspector Organization and Status to the rows in the historical inspection Lists

Should look something like:

|
non_process
|
hydrant history add detials update description along with the date please add the inspector organization and status to the rows in the historical inspection lists should look something like
| 0
|
20,261
| 26,878,455,870
|
IssuesEvent
|
2023-02-05 10:35:32
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Arrow Flight CI Failing
|
bug development-process
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
The arrow-flight CI is failing for what appear to be spurious reasons due to some interaction with git and Github Actions. It is not entirely clear to me what is going on yet
https://github.com/apache/arrow-rs/actions/runs/4091972526/jobs/7056398248
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
1.0
|
Arrow Flight CI Failing - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
The arrow-flight CI is failing for what appear to be spurious reasons due to some interaction with git and Github Actions. It is not entirely clear to me what is going on yet
https://github.com/apache/arrow-rs/actions/runs/4091972526/jobs/7056398248
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
process
|
arrow flight ci failing describe the bug a clear and concise description of what the bug is the arrow flight ci is failing for what appear to be spurious reasons due to some interaction with git and github actions it is not entirely clear to me what is going on yet to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen additional context add any other context about the problem here
| 1
|
20,199
| 26,774,928,331
|
IssuesEvent
|
2023-01-31 16:28:58
|
GoogleCloudPlatform/spring-cloud-gcp
|
https://api.github.com/repos/GoogleCloudPlatform/spring-cloud-gcp
|
closed
|
Investigate flaky unit test in kotlin sample
|
priority: p3 type: process
|
I have been observing this flaky failure in `spring-cloud-gcp-kotlin-app-sample` in a number of recent pull requests, opening an issue for it to track and investigate.
(Example error log - [unitTests(19)](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/actions/runs/4018331943/jobs/6903823230))
```
Error: Failed to execute goal org.jetbrains.kotlin:kotlin-maven-plugin:1.7.21:compile (compile) on project spring-cloud-gcp-kotlin-app-sample: Compilation failure
Error: java.lang.ExceptionInInitializerError
Error: at com.intellij.openapi.util.BuildNumber.currentVersion(BuildNumber.java:297)
Error: at com.intellij.ide.plugins.PluginManagerCore.getBuildNumber(PluginManagerCore.java:876)
Error: at com.intellij.ide.plugins.PluginManagerCore.lambda$createLoadingResult$16(PluginManagerCore.java:822)
Error: at com.intellij.ide.plugins.DescriptorListLoadingContext.getDefaultVersion(DescriptorListLoadingContext.java:145)
Error: at com.intellij.ide.plugins.IdeaPluginDescriptorImpl.readExternal(IdeaPluginDescriptorImpl.java:166)
Error: at com.intellij.ide.plugins.PluginDescriptorLoader.loadDescriptorFromJar(PluginDescriptorLoader.java:94)
Error: at com.intellij.ide.plugins.PluginManagerCore.registerExtensionPointAndExtensions(PluginManagerCore.java:1325)
Error: at com.intellij.core.CoreApplicationEnvironment.registerExtensionPointAndExtensions(CoreApplicationEnvironment.java:287)
```
A different flake that looks related: ([logs](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/actions/runs/4019778222/jobs/6907268077))
```
Error: Failed to execute goal org.jetbrains.kotlin:kotlin-maven-plugin:1.7.21:compile (compile) on project spring-cloud-gcp-kotlin-app-sample: Compilation failure
Error: java.lang.NullPointerException
Error: at java.base/java.io.Reader.<init>(Reader.java:168)
Error: at java.base/java.io.InputStreamReader.<init>(InputStreamReader.java:124)
Error: at com.intellij.ide.plugins.PluginManagerCore.readBrokenPluginFile(PluginManagerCore.java:249)
Error: at com.intellij.ide.plugins.PluginManagerCore.getBrokenPluginVersions(PluginManagerCore.java:241)
Error: at com.intellij.ide.plugins.PluginManagerCore.createLoadingResult(PluginManagerCore.java:822)
Error: at com.intellij.ide.plugins.DescriptorListLoadingContext.createSingleDescriptorContext(DescriptorListLoadingContext.java:64)
Error: at com.intellij.ide.plugins.PluginManagerCore.registerExtensionPointAndExtensions(PluginManagerCore.java:1318)
Error: at com.intellij.core.CoreApplicationEnvironment.registerExtensionPointAndExtensions(CoreApplicationEnvironment.java:287)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.registerApplicationExtensionPointsAndExtensionsFrom(KotlinCoreEnvironment.kt:617)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.createApplicationEnvironment(KotlinCoreEnvironment.kt:587)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.getOrCreateApplicationEnvironment(KotlinCoreEnvironment.kt:518)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.getOrCreateApplicationEnvironmentForProduction(KotlinCoreEnvironment.kt:499)
```
|
1.0
|
Investigate flaky unit test in kotlin sample - I have been observing this flaky failure in `spring-cloud-gcp-kotlin-app-sample` in a number of recent pull requests, opening an issue for it to track and investigate.
(Example error log - [unitTests(19)](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/actions/runs/4018331943/jobs/6903823230))
```
Error: Failed to execute goal org.jetbrains.kotlin:kotlin-maven-plugin:1.7.21:compile (compile) on project spring-cloud-gcp-kotlin-app-sample: Compilation failure
Error: java.lang.ExceptionInInitializerError
Error: at com.intellij.openapi.util.BuildNumber.currentVersion(BuildNumber.java:297)
Error: at com.intellij.ide.plugins.PluginManagerCore.getBuildNumber(PluginManagerCore.java:876)
Error: at com.intellij.ide.plugins.PluginManagerCore.lambda$createLoadingResult$16(PluginManagerCore.java:822)
Error: at com.intellij.ide.plugins.DescriptorListLoadingContext.getDefaultVersion(DescriptorListLoadingContext.java:145)
Error: at com.intellij.ide.plugins.IdeaPluginDescriptorImpl.readExternal(IdeaPluginDescriptorImpl.java:166)
Error: at com.intellij.ide.plugins.PluginDescriptorLoader.loadDescriptorFromJar(PluginDescriptorLoader.java:94)
Error: at com.intellij.ide.plugins.PluginManagerCore.registerExtensionPointAndExtensions(PluginManagerCore.java:1325)
Error: at com.intellij.core.CoreApplicationEnvironment.registerExtensionPointAndExtensions(CoreApplicationEnvironment.java:287)
```
A different flake that looks related: ([logs](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/actions/runs/4019778222/jobs/6907268077))
```
Error: Failed to execute goal org.jetbrains.kotlin:kotlin-maven-plugin:1.7.21:compile (compile) on project spring-cloud-gcp-kotlin-app-sample: Compilation failure
Error: java.lang.NullPointerException
Error: at java.base/java.io.Reader.<init>(Reader.java:168)
Error: at java.base/java.io.InputStreamReader.<init>(InputStreamReader.java:124)
Error: at com.intellij.ide.plugins.PluginManagerCore.readBrokenPluginFile(PluginManagerCore.java:249)
Error: at com.intellij.ide.plugins.PluginManagerCore.getBrokenPluginVersions(PluginManagerCore.java:241)
Error: at com.intellij.ide.plugins.PluginManagerCore.createLoadingResult(PluginManagerCore.java:822)
Error: at com.intellij.ide.plugins.DescriptorListLoadingContext.createSingleDescriptorContext(DescriptorListLoadingContext.java:64)
Error: at com.intellij.ide.plugins.PluginManagerCore.registerExtensionPointAndExtensions(PluginManagerCore.java:1318)
Error: at com.intellij.core.CoreApplicationEnvironment.registerExtensionPointAndExtensions(CoreApplicationEnvironment.java:287)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.registerApplicationExtensionPointsAndExtensionsFrom(KotlinCoreEnvironment.kt:617)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.createApplicationEnvironment(KotlinCoreEnvironment.kt:587)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.getOrCreateApplicationEnvironment(KotlinCoreEnvironment.kt:518)
Error: at org.jetbrains.kotlin.cli.jvm.compiler.KotlinCoreEnvironment$Companion.getOrCreateApplicationEnvironmentForProduction(KotlinCoreEnvironment.kt:499)
```
|
process
|
investigate flaky unit test in kotlin sample i have been observing this flaky failure in spring cloud gcp kotlin app sample in a number of recent pull requests opening an issue for it to track and investigate example error log error failed to execute goal org jetbrains kotlin kotlin maven plugin compile compile on project spring cloud gcp kotlin app sample compilation failure error java lang exceptionininitializererror error at com intellij openapi util buildnumber currentversion buildnumber java error at com intellij ide plugins pluginmanagercore getbuildnumber pluginmanagercore java error at com intellij ide plugins pluginmanagercore lambda createloadingresult pluginmanagercore java error at com intellij ide plugins descriptorlistloadingcontext getdefaultversion descriptorlistloadingcontext java error at com intellij ide plugins ideaplugindescriptorimpl readexternal ideaplugindescriptorimpl java error at com intellij ide plugins plugindescriptorloader loaddescriptorfromjar plugindescriptorloader java error at com intellij ide plugins pluginmanagercore registerextensionpointandextensions pluginmanagercore java error at com intellij core coreapplicationenvironment registerextensionpointandextensions coreapplicationenvironment java a different flake that looks related error failed to execute goal org jetbrains kotlin kotlin maven plugin compile compile on project spring cloud gcp kotlin app sample compilation failure error java lang nullpointerexception error at java base java io reader reader java error at java base java io inputstreamreader inputstreamreader java error at com intellij ide plugins pluginmanagercore readbrokenpluginfile pluginmanagercore java error at com intellij ide plugins pluginmanagercore getbrokenpluginversions pluginmanagercore java error at com intellij ide plugins pluginmanagercore createloadingresult pluginmanagercore java error at com intellij ide plugins descriptorlistloadingcontext createsingledescriptorcontext 
descriptorlistloadingcontext java error at com intellij ide plugins pluginmanagercore registerextensionpointandextensions pluginmanagercore java error at com intellij core coreapplicationenvironment registerextensionpointandextensions coreapplicationenvironment java error at org jetbrains kotlin cli jvm compiler kotlincoreenvironment companion registerapplicationextensionpointsandextensionsfrom kotlincoreenvironment kt error at org jetbrains kotlin cli jvm compiler kotlincoreenvironment companion createapplicationenvironment kotlincoreenvironment kt error at org jetbrains kotlin cli jvm compiler kotlincoreenvironment companion getorcreateapplicationenvironment kotlincoreenvironment kt error at org jetbrains kotlin cli jvm compiler kotlincoreenvironment companion getorcreateapplicationenvironmentforproduction kotlincoreenvironment kt
| 1
|
10,464
| 13,241,767,290
|
IssuesEvent
|
2020-08-19 08:43:43
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
opened
|
Add test cases for the DMMF command
|
engines/query engine process/candidate
|
We should test the following things:
* The command should not fail when an invalid datasource URL is provided because no database interaction is needed to fulfill this command.
* The command should fail with a nice error if the schema does not contain any datasource.
|
1.0
|
Add test cases for the DMMF command - We should test the following things:
* The command should not fail when an invalid datasource URL is provided because no database interaction is needed to fulfill this command.
* The command should fail with a nice error if the schema does not contain any datasource.
|
process
|
add test cases for the dmmf command we should test the following things the command should not fail when an invalid datasource url is provided because no database interaction is needed to fulfill this command the command should fail with a nice error if the schema does not contain any datasource
| 1
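The two cases described in the record above can be sketched as plain assertions. `run_dmmf` below is a hypothetical stand-in for the real DMMF command, not Prisma's actual API; it only illustrates the expected contract (schema parsing without any database interaction):

```python
# Hypothetical stand-in for the DMMF command: it only parses the schema,
# so an invalid datasource URL must not matter, but a missing datasource
# block must produce a clear error.
def run_dmmf(schema: str) -> dict:
    if "datasource" not in schema:
        raise ValueError("schema contains no datasource block")
    return {"ok": True}  # no database connection is ever attempted

# Case 1: an invalid URL is fine -- the command never talks to the database.
assert run_dmmf('datasource db { url = "not-a-real-url" }') == {"ok": True}

# Case 2: no datasource at all should fail with a nice error.
try:
    run_dmmf("model User { id Int }")
    raise AssertionError("expected a nice error")
except ValueError as err:
    assert "datasource" in str(err)
```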
|
10,772
| 13,595,818,572
|
IssuesEvent
|
2020-09-22 04:17:51
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Dynamic Pipeline use through Variables
|
Pri2 devops-cicd-process/tech devops/prod support-request
|
Hi,
we want to use a common yaml for all of our application pipelines to reduce overhead of maintenance.
So we used a variable for the pipeline source. like below. In $(Pipeline) we set up the name of the Build we want to deploy.
resources:
pipelines:
- pipeline: CM
source: $(Pipeline)
But the variable is not replaced with the value and we get parsing errors:
Encountered error(s) while parsing pipeline YAML:
/azure-pipelines.yml (Line: 4, Col: 17): Pipeline Resource CM Input Must be Valid.
Is a variable use here supported?
Thanks
Robert
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Dynamic Pipeline use through Variables - Hi,
we want to use a common yaml for all of our application pipelines to reduce overhead of maintenance.
So we used a variable for the pipeline source. like below. In $(Pipeline) we set up the name of the Build we want to deploy.
resources:
pipelines:
- pipeline: CM
source: $(Pipeline)
But the variable is not replaced with the value and we get parsing errors:
Encountered error(s) while parsing pipeline YAML:
/azure-pipelines.yml (Line: 4, Col: 17): Pipeline Resource CM Input Must be Valid.
Is a variable use here supported?
Thanks
Robert
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
dynamic pipeline use throug variables hi we want to use a common yaml for all of our application pipelines to reduce overhead of maintenance so we used a variable for the pipeline source like below in pipeline we set up the name of the build we want to deploy resources pipelines pipeline cm source pipeline but the variable is not replace with the value and we got a parsing errors encountered error s while parsing pipeline yaml azure pipelines yml line col pipeline resource cm input must be valid is a variable use here supported thanks robert document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
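A likely explanation (an assumption on my part, not stated in the record above): `resources` definitions are resolved when the pipeline YAML is compiled, so runtime `$(…)` variables cannot be used there, while `${{ }}` template parameters are expanded at compile time. A hedged sketch of the parameter-based workaround, with an illustrative default name:

```yaml
# Sketch only: parameters are resolved at compile time,
# unlike $(...) runtime variables.
parameters:
  - name: sourcePipeline
    type: string
    default: CM-Build   # hypothetical build name

resources:
  pipelines:
    - pipeline: CM
      source: ${{ parameters.sourcePipeline }}
```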
|
8,742
| 3,785,125,529
|
IssuesEvent
|
2016-03-20 09:25:43
|
easy-wi/developer
|
https://api.github.com/repos/easy-wi/developer
|
closed
|
Update to phpseclib 2.0.1
|
Codebase enhancement
|
https://github.com/phpseclib/phpseclib/releases/tag/2.0.1
Autoloader needed to support composer files in non-composer environment
|
1.0
|
Update to phpseclib 2.0.1 - https://github.com/phpseclib/phpseclib/releases/tag/2.0.1
Autoloader needed to support composer files in non-composer environment
|
non_process
|
update to phpseclib autoloader needed to support composer files in non composer enviroment
| 0
|
20,391
| 27,047,001,325
|
IssuesEvent
|
2023-02-13 10:28:16
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
reopened
|
incompatible_default_to_explicit_init_py
|
type: process incompatible-change team-Rules-Python not stale
|
**Flag:** `--incompatible_default_to_explicit_init_py`
**Available since:** 1.2
**Will be flipped in:** ???
**Feature tracking issue:** #7386
## Motivation
For `py_binary` and `py_test` targets, Bazel currently automatically creates empty `__init__.py` files in the runfiles tree wherever it is needed to import Python files. This is done in the ancestor directories of paths that contain Python files, up to but not including the top-level workspace directory, where an explicit `__init__.py` file must be used. Thus for example, if your `py_binary` target depends (directly or indirectly) on `//pkg/subpkg:foo.py`, but your workspace has no `//pkg:__init__.py` or `//pkg/subpkg:__init__.py` (or these files were not declared as dependencies), your target will behave as if they exist and are empty.
We want to deprecate this because it is magic at a distance. Python programmers are already used to creating `__init__.py` files in their source trees, so doing it behind their backs introduces confusion and changes the semantics of imports (since these directories will no longer be considered namespace packages). Eliminating this behavior also should allow us to remove some special runfiles logic.
## Change
`py_binary` and `py_test` already have a `legacy_create_init` attribute which effectively defaults to true. With this flag enabled, the effective default becomes false. You can still opt back into true for targets that need it, but in the future we will do another incompatible change to remove the attribute altogether.
I say "effectively" true or false because before this flag was added, the attribute was an actual boolean, and now it is a tristate that defaults to auto, where auto means consult this flag. It is possible some .bzl macro that tries to introspect a `py_binary` or `py_test`'s attributes dictionary (via `native.existing_rules`) will observe this change in attribute type, even without enabling the incompatible flag.
## Migration
If your build depended on having empty `__init__.py` files automatically created, you should explicitly create these files in your source tree and add them to the `srcs` of the relevant `py_library` targets. If for whatever reason that's not feasible at the moment, you can temporarily opt out of this change on a per-target basis even when the flag itself is enabled, by explicitly adding `legacy_create_init = True` to your targets.
## Timing
It is currently unclear how burdensome migration will be, so we do not yet know when we will flip this flag.
|
1.0
|
incompatible_default_to_explicit_init_py - **Flag:** `--incompatible_default_to_explicit_init_py`
**Available since:** 1.2
**Will be flipped in:** ???
**Feature tracking issue:** #7386
## Motivation
For `py_binary` and `py_test` targets, Bazel currently automatically creates empty `__init__.py` files in the runfiles tree wherever it is needed to import Python files. This is done in the ancestor directories of paths that contain Python files, up to but not including the top-level workspace directory, where an explicit `__init__.py` file must be used. Thus for example, if your `py_binary` target depends (directly or indirectly) on `//pkg/subpkg:foo.py`, but your workspace has no `//pkg:__init__.py` or `//pkg/subpkg:__init__.py` (or these files were not declared as dependencies), your target will behave as if they exist and are empty.
We want to deprecate this because it is magic at a distance. Python programmers are already used to creating `__init__.py` files in their source trees, so doing it behind their backs introduces confusion and changes the semantics of imports (since these directories will no longer be considered namespace packages). Eliminating this behavior also should allow us to remove some special runfiles logic.
## Change
`py_binary` and `py_test` already have a `legacy_create_init` attribute which effectively defaults to true. With this flag enabled, the effective default becomes false. You can still opt back into true for targets that need it, but in the future we will do another incompatible change to remove the attribute altogether.
I say "effectively" true or false because before this flag was added, the attribute was an actual boolean, and now it is a tristate that defaults to auto, where auto means consult this flag. It is possible some .bzl macro that tries to introspect a `py_binary` or `py_test`'s attributes dictionary (via `native.existing_rules`) will observe this change in attribute type, even without enabling the incompatible flag.
## Migration
If your build depended on having empty `__init__.py` files automatically created, you should explicitly create these files in your source tree and add them to the `srcs` of the relevant `py_library` targets. If for whatever reason that's not feasible at the moment, you can temporarily opt out of this change on a per-target basis even when the flag itself is enabled, by explicitly adding `legacy_create_init = True` to your targets.
## Timing
It is currently unclear how burdensome migration will be, so we do not yet know when we will flip this flag.
|
process
|
incompatible default to explicit init py flag incompatible default to explicit init py available since will be flipped in feature tracking issue motivation for py binary and py test targets bazel currently automatically creates empty init py files in the runfiles tree wherever it is needed to import python files this is done in the ancestor directories of paths that contain python files up to but not including the top level workspace directory where an explicit init py file must be used thus for example if your py binary target depends directly or indirectly on pkg subpkg foo py but your workspace has no pkg init py or pkg subpkg init py or these files were not declared as dependencies your target will behave as if they exist and are empty we want to deprecate this because it is magic at a distance python programmers are already used to creating init py files in their source trees so doing it behind their backs introduces confusion and changes the semantics of imports since these directories will no longer be considered namespace packages eliminating this behavior also should allow us to remove some special runfiles logic change py binary and py test already have a legacy create init attribute which effectively defaults to true with this flag enabled the effective default becomes false you can still opt back into true for targets that need it but in the future we will do another incompatible change to remove the attribute altogether i say effectively true or false because before this flag was added the attribute was an actual boolean and now it is a tristate that defaults to auto where auto means consult this flag it is possible some bzl macro that tries to introspect a py binary or py test s attributes dictionary via native existing rules will observe this change in attribute type even without enabling the incompatible flag migration if your build depended on having empty init py files automatically created you should explicitly create these files in your source 
tree and add them to the srcs of the relevant py library targets if for whatever reason that s not feasible at the moment you can temporarily opt out of this change on a per target basis even when the flag itself is enabled by explicitly adding legacy create init true to your targets timing it is currently unclear how burdensome migration will be so we do not yet know when we will flip this flag
| 1
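The migration step described above (creating explicit empty `__init__.py` files instead of relying on Bazel to fabricate them in the runfiles tree) can be sketched in plain Python; the package names here are the illustrative `//pkg/subpkg:foo.py` from the record, not a real workspace:

```python
# Sketch of the migration: explicit (empty) __init__.py files in the source
# tree, as legacy_create_init = False / the flipped flag will require.
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg", "subpkg"))
for d in ("pkg", os.path.join("pkg", "subpkg")):
    # These are the files Bazel used to auto-create in runfiles.
    open(os.path.join(root, d, "__init__.py"), "w").close()
with open(os.path.join(root, "pkg", "subpkg", "foo.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
importlib.invalidate_caches()
foo = importlib.import_module("pkg.subpkg.foo")
assert foo.VALUE == 42  # regular packages, no auto-generated files needed
```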
|
21,786
| 30,295,032,369
|
IssuesEvent
|
2023-07-09 18:53:17
|
The-Data-Alchemists-Manipal/MindWave
|
https://api.github.com/repos/The-Data-Alchemists-Manipal/MindWave
|
closed
|
[Image Processing] : Virtual Keyboard
|
image-processing
|
It is basically a development of virtual keyboard on screen.
It helps one to avoid the usage of mechanical keyboard everywhere.
# How it's helpful?
There is no need to carry a keyboard with you. The virtual keyboard, developed with the help of OpenCV and CVZone, is there wherever you need it.
Tech stacks : Python, OpenCV, CVZone and MediaPipe
|
1.0
|
[Image Processing] : Virtual Keyboard - It is basically a development of virtual keyboard on screen.
It helps one to avoid the usage of mechanical keyboard everywhere.
# How it's helpful?
There is no need to carry a keyboard with you. The virtual keyboard, developed with the help of OpenCV and CVZone, is there wherever you need it.
Tech stacks : Python, OpenCV, CVZone and MediaPipe
|
process
|
virtual keyboard it is basically a development of virtual keyboard on screen it helps one to avoid the usage of mechanical keyboard everywhere how it s helpful there is no need to carry the keyboard with you the virtual keyboard developed with help of opencv cvzone is there to help everywhere you need tech stacks python opencv cvzone and mediapipe
| 1
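The core hit-test behind such a virtual keyboard is a point-in-rectangle check per key. The sketch below shows only that logic; the hand tracking (MediaPipe/CVZone) and drawing (OpenCV) from the tech stack above are assumed to supply the fingertip `(x, y)` position and are not shown:

```python
# Minimal sketch: decide which on-screen key a tracked fingertip is over.
KEY_W, KEY_H = 60, 60  # illustrative key size in pixels

def build_layout(rows):
    """Map each character to its (x, y, w, h) key rectangle."""
    layout = {}
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            layout[ch] = (c * KEY_W, r * KEY_H, KEY_W, KEY_H)
    return layout

def key_at(layout, x, y):
    """Return the key under the fingertip, or None if it misses all keys."""
    for ch, (kx, ky, kw, kh) in layout.items():
        if kx <= x < kx + kw and ky <= y < ky + kh:
            return ch
    return None

layout = build_layout(["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"])
assert key_at(layout, 70, 10) == "W"   # second key of the first row
assert key_at(layout, 10, 70) == "A"   # first key of the second row
assert key_at(layout, 9999, 9999) is None
```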
|
15,072
| 18,767,136,358
|
IssuesEvent
|
2021-11-06 05:19:37
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Add a POST parameter in the FileDownloader processing algorithm (Request in QGIS)
|
Processing Alg 3.22
|
### Request for documentation
From pull request QGIS/qgis#44867
Author: @Gustry
QGIS version: 3.22
**Add a POST parameter in the FileDownloader processing algorithm**
### PR Description:
## Description
Add the choice between GET and POST when downloading a file
If POST, some DATA can be added in the query.

This will be useful to send a longer Overpass request using POST to download OSM data.
@nirvn What do you think ?
Funded by 3Liz
### Commits tagged with [need-docs] or [FEATURE]
|
1.0
|
Add a POST parameter in the FileDownloader processing algorithm (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#44867
Author: @Gustry
QGIS version: 3.22
**Add a POST parameter in the FileDownloader processing algorithm**
### PR Description:
## Description
Add the choice between GET and POST when downloading a file
If POST, some DATA can be added in the query.

This will be useful to send a longer Overpass request using POST to download OSM data.
@nirvn What do you think ?
Funded by 3Liz
### Commits tagged with [need-docs] or [FEATURE]
|
process
|
add a post parameter in the filedownloader processing algorithm request in qgis request for documentation from pull request qgis qgis author gustry qgis version add a post parameter in the filedownloader processing algorithm pr description description add a the choice between get or post when downloading a file if post some data can be added in the query this will be useful to send a longer overpass request using post to download osm data nirvn what do you think funded by commits tagged with or
| 1
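The GET-vs-POST distinction the new parameter exposes can be sketched with the standard library: a long Overpass query fits in a POST body where it might exceed GET URL limits. The request below is only built, never sent, and the endpoint URL is illustrative:

```python
# Sketch: attaching a data payload is what turns a request into a POST.
from urllib.request import Request

overpass_query = b"data=[out:json];node[amenity=cafe](50.6,7.0,50.8,7.3);out;"

get_req = Request("https://overpass-api.de/api/interpreter")
post_req = Request("https://overpass-api.de/api/interpreter",
                   data=overpass_query)

assert get_req.get_method() == "GET"    # no body -> urllib defaults to GET
assert post_req.get_method() == "POST"  # a data payload switches to POST
assert post_req.data == overpass_query
```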
|
8,590
| 11,758,417,074
|
IssuesEvent
|
2020-03-13 15:22:39
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
opened
|
Checksum the Engine binaries (archive extraction verification)
|
kind/feature process/candidate
|
We have a hard-to-reproduce problem in https://github.com/prisma/prisma2/issues/1819. Right now our understanding is that somehow during the extraction process of our engine `.gz` files something goes wrong and the resulting binary is corrupted.
To be able to avoid that, it would be helpful to have the sha256 checksum of the binary. Then after extraction, the script could compare that stored checksum to the real file checksum and handle accordingly.
(This checksum could be stored next to the archive, or in the archive itself)
|
1.0
|
Checksum the Engine binaries (archive extraction verification) - We have a hard-to-reproduce problem in https://github.com/prisma/prisma2/issues/1819. Right now our understanding is that somehow during the extraction process of our engine `.gz` files something goes wrong and the resulting binary is corrupted.
To be able to avoid that, it would be helpful to have the sha256 checksum of the binary. Then after extraction, the script could compare that stored checksum to the real file checksum and handle accordingly.
(This checksum could be stored next to the archive, or in the archive itself)
|
process
|
checksum the engine binaries archive extraction verification we have a hard to reproduce problem in right now our understanding is that somehow during the extraction process of our engine gz files something goes wrong and the resulting binary is corrupted to be able to avoid that it would be helpful to have the checksum of the binary then after extraction the script could compare that stored checksum to the real file checksum and handle accordingly this checksum could be stored next to the archive or in the archive itself
| 1
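The proposed check can be sketched with the standard library: publish the binary's sha256 next to the `.gz` archive, then compare it against the extracted bytes before trusting them. The payload here is a stand-in, not a real engine binary:

```python
# Sketch of checksum verification after archive extraction.
import gzip
import hashlib
import io

binary = b"\x7fELF...pretend-engine-binary..."  # stand-in payload

# Publisher side: compress the binary and record its checksum.
archive = io.BytesIO()
with gzip.GzipFile(fileobj=archive, mode="wb") as gz:
    gz.write(binary)
expected = hashlib.sha256(binary).hexdigest()  # stored next to the archive

# Download/extraction side: verify before using the binary.
extracted = gzip.decompress(archive.getvalue())
actual = hashlib.sha256(extracted).hexdigest()
if actual != expected:
    raise RuntimeError("corrupted engine binary, re-download needed")
assert actual == expected
```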
|
22,488
| 31,395,544,996
|
IssuesEvent
|
2023-08-26 22:15:08
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
create-creta 0.5.1 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":"\t\tconst child = spawn(\n\t\t\tpath.join(EXE_DIR, 'updater.exe'),\n\t\t\t['-p', `${process.pid}`, '-e', EXE_PATH],\n\t\t\t{\n\t\t\t\tdetached: true,\n\t\t\t\tcwd: EXE_DIR,\n\t\t\t\tstdio: 'ignore',\n\t\t\t}\n\t\t);","location":"package/templates/default/src/main/core/services/updateService/win32.ts:20","message":"This package is silently executing another executable"}]}```
|
1.0
|
create-creta 0.5.1 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":"\t\tconst child = spawn(\n\t\t\tpath.join(EXE_DIR, 'updater.exe'),\n\t\t\t['-p', `${process.pid}`, '-e', EXE_PATH],\n\t\t\t{\n\t\t\t\tdetached: true,\n\t\t\t\tcwd: EXE_DIR,\n\t\t\t\tstdio: 'ignore',\n\t\t\t}\n\t\t);","location":"package/templates/default/src/main/core/services/updateService/win32.ts:20","message":"This package is silently executing another executable"}]}```
|
process
|
create creta has guarddog issues npm silent process execution n t t t n t t t tdetached true n t t t tcwd exe dir n t t t tstdio ignore n t t t n t t location package templates default src main core services updateservice ts message this package is silently executing another executable
| 1
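A Python analogue of the flagged pattern (the original is Node's `spawn` with `stdio: 'ignore'` and `detached: true`) shows why scanners treat it as "silent" execution — the child runs but nothing reaches the parent's console. This is a sketch of the behavior, not of the scanned package:

```python
# Spawn a child process with all output discarded.
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "print('updater would run here')"],
    stdout=subprocess.DEVNULL,  # nothing reaches the parent's console
    stderr=subprocess.DEVNULL,
)
proc.wait()
assert proc.returncode == 0  # the child ran, but produced no visible output
```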
|
687,739
| 23,537,258,687
|
IssuesEvent
|
2022-08-19 22:57:20
|
hdmf-dev/hdmf-zarr
|
https://api.github.com/repos/hdmf-dev/hdmf-zarr
|
closed
|
Setup GitHub Actions/Pipelines for automatic CI
|
priority: critical topic: continuous integration
|
Similar to the main HDMF we should run the following all pull requests
- flake8 (or black)
- unit tests
- gallery tests for sphinx gallery as part of the docs
- docs build
- linkcheck on docs
- May Require #18 (but not sure)
For this we can most likely build off the workflows defined in HDMF https://github.com/hdmf-dev/hdmf/tree/dev/.github/workflows
|
1.0
|
Setup GitHub Actions/Pipelines for automatic CI - Similar to the main HDMF we should run the following all pull requests
- flake8 (or black)
- unit tests
- gallery tests for sphinx gallery as part of the docs
- docs build
- linkcheck on docs
- May Require #18 (but not sure)
For this we can most likely build off the workflows defined in HDMF https://github.com/hdmf-dev/hdmf/tree/dev/.github/workflows
|
non_process
|
setup github actions pipelines for automatic ci similar to the main hdmf we should run the following all pull requests or black unit tests gallery tests for sphinx gallery as part of the docs docs build linkcheck on docs may require but not sure for this we can most likely build off the workflows defined in hdmf
| 0
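The checklist above maps naturally onto a pull-request-triggered workflow. A minimal sketch (job names, action versions, and paths are illustrative, not taken from the HDMF repository):

```yaml
# .github/workflows/ci.yml -- illustrative sketch only
name: CI
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -e ".[test]" flake8
      - run: flake8 src/
      - run: pytest
```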
|
33,389
| 14,087,515,232
|
IssuesEvent
|
2020-11-05 06:35:07
|
Azure/azure-powershell
|
https://api.github.com/repos/Azure/azure-powershell
|
closed
|
Az.Synapse: New-AzSynapseSqlDatabase returns BadRequest (No registered resource provider found for location 'westeurope' and API version '2020-04-01-preview' for type 'workspaces')
|
Service Attention Synapse customer-reported question
|
Hello,
We have a Script based on PowerShell which makes the following actions:
- Creates an Container/FileSystem in a Storage Account for linking with a new Synapse Workspace instance
- Creates the Synapse Workspace instance
- Creates a Database in the Synapse Workspace instance
When we execute the third step, we receive the following error:
**Operation returned an invalid status code 'BadRequest' No registered resource provider found for location 'westeurope' and API version '2020-04-01-preview' for type 'workspaces'. The supported api-versions are '2019-06-01-preview'. The supported locations are 'westus2, eastus, northeurope, westeurope, southeastasia, australiaeast, westcentralus, southcentralus, eastus2, uksouth, westus, australiasoutheast, eastasia, brazilsouth, centralus, centralindia'.**
Based on the error information, we understand this is an issue with the API version, but it's managed internally by the PowerShell command. Can you help us with this issue?
Kind Regards
|
1.0
|
Az.Synapse: New-AzSynapseSqlDatabase returns BadRequest (No registered resource provider found for location 'westeurope' and API version '2020-04-01-preview' for type 'workspaces') - Hello,
We have a Script based on PowerShell which makes the following actions:
- Creates an Container/FileSystem in a Storage Account for linking with a new Synapse Workspace instance
- Creates the Synapse Workspace instance
- Creates a Database in the Synapse Workspace instance
When we execute the third step, we receive the following error:
**Operation returned an invalid status code 'BadRequest' No registered resource provider found for location 'westeurope' and API version '2020-04-01-preview' for type 'workspaces'. The supported api-versions are '2019-06-01-preview'. The supported locations are 'westus2, eastus, northeurope, westeurope, southeastasia, australiaeast, westcentralus, southcentralus, eastus2, uksouth, westus, australiasoutheast, eastasia, brazilsouth, centralus, centralindia'.**
Based on the error information, we understand this is an issue with the API version, but it's managed internally by the PowerShell command. Can you help us with this issue?
Kind Regards
|
non_process
|
az synapse new azsynapsesqldatabase returns badrequest no registered resource provider found for location westeurope and api version preview for type workspaces hello we have a script based on powershell which makes the following actions creates an container filesystem in a storage account for linking with a new synapse workspace instance creates the synapse workspace instance creates a database in the synapse workspace instance when we execute the third step we receive the following error operation returned an invalid status code badrequest no registered resource provider found for location westeurope and api version preview for type workspaces the supported api versions are preview the supported locations are eastus northeurope westeurope southeastasia australiaeast westcentralus southcentralus uksouth westus australiasoutheast eastasia brazilsouth centralus centralindia based on the error information we understand is an issue with the api version but it´s managed internally by the powershell command can you help us with this issue kind regards
| 0
|
5,887
| 8,705,772,025
|
IssuesEvent
|
2018-12-05 23:39:59
|
googleapis/nodejs-logging-bunyan
|
https://api.github.com/repos/googleapis/nodejs-logging-bunyan
|
opened
|
System tests quota exceeded
|
priority: p1 release blocking type: process
|
The quota for the service account used in the system tests has been exceeded causing the tests to fail. The service account should be updated with a quota that handles the system tests.
|
1.0
|
System tests quota exceeded - The quota for the service account used in the system tests has been exceeded causing the tests to fail. The service account should be updated with a quota that handles the system tests.
|
process
|
system tests quota exceeded the quota for the service account used in the system tests has been exceeded causing the tests to fail the service account should be updated with a quota that handles the system tests
| 1
|
226,863
| 25,009,193,086
|
IssuesEvent
|
2022-11-03 14:07:49
|
Dima2021/easybuggy
|
https://api.github.com/repos/Dima2021/easybuggy
|
opened
|
CVE-2014-0107 (High) detected in xalan-2.7.0.jar
|
security vulnerability
|
## CVE-2014-0107 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xalan-2.7.0.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xalan/xalan/2.7.0/xalan-2.7.0.jar,/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xalan-2.7.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xalan-2.7.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/easybuggy/commit/ee9713a3bfcb5beea28916be0505f2befbb1b2b8">ee9713a3bfcb5beea28916be0505f2befbb1b2b8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The TransformerFactory in Apache Xalan-Java before 2.7.2 does not properly restrict access to certain properties when FEATURE_SECURE_PROCESSING is enabled, which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted (1) xalan:content-header, (2) xalan:entities, (3) xslt:content-header, or (4) xslt:entities property, or a Java property that is bound to the XSLT 1.0 system-property function.
<p>Publish Date: 2014-04-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2014-0107>CVE-2014-0107</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107</a></p>
<p>Release Date: 2014-04-15</p>
<p>Fix Resolution: 2.7.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2014-0107 (High) detected in xalan-2.7.0.jar - ## CVE-2014-0107 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xalan-2.7.0.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xalan/xalan/2.7.0/xalan-2.7.0.jar,/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xalan-2.7.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xalan-2.7.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/easybuggy/commit/ee9713a3bfcb5beea28916be0505f2befbb1b2b8">ee9713a3bfcb5beea28916be0505f2befbb1b2b8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The TransformerFactory in Apache Xalan-Java before 2.7.2 does not properly restrict access to certain properties when FEATURE_SECURE_PROCESSING is enabled, which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted (1) xalan:content-header, (2) xalan:entities, (3) xslt:content-header, or (4) xslt:entities property, or a Java property that is bound to the XSLT 1.0 system-property function.
<p>Publish Date: 2014-04-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2014-0107>CVE-2014-0107</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0107</a></p>
<p>Release Date: 2014-04-15</p>
<p>Fix Resolution: 2.7.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in xalan jar cve high severity vulnerability vulnerable library xalan jar path to dependency file pom xml path to vulnerable library home wss scanner repository xalan xalan xalan jar target easybuggy snapshot web inf lib xalan jar dependency hierarchy x xalan jar vulnerable library found in head commit a href found in base branch master vulnerability details the transformerfactory in apache xalan java before does not properly restrict access to certain properties when feature secure processing is enabled which allows remote attackers to bypass expected restrictions and load arbitrary classes or access external resources via a crafted xalan content header xalan entities xslt content header or xslt entities property or a java property that is bound to the xslt system property function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue
| 0
|
119,415
| 4,769,764,541
|
IssuesEvent
|
2016-10-26 13:35:11
|
evidenceontology/evidenceontology
|
https://api.github.com/repos/evidenceontology/evidenceontology
|
closed
|
New term: "evidence used in manual assertion"
|
auto-migrated GeneOntology-No Priority-Medium Type-New-Term
|
```
Hi Marcus,
There seems to be a high-level term missing in ECO. The ECO terms that are the
equivalent of manually assigned GO evidences do not have a common parent, see
below for the terms and their parents. It seems there should be a common parent
"evidence used in manual assertion" as a child of 'evidence' since there is
already ECO:0000501 "evidence used in automatic assertion".
We are starting to transition to ECO codes in our database and it would really
help us to distinguish between automatic and manual assertions.
Also, please note that there is a missing term for 'physical interaction
evidence used in manual assertion' that should map to GO:IPI
Thanks,
Rachael.
ECO:0000269 EXP ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000314 IDA ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000270 IEP ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000316 IGI ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000315 IMP ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000318 IBA ECO:0000252 similarity evidence used in manual
assertion
ECO:0000319 IBD ECO:0000252 similarity evidence used in manual
assertion
ECO:0000320 IKR ECO:0000252 similarity evidence used in manual
assertion
ECO:0000320 IMR ECO:0000252 similarity evidence used in manual
assertion
ECO:0000321 IRD ECO:0000252 similarity evidence used in manual
assertion
ECO:0000247 ISA ECO:0000252 similarity evidence used in manual
assertion
ECO:0000255 ISM ECO:0000252 similarity evidence used in manual
assertion
ECO:0000266 ISO ECO:0000252 similarity evidence used in manual
assertion
ECO:0000250 ISS ECO:0000252 similarity evidence used in manual
assertion
ECO:0000303 NAS ECO:0000302 author statement used in manual assertion
ECO:0000304 TAS ECO:0000302 author statement used in manual assertion
ECO:0000305 IC ECO:0000305 curator inference used in manual assertion
IC
ECO:0000307 ND ECO:0000305 curator inference used in manual assertion
IC
ECO:0000317 IGC ECO:0000317 genomic context evidence used in manual
assertion IGC
ECO:0000245 RCA ECO:0000244 combinatorial evidence used in manual
assertion
Note, GO:IPI needs to map to a new ECO term 'physical interaction evidence used
in manual assertion'
ECO:0000021 IPI ECO:0000006 experimental evidence
ECO:0000021 IPI ECO:0000021 physical interaction evidence IPI
```
Original issue reported on code.google.com by `rachhunt...@hotmail.com` on 10 Sep 2013 at 8:49
|
1.0
|
New term: "evidence used in manual assertion" - ```
Hi Marcus,
There seems to be a high-level term missing in ECO. The ECO terms that are the
equivalent of manually assigned GO evidences do not have a common parent, see
below for the terms and their parents. It seems there should be a common parent
"evidence used in manual assertion" as a child of 'evidence' since there is
already ECO:0000501 "evidence used in automatic assertion".
We are starting to transition to ECO codes in our database and it would really
help us to distinguish between automatic and manual assertions.
Also, please note that there is a missing term for 'physical interaction
evidence used in manual assertion' that should map to GO:IPI
Thanks,
Rachael.
ECO:0000269 EXP ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000314 IDA ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000270 IEP ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000316 IGI ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000315 IMP ECO:0000269 experimental evidence used in manual
assertion EXP
ECO:0000318 IBA ECO:0000252 similarity evidence used in manual
assertion
ECO:0000319 IBD ECO:0000252 similarity evidence used in manual
assertion
ECO:0000320 IKR ECO:0000252 similarity evidence used in manual
assertion
ECO:0000320 IMR ECO:0000252 similarity evidence used in manual
assertion
ECO:0000321 IRD ECO:0000252 similarity evidence used in manual
assertion
ECO:0000247 ISA ECO:0000252 similarity evidence used in manual
assertion
ECO:0000255 ISM ECO:0000252 similarity evidence used in manual
assertion
ECO:0000266 ISO ECO:0000252 similarity evidence used in manual
assertion
ECO:0000250 ISS ECO:0000252 similarity evidence used in manual
assertion
ECO:0000303 NAS ECO:0000302 author statement used in manual assertion
ECO:0000304 TAS ECO:0000302 author statement used in manual assertion
ECO:0000305 IC ECO:0000305 curator inference used in manual assertion
IC
ECO:0000307 ND ECO:0000305 curator inference used in manual assertion
IC
ECO:0000317 IGC ECO:0000317 genomic context evidence used in manual
assertion IGC
ECO:0000245 RCA ECO:0000244 combinatorial evidence used in manual
assertion
Note, GO:IPI needs to map to a new ECO term 'physical interaction evidence used
in manual assertion'
ECO:0000021 IPI ECO:0000006 experimental evidence
ECO:0000021 IPI ECO:0000021 physical interaction evidence IPI
```
Original issue reported on code.google.com by `rachhunt...@hotmail.com` on 10 Sep 2013 at 8:49
|
non_process
|
new term evidence used in manual assertion hi marcus there seems to be a high level term missing in eco the eco terms that are the equivalent of manually assigned go evidences do not have a common parent see below for the terms and their parents it seems there should be a common parent evidence used in manual assertion as a child of evidence since there is already eco evidence used in automatic assertion we are starting to transition to eco codes in our database and it would really help us to distinguish between automatic and manual assertions also please note that there is a missing term for physical interaction evidence used in manual assertion that should map to go ipi thanks rachael eco exp eco experimental evidence used in manual assertion exp eco ida eco experimental evidence used in manual assertion exp eco iep eco experimental evidence used in manual assertion exp eco igi eco experimental evidence used in manual assertion exp eco imp eco experimental evidence used in manual assertion exp eco iba eco similarity evidence used in manual assertion eco ibd eco similarity evidence used in manual assertion eco ikr eco similarity evidence used in manual assertion eco imr eco similarity evidence used in manual assertion eco ird eco similarity evidence used in manual assertion eco isa eco similarity evidence used in manual assertion eco ism eco similarity evidence used in manual assertion eco iso eco similarity evidence used in manual assertion eco iss eco similarity evidence used in manual assertion eco nas eco author statement used in manual assertion eco tas eco author statement used in manual assertion eco ic eco curator inference used in manual assertion ic eco nd eco curator inference used in manual assertion ic eco igc eco genomic context evidence used in manual assertion igc eco rca eco combinatorial evidence used in manual assertion note go ipi needs to map to a new eco term physical interaction evidence used in manual assertion eco ipi eco experimental evidence eco ipi eco physical interaction evidence ipi original issue reported on code google com by rachhunt hotmail com on sep at
| 0
|
393,735
| 11,624,096,274
|
IssuesEvent
|
2020-02-27 10:09:00
|
christopherpeters-git/hololens-project
|
https://api.github.com/repos/christopherpeters-git/hololens-project
|
closed
|
fix performance issues
|
high priority
|
inspiration:
https://forums.hololens.com/discussion/409/low-fps-in-hololens-app
https://docs.microsoft.com/en-us/windows/mixed-reality/performance-recommendations-for-unity
- [ ] deactivate wireframing when Deploying to HoloLens
|
1.0
|
fix performance issues - inspiration:
https://forums.hololens.com/discussion/409/low-fps-in-hololens-app
https://docs.microsoft.com/en-us/windows/mixed-reality/performance-recommendations-for-unity
- [ ] deactivate wireframing when Deploying to HoloLens
|
non_process
|
fix performance issues inspiration deactivate wireframing when deploying to hololens
| 0
|
2,328
| 2,717,619,389
|
IssuesEvent
|
2015-04-11 13:25:03
|
garykl/human-noise
|
https://api.github.com/repos/garykl/human-noise
|
closed
|
modularize files
|
code smell
|
make module on the client side files. Each file shall provide exactly one module, that export functions and state.
|
1.0
|
modularize files - make module on the client side files. Each file shall provide exactly one module, that export functions and state.
|
non_process
|
modularize files make module on the client side files each file shall provide exactly one module that export functions and state
| 0
|
118,955
| 25,414,228,029
|
IssuesEvent
|
2022-11-22 22:00:24
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
closed
|
Code Insights: Capture group search query input has weird (not working) focus management
|
bug webapp team/code-insights
|
Originally reported by @leonore
If you try to focus input with your mouse pointer/cursor you will see that capture group asterisk button steals focus from capture group series input. This makes search query input completely unaccessible from mouse devices.
You still can navigate to the field with keyboard though.
https://user-images.githubusercontent.com/18492575/203106093-d9b303ce-4853-4bd7-8835-4f9ba10f6c3f.mov
/cc @joelkw @felixfbecker @vovakulikov
|
1.0
|
Code Insights: Capture group search query input has weird (not working) focus management - Originally reported by @leonore
If you try to focus input with your mouse pointer/cursor you will see that capture group asterisk button steals focus from capture group series input. This makes search query input completely unaccessible from mouse devices.
You still can navigate to the field with keyboard though.
https://user-images.githubusercontent.com/18492575/203106093-d9b303ce-4853-4bd7-8835-4f9ba10f6c3f.mov
/cc @joelkw @felixfbecker @vovakulikov
|
non_process
|
code insights capture group search query input has weird not working focus management originally reported by leonore if you try to focus input with your mouse pointer cursor you will see that capture group asterisk button steals focus from capture group series input this makes search query input completely unaccessible from mouse devices you still can navigate to the field with keyboard though cc joelkw felixfbecker vovakulikov
| 0
|
124,438
| 12,231,610,148
|
IssuesEvent
|
2020-05-04 08:07:43
|
NVIDIA/grcuda
|
https://api.github.com/repos/NVIDIA/grcuda
|
opened
|
Add plolyglot examples
|
documentation
|
The grCUDA repository currently does not contain any code examples in the various GraalVM languages.
Code example from the [Developer blog](https://devblogs.nvidia.com/grcuda-a-polyglot-language-binding-for-cuda-in-graalvm/) are in [https://github.com/NVIDIA-developer-blog/code-samples/tree/master/posts/grcuda](https://github.com/NVIDIA-developer-blog/code-samples/tree/master/posts/grcuda).
|
1.0
|
Add plolyglot examples - The grCUDA repository currently does not contain any code examples in the various GraalVM languages.
Code example from the [Developer blog](https://devblogs.nvidia.com/grcuda-a-polyglot-language-binding-for-cuda-in-graalvm/) are in [https://github.com/NVIDIA-developer-blog/code-samples/tree/master/posts/grcuda](https://github.com/NVIDIA-developer-blog/code-samples/tree/master/posts/grcuda).
|
non_process
|
add plolyglot examples the grcuda repository currently does not contain any code examples in the various graalvm languages code example from the are in
| 0
|
148,064
| 23,303,154,830
|
IssuesEvent
|
2022-08-07 16:22:52
|
CollaboraOnline/online
|
https://api.github.com/repos/CollaboraOnline/online
|
closed
|
Mobile: Word Count...
|
Hacktoberfest Easy Hack design CSS
|
**This is an [Easy Hack](https://collaboraonline.github.io/post/easyhacks/).**
Potential mentors: @pedropintosilva
- Remove unnecessary bottom borders (from values while keeping the ones from labels)
- Fix padding
| Word count on mobile |
:-------------------------:
| <img src="https://user-images.githubusercontent.com/65948705/124916446-fb87e900-dff2-11eb-96c7-b606389f7de7.png" width="411"> |
_on distro/collabora/co-6-4 it's ok or at least much better_
|
1.0
|
Mobile: Word Count... - **This is an [Easy Hack](https://collaboraonline.github.io/post/easyhacks/).**
Potential mentors: @pedropintosilva
- Remove unnecessary bottom borders (from values while keeping the ones from labels)
- Fix padding
| Word count on mobile |
:-------------------------:
| <img src="https://user-images.githubusercontent.com/65948705/124916446-fb87e900-dff2-11eb-96c7-b606389f7de7.png" width="411"> |
_on distro/collabora/co-6-4 it's ok or at least much better_
|
non_process
|
mobile word count this is an potential mentors pedropintosilva remove unnecessary bottom borders from values while keeping the ones from labels fix padding word count on mobile on distro collabora co it s ok or at least much better
| 0
|
22,645
| 31,895,826,870
|
IssuesEvent
|
2023-09-18 01:31:54
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - earliestEonOrLowestEonothem
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_earliestEonOrLowestEonothem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): earliestEonOrLowestEonothem
* Term label (English, not normative): Earliest Eon Or Lowest Eonothem
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the earliest possible geochronologic eon or lowest chrono-stratigraphic eonothem or the informal name ("Precambrian") attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Phanerozoic, Proterozoic
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - earliestEonOrLowestEonothem - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_earliestEonOrLowestEonothem
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): earliestEonOrLowestEonothem
* Term label (English, not normative): Earliest Eon Or Lowest Eonothem
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the earliest possible geochronologic eon or lowest chrono-stratigraphic eonothem or the informal name ("Precambrian") attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Phanerozoic, Proterozoic
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term earliesteonorlowesteonothem term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes earliesteonorlowesteonothem term label english not normative earliest eon or lowest eonothem organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the earliest possible geochronologic eon or lowest chrono stratigraphic eonothem or the informal name precambrian attributable to the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative phanerozoic proterozoic refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
13,702
| 3,767,841,134
|
IssuesEvent
|
2016-03-16 00:27:36
|
stkent/amplify
|
https://api.github.com/repos/stkent/amplify
|
closed
|
Add basic case studies
|
difficulty-easy documentation
|
Include:
- screenshots
- install data
- links to store pages
- some data on increase in review and feedback email counts
|
1.0
|
Add basic case studies - Include:
- screenshots
- install data
- links to store pages
- some data on increase in review and feedback email counts
|
non_process
|
add basic case studies include screenshots install data links to store pages some data on increase in review and feedback email counts
| 0
|
15,821
| 20,015,686,735
|
IssuesEvent
|
2022-02-01 11:49:33
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Improve summaries returned by `prisma migrate diff` and drift detection
|
process/candidate kind/improvement team/migrations
|
See this one from a chinook sqlite database for example:

- We don't have any details on the created tables
- This could be challenging because the summary would get much larger on large schemas.
- There are lines on changed tables that are justified, and reflect accurate information, but absolutely don't make sense from the perspective of "I want to see what the schema looks like". These should be under the created tables.
- ...
|
1.0
|
Improve summaries returned by `prisma migrate diff` and drift detection - See this one from a chinook sqlite database for example:

- We don't have any details on the created tables
- This could be challenging because the summary would get much larger on large schemas.
- There are lines on changed tables that are justified, and reflect accurate information, but absolutely don't make sense from the perspective of "I want to see what the schema looks like". These should be under the created tables.
- ...
|
process
|
improve summaries returned by prisma migrate diff and drift detection see this one from a chinook sqlite database for example we don t have any details on the created tables this could be challenging because the summary would get much larger on large schemas there are lines on changed tables that are justified and reflect accurate information but absolutely don t make sense from the perspective of i want to see what the schema looks like these should be under the created tables
| 1
|
69,357
| 30,248,877,274
|
IssuesEvent
|
2023-07-06 18:45:52
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
opened
|
Shared Services Workshop
|
*team/ ops and shared services*
|
**Describe the issue**
Start technical workshops for the community on the following platform shared services:
- ACS
- artifactory
- sysdig
- vault
**Definition of done**
- [ ] initial workshop
- [ ] way to evaluate the improvement from workshops (adoption rate, survey, etc.)
- [ ] plan for periodic offering
|
1.0
|
Shared Services Workshop - **Describe the issue**
Start technical workshops for the community on the following platform shared services:
- ACS
- artifactory
- sysdig
- vault
**Definition of done**
- [ ] initial workshop
- [ ] way to evaluate the improvement from workshops (adoption rate, survey, etc.)
- [ ] plan for periodic offering
|
non_process
|
shared services workshop describe the issue start technical workshops for the community on the following platform shared services acs artifactory sysdig vault definition of done initial workshop way to evaluate the improvement from workshops adoption rate survey etc plan for periodic offering
| 0
|
351,037
| 10,512,142,281
|
IssuesEvent
|
2019-09-27 17:08:19
|
Aariq/bumbl
|
https://api.github.com/repos/Aariq/bumbl
|
closed
|
add columns for .fitted and .resid when augment = TRUE
|
Priority: Medium enhancement
|
might be a natural part of adding plots (#5)
|
1.0
|
add columns for .fitted and .resid when augment = TRUE - might be a natural part of adding plots (#5)
|
non_process
|
add columns for fitted and resid when augment true might be a natural part of adding plots
| 0
|
162,940
| 12,698,140,304
|
IssuesEvent
|
2020-06-22 13:00:09
|
florent37/Flutter-AssetsAudioPlayer
|
https://api.github.com/repos/florent37/Flutter-AssetsAudioPlayer
|
closed
|
Play Pause Bug
|
Working on it bug waiting for user test
|
Flutter version : 1.17.1
Platform : Android
********************
Hey , thanks for making such an amazing package . I'm making an audio player app and there is a bug(i think). after I open a audio file , if I stop that track / or let the audio play until its duration , then go to another audio track , it won't register play/pause command. Its really strange issue. I've also tried with notification panel where I stopped a track and went to another audio file , it play / pause command doesn't work.
my code looks like this
```
class AudioPlayer with ChangeNotifier {
final assetsAudioPlayer = AssetsAudioPlayer.withId('MY_UNIQUE_ID');
//Play a song
playSong(SongInfo songInfo) {
assetsAudioPlayer.open(
Audio.file(
songInfo.filePath,
metas: Metas(
title: songInfo.title,
artist: songInfo.artist,
album: songInfo.album,
image: MetasImage.file(songInfo.albumArtwork),
),
),
showNotification: true,
notificationSettings: NotificationSettings(
customPlayPauseAction: (player) {
player.isPlaying.value
? assetsAudioPlayer.pause()
: assetsAudioPlayer.play();
},
stopEnabled: false,
),
);
notifyListeners();
}
//Play or Pause a song
playorPauseSong() {
assetsAudioPlayer.playOrPause();
notifyListeners();
}
//Stop a song
stopSong() {
assetsAudioPlayer.stop();
notifyListeners();
}
}
```
**
and onTap i use
```
AudioPlayer_player = Provider.of<AudioPlayer>(context);
_player.playSong(_query.songs[index]); //I get the songs path from a query class)
```
any help would be appreciated. thanks
|
1.0
|
Play Pause Bug - Flutter version : 1.17.1
Platform : Android
********************
Hey , thanks for making such an amazing package . I'm making an audio player app and there is a bug(i think). after I open a audio file , if I stop that track / or let the audio play until its duration , then go to another audio track , it won't register play/pause command. Its really strange issue. I've also tried with notification panel where I stopped a track and went to another audio file , it play / pause command doesn't work.
my code looks like this
```
class AudioPlayer with ChangeNotifier {
final assetsAudioPlayer = AssetsAudioPlayer.withId('MY_UNIQUE_ID');
//Play a song
playSong(SongInfo songInfo) {
assetsAudioPlayer.open(
Audio.file(
songInfo.filePath,
metas: Metas(
title: songInfo.title,
artist: songInfo.artist,
album: songInfo.album,
image: MetasImage.file(songInfo.albumArtwork),
),
),
showNotification: true,
notificationSettings: NotificationSettings(
customPlayPauseAction: (player) {
player.isPlaying.value
? assetsAudioPlayer.pause()
: assetsAudioPlayer.play();
},
stopEnabled: false,
),
);
notifyListeners();
}
//Play or Pause a song
playorPauseSong() {
assetsAudioPlayer.playOrPause();
notifyListeners();
}
//Stop a song
stopSong() {
assetsAudioPlayer.stop();
notifyListeners();
}
}
```
**
and onTap i use
```
AudioPlayer_player = Provider.of<AudioPlayer>(context);
_player.playSong(_query.songs[index]); //I get the songs path from a query class)
```
any help would be appreciated. thanks
|
non_process
|
play pause bug flutter version platform android hey thanks for making such an amazing package i m making an audio player app and there is a bug i think after i open a audio file if i stop that track or let the audio play until its duration then go to another audio track it won t register play pause command its really strange issue i ve also tried with notification panel where i stopped a track and went to another audio file it play pause command doesn t work my code looks like this class audioplayer with changenotifier final assetsaudioplayer assetsaudioplayer withid my unique id play a song playsong songinfo songinfo assetsaudioplayer open audio file songinfo filepath metas metas title songinfo title artist songinfo artist album songinfo album image metasimage file songinfo albumartwork shownotification true notificationsettings notificationsettings customplaypauseaction player player isplaying value assetsaudioplayer pause assetsaudioplayer play stopenabled false notifylisteners play or pause a song playorpausesong assetsaudioplayer playorpause notifylisteners stop a song stopsong assetsaudioplayer stop notifylisteners and ontap i use audioplayer player provider of context player playsong query songs i get the songs path from a query class any help would be appreciated thanks
| 0
|
133,174
| 10,798,555,103
|
IssuesEvent
|
2019-11-06 10:16:27
|
DevExpress/testcafe
|
https://api.github.com/repos/DevExpress/testcafe
|
closed
|
TestCafe click() function fails to scroll to the element inside ShadowDOM and click it
|
AREA: testing FREQUENCY: level 2 TYPE: bug
|
### Are you requesting a feature or reporting a bug?
Bug
### What is your Test Scenario?
Click the button in the screenshot which is outside viewport.

### What is the Current behavior?
TestCafe click() function fails to scroll to the element inside ShadowDOM and click it.
### What is the Expected behavior?
TestCafe scrolls down to the element and click it.
### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example):
https://measurementslive.ni.com/measure.html
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
import { Selector, ClientFunction } from "testcafe"
fixture `My first fixture`
.page `https://measurementslive.ni.com/measure.html`;
const scope = Selector(() => {
document.querySelector("#instrument1").shadowRoot.querySelector("#instrument-tab").shadowRoot.querySelector("#instrument-item-scope").shadowRoot.querySelector("#checkbox").shadowRoot.querySelector("#checkbox").click();
return document.querySelector("#instrument1").shadowRoot.querySelector("#button");
});
const blankArea = Selector(() => document.querySelector("measurements-live").shadowRoot.querySelector("measurements-live-styles > div > div > div"));
const instrumentTabButton = Selector(() => document.querySelector("#instrument1").shadowRoot.querySelector("#button"));
const reference4 = Selector(() => document.querySelector("nchannel-scope").shadowRoot.querySelector("#settings").shadowRoot.querySelector("#reference-channel4").shadowRoot.querySelector("#channel-state").shadowRoot.querySelector("#switch-housing"));
const scopeIsAttached = ClientFunction(() => document.querySelector("nchannel-scope").isAttached);
test.only('My first test', async t => {
await t
.resizeWindow(1500,900)
.click(blankArea)
.click(instrumentTabButton)
.click(scope)
.expect(scopeIsAttached()).eql(true, { timeout: 50000 })
.expect(reference4.visible).eql(true)
.click(reference4) // !! failed to click here
.debug();
});
```
</details>
<details>
<summary>Your complete configuration file (if any):</summary>
<!-- Paste your complete test config file here (even if it is huge): -->
```
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
My first fixture
× My first test
1) The element that matches the specified selector is not visible.
Browser: Chrome 76.0.3809 / Windows 10.0.0
16 | .click(blankArea)
17 | .click(instrumentTabButton)
18 | .click(scope)
19 | .expect(scopeIsAttached()).eql(true, { timeout: 50000
})
20 | .expect(reference4.visible).eql(true)
> 21 | .click(reference4) // !! failed to click here
22 | .debug();
23 |});
24 |
25 |
at click (C:\github\testcafe_example.js:21:10)
at test.only (C:\github\testcafe_example.js:13:1)
at <anonymous>
(C:\Users\rgao\AppData\Roaming\npm\node_modules\testcafe\src\api\wrap-test-function.js:17:28)
at TestRun._executeTestFn
(C:\Users\rgao\AppData\Roaming\npm\node_modules\testcafe\src\test-run\index.js:288:19)
at TestRun.start
(C:\Users\rgao\AppData\Roaming\npm\node_modules\testcafe\src\test-run\index.js:337:24)
1/1 failed (46s)
```
</details>
<details>
<summary>Screenshots: see the screenshot above </summary>
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to my website: https://measurementslive.ni.com/measure.html
2. Open the instrument tab (in yellow rectangle)

3. Check "Oscilliscope"

4. Scroll down the configuration pane on the right
5. Click the toggle button of "Reference 4"

### Your Environment details:
* testcafe version: 1.4.2
* node.js version: 10.14.1
* command-line arguments: testcafe chrome testfile.js
* browser name and version: Chrome 76.0.3809.132
* platform and version: Windows 10
|
1.0
|
TestCafe click() function fails to scroll to the element inside ShadowDOM and click it - ### Are you requesting a feature or reporting a bug?
Bug
### What is your Test Scenario?
Click the button in the screenshot which is outside viewport.

### What is the Current behavior?
TestCafe click() function fails to scroll to the element inside ShadowDOM and click it.
### What is the Expected behavior?
TestCafe scrolls down to the element and click it.
### What is your web application and your TestCafe test code?
Your website URL (or attach your complete example):
https://measurementslive.ni.com/measure.html
<details>
<summary>Your complete test code (or attach your test files):</summary>
<!-- Paste your test code here: -->
```js
import { Selector, ClientFunction } from "testcafe"
fixture `My first fixture`
.page `https://measurementslive.ni.com/measure.html`;
const scope = Selector(() => {
document.querySelector("#instrument1").shadowRoot.querySelector("#instrument-tab").shadowRoot.querySelector("#instrument-item-scope").shadowRoot.querySelector("#checkbox").shadowRoot.querySelector("#checkbox").click();
return document.querySelector("#instrument1").shadowRoot.querySelector("#button");
});
const blankArea = Selector(() => document.querySelector("measurements-live").shadowRoot.querySelector("measurements-live-styles > div > div > div"));
const instrumentTabButton = Selector(() => document.querySelector("#instrument1").shadowRoot.querySelector("#button"));
const reference4 = Selector(() => document.querySelector("nchannel-scope").shadowRoot.querySelector("#settings").shadowRoot.querySelector("#reference-channel4").shadowRoot.querySelector("#channel-state").shadowRoot.querySelector("#switch-housing"));
const scopeIsAttached = ClientFunction(() => document.querySelector("nchannel-scope").isAttached);
test.only('My first test', async t => {
await t
.resizeWindow(1500,900)
.click(blankArea)
.click(instrumentTabButton)
.click(scope)
.expect(scopeIsAttached()).eql(true, { timeout: 50000 })
.expect(reference4.visible).eql(true)
.click(reference4) // !! failed to click here
.debug();
});
```
</details>
<details>
<summary>Your complete configuration file (if any):</summary>
<!-- Paste your complete test config file here (even if it is huge): -->
```
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
My first fixture
× My first test
1) The element that matches the specified selector is not visible.
Browser: Chrome 76.0.3809 / Windows 10.0.0
16 | .click(blankArea)
17 | .click(instrumentTabButton)
18 | .click(scope)
19 | .expect(scopeIsAttached()).eql(true, { timeout: 50000
})
20 | .expect(reference4.visible).eql(true)
> 21 | .click(reference4) // !! failed to click here
22 | .debug();
23 |});
24 |
25 |
at click (C:\github\testcafe_example.js:21:10)
at test.only (C:\github\testcafe_example.js:13:1)
at <anonymous>
(C:\Users\rgao\AppData\Roaming\npm\node_modules\testcafe\src\api\wrap-test-function.js:17:28)
at TestRun._executeTestFn
(C:\Users\rgao\AppData\Roaming\npm\node_modules\testcafe\src\test-run\index.js:288:19)
at TestRun.start
(C:\Users\rgao\AppData\Roaming\npm\node_modules\testcafe\src\test-run\index.js:337:24)
1/1 failed (46s)
```
</details>
<details>
<summary>Screenshots: see the screenshot above </summary>
</details>
### Steps to Reproduce:
<!-- Describe what we should do to reproduce the behavior you encountered. -->
1. Go to my website: https://measurementslive.ni.com/measure.html
2. Open the instrument tab (in yellow rectangle)

3. Check "Oscilliscope"

4. Scroll down the configuration pane on the right
5. Click the toggle button of "Reference 4"

### Your Environment details:
* testcafe version: 1.4.2
* node.js version: 10.14.1
* command-line arguments: testcafe chrome testfile.js
* browser name and version: Chrome 76.0.3809.132
* platform and version: Windows 10
|
non_process
|
testcafe click function fails to scroll to the element inside shadowdom and click it are you requesting a feature or reporting a bug bug what is your test scenario click the button in the screenshot which is outside viewport what is the current behavior testcafe click function fails to scroll to the element inside shadowdom and click it what is the expected behavior testcafe scrolls down to the element and click it what is your web application and your testcafe test code your website url or attach your complete example your complete test code or attach your test files js import selector clientfunction from testcafe fixture my first fixture page const scope selector document queryselector shadowroot queryselector instrument tab shadowroot queryselector instrument item scope shadowroot queryselector checkbox shadowroot queryselector checkbox click return document queryselector shadowroot queryselector button const blankarea selector document queryselector measurements live shadowroot queryselector measurements live styles div div div const instrumenttabbutton selector document queryselector shadowroot queryselector button const selector document queryselector nchannel scope shadowroot queryselector settings shadowroot queryselector reference shadowroot queryselector channel state shadowroot queryselector switch housing const scopeisattached clientfunction document queryselector nchannel scope isattached test only my first test async t await t resizewindow click blankarea click instrumenttabbutton click scope expect scopeisattached eql true timeout expect visible eql true click failed to click here debug your complete configuration file if any your complete test report my first fixture × my first test the element that matches the specified selector is not visible browser chrome windows click blankarea click instrumenttabbutton click scope expect scopeisattached eql true timeout expect visible eql true click failed to click here debug at click c github testcafe example
js at test only c github testcafe example js at c users rgao appdata roaming npm node modules testcafe src api wrap test function js at testrun executetestfn c users rgao appdata roaming npm node modules testcafe src test run index js at testrun start c users rgao appdata roaming npm node modules testcafe src test run index js failed screenshots see the screenshot above steps to reproduce go to my website open the instrument tab in yellow rectangle check oscilliscope scroll down the configuration pane on the right click the toggle button of reference your environment details testcafe version node js version command line arguments testcafe chrome testfile js browser name and version chrome platform and version windows
| 0
|
343,890
| 30,698,024,419
|
IssuesEvent
|
2023-07-26 20:17:07
|
chanzuckerberg/sci-components
|
https://api.github.com/repos/chanzuckerberg/sci-components
|
closed
|
Screenshot test for Link component
|
Test Cases Epic
|
### Alignment
- [x] Align with design (and product, other engineers, if necessary) on the visual traits to include as permutation dimensions for the Chromatic test
- Find the component in this doc as a starting point: [Chromatic tests: Which component dimensions to consider looping through](https://docs.google.com/document/d/1i40YV1rX61dNzGsqbhqJAD8PL1puCecX7rnHgD_FawE/edit#)
- Permutation dimensions may include the component’s props, but it may not always be necessary to include every prop
- If the component can be interacted with, or if part of it can (e.g. an “x” button or similar within it), it should include pseudo state as a permutation dimension
- [x] Once the permutation dimensions are aligned on, list them in this issue as checkboxes to track them
### Permutation dimensions to include
- [ ] **sdsStyle:** default, dashed
- [ ] **pseudo-state:** default, hover, active, focus
### Tips for writing Chromatic tests
- Write the tests in index.stories.tsx within the component’s folder
- Name the story page LivePreview `export const LivePreview ... `
- Use or create an array per permutation dimension and loop through each to iteratively create all relevant combinations of the component
- Put pseudo-state at the lowest level
- See Button's Live Preview story for an example
- [More about Chromatic](https://storybook.js.org/docs/react/writing-tests/visual-testing#setup-chromatic-addon)
- [More about Storybook's Pseudo State addon](https://storybook.js.org/addons/storybook-addon-pseudo-states)
|
1.0
|
Screenshot test for Link component - ### Alignment
- [x] Align with design (and product, other engineers, if necessary) on the visual traits to include as permutation dimensions for the Chromatic test
- Find the component in this doc as a starting point: [Chromatic tests: Which component dimensions to consider looping through](https://docs.google.com/document/d/1i40YV1rX61dNzGsqbhqJAD8PL1puCecX7rnHgD_FawE/edit#)
- Permutation dimensions may include the component’s props, but it may not always be necessary to include every prop
- If the component can be interacted with, or if part of it can (e.g. an “x” button or similar within it), it should include pseudo state as a permutation dimension
- [x] Once the permutation dimensions are aligned on, list them in this issue as checkboxes to track them
### Permutation dimensions to include
- [ ] **sdsStyle:** default, dashed
- [ ] **pseudo-state:** default, hover, active, focus
### Tips for writing Chromatic tests
- Write the tests in index.stories.tsx within the component’s folder
- Name the story page LivePreview `export const LivePreview ... `
- Use or create an array per permutation dimension and loop through each to iteratively create all relevant combinations of the component
- Put pseudo-state at the lowest level
- See Button's Live Preview story for an example
- [More about Chromatic](https://storybook.js.org/docs/react/writing-tests/visual-testing#setup-chromatic-addon)
- [More about Storybook's Pseudo State addon](https://storybook.js.org/addons/storybook-addon-pseudo-states)
|
non_process
|
screenshot test for link component alignment align with design and product other engineers if necessary on the visual traits to include as permutation dimensions for the chromatic test find the component in this doc as a starting point permutation dimensions may include the component’s props but it may not always be necessary to include every prop if the component can be interacted with or if part of it can e g an “x” button or similar within it it should include pseudo state as a permutation dimension once the permutation dimensions are aligned on list them in this issue as checkboxes to track them permutation dimensions to include sdsstyle default dashed pseudo state default hover active focus tips for writing chromatic tests write the tests in index stories tsx within the component’s folder name the story page livepreview export const livepreview use or create an array per permutation dimension and loop through each to iteratively create all relevant combinations of the component put pseudo state at the lowest level see button s live preview story for an example
| 0
|
3,424
| 6,526,028,074
|
IssuesEvent
|
2017-08-29 18:04:10
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
OA: failed to parse
|
in progress ontology processing problem
|
User [reported on the support list](http://ncbo-support.2288202.n4.nabble.com/bioontology-support-BioPortal-Feedback-from-michelle-td4655708.html) that they created the [OA ontology](http://bioportal.bioontology.org/ontologies/OA) in BioPortal, and none of the submissions have parsed.
|
1.0
|
OA: failed to parse - User [reported on the support list](http://ncbo-support.2288202.n4.nabble.com/bioontology-support-BioPortal-Feedback-from-michelle-td4655708.html) that they created the [OA ontology](http://bioportal.bioontology.org/ontologies/OA) in BioPortal, and none of the submissions have parsed.
|
process
|
oa failed to parse user that they created the in bioportal and none of the submissions have parsed
| 1
|
776,282
| 27,254,536,747
|
IssuesEvent
|
2023-02-22 10:31:10
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
samples: net: Sample for net/sockets/tcp doesn't have tests section
|
bug priority: low area: Networking area: Samples area: Sockets
|
**Describe the bug**
The sample https://github.com/zephyrproject-rtos/zephyr/blob/main/samples/net/sockets/tcp/sample.yaml is wrongly defined and hence not seen by twister. It doesn't have proper "tests" section.
**To Reproduce**
```
$ ./scripts/twister -T samples/ -v -p native_posix
ZEPHYR_BASE unset, using "/home/maciej/zephyrproject/zephyr"
Renaming output directory to /home/maciej/zephyrproject/zephyr/twister-out.10
INFO - Using Ninja..
INFO - Zephyr version: zephyr-v3.3.0-129-g1fcce98d1337
INFO - Using 'zephyr' toolchain.
Traceback (most recent call last):
File "./scripts/twister", line 201, in <module>
ret = main(options)
File "/home/maciej/zephyrproject/zephyr/scripts/pylib/twister/twisterlib/twister_main.py", line 120, in main
tplan.discover()
File "/home/maciej/zephyrproject/zephyr/scripts/pylib/twister/twisterlib/testplan.py", line 122, in discover
raise TwisterRuntimeError("No test cases found at the specified location...")
twisterlib.error.TwisterRuntimeError: No test cases found at the specified location...
```
**Impact**
All samples should have a tests section. Without it, it is impossible to verify with twister if the sample works.
|
1.0
|
samples: net: Sample for net/sockets/tcp doesn't have tests section - **Describe the bug**
The sample https://github.com/zephyrproject-rtos/zephyr/blob/main/samples/net/sockets/tcp/sample.yaml is wrongly defined and hence not seen by twister. It doesn't have proper "tests" section.
**To Reproduce**
```
$ ./scripts/twister -T samples/ -v -p native_posix
ZEPHYR_BASE unset, using "/home/maciej/zephyrproject/zephyr"
Renaming output directory to /home/maciej/zephyrproject/zephyr/twister-out.10
INFO - Using Ninja..
INFO - Zephyr version: zephyr-v3.3.0-129-g1fcce98d1337
INFO - Using 'zephyr' toolchain.
Traceback (most recent call last):
File "./scripts/twister", line 201, in <module>
ret = main(options)
File "/home/maciej/zephyrproject/zephyr/scripts/pylib/twister/twisterlib/twister_main.py", line 120, in main
tplan.discover()
File "/home/maciej/zephyrproject/zephyr/scripts/pylib/twister/twisterlib/testplan.py", line 122, in discover
raise TwisterRuntimeError("No test cases found at the specified location...")
twisterlib.error.TwisterRuntimeError: No test cases found at the specified location...
```
**Impact**
All samples should have a tests section. Without it, it is impossible to verify with twister if the sample works.
|
non_process
|
samples net sample for net sockets tcp doesn t have tests section describe the bug the sample is wrongly defined and hence not seen by twister it doesn t have proper tests section to reproduce scripts twister t samples v p native posix zephyr base unset using home maciej zephyrproject zephyr renaming output directory to home maciej zephyrproject zephyr twister out info using ninja info zephyr version zephyr info using zephyr toolchain traceback most recent call last file scripts twister line in ret main options file home maciej zephyrproject zephyr scripts pylib twister twisterlib twister main py line in main tplan discover file home maciej zephyrproject zephyr scripts pylib twister twisterlib testplan py line in discover raise twisterruntimeerror no test cases found at the specified location twisterlib error twisterruntimeerror no test cases found at the specified location impact all samples should have tests section without it it is impossible to verify with twister if the sample works
| 0
|
160,804
| 25,234,520,208
|
IssuesEvent
|
2022-11-14 23:03:56
|
sul-dlss/happy-heron
|
https://api.github.com/repos/sul-dlss/happy-heron
|
closed
|
User can indicate they are now setup in globus
|
design needed
|
Users who select the globus upload option but have never used globus before will need to setup a new account with globus before we can create the endpoint for them to put their files into.
These users need a way to let H2 know they have done this so we can continue with the process described in #2847 when they click "save as draft"
This could be a checkbox on the H2 edit work page for users who have selected the globus option for which we have determined they are not yet in the globus system. Checking the box and clicking "save as draft" will continue the flow as described in #2847
|
1.0
|
User can indicate they are now setup in globus - Users who select the globus upload option but have never used globus before will need to setup a new account with globus before we can create the endpoint for them to put their files into.
These users need a way to let H2 know they have done this so we can continue with the process described in #2847 when they click "save as draft"
This could be a checkbox on the H2 edit work page for users who have selected the globus option for which we have determined they are not yet in the globus system. Checking the box and clicking "save as draft" will continue the flow as described in #2847
|
non_process
|
user can indicate they are now setup in globus users who select the globus upload option but have never used globus before will need to setup a new account with globus before we can create the endpoint for them to put their files into these users need a way to let know they have done this so we can continue with the process described in when they click save as draft this could be a checkbox on the edit work page for users who have selected the globus option for for which we have determined they are not yet in the globus system checking the box and clicking save as draft will continue the flow as described in
| 0
|
10,664
| 13,454,347,753
|
IssuesEvent
|
2020-09-09 03:27:42
|
medic/cht-core
|
https://api.github.com/repos/medic/cht-core
|
closed
|
Release 3.10.0
|
Type: Internal process
|
# Planning
- [x] Create an [organisation wide project](https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc) and add this issue to it. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining.
- [x] Add all the issues to be worked on to the project. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bugs.
# Development
When development is ready to begin one of the engineers should be nominated as a Release Manager. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [x] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://github.com/medic/medic-docs/blob/master/development/update-dependencies.md). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [x] Go through all features and improvements scheduled for this release and raise cht-docs issues for product education to be written where appropriate. If in doubt, check with Max.
- [x] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release manager is to update this every week until the version is released.
# Releasing
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [x] Create a new release branch from `master` named `<major>.<minor>.x` in cht-core. Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [x] [Import translations keys](https://github.com/medic/medic-docs/blob/master/development/translations.md#adding-new-keys) into POE and notify the #translations Slack channel translate new and updated values, for example:
```
@channel I've just updated the translations in POE. These keys have been added: "<added-list>", and these keys have been updated: "<updated-list>"
```
- [x] Create a new document in the [release-notes folder](https://github.com/medic/medic/tree/master/release-notes) in `master`. Ensure all issues are in the GH Project, that they're correct labelled, and have human readable descriptions. Use [this script](https://github.com/medic/medic/blob/master/scripts/changelog-generator) to export the issues into our changelog format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient.
- [x] Create a Google Doc in the [blog posts folder](https://drive.google.com/drive/u/0/folders/0B2PTUNZFwxEvMHRWNTBjY2ZHNHc) with the draft of a blog post promoting the release based on the release notes above. Once it's ready ask Max and Kelly to review it.
- [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [x] [Export the translations](https://github.com/medic/medic-docs/blob/master/development/translations.md#exporting-changes-from-poeditor-to-github), delete empty translation files and commit to `master`. Cherry-pick the commit into the release branch.
- [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/medic/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [x] Upgrade the `demo-cht.dev` instance to this version
- [x] Follow the instructions for [releasing other products](https://github.com/medic/medic-docs/blob/master/development/releasing.md) that have been updated in this project (eg: medic-conf, medic-gateway, medic-android).
- [x] Add the release to the [Supported versions](https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions) and update the EOL date and status of previous releases.
- [x] Announce the release in #products and #cht-contributors using this template:
```
@channel *We're excited to announce the release of {{version}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the release notes for full details: {{url}}
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions
To see what's scheduled for the next releases have a read of the product roadmap: https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc
```
- [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/), under the "Product - Releases" category. You can use the previous message and omit `@channel`.
- [x] Mark this issue "done" and close the project.
|
1.0
|
Release 3.10.0 - # Planning
- [x] Create an [organisation wide project](https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc) and add this issue to it. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (egs: CouchDB, node, minimum browser versions), broken backwards compatibility in an api, or a major visual update that requires user retraining.
- [x] Add all the issues to be worked on to the project. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bugs.
# Development
When development is ready to begin one of the engineers should be nominated as a Release Manager. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [x] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://github.com/medic/medic-docs/blob/master/development/update-dependencies.md). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [x] Go through all features and improvements scheduled for this release and raise cht-docs issues for product education to be written where appropriate. If in doubt, check with Max.
- [x] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The release manager is to update this every week until the version is released.
# Releasing
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [x] Create a new release branch from `master` named `<major>.<minor>.x` in cht-core. Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [x] [Import translations keys](https://github.com/medic/medic-docs/blob/master/development/translations.md#adding-new-keys) into POE and notify the #translations Slack channel translate new and updated values, for example:
```
@channel I've just updated the translations in POE. These keys have been added: "<added-list>", and these keys have been updated: "<updated-list>"
```
- [x] Create a new document in the [release-notes folder](https://github.com/medic/medic/tree/master/release-notes) in `master`. Ensure all issues are in the GH Project, that they're correct labelled, and have human readable descriptions. Use [this script](https://github.com/medic/medic/blob/master/scripts/changelog-generator) to export the issues into our changelog format. Manually document any known migration steps and known issues. Provide description, screenshots, videos, and anything else to help communicate particularly important changes. Document any required or recommended upgrades to our other products (eg: medic-conf, medic-gateway, medic-android). Assign the PR to a) the Director of Technology, and b) an SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient.
- [x] Create a Google Doc in the [blog posts folder](https://drive.google.com/drive/u/0/folders/0B2PTUNZFwxEvMHRWNTBjY2ZHNHc) with the draft of a blog post promoting the release based on the release notes above. Once it's ready ask Max and Kelly to review it.
- [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [x] [Export the translations](https://github.com/medic/medic-docs/blob/master/development/translations.md#exporting-changes-from-poeditor-to-github), delete empty translation files and commit to `master`. Cherry-pick the commit into the release branch.
- [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/medic/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has a new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [x] Upgrade the `demo-cht.dev` instance to this version
- [x] Follow the instructions for [releasing other products](https://github.com/medic/medic-docs/blob/master/development/releasing.md) that have been updated in this project (eg: medic-conf, medic-gateway, medic-android).
- [x] Add the release to the [Supported versions](https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions) and update the EOL date and status of previous releases.
- [x] Announce the release in #products and #cht-contributors using this template:
```
@channel *We're excited to announce the release of {{version}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the release notes for full details: {{url}}
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our software support documentation: https://github.com/medic/medic-docs/blob/master/installation/supported-software.md#supported-versions
To see what's scheduled for the next releases have a read of the product roadmap: https://github.com/orgs/medic/projects?query=is%3Aopen+sort%3Aname-asc
```
- [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/), under the "Product - Releases" category. You can use the previous message and omit `@channel`.
- [x] Mark this issue "done" and close the project.
|
process
|
release planning create an and add this issue to it we use so if there are breaking changes increment the major otherwise if there are new features increment the minor otherwise increment the service pack breaking changes in our case relate to updated software requirements egs couchdb node minimum browser versions broken backwards compatibility in an api or a major visual update that requires user retraining add all the issues to be worked on to the project ideally each minor release will have one or two features a handful of improvements and plenty of bugs development when development is ready to begin one of the engineers should be nominated as a release manager they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version raise a new issue called update dependencies for with a description that links to this should be done early in the release cycle so find a volunteer to take this on and assign it to them go through all features and improvements scheduled for this release and raise cht docs issues for product education to be written where appropriate if in doubt check with max write an update in the weekly product team call agenda summarising development and acceptance testing progress and identifying any blockers the release manager is to update this every week until the version is released releasing once all issues have passed acceptance testing and have been merged into master release testing can begin create a new release branch from master named x in cht core post a message to development using this template core devs i ve just created the x release branch please be aware that any further changes intended for this release will have to be merged to master then backported thanks build a beta named beta by pushing a git tag and when ci completes successfully notify 
the qa team that it s ready for release testing into poe and notify the translations slack channel translate new and updated values for example channel i ve just updated the translations in poe these keys have been added and these keys have been updated create a new document in the in master ensure all issues are in the gh project that they re correct labelled and have human readable descriptions use to export the issues into our changelog format manually document any known migration steps and known issues provide description screenshots videos and anything else to help communicate particularly important changes document any required or recommended upgrades to our other products eg medic conf medic gateway medic android assign the pr to a the director of technology and b an sre to review and confirm the documentation on upgrade instructions and breaking changes is sufficient create a google doc in the with the draft of a blog post promoting the release based on the release notes above once it s ready ask max and kelly to review it until release testing passes make sure regressions are fixed in master cherry pick them into the release branch and release another beta delete empty translation files and commit to master cherry pick the commit into the release branch create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic upgrade the demo cht dev instance to this version follow the instructions for that have been updated in this project eg medic conf medic gateway medic android add the release to the and update the eol date and status of previous releases announce the release in products and cht contributors using this template channel we re excited to announce the release 
of version new features include key features we ve also implemented loads of other improvements and fixed a heap of bugs read the release notes for full details url following our support policy versions versions are no longer supported projects running these versions should start planning to upgrade in the near future for more details read our software support documentation to see what s scheduled for the next releases have a read of the product roadmap announce the release on the under the product releases category you can use the previous message and omit channel mark this issue done and close the project
| 1
|
22,214
| 30,763,386,741
|
IssuesEvent
|
2023-07-30 01:26:02
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
Why get processName from `/proc/(pid)/comm` should be at most 14 characters ?
|
os:linux package:process
|
**Describe the bug**
Why get processName from `/proc/(pid)/comm` should be at most 14 letters ?
Refs https://github.com/shirou/gopsutil/blob/master/process/process_linux.go#L795
**To Reproduce**
1. My python script named `admin_worker.py`, just example
```python
#!/usr/local/easyops/python/bin/python
# -*- coding: utf-8 -*-
from time import sleep

while True:
    print("process...")
    sleep(1)
```
2. Start `admin_worker.py`
```shell
$ chmod +x admin_worker.py
$ ./admin_worker.py
```
3. In Linux, running my go program
```go
// `pid` should be found by `ps` command in machine.
proce, _ := process.NewProcess(int32(pid))
name, _ := proce.Name()
fmt.Println(name)
```
I find the `name` is `/usr/local/easyops/python/bin/python`
**Expected behavior**
I expect the name is `admin_worker.py`, because i use `cat /proc/(pid)/comm` in machine, i get my expected value.
**Environment (please complete the following information):**
- [ ] Windows: [paste the result of `ver`]
- [x] Linux: [paste contents of `/etc/os-release` and the result of `uname -a`]
- [ ] Mac OS: [paste the result of `sw_vers` and `uname -a`]
- [ ] FreeBSD: [paste the result of `freebsd-version -k -r -u` and `uname -a`]
- [ ] OpenBSD: [paste the result of `uname -a`]
os-release
```
```
uname -a
```
```
**Additional context**
[Cross-compiling? Paste the command you are using to cross-compile and the result of the corresponding `go env`]
No.
|
1.0
|
Why get processName from `/proc/(pid)/comm` should be at most 14 characters ? - **Describe the bug**
Why get processName from `/proc/(pid)/comm` should be at most 14 letters ?
Refs https://github.com/shirou/gopsutil/blob/master/process/process_linux.go#L795
**To Reproduce**
1. My python script named `admin_worker.py`, just example
```python
#!/usr/local/easyops/python/bin/python
# -*- coding: utf-8 -*-
from time import sleep

while True:
    print("process...")
    sleep(1)
```
2. Start `admin_worker.py`
```shell
$ chmod +x admin_worker.py
$ ./admin_worker.py
```
3. In Linux, running my go program
```go
// `pid` should be found by `ps` command in machine.
proce, _ := process.NewProcess(int32(pid))
name, _ := proce.Name()
fmt.Println(name)
```
I find the `name` is `/usr/local/easyops/python/bin/python`
**Expected behavior**
I expect the name is `admin_worker.py`, because i use `cat /proc/(pid)/comm` in machine, i get my expected value.
**Environment (please complete the following information):**
- [ ] Windows: [paste the result of `ver`]
- [x] Linux: [paste contents of `/etc/os-release` and the result of `uname -a`]
- [ ] Mac OS: [paste the result of `sw_vers` and `uname -a`]
- [ ] FreeBSD: [paste the result of `freebsd-version -k -r -u` and `uname -a`]
- [ ] OpenBSD: [paste the result of `uname -a`]
os-release
```
```
uname -a
```
```
**Additional context**
[Cross-compiling? Paste the command you are using to cross-compile and the result of the corresponding `go env`]
No.
|
process
|
why get processname from proc pid comm should be at most characters describe the bug why get processname from proc pid comm should be at most letters refs to reproduce my python script named admin worker py just example python usr local easyops python bin python coding utf from time import sleep while true print process sleep start admin worker py shell chmod x admin worker py admin worker py in linux running my go program go pid should be found by ps command in machine proce process newprocess pid name proce name fmt println name i find the name is usr local easyops python bin python expected behavior i expect the name is admin worker py because i use cat proc pid comm in machine i get my expected value environment please complete the following information windows linux mac os paste the result of sw vers and uname a freebsd openbsd os release uname a additional context no
| 1
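The comm-length behavior discussed in the record above can be illustrated with a short sketch. This assumes the usual Linux behavior where the task name lives in a fixed-size kernel buffer (`TASK_COMM_LEN`, 16 bytes including the trailing NUL, leaving 15 visible characters); the exact cutoff gopsutil applies is precisely what the issue questions, so treat the constant here as illustrative, not authoritative:

```python
# Hypothetical sketch of why names read from /proc/<pid>/comm can be short:
# the kernel stores the task name in a fixed-size buffer (TASK_COMM_LEN = 16
# bytes including the trailing NUL), so at most 15 characters survive.
TASK_COMM_LEN = 16

def truncate_comm(name: str) -> str:
    # Mimic the kernel's truncation when storing a task name.
    return name[:TASK_COMM_LEN - 1]

# "admin_worker.py" is exactly 15 characters, so it fits untruncated,
# which is why `cat /proc/<pid>/comm` shows the full script name here.
print(truncate_comm("admin_worker.py"))
print(truncate_comm("a_much_longer_worker_name.py"))
```

Names longer than 15 characters come back clipped from `comm`, which is one reason tools fall back to `/proc/<pid>/cmdline` when they need the full name.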
|
20,303
| 26,942,174,891
|
IssuesEvent
|
2023-02-08 03:48:11
|
MicahWW/Python-Games
|
https://api.github.com/repos/MicahWW/Python-Games
|
opened
|
Communicating preference strength
|
process
|
Less of a code issue as a communication idea -- as I was writing "this is a hill I will die on" about a detail in my last commit it occurred to me that "this is a hill I will die on" is a lot of words and it might be nice to have a shorthand for "I feel very strongly about this" or "eh, I don't really care either way" or "I actively dislike my own solution, please offer suggestions/help," all of which seem to be common sentiments on this project (at least from me).
Maybe just a scale out of 10? If we put something like "intensity 1" to mean "I'm really unsure here, suggestions wanted" and "intensity 10" to mean "I will literally leave/kick you from the project if you don't get on board with this" maybe that would communicate the same thing in a lot fewer words? (I have yet to reach a 10, the quote that inspired this paragraph was like an 8, which is not really well communicated by "hill I will die on," which is another good reason for this to exist.)
The strength on this issue is "intensity 4," I would like this to be a thing but like, I'm not confident in my specific implementation and would be open to discussion.
|
1.0
|
Communicating preference strength - Less of a code issue as a communication idea -- as I was writing "this is a hill I will die on" about a detail in my last commit it occurred to me that "this is a hill I will die on" is a lot of words and it might be nice to have a shorthand for "I feel very strongly about this" or "eh, I don't really care either way" or "I actively dislike my own solution, please offer suggestions/help," all of which seem to be common sentiments on this project (at least from me).
Maybe just a scale out of 10? If we put something like "intensity 1" to mean "I'm really unsure here, suggestions wanted" and "intensity 10" to mean "I will literally leave/kick you from the project if you don't get on board with this" maybe that would communicate the same thing in a lot fewer words? (I have yet to reach a 10, the quote that inspired this paragraph was like an 8, which is not really well communicated by "hill I will die on," which is another good reason for this to exist.)
The strength on this issue is "intensity 4," I would like this to be a thing but like, I'm not confident in my specific implementation and would be open to discussion.
|
process
|
communicating preference strength less of a code issue as a communication idea as i was writing this is a hill i will die on about a detail in my last commit it occurred to me that this is a hill i will die on is a lot of words and it might be nice to have a shorthand for i feel very strongly about this or eh i don t really care either way or i actively dislike my own solution please offer suggestions help all of which seem to be common sentiments on this project at least from me maybe just a scale out of if we put something like intensity to mean i m really unsure here suggestions wanted and intensity to mean i will literally leave kick you from the project if you don t get on board with this maybe that would communicate the same thing in a lot fewer words i have yet to reach a the quote that inspired this paragraph was like an which is not really well communicated by hill i will die on which is another good reason for this to exist the strength on this issue is intensity i would like this to be a thing but like i m not confident in my specific implementation and would be open to discussion
| 1
|
10,467
| 13,244,742,668
|
IssuesEvent
|
2020-08-19 13:29:06
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Filebeat copy_fields processor can recurse, leading to crash
|
:Processors Team:Integrations [zube]: Ready bug
|
It's possible for the `copy_fields` processor to fall into infinite recursion if we point at two fields with the same root:
```
- copy_fields:
fields:
- from: message
to: message.original
```
The `from` and `to` fields will point to the same root, leading an eventual crash. It appears `copy_fields` doesn't do an _actual_ copy and just moves references. @urso discovered the root cause here.
|
1.0
|
Filebeat copy_fields processor can recurse, leading to crash - It's possible for the `copy_fields` processor to fall into infinite recursion if we point at two fields with the same root:
```
- copy_fields:
fields:
- from: message
to: message.original
```
The `from` and `to` fields will point to the same root, leading an eventual crash. It appears `copy_fields` doesn't do an _actual_ copy and just moves references. @urso discovered the root cause here.
|
process
|
filebeat copy fields processor can recurse leading to crash it s possible for the copy fields processor to fall into infinite recursion if we point at two fields with the same root copy fields fields from message to message original the from and to fields will point to the same root leading an eventual crash it appears copy fields doesn t do an actual copy and just moves references urso discovered the root cause here
| 1
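The recursion described in the `copy_fields` record can be sketched in a few lines. This is a Python analogy, not Beats' actual Go implementation: when the "copy" only moves references and the destination is a sub-field of the source's root, the event ends up containing itself, and serialization then recurses without bound (Python's `json` detects the cycle and raises instead of crashing):

```python
import json

# Safe variant: the old value is read out before the field is replaced.
event = {"message": "hello"}
event["message"] = {"original": event["message"]}
assert event["message"]["original"] == "hello"

# Buggy pattern: the "copy" keeps a reference to the event root itself,
# so the structure now contains itself and serializers recurse forever.
looped = {"message": "hello"}
looped["message"] = {"original": looped}
try:
    json.dumps(looped)
except ValueError as exc:
    print("serialization failed:", exc)
```

An actual deep copy of the source value before assignment avoids the shared reference, which matches the root cause described in the issue.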
|
189,169
| 6,794,815,195
|
IssuesEvent
|
2017-11-01 13:42:42
|
liam2/larray
|
https://api.github.com/repos/liam2/larray
|
opened
|
Extract array from CSV/Excel/HDF file line by line
|
enhancement hard priority: high
|
Without a [MultiIndex LArray](https://github.com/liam2/larray/issues/28) version, we are limited to 2D arrays.
This feature is required to allow to "[use a buffer to load data in viewer](https://github.com/larray-project/larray-editor/issues/33)".
|
1.0
|
Extract array from CSV/Excel/HDF file line by line - Without a [MultiIndex LArray](https://github.com/liam2/larray/issues/28) version, we are limited to 2D arrays.
This feature is required to allow to "[use a buffer to load data in viewer](https://github.com/larray-project/larray-editor/issues/33)".
|
non_process
|
extract array from csv excel hdf file line by line without a version we are limited to arrays this feature is required to allow to
| 0
|
929
| 3,391,393,064
|
IssuesEvent
|
2015-11-30 15:23:19
|
refugeetech/platform
|
https://api.github.com/repos/refugeetech/platform
|
opened
|
Set up Git development branch and train team on Git Flow
|
Open Process
|
All development should take place in sub branches of the development branch. The project will use a [git-flow](https://danielkummer.github.io/git-flow-cheatsheet/) convention to name branches and collaborate.
# Task
* [ ] Set up development branch
* [ ] Train team members on Git Flow
|
1.0
|
Set up Git development branch and train team on Git Flow - All development should take place in sub branches of the development branch. The project will use a [git-flow](https://danielkummer.github.io/git-flow-cheatsheet/) convention to name branches and collaborate.
# Task
* [ ] Set up development branch
* [ ] Train team members on Git Flow
|
process
|
set up git development branch and train team on git flow all development should take place in sub branches of the development branch the project will use a convention to name branches and collaborate task set up development branch train team members on git flow
| 1
|
17,164
| 4,147,565,306
|
IssuesEvent
|
2016-06-15 07:38:54
|
AIPDB/AIPDB
|
https://api.github.com/repos/AIPDB/AIPDB
|
opened
|
AIPDB®: Documentation
|
documentation
|
**Documentation of AIPDB®:**
- documentation, written in Markdown.
**Note:** the "docs" folder should be in the root directory.
|
1.0
|
AIPDB®: Documentation - **Documentation of AIPDB®:**
- documentation, written in Markdown.
**Note:** the "docs" folder should be in the root directory.
|
non_process
|
aipdb® documentation documentation of aipdb® documentation by using markdown note docs folder should be in root directory
| 0
|
192,358
| 22,215,938,201
|
IssuesEvent
|
2022-06-08 01:39:05
|
AlexRogalskiy/github-action-random-proverb
|
https://api.github.com/repos/AlexRogalskiy/github-action-random-proverb
|
closed
|
CVE-2021-32803 (High) detected in tar-6.1.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-32803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/tar/package.json,/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- editorconfig-checker-3.3.0.tgz (Root Library)
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-random-proverb/commit/6106d25aaf6baf172fcdb0654a19388df3b492a6">6106d25aaf6baf172fcdb0654a19388df3b492a6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803>CVE-2021-32803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-32803 (High) detected in tar-6.1.0.tgz - autoclosed - ## CVE-2021-32803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/tar/package.json,/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- editorconfig-checker-3.3.0.tgz (Root Library)
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-random-proverb/commit/6106d25aaf6baf172fcdb0654a19388df3b492a6">6106d25aaf6baf172fcdb0654a19388df3b492a6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803>CVE-2021-32803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tar tgz autoclosed cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file package json path to vulnerable library node modules npm node modules tar package json node modules tar package json dependency hierarchy editorconfig checker tgz root library x tar tgz vulnerable library found in head commit a href vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite vulnerability via insufficient symlink protection node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory this order of operations resulted in the directory being created and added to the node tar directory cache when a directory is present in the directory cache subsequent calls to mkdir for that directory are skipped however this is also where node tar checks for symlinks occur by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite this issue was addressed in releases and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more 
information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource
| 0
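The symlink-bypass class of bug described in the advisory above is commonly mitigated by resolving paths before extraction. Below is a minimal Python sketch of that idea under a check-before-extract model; note this is not node-tar's actual fix, and a path check alone cannot catch a symlink created mid-extraction, which is exactly the race the advisory describes:

```python
import os
import tarfile

def is_within_directory(base: str, target: str) -> bool:
    # True if `target` stays inside `base` once symlinks are resolved,
    # so a member routed through a symlinked directory fails the check.
    base = os.path.realpath(base)
    target = os.path.realpath(target)
    return target == base or target.startswith(base + os.sep)

def safe_members(tar: tarfile.TarFile, dest: str):
    # Yield only members whose resolved destination stays under `dest`;
    # anything else is silently skipped in this sketch.
    for member in tar.getmembers():
        if is_within_directory(dest, os.path.join(dest, member.name)):
            yield member
```

Since Python 3.12, `tarfile.extractall(..., filter="data")` applies comparable safeguards in the standard library itself, which is usually preferable to a hand-rolled check like this one.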
|
179,337
| 6,623,966,210
|
IssuesEvent
|
2017-09-22 09:32:55
|
crowdAI/crowdai
|
https://api.github.com/repos/crowdAI/crowdai
|
closed
|
Need to replace template text in email
|
high priority
|
Text currently says "Use this area to offer a short preview of your email's content."
Need to replace with "Your summary of activity in crowdAI."
|
1.0
|
Need to replace template text in email - Text currently says "Use this area to offer a short preview of your email's content."
Need to replace with "Your summary of activity in crowdAI."
|
non_process
|
need to replace template text in email text currently says use this area to offer a short preview of your email s content need to replace with your summary of activity in crowdai
| 0
|
14,955
| 18,435,279,479
|
IssuesEvent
|
2021-10-14 12:23:35
|
GSA/EDX
|
https://api.github.com/repos/GSA/EDX
|
closed
|
Ongoing lessons learned/best practices discussions with FAS DX staff
|
process professional-development
|
Gail and I are meeting with Ryan Williams and Kelly Deao from time to time to learn about FAS DX best practices and lessons learned around change management and comms - "work agile, communicate waterfall"
Goal is to apply some of their best practices where we see it fits for EDX work.
What we have learned is:
- They struggled with the challenges we do around working agile and communicating waterfall.
- Do the best you can do with what you have and work with key stakeholders to determine the work and the communication.
- When communicating, they found that they would draft a comms and by the time the comms was ready to send out, it needed severely updating cause the work had evolved so much and so fast!
- OVERCOMMUNICATION is key! Communicate UP AND DOWN. Track who’s in your meetings and make sure they communicate up and down (to the best of your ability setting them up for success)! "Can’t have crickets and not communicate, while at same time you have to rely upon people to show up and mitigate that risk"
- Lessons learned: they engaged the Advisory Board too early and thus people felt they weren’t being utilized or we didn’t know what we were doing and the meetings were a waste of time. They also do a sliding scale rating on how they did in leading the meetings or not - that way you have insight on where you’re at feedback loops - after every Advisory Board meeting (2x/quarter)
- As much as you repeat yourself, we found that we have to tell them over and over again to get the message across
- Record your meetings, esp any demos! (We didn’t do closed captioning - we should have done it but we didn’t.)
- Their change resistance: This is MY tool and I don’t want people to take it over! Ownership (not stewardship). We try to direct them to the bigger picture of doing it for the enterprise, our customers, our suppliers, etc.
4/14/21: We met with Ryan
9/30/21: We met with Kelly
Next steps:
As a result of the latest meeting with Kelly, we will also meet with the sam.gov team (Christy Hermansen and Kim Goldman) to hear their lessons learned and promising practices, per Kelly's recommendation since sam.gov team helped them.
|
1.0
|
Ongoing lessons learned/best practices discussions with FAS DX staff - Gail and I are meeting with Ryan Williams and Kelly Deao from time to time to learn about FAS DX best practices and lessons learned around change management and comms - "work agile, communicate waterfall"
Goal is to apply some of their best practices where we see it fits for EDX work.
What we have learned is:
- They struggled with the challenges we do around working agile and communicating waterfall.
- Do the best you can do with what you have and work with key stakeholders to determine the work and the communication.
- When communicating, they found that they would draft a comms and by the time the comms was ready to send out, it needed severely updating cause the work had evolved so much and so fast!
- OVERCOMMUNICATION is key! Communicate UP AND DOWN. Track who’s in your meetings and make sure they communicate up and down (to the best of your ability setting them up for success)! "Can’t have crickets and not communicate, while at same time you have to rely upon people to show up and mitigate that risk"
- Lessons learned: they engaged the Advisory Board too early and thus people felt they weren’t being utilized or we didn’t know what we were doing and the meetings were a waste of time. They also do a sliding scale rating on how they did in leading the meetings or not - that way you have insight on where you’re at feedback loops - after every Advisory Board meeting (2x/quarter)
- As much as you repeat yourself, we found that we have to tell them over and over again to get the message across
- Record your meetings, esp any demos! (We didn’t do closed captioning - we should have done it but we didn’t.)
- Their change resistance: This is MY tool and I don’t want people to take it over! Ownership (not stewardship). We try to direct them to the bigger picture of doing it for the enterprise, our customers, our suppliers, etc.
4/14/21: We met with Ryan
9/30/21: We met with Kelly
Next steps:
As a result of the latest meeting with Kelly, we will also meet with the sam.gov team (Christy Hermansen and Kim Goldman) to hear their lessons learned and promising practices, per Kelly's recommendation since sam.gov team helped them.
|
process
|
ongoing lessons learned best practices discussions with fas dx staff gail and i are meeting with ryan williams and kelly deao from time to time to learn about fas dx best practices and lessons learned around change management and comms work agile communicate waterfall goal is to apply some of their best practices where we see it fits for edx work what we have learned is they struggled with the challenges we do around working agile and communicating waterfall do the best you can do with what you have and work with key stakeholders to determine the work and the communication when communicating they found that they would draft a comms and by the time the comms was ready to send out it needed severely updating cause the work had evolved so much and so fast overcommunication is key communicate up and down track who’s in your meetings and make sure they communicate up and down to the best of your ability setting them up for success can’t have crickets and not communicate while at same time you have to rely upon people to show up and mitigate that risk lessons learned they engaged the advisory board too early and thus people felt they weren’t being utilized or we didn’t know what we were doing and the meetings were a waste of time they also do a sliding scale rating on how they did in leading the meetings or not that way you have insight on where you’re at feedback loops after every advisory board meeting quarter as much as you repeat yourself we found that we have to tell them over and over again to get the message across record your meetings esp any demos we didn’t do closed captioning we should have done it but we didn’t their change resistance this is my tool and i don’t want people to take it over ownership not stewardship we try to direct them to the bigger picture of doing it for the enterprise our customers our suppliers etc we met with ryan we met with kelly next steps as a result of the latest meeting with kelly we will also meet with the sam gov team christy hermansen and kim goldman to hear their lessons learned and promising practices per kelly s recommendation since sam gov team helped them
| 1
|
17,296
| 23,111,109,686
|
IssuesEvent
|
2022-07-27 13:07:06
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Prepare to release version v22.7.1
|
issue-processing-state-06
|
Update the version number in `__init__.py` for releasing the latest version of Quark.
In this version, the following changes will be included.
+ #366
+ #368
+ #370
+ #371
+ #372
+ #373
|
1.0
|
Prepare to release version v22.7.1 - Update the version number in `__init__.py` for releasing the latest version of Quark.
In this version, the following changes will be included.
+ #366
+ #368
+ #370
+ #371
+ #372
+ #373
|
process
|
prepare to release version update the version number in init py for releasing the latest version of quark in this version the following changes will be included
| 1
|
18,245
| 24,323,960,940
|
IssuesEvent
|
2022-09-30 13:18:21
|
km4ack/patmenu2
|
https://api.github.com/repos/km4ack/patmenu2
|
closed
|
Allow Pat Menu to close without closing modems
|
enhancement in process
|
Can Pat Menu be used as a launcher app and allowed to close after the menus are started? Take the packet modem for example: Once direwolf, kissattach, rig control, etc are started, allow Pat Menu to be closed without closing the processes that were started?
|
1.0
|
Allow Pat Menu to close without closing modems - Can Pat Menu be used as a launcher app and allowed to close after the menus are started? Take the packet modem for example: Once direwolf, kissattach, rig control, etc are started, allow Pat Menu to be closed without closing the processes that were started?
|
process
|
allow pat menu to close without closing modems can pat menu be used as a launcher app and allowed to close after the menus are started take the packet modem for example once direwolf kissattach rig control etc are started allow pat menu to be closed without closing the processes that were started
| 1
|
17,519
| 23,329,287,244
|
IssuesEvent
|
2022-08-09 02:18:43
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[Enhancement][FLINK-28084] Pulsar Connector UnorderedReader should disable retry and delete reconsume logic
|
compute/data-processing type/enhancement
|
UnroderdPulsarSourceReader currently calls reconsume, but this feature relys on retry topic. But if retry topic is enabled the initial search will only support earliest and lates (because it will be a multiconsumer impl). We plan to delete the reconsume logic to get rid of dependency on retry topic and should disable retry.
|
1.0
|
[Enhancement][FLINK-28084] Pulsar Connector UnorderedReader should disable retry and delete reconsume logic - UnroderdPulsarSourceReader currently calls reconsume, but this feature relys on retry topic. But if retry topic is enabled the initial search will only support earliest and lates (because it will be a multiconsumer impl). We plan to delete the reconsume logic to get rid of dependency on retry topic and should disable retry.
|
process
|
pulsar connector unorderedreader should disable retry and delete reconsume logic unroderdpulsarsourcereader currently calls reconsume but this feature relys on retry topic but if retry topic is enabled the initial search will only support earliest and lates because it will be a multiconsumer impl we plan to delete the reconsume logic to get rid of dependency on retry topic and should disable retry
| 1
|
442,938
| 12,753,731,347
|
IssuesEvent
|
2020-06-28 00:19:55
|
projectacrn/acrn-hypervisor
|
https://api.github.com/repos/projectacrn/acrn-hypervisor
|
closed
|
Top-level makefile cleans before building
|
priority: medium status: closed type: bug
|
It's impossible to do this:
```
make -C hypervisor menuconfig
make hypervisor
```
Because the first thing `make hypervisor` does is a clean, which removes the config file.
|
1.0
|
Top-level makefile cleans before building - It's impossible to do this:
```
make -C hypervisor menuconfig
make hypervisor
```
Because the first thing `make hypervisor` does is a clean, which removes the config file.
|
non_process
|
top level makefile cleans before building it s impossible to do this make c hypervisor menuconfig make hypervisor because the first thing make hypervisor does is a clean which removes the config file
| 0
|
329,545
| 28,281,654,334
|
IssuesEvent
|
2023-04-08 04:11:19
|
AY2223S2-CS2103T-F12-1/tp
|
https://api.github.com/repos/AY2223S2-CS2103T-F12-1/tp
|
closed
|
[PE-D][Tester B] Behaviour of 'edit-doc' does not match UG
|
priority.High severity.High type.Docs Tester B
|
From the user guide, the only fields that can be edited are name and phone number. However, the app apparently allows changes to email, specialty, years of experience and tags.

<!--session: 1680242167410-baa1a437-ebfc-4bee-8e68-edd39b9d65b9--><!--Version: Web v3.4.7-->
-------------
Labels: `severity.Low` `type.FeatureFlaw`
original: Yaladah/ped#1
|
1.0
|
[PE-D][Tester B] Behaviour of 'edit-doc' does not match UG - From the user guide, the only fields that can be edited are name and phone number. However, the app apparently allows changes to email, specialty, years of experience and tags.

<!--session: 1680242167410-baa1a437-ebfc-4bee-8e68-edd39b9d65b9--><!--Version: Web v3.4.7-->
-------------
Labels: `severity.Low` `type.FeatureFlaw`
original: Yaladah/ped#1
|
non_process
|
behaviour of edit doc does not match ug from the user guide the only fields that can be edited are name and phone number however the app apparently allows changes to email specialty years of experience and tags labels severity low type featureflaw original yaladah ped
| 0
|
5,076
| 3,145,203,834
|
IssuesEvent
|
2015-09-14 16:51:13
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Package is not generated when an exception is thrown
|
C: Code Generation P: Urgent R: Fixed T: Defect
|
Exceptions that are thrown while generating objects should not impair the object's generation, or hinder it. Yet the following logic fails to catch such exceptions:
```java
@Override
public final List<AttributeDefinition> getConstants() {
if (constants == null) {
constants = new ArrayList<AttributeDefinition>();
try {
constants = getConstants0();
}
catch (SQLException e) {
log.error("Error while initialising package", e);
}
}
return constants;
}
```
The `getConstants0()` method doesn't throw a `SQLException`, but a jOOQ `DataAccessException`. This means that no exception is caught and the package generation fails.
|
1.0
|
Package is not generated when an exception is thrown - Exceptions that are thrown while generating objects should not impair the object's generation, or hinder it. Yet the following logic fails to catch such exceptions:
```java
@Override
public final List<AttributeDefinition> getConstants() {
if (constants == null) {
constants = new ArrayList<AttributeDefinition>();
try {
constants = getConstants0();
}
catch (SQLException e) {
log.error("Error while initialising package", e);
}
}
return constants;
}
```
The `getConstants0()` method doesn't throw a `SQLException`, but a jOOQ `DataAccessException`. This means that no exception is caught and the package generation fails.
|
non_process
|
package is not generated when an exception is thrown exceptions that are thrown while generating objects should not impair the object s generation or hinder it yet the following logic fails to catch such exceptions java override public final list getconstants if constants null constants new arraylist try constants catch sqlexception e log error error while initialising package e return constants the method doesn t throw a sqlexception but a jooq dataaccessexception this means that no exception is caught and the package generation fails
| 0
|
157,933
| 13,724,370,322
|
IssuesEvent
|
2020-10-03 13:57:52
|
damymetzke/node-build-util
|
https://api.github.com/repos/damymetzke/node-build-util
|
closed
|
add project synopsis
|
documentation
|
A brief overview of project.
This includes:
- Project goals
- What the project contains
- When and why to use the project
|
1.0
|
add project synopsis - A brief overview of project.
This includes:
- Project goals
- What the project contains
- When and why to use the project
|
non_process
|
add project synopsis a brief overview of project this includes project goals what the project contains when and why to use the project
| 0
|
578,562
| 17,147,938,933
|
IssuesEvent
|
2021-07-13 16:36:42
|
guardicore/monkey
|
https://api.github.com/repos/guardicore/monkey
|
closed
|
Basic ransomware reporting
|
Complexity: High Feature Priority: High sp/5
|
# Description
As a blue team member, I want a concise report that indicates whether or not the ransomware payload was successful, so that I can have a clear understanding of the risks that ransomware pose to my network.
# Acceptance Criteria
- A new ransomware report tab appears in the Monkey Island reporting page if the ransomware payload was run.
- Statistics showing:
- The # of machines successfully exploited vs attempted
- The # for each exploiter
- The ransomware report contains a table showing:
- Which machines were compromised.
- Which mechanism/exploit was used to propagate to that machine.
- The # of files that were successfully encrypted.
# Tasks
- [x] Add a new reporting tab (0d) - @shreyamalviya
- [x] Don't display reporting tab if no encryption/readme enabled
- [x] Process telemetry and generate
- [x] Statistics (monkey_island/cc/models/edge.py). (0d) - @shreyamalviya
- [x] Data for table (0d) - @VakarisZ
- [x] Provide an API endpoint that can be queried by the UI to retrieve the report details (0d) - @shreyamalviya
- [x] Display statistics information in a statistics component (0d) - @shreyamalviya
- [x] Display ransomware encryption information in a table component (0d) - @VakarisZ
|
1.0
|
Basic ransomware reporting - # Description
As a blue team member, I want a concise report that indicates whether or not the ransomware payload was successful, so that I can have a clear understanding of the risks that ransomware pose to my network.
# Acceptance Criteria
- A new ransomware report tab appears in the Monkey Island reporting page if the ransomware payload was run.
- Statistics showing:
- The # of machines successfully exploited vs attempted
- The # for each exploiter
- The ransomware report contains a table showing:
- Which machines were compromised.
- Which mechanism/exploit was used to propagate to that machine.
- The # of files that were successfully encrypted.
# Tasks
- [x] Add a new reporting tab (0d) - @shreyamalviya
- [x] Don't display reporting tab if no encryption/readme enabled
- [x] Process telemetry and generate
- [x] Statistics (monkey_island/cc/models/edge.py). (0d) - @shreyamalviya
- [x] Data for table (0d) - @VakarisZ
- [x] Provide an API endpoint that can be queried by the UI to retrieve the report details (0d) - @shreyamalviya
- [x] Display statistics information in a statistics component (0d) - @shreyamalviya
- [x] Display ransomware encryption information in a table component (0d) - @VakarisZ
|
non_process
|
basic ransomware reporting description as a blue team member i want a concise report that indicates whether or not the ransomware payload was successful so that i can have a clear understanding of the risks that ransomware pose to my network acceptance criteria a new ransomware report tab appears in the monkey island reporting page if the ransomware payload was run statistics showing the of machines successfully exploited vs attempted the for each exploiter the ransomware report contains a table showing which machines were compromised which mechanism exploit was used to propagate to that machine the of files that were successfully encrypted tasks add a new reporting tab shreyamalviya don t display reporting tab if no encryption readme enabled process telemetry and generate statistics monkey island cc models edge py shreyamalviya data for table vakarisz provide an api endpoint that can be queried by the ui to retrieve the report details shreyamalviya display statistics information in a statistics component shreyamalviya display ransomware encryption information in a table component vakarisz
| 0
|
11,787
| 14,617,520,055
|
IssuesEvent
|
2020-12-22 14:54:19
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
I am confused by the discussion of how to reference variables defined in one job in another job
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement ready-to-doc
|
In the discussion of [libraries and variables](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-job), there is an example of how to reference a variable defined in one job in a subsequent job. The example is:
````jobs:
- job: A
steps:
- task: MyTask@1 # this step generates the output variable
name: ProduceVar # because we're going to depend on it, we need to name the step
- job: B
dependsOn: A
variables:
# map the output variable from A into this job
varFromA: $[ dependencies.A.outputs['ProduceVar.MyVar'] ]
steps:
- script: echo $(varFromA) # this step uses the mapped-in variable`
````
I see where `ProduceVar` comes from. I see where `A` comes from. `varFromA` makes sense to me.
But where does `MyVar` come from? Is `dependencies` a keyword or an identifier. If an identifier, then what does it identify?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
I am confused by the discussion of how to reference variables defined in one job in another job - In the discussion of [libraries and variables](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#use-outputs-in-a-different-job), there is an example of how to reference a variable defined in one job in a subsequent job. The example is:
````jobs:
- job: A
steps:
- task: MyTask@1 # this step generates the output variable
name: ProduceVar # because we're going to depend on it, we need to name the step
- job: B
dependsOn: A
variables:
# map the output variable from A into this job
varFromA: $[ dependencies.A.outputs['ProduceVar.MyVar'] ]
steps:
- script: echo $(varFromA) # this step uses the mapped-in variable`
````
I see where `ProduceVar` comes from. I see where `A` comes from. `varFromA` makes sense to me.
But where does `MyVar` come from? Is `dependencies` a keyword or an identifier. If an identifier, then what does it identify?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
i am confused by the discussion of how to reference variables defined in one job in another job in the discussion of there is an example of how to reference a variable defined in one job in a subsequent job the example is jobs job a steps task mytask this step generates the output variable name producevar because we re going to depend on it we need to name the step job b dependson a variables map the output variable from a into this job varfroma steps script echo varfroma this step uses the mapped in variable i see where producevar comes from i see where a comes from varfroma makes sense to me but where does myvar come from is dependencies a keyword or an identifier if an identifier then what does it identify document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
353,830
| 10,559,394,099
|
IssuesEvent
|
2019-10-04 11:28:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
support.mozilla.org - see bug description
|
browser-firefox-mobile engine-gecko priority-important
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 4.4.2; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://support.mozilla.org/1/mobile/68.2/Android/en-US/mobile-help
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 4.4.2
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: my site is not is not working
**Steps to Reproduce**:
I go on YouTude and I load it it say that this site is in trusted
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190923132102</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
support.mozilla.org - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 4.4.2; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://support.mozilla.org/1/mobile/68.2/Android/en-US/mobile-help
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 4.4.2
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: my site is not is not working
**Steps to Reproduce**:
I go on YouTude and I load it it say that this site is in trusted
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190923132102</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
support mozilla org see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description my site is not is not working steps to reproduce i go on youtude and i load it it say that this site is in trusted browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
12,632
| 15,016,312,402
|
IssuesEvent
|
2021-02-01 09:25:48
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
Deployed Solution (Blog) does not appear under My Workloads
|
process_duplicate type_bug
|
Deployed a Blog and after back in Marketplace Dashboard , when i click under Blog > My Workloads , i am unable to see my Deployed Solution here :

Though it appears under "Deployed Solutions" :

|
1.0
|
Deployed Solution (Blog) does not appear under My Workloads - Deployed a Blog and after back in Marketplace Dashboard , when i click under Blog > My Workloads , i am unable to see my Deployed Solution here :

Though it appears under "Deployed Solutions" :

|
process
|
deployed solution blog does not appear under my workloads deployed a blog and after back in marketplace dashboard when i click under blog my workloads i am unable to see my deployed solution here though it appears under deployed solutions
| 1
|
8,307
| 11,463,791,322
|
IssuesEvent
|
2020-02-07 16:40:39
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Custom materials can't be edited after closing app
|
Process Heating bug
|
Cannot edit or even delete custom material after closing app
If you click edit - it brings up a blank form
if you try to delete, the delete button in the popup won't activate (you can delete gas fuels)
|
1.0
|
Custom materials can't be edited after closing app - Cannot edit or even delete custom material after closing app
If you click edit - it brings up a blank form
if you try to delete, the delete button in the popup won't activate (you can delete gas fuels)
|
process
|
custom materials can t be edited after closing app cannot edit or even delete custom material after closing app if you click edit it brings up a blank form if you try to delete the delete button in the popup won t activate you can delete gas fuels
| 1
|
959
| 3,419,134,276
|
IssuesEvent
|
2015-12-08 07:59:18
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Дніпропетровськ ГоловАПУ - Інформаційна довідка з містобудівного кадастру
|
In process of testing
|
Може надаватись фізичним, юридичним особам та ФОП у строк до 10 днів.
Може містити:
Викопіювання з генерального плану міста
Викопіювання з плану зонування
Схему визначення червоних ліній
Схему щодо перспектив розвитку інженерної інфраструктури міста
Кадастровий план
Зелені зони, що не підлягають забудові
Надається ГоловАПУ, є домовленість про запуск
|
1.0
|
Дніпропетровськ ГоловАПУ - Інформаційна довідка з містобудівного кадастру - Може надаватись фізичним, юридичним особам та ФОП у строк до 10 днів.
Може містити:
Викопіювання з генерального плану міста
Викопіювання з плану зонування
Схему визначення червоних ліній
Схему щодо перспектив розвитку інженерної інфраструктури міста
Кадастровий план
Зелені зони, що не підлягають забудові
Надається ГоловАПУ, є домовленість про запуск
|
process
|
дніпропетровськ головапу інформаційна довідка з містобудівного кадастру може надаватись фізичним юридичним особам та фоп у строк до днів може містити викопіювання з генерального плану міста викопіювання з плану зонування схему визначення червоних ліній схему щодо перспектив розвитку інженерної інфраструктури міста кадастровий план зелені зони що не підлягають забудові надається головапу є домовленість про запуск
| 1
|
107,693
| 4,313,906,898
|
IssuesEvent
|
2016-07-22 12:36:27
|
wp-crm/wp-crm
|
https://api.github.com/repos/wp-crm/wp-crm
|
closed
|
Uneditable fields.
|
core bug high priority workflow/review
|
Makes attribute non changable, once a value has been entered.
But the issue that when you mark any attribute Uneditable there won't be ability to fill them in new users


|
1.0
|
Uneditable fields. - Makes attribute non changable, once a value has been entered.
But the issue that when you mark any attribute Uneditable there won't be ability to fill them in new users


|
non_process
|
uneditable fields makes attribute non changable once a value has been entered but the issue that when you mark any attribute uneditable there won t be ability to fill them in new users
| 0
|
3,660
| 6,694,646,313
|
IssuesEvent
|
2017-10-10 03:24:47
|
york-region-tpss/stp
|
https://api.github.com/repos/york-region-tpss/stp
|
opened
|
Watering Assignment - Watering Tree Type Breakdown
|
enhancement process workflow
|
Further break down the tree types which need watering to be broadleaves, conifers, and other trees.
workflow en
|
1.0
|
Watering Assignment - Watering Tree Type Breakdown - Further break down the tree types which need watering to be broadleaves, conifers, and other trees.
workflow en
|
process
|
watering assignment watering tree type breakdown further break down the tree types which need watering to be broadleaves conifers and other trees workflow en
| 1
|
39,919
| 6,782,112,009
|
IssuesEvent
|
2017-10-30 06:09:55
|
middleman/middleman-asciidoc
|
https://api.github.com/repos/middleman/middleman-asciidoc
|
closed
|
Rewrite file extension .adoc in relative links to .html
|
documentation enhancement
|
`link:doc/linux.adoc[Linux]` is converted to `<a href="doc/linux.adoc">Linux</a>`, but the actual target file is `doc/linux.html`, not `doc/linux.adoc`.
|
1.0
|
Rewrite file extension .adoc in relative links to .html - `link:doc/linux.adoc[Linux]` is converted to `<a href="doc/linux.adoc">Linux</a>`, but the actual target file is `doc/linux.html`, not `doc/linux.adoc`.
|
non_process
|
rewrite file extension adoc in relative links to html link doc linux adoc is converted to linux but the actual target file is doc linux html not doc linux adoc
| 0
|
97,805
| 16,245,163,115
|
IssuesEvent
|
2021-05-07 13:58:05
|
TIBCOSoftware/TCSTK-custom-form-app-plugin
|
https://api.github.com/repos/TIBCOSoftware/TCSTK-custom-form-app-plugin
|
closed
|
CVE-2021-23382 (Medium) detected in postcss-7.0.32.tgz, postcss-7.0.21.tgz
|
security vulnerability
|
## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.32.tgz</b>, <b>postcss-7.0.21.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.32.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz</a></p>
<p>Path to dependency file: TCSTK-custom-form-app-plugin/package.json</p>
<p>Path to vulnerable library: TCSTK-custom-form-app-plugin/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- :x: **postcss-7.0.32.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: TCSTK-custom-form-app-plugin/package.json</p>
<p>Path to vulnerable library: TCSTK-custom-form-app-plugin/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.32","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;postcss:7.0.32","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.21","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;resolve-url-loader:3.1.2;postcss:7.0.21","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23382 (Medium) detected in postcss-7.0.32.tgz, postcss-7.0.21.tgz - ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.32.tgz</b>, <b>postcss-7.0.21.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.32.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz</a></p>
<p>Path to dependency file: TCSTK-custom-form-app-plugin/package.json</p>
<p>Path to vulnerable library: TCSTK-custom-form-app-plugin/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- :x: **postcss-7.0.32.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: TCSTK-custom-form-app-plugin/package.json</p>
<p>Path to vulnerable library: TCSTK-custom-form-app-plugin/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 is vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.32","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;postcss:7.0.32","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.21","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;resolve-url-loader:3.1.2;postcss:7.0.21","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in postcss tgz postcss tgz cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file tcstk custom form app plugin package json path to vulnerable library tcstk custom form app plugin node modules postcss package json dependency hierarchy build angular tgz root library x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file tcstk custom form app plugin package json path to vulnerable library tcstk custom form app plugin node modules resolve url loader node modules postcss package json dependency hierarchy build angular tgz root library resolve url loader tgz x postcss tgz vulnerable library found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree angular devkit build angular postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree angular devkit build angular resolve url loader postcss isminimumfixversionavailable true minimumfixversion postcss basebranches 
vulnerabilityidentifier cve vulnerabilitydetails the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl vulnerabilityurl
| 0
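The ReDoS rows above flag postcss's annotation regex for unbounded backtracking. As an illustration only (this is a hypothetical mitigation sketch, not postcss's actual 8.2.13 patch, and the function name is invented), the trailing sourceMappingURL annotation can be extracted with a linear-time string scan that avoids a backtracking-prone regex entirely:

```python
from typing import Optional

def get_annotation_url(css: str) -> Optional[str]:
    """Linear-time scan for a trailing /*# sourceMappingURL=... */ comment.

    Hypothetical mitigation sketch: sidesteps the backtracking-prone
    annotation regex described in CVE-2021-23382 by using plain string
    searches, which cannot exhibit ReDoS behavior.
    """
    marker = "sourceMappingURL="
    start = css.rfind("/*")
    end = css.rfind("*/")
    if start == -1 or end <= start:
        return None  # no well-formed trailing comment
    comment = css[start + 2 : end]
    idx = comment.find(marker)
    if idx == -1:
        return None  # comment present but carries no annotation
    return comment[idx + len(marker):].strip()
```

Every step is a single `find`/slice over the input, so the worst case stays linear in the stylesheet length regardless of how much whitespace follows the comment opener.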
|
19,378
| 25,509,977,041
|
IssuesEvent
|
2022-11-28 12:23:08
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[IDP] [PM] [MFA Disabled] Submit button is disabled in the following scenario
|
Bug P2 Participant manager Process: Fixed Process: Tested dev
|
**Pre-condition:** mfa should be disabled in the PM
**Steps:**
1. Login to PM
2. Navigate to admins section
3. Add admin in the application with phone number in the following format +919999999999
4. Click on the account creation link of added admin
5. Complete all the fields
6. Click on 'Submit' button and Verify
**AR:** Submit button is disabled in the following scenario
**ER:** Submit button should not be disabled in the following scenario
[screen-capture (13).webm](https://user-images.githubusercontent.com/86007179/203272936-f49b56f2-49e6-49a4-a7cf-06b480b47b86.webm)
|
2.0
|
[IDP] [PM] [MFA Disabled] Submit button is disabled in the following scenario - **Pre-condition:** mfa should be disabled in the PM
**Steps:**
1. Login to PM
2. Navigate to admins section
3. Add admin in the application with phone number in the following format +919999999999
4. Click on the account creation link of added admin
5. Complete all the fields
6. Click on 'Submit' button and Verify
**AR:** Submit button is disabled in the following scenario
**ER:** Submit button should not be disabled in the following scenario
[screen-capture (13).webm](https://user-images.githubusercontent.com/86007179/203272936-f49b56f2-49e6-49a4-a7cf-06b480b47b86.webm)
|
process
|
submit button is disabled in the following scenario pre condition mfa should be disabled in the pm steps login to pm navigate to admins section add admin in the application with phone number in the following format click on the account creation link of added admin complete all the fields click on submit button and verify ar submit button is disabled in the following scenario er submit button should not be disabled in the following scenario
| 1
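The admin's phone number in the row above (`+919999999999`) follows the E.164 international format. A hedged sketch of the shape check a form might apply (illustrative only; this is not the Participant Manager's actual validation rule, and the names are invented):

```python
import re

# Illustrative E.164 shape check: a leading '+', a non-zero country-code
# digit, then 1-14 further digits (15 digits maximum in total).
E164_RE = re.compile(r"\+[1-9]\d{1,14}")

def is_e164(number: str) -> bool:
    """Return True if the string looks like an E.164 phone number."""
    return E164_RE.fullmatch(number) is not None
```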
|
420,006
| 12,231,368,850
|
IssuesEvent
|
2020-05-04 07:37:07
|
acidanthera/bugtracker
|
https://api.github.com/repos/acidanthera/bugtracker
|
closed
|
2020.05 release planning
|
priority:normal project:general
|
- [x] OpenCore 0.5.8
- [x] MacInfoPkg 2.1.2
- [x] AppleSupportPkg 2.1.7
- [x] Lilu 1.4.4
- [x] AppleALC 1.4.9
- [x] WhateverGreen 1.3.9
- [x] HibernationFixup 1.3.3
- [x] VoodooInput 1.0.5
- [x] VoodooPS2 2.1.4
- [x] VirtualSMC 1.1.3
- [x] RTCMemoryFixup 1.0.5
- [x] HibernationFixup 1.3.3
- [x] BrcmPatchRAM 2.5.3
- [x] AirportBrcmFixup 2.0.7
Mainly a bugfix release; we want to stabilise more, as the previous month was a bit rushed due to 10.15.4.
|
1.0
|
2020.05 release planning - - [x] OpenCore 0.5.8
- [x] MacInfoPkg 2.1.2
- [x] AppleSupportPkg 2.1.7
- [x] Lilu 1.4.4
- [x] AppleALC 1.4.9
- [x] WhateverGreen 1.3.9
- [x] HibernationFixup 1.3.3
- [x] VoodooInput 1.0.5
- [x] VoodooPS2 2.1.4
- [x] VirtualSMC 1.1.3
- [x] RTCMemoryFixup 1.0.5
- [x] HibernationFixup 1.3.3
- [x] BrcmPatchRAM 2.5.3
- [x] AirportBrcmFixup 2.0.7
Mainly a bugfix release; we want to stabilise more, as the previous month was a bit rushed due to 10.15.4.
|
non_process
|
release planning opencore macinfopkg applesupportpkg lilu applealc whatevergreen hibernationfixup voodooinput virtualsmc rtcmemoryfixup hibernationfixup brcmpatchram airportbrcmfixup mainly a bugfix release want us to stabilise more as the previous month was a bit rushed due to
| 0
|
2,605
| 5,357,117,616
|
IssuesEvent
|
2017-02-20 17:24:38
|
ElliotAOram/GhostPyramid
|
https://api.github.com/repos/ElliotAOram/GhostPyramid
|
closed
|
Feature 1: Capture video feed from external camera
|
Image Processing
|
Add functionality to capture the output from a camera. This should work with both the internal webcam and an external webcam (The test should be carried out using the internal webcam as tests will not normally include external webcams at this point in time)
---
* Entry tasks:
* [ ] Create video processing application
* [ ] Create unit test for video processing application
* Design feature:
* [ ] Consult overall model and decide if change is required
* Build by feature:
* [ ] Write tests for camera feed feature
* [ ] Write code to pass tests
* [ ] Refactor where required
|
1.0
|
Feature 1: Capture video feed from external camera - Add functionality to capture the output from a camera. This should work with both the internal webcam and an external webcam (The test should be carried out using the internal webcam as tests will not normally include external webcams at this point in time)
---
* Entry tasks:
* [ ] Create video processing application
* [ ] Create unit test for video processing application
* Design feature:
* [ ] Consult overall model and decide if change is required
* Build by feature:
* [ ] Write tests for camera feed feature
* [ ] Write code to pass tests
* [ ] Refactor where required
|
process
|
feature capture video feed from external camera add functionality to capture the output from a camera this should work with both the internal webcam and an external webcam the test should be carried out using the internal webcam as tests will not normally include external webcams at this point in time entry tasks create video processing application create unit test for video processing application design feature consult overall model and decide if change is required build by feature write tests for camera feed feature write code to pass tests refactor where required
| 1
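The capture feature above can be unit-tested without a physical webcam by programming against a minimal frame-source interface, which addresses the issue's note that tests cannot rely on external cameras. A hedged sketch (the real application would presumably wrap something like OpenCV's `VideoCapture`, whose `read()` method happens to return an `(ok, frame)` pair; all names here are illustrative):

```python
from typing import Iterator, Protocol

class FrameSource(Protocol):
    # Minimal frame-source interface: read() returns (ok, frame).
    # OpenCV's VideoCapture.read() matches this shape, but any source
    # with the same signature works, which is what makes testing easy.
    def read(self) -> tuple: ...

def capture_frames(source: FrameSource, max_frames: int) -> Iterator[object]:
    """Yield up to max_frames frames, stopping early once the source fails."""
    for _ in range(max_frames):
        ok, frame = source.read()
        if not ok:
            break
        yield frame

class StubCamera:
    """Test double standing in for a webcam: produces three frames, then fails."""
    def __init__(self) -> None:
        self.calls = 0
    def read(self) -> tuple:
        self.calls += 1
        return (self.calls <= 3, self.calls)
```

In tests the stub is swapped in for the real camera, so the capture loop's behavior (frame count, early stop on failure) is verified entirely in-process.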
|
103,917
| 16,613,241,107
|
IssuesEvent
|
2021-06-02 13:57:08
|
rammatzkvosky/789
|
https://api.github.com/repos/rammatzkvosky/789
|
opened
|
CVE-2020-15586 (Medium) detected in gogo1.12.6
|
security vulnerability
|
## CVE-2020-15586 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gogo1.12.6</b></p></summary>
<p>
<p>The Go programming language</p>
<p>Library home page: <a href=https://github.com/golang/go.git>https://github.com/golang/go.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/789/commit/a94ab1af7b954e06163acb325d4b035831f88835">a94ab1af7b954e06163acb325d4b035831f88835</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>canner/goroot/src/net/http/server.go</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>canner/goroot/src/net/http/server.go</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Go before 1.13.13 and 1.14.x before 1.14.5 have a data race in some net/http servers, as demonstrated by the httputil.ReverseProxy Handler, because it reads a request body and writes a response at the same time.
<p>Publish Date: 2020-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15586>CVE-2020-15586</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15586">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15586</a></p>
<p>Release Date: 2020-07-17</p>
<p>Fix Resolution: 1.13.13,1.14.5</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-15586 (Medium) detected in gogo1.12.6 - ## CVE-2020-15586 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gogo1.12.6</b></p></summary>
<p>
<p>The Go programming language</p>
<p>Library home page: <a href=https://github.com/golang/go.git>https://github.com/golang/go.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/789/commit/a94ab1af7b954e06163acb325d4b035831f88835">a94ab1af7b954e06163acb325d4b035831f88835</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>canner/goroot/src/net/http/server.go</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>canner/goroot/src/net/http/server.go</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Go before 1.13.13 and 1.14.x before 1.14.5 have a data race in some net/http servers, as demonstrated by the httputil.ReverseProxy Handler, because it reads a request body and writes a response at the same time.
<p>Publish Date: 2020-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15586>CVE-2020-15586</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15586">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-15586</a></p>
<p>Release Date: 2020-07-17</p>
<p>Fix Resolution: 1.13.13,1.14.5</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in cve medium severity vulnerability vulnerable library the go programming language library home page a href found in head commit a href found in base branch master vulnerable source files canner goroot src net http server go canner goroot src net http server go vulnerability details go before and x before has a data race in some net http servers as demonstrated by the httputil reverseproxy handler because it reads a request body and writes a response at the same time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
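The Go CVE above is a data race: one goroutine reads the request body while another writes the response, both touching shared connection state. The general remedy for this class of bug, making the two sides mutually exclusive over the shared state, can be sketched as follows (a toy illustration in Python threads; this is not Go's actual fix, and the class and names are invented):

```python
import threading

class Connection:
    """Toy model of per-connection state touched by a reader and a writer.

    The lock makes body reads and response writes mutually exclusive, so
    the counters below stay consistent under concurrent access.
    """
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.bytes_read = 0
        self.bytes_written = 0

    def read_body(self, n: int) -> None:
        with self._lock:
            self.bytes_read += n

    def write_response(self, n: int) -> None:
        with self._lock:
            self.bytes_written += n

def hammer(conn: Connection, iters: int) -> None:
    # Simulate a worker that interleaves reads and writes on one connection.
    for _ in range(iters):
        conn.read_body(1)
        conn.write_response(1)
```

Without the lock, the `+=` updates from multiple threads could interleave and lose increments; with it, the totals are deterministic.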
|
93,838
| 11,812,218,964
|
IssuesEvent
|
2020-03-19 19:43:25
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
closed
|
Dev-design sync 3/17
|
Meeting Team: Design + Research Team: Dev
|
- [x] Adding language for location/service hour on location pages
- [x] publish queue—further info
- [x] Janis builds batphone/manual process for emergencies
- [x] Site search requirements #4118 and #4145
- [x] document analytics #4147 and outbound links
|
1.0
|
Dev-design sync 3/17 - - [x] Adding language for location/service hour on location pages
- [x] publish queue—further info
- [x] Janis builds batphone/manual process for emergencies
- [x] Site search requirements #4118 and #4145
- [x] document analytics #4147 and outbound links
|
non_process
|
dev design sync adding language for location service hour on location pages publish queue—further info janis builds batphone manual process for emergencies site search requirements and document analytics and outbound links
| 0
|
7,950
| 11,137,554,528
|
IssuesEvent
|
2019-12-20 19:40:56
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Bug: Drawers for experience and education aren't expanding on first click
|
Apply Process Bug State Dept.
|
Environment: Test
Steps to reproduce:
1) Apply for an internship
2) on the work experience page, click on the plus sign for a work experience (or education) to expand
- on the first try the row expands and quickly closes
- must click again for it to expand and stay open
|
1.0
|
Bug: Drawers for experience and education aren't expanding on first click - Environment: Test
Steps to reproduce:
1) Apply for an internship
2) on the work experience page, click on the plus sign for a work experience (or education) to expand
- on the first try the row expands and quickly closes
- must click again for it to expand and stay open
|
process
|
bug drawers for experience and education aren t expanding on first click environment test steps to reproduce apply for an internship on the work experience page click on the plus sign for a work experience or education to expand on the first try the row expands and quickly closes must click again for it to expand and stay open
| 1
|
53,416
| 11,049,152,828
|
IssuesEvent
|
2019-12-09 22:52:04
|
Chicago/design-cds-bootstrap
|
https://api.github.com/repos/Chicago/design-cds-bootstrap
|
closed
|
Make an example landing page
|
code
|
- [x] header
- [x] hero or carousel
- [x] other content
- [x] footer
|
1.0
|
Make an example landing page -
- [x] header
- [x] hero or carousel
- [x] other content
- [x] footer
|
non_process
|
make an example landing page header hero or carousel other content footer
| 0
|
20,118
| 26,656,331,274
|
IssuesEvent
|
2023-01-25 17:07:50
|
gfx-rs/naga
|
https://api.github.com/repos/gfx-rs/naga
|
closed
|
[meta] shader transformations
|
kind: feature area: processing
|
This is a list of generic shader transformations that we need to do for WebGPU:
- [x] out-of-bound handling of accessing arrays in uniform and storage buffers. The index must be either clamped, or zeroes returned.
- [x] out-of-bound handling of accessing storage textures.
- [ ] zero-initialization of threadgroup memory in compute shaders
|
1.0
|
[meta] shader transformations - This is a list of generic shader transformations that we need to do for WebGPU:
- [x] out-of-bound handling of accessing arrays in uniform and storage buffers. The index must be either clamped, or zeroes returned.
- [x] out-of-bound handling of accessing storage textures.
- [ ] zero-initialization of threadgroup memory in compute shaders
|
process
|
shader transformations this is a list of generic shader transformations that we need to do for webgpu out of bound handling of accessing arrays in uniform and storage buffers the index must be either clamped or zeroes returned out of bound handling of accessing storage textures zero initialization of threadgroup memory in compute shaders
| 1
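The two out-of-bound policies listed in the row above (clamp the index, or return zeroes) can be illustrated with a small sketch. This is only a scalar-Python analogy for what the generated shader code must guarantee; the function names are invented and are not naga's API:

```python
from typing import Sequence

def load_clamped(buf: Sequence[float], i: int) -> float:
    # Clamp policy: an out-of-range index is pinned to the nearest valid slot.
    if not buf:
        return 0.0
    return buf[min(max(i, 0), len(buf) - 1)]

def load_zero(buf: Sequence[float], i: int) -> float:
    # Zero policy: an out-of-range access yields 0 instead of trapping.
    return buf[i] if 0 <= i < len(buf) else 0.0
```

Either policy keeps a hostile or buggy index from reaching memory outside the buffer, which is the WebGPU requirement the transformation enforces.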
|
258,064
| 27,563,850,782
|
IssuesEvent
|
2023-03-08 01:10:58
|
LynRodWS/alcor
|
https://api.github.com/repos/LynRodWS/alcor
|
opened
|
CVE-2019-20444 (High) detected in netty-codec-http-4.1.36.Final.jar
|
security vulnerability
|
## CVE-2019-20444 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.36.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /services/api_gateway/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.36.Final/netty-codec-http-4.1.36.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-gateway-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-webflux-2.1.6.RELEASE.jar
- spring-boot-starter-reactor-netty-2.1.6.RELEASE.jar
- reactor-netty-0.8.9.RELEASE.jar
- :x: **netty-codec-http-4.1.36.Final.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
<p>Publish Date: 2020-01-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-20444>CVE-2019-20444</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444</a></p>
<p>Release Date: 2020-01-29</p>
<p>Fix Resolution (io.netty:netty-codec-http): 4.1.44.Final</p>
<p>Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-gateway): 2.2.2.RELEASE</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2019-20444 (High) detected in netty-codec-http-4.1.36.Final.jar - ## CVE-2019-20444 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.36.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /services/api_gateway/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.36.Final/netty-codec-http-4.1.36.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-gateway-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-webflux-2.1.6.RELEASE.jar
- spring-boot-starter-reactor-netty-2.1.6.RELEASE.jar
- reactor-netty-0.8.9.RELEASE.jar
- :x: **netty-codec-http-4.1.36.Final.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
<p>Publish Date: 2020-01-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-20444>CVE-2019-20444</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444</a></p>
<p>Release Date: 2020-01-29</p>
<p>Fix Resolution (io.netty:netty-codec-http): 4.1.44.Final</p>
<p>Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-gateway): 2.2.2.RELEASE</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in netty codec http final jar cve high severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file services api gateway pom xml path to vulnerable library home wss scanner repository io netty netty codec http final netty codec http final jar dependency hierarchy spring cloud starter gateway release jar root library spring boot starter webflux release jar spring boot starter reactor netty release jar reactor netty release jar x netty codec http final jar vulnerable library found in base branch master vulnerability details httpobjectdecoder java in netty before allows an http header that lacks a colon which might be interpreted as a separate header with an incorrect syntax or might be interpreted as an invalid fold publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec http final direct dependency fix resolution org springframework cloud spring cloud starter gateway release rescue worker helmet automatic remediation is available for this issue
| 0
|
10,142
| 13,044,162,507
|
IssuesEvent
|
2020-07-29 03:47:33
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `RegexpSig` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `RegexpSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `RegexpSig` from TiDB -
## Description
Port the scalar function `RegexpSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function regexpsig from tidb description port the scalar function regexpsig from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
18,476
| 24,550,702,045
|
IssuesEvent
|
2022-10-12 12:23:43
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Resources > Special characters are inserted into file names shared for RTE or PDFs
|
Bug P1 iOS Process: Fixed Process: Tested dev Process: Reopened
|
Steps:
1. Install the iOS app
2. Signup/login and enroll into study
3. Navigate to resources
4. Open any resource with RTE/PDF configured
5. Click on share and choose email
6. Open the file attached and observe the file name
Actual: Special characters are inserted into file names shared for RTE or PDFs
Expected: File name should be proper
**Issue not observed in older GCP versions, e.g. 2.0.9**

|
3.0
|
[iOS] Resources > Special characters are inserted into file names shared for RTE or PDFs - Steps:
1. Install the iOS app
2. Signup/login and enroll into study
3. Navigate to resources
4. Open any resource with RTE/PDF configured
5. Click on share and choose email
6. Open the file attached and observe the file name
Actual: Special characters are inserted into file names shared for RTE or PDFs
Expected: File name should be proper
**Issue not observed in older GCP versions, e.g. 2.0.9**

|
process
|
resources special characters are inserted into file names shared for rte or pdfs steps install the ios app signup login and enroll into study navigate to resources open any resource with rte pdf configured click on share and choose email open the file attached and observe the file name actual special characters are inserted into file names shared for rte or pdfs expected file name should be proper issue not observed in older gcp versions eg
| 1
|
35,032
| 12,308,661,085
|
IssuesEvent
|
2020-05-12 07:38:33
|
benchabot/joplin
|
https://api.github.com/repos/benchabot/joplin
|
opened
|
WS-2020-0068 (High) detected in multiple libraries
|
security vulnerability
|
## WS-2020-0068 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-13.1.2.tgz</b>, <b>yargs-parser-13.1.1.tgz</b>, <b>yargs-parser-16.1.0.tgz</b>, <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-7.0.0.tgz</b>, <b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-13.1.2.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.2.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/Modules/TinyMCE/JoplinLists/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Modules/TinyMCE/JoplinLists/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- bedrock-8.1.1.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- yargs-13.3.2.tgz
- :x: **yargs-parser-13.1.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-13.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/ElectronClient/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Clipper/popup/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Clipper/popup/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Clipper/popup/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-native-vector-icons-6.6.0.tgz (Root Library)
- yargs-13.3.0.tgz
- :x: **yargs-parser-13.1.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-16.1.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-16.1.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-16.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/ElectronClient/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Clipper/popup/node_modules/meow/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Clipper/popup/node_modules/meow/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- electron-builder-22.3.2.tgz (Root Library)
- yargs-15.1.0.tgz
- :x: **yargs-parser-16.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/ReactNativeClient/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- gulp-cli-2.2.0.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/CliClient/package.json</p>
<p>Path to vulnerable library: /joplin/CliClient/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- :x: **yargs-parser-7.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/Tools/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Tools/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Tools/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- yargs-12.0.5.tgz (Root Library)
- :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/benchabot/joplin/commit/c73e3ff9ac9ce99322e28b08589d0cde405067a8">c73e3ff9ac9ce99322e28b08589d0cde405067a8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of yargs-parser are vulnerable to prototype pollution. Arguments are not properly sanitized, allowing an attacker to modify the prototype of Object, causing the addition or modification of an existing property that will exist on all objects. Parsing the argument `--foo.__proto__.bar baz` adds a bar property with value baz to all objects. This is only exploitable if attackers have control over the arguments being passed to yargs-parser.
<p>Publish Date: 2020-05-01
<p>URL: <a href=https://www.npmjs.com/advisories/1500>WS-2020-0068</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/package/yargs-parser">https://www.npmjs.com/package/yargs-parser</a></p>
<p>Release Date: 2020-05-04</p>
<p>Fix Resolution: https://www.npmjs.com/package/yargs-parser/v/18.1.2,https://www.npmjs.com/package/yargs-parser/v/15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0068 (High) detected in multiple libraries - ## WS-2020-0068 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-13.1.2.tgz</b>, <b>yargs-parser-13.1.1.tgz</b>, <b>yargs-parser-16.1.0.tgz</b>, <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-7.0.0.tgz</b>, <b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-13.1.2.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.2.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/Modules/TinyMCE/JoplinLists/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Modules/TinyMCE/JoplinLists/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- bedrock-8.1.1.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- yargs-13.3.2.tgz
- :x: **yargs-parser-13.1.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-13.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/ElectronClient/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Clipper/popup/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Clipper/popup/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Clipper/popup/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- react-native-vector-icons-6.6.0.tgz (Root Library)
- yargs-13.3.0.tgz
- :x: **yargs-parser-13.1.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-16.1.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-16.1.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-16.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/ElectronClient/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Clipper/popup/node_modules/meow/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Clipper/popup/node_modules/meow/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- electron-builder-22.3.2.tgz (Root Library)
- yargs-15.1.0.tgz
- :x: **yargs-parser-16.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/ReactNativeClient/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- gulp-cli-2.2.0.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/CliClient/package.json</p>
<p>Path to vulnerable library: /joplin/CliClient/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- :x: **yargs-parser-7.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/joplin/Tools/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/joplin/Tools/node_modules/yargs-parser/package.json,/tmp/ws-scm/joplin/Tools/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- yargs-12.0.5.tgz (Root Library)
- :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/benchabot/joplin/commit/c73e3ff9ac9ce99322e28b08589d0cde405067a8">c73e3ff9ac9ce99322e28b08589d0cde405067a8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of yargs-parser are vulnerable to prototype pollution. Arguments are not properly sanitized, allowing an attacker to modify the prototype of Object, causing the addition or modification of an existing property that will exist on all objects. Parsing the argument `--foo.__proto__.bar baz` adds a bar property with value baz to all objects. This is only exploitable if attackers have control over the arguments being passed to yargs-parser.
<p>Publish Date: 2020-05-01
<p>URL: <a href=https://www.npmjs.com/advisories/1500>WS-2020-0068</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/package/yargs-parser">https://www.npmjs.com/package/yargs-parser</a></p>
<p>Release Date: 2020-05-04</p>
<p>Fix Resolution: https://www.npmjs.com/package/yargs-parser/v/18.1.2,https://www.npmjs.com/package/yargs-parser/v/15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in multiple libraries ws high severity vulnerability vulnerable libraries yargs parser tgz yargs parser tgz yargs parser tgz yargs parser tgz yargs parser tgz yargs parser tgz yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm joplin modules tinymce joplinlists package json path to vulnerable library tmp ws scm joplin modules tinymce joplinlists node modules yargs parser package json dependency hierarchy bedrock tgz root library webpack dev server tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm joplin electronclient package json path to vulnerable library tmp ws scm joplin clipper popup node modules yargs parser package json tmp ws scm joplin clipper popup node modules yargs parser package json tmp ws scm joplin clipper popup node modules yargs parser package json dependency hierarchy react native vector icons tgz root library yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm joplin electronclient package json path to vulnerable library tmp ws scm joplin clipper popup node modules meow node modules yargs parser package json tmp ws scm joplin clipper popup node modules meow node modules yargs parser package json dependency hierarchy electron builder tgz root library yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm joplin reactnativeclient package json path to vulnerable library tmp ws scm joplin node modules yargs parser package json tmp ws scm joplin node modules yargs parser package json tmp ws scm joplin node modules yargs parser package json tmp ws scm joplin node modules yargs parser package json dependency hierarchy gulp tgz root library 
gulp cli tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm joplin cliclient package json path to vulnerable library joplin cliclient node modules yargs parser package json dependency hierarchy x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm joplin tools package json path to vulnerable library tmp ws scm joplin tools node modules yargs parser package json tmp ws scm joplin tools node modules yargs parser package json dependency hierarchy yargs tgz root library x yargs parser tgz vulnerable library found in head commit a href vulnerability details affected versions of yargs parser are vulnerable to prototype pollution arguments are not properly sanitized allowing an attacker to modify the prototype of object causing the addition or modification of an existing property that will exist on all objects parsing the argument foo proto bar baz adds a bar property with value baz to all objects this is only exploitable if attackers have control over the arguments being passed to yargs parser publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
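The prototype-pollution mechanism described in the advisory above can be sketched as a Python analog (hypothetical: yargs-parser itself is JavaScript, and `set_dotted`/`BLOCKED_SEGMENTS` are illustrative names; the patched releases do, however, reject `__proto__`-style key segments):

```python
# Naive dotted-path option setter vs. a sanitized one. The unsanitized
# variant writes attacker-controlled segments like "__proto__" into
# shared state, mirroring how parsing '--foo.__proto__.bar baz'
# polluted Object.prototype in vulnerable yargs-parser versions.

BLOCKED_SEGMENTS = {"__proto__", "constructor", "prototype"}

def set_dotted(target: dict, path: str, value, sanitize: bool = True) -> dict:
    """Set a value at a dot-separated path, e.g. 'foo.bar' -> {'foo': {'bar': value}}."""
    keys = path.split(".")
    if sanitize and BLOCKED_SEGMENTS.intersection(keys):
        return target  # drop the polluting assignment entirely
    node = target
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return target

# Unsanitized parsing lets the malicious argument reach shared state:
polluted = set_dotted({}, "foo.__proto__.bar", "baz", sanitize=False)
# Sanitized parsing drops the same argument:
safe = set_dotted({}, "foo.__proto__.bar", "baz", sanitize=True)
```

This is only a conceptual model; in JavaScript the write lands on the shared `Object.prototype`, which is why a single polluted argument affects every object in the process.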
|
188,464
| 15,162,806,441
|
IssuesEvent
|
2021-02-12 11:10:51
|
automatiko-io/automatiko-engine
|
https://api.github.com/repos/automatiko-io/automatiko-engine
|
closed
|
document event publishers
|
0.2.0 documentation
|
Event publishers that should be documented
- websocket
- messaging
- elastic search
|
1.0
|
document event publishers - Event publishers that should be documented
- websocket
- messaging
- elastic search
|
non_process
|
document event publishers event publishers that should be documented websocket messaging elastic search
| 0
|
7,353
| 10,483,631,460
|
IssuesEvent
|
2019-09-24 14:15:48
|
monmouth-college-cs/sddc-reading-notes-monmouth-college-comp-335
|
https://api.github.com/repos/monmouth-college-cs/sddc-reading-notes-monmouth-college-comp-335
|
opened
|
Vote on Summary of Summaries
|
Requirements process vote
|
Similar to last time, vote on your top 2 Summary of Summaries. Try to have this in as soon as possible!
|
1.0
|
Vote on Summary of Summaries - Similar to last time, vote on your top 2 Summary of Summaries. Try to have this in as soon as possible!
|
process
|
vote on summary of summaries similar to last time vote on your top summary of summaries try to have this in as soon as possible
| 1
|
286,581
| 8,790,363,074
|
IssuesEvent
|
2018-12-21 08:47:19
|
medic/medic-webapp
|
https://api.github.com/repos/medic/medic-webapp
|
closed
|
Forms still available after deleting them using medic-conf
|
API Priority: 1 - High Type: Bug
|
The forms deleted using medic-conf are still available and usable from the web app.
**Steps to reproduce**:
- Deleted an existing form that was visible in the web app under "New Action":
`medic-conf --url http://admin:<password>@localhost:5988 delete-forms -- report_id`
It says:
```
INFO Starting action: delete-forms…
INFO delete-forms complete.
INFO Deleted form: report_id
```
- Checked the form in "New Action", it is still visible and usable.
- Checked in Medic Mobile administration console - Forms: The form is not there.
- Checked in couchdb (medic-client/forms): The form is not there.
**What should happen**:
- The deleted form should no longer be visible in the web app.
**What actually happens**:
- The deleted form was available for use.
**Environment**:
- Instance: _(demo-cht.dev.medicmobile.org)_, also replicated in local instance
- Browser: _(Chrome)_
- Client platform: _(Windows)_
- App: _(webapp, medic-conf)_
- Version: _(3.0.0)_
**Other**:
I also tried reloading the web app several times and logging out and in again.
The form disappeared only after clearing the site data from (Chrome) Developer Tools - Application - Clear storage - Clear site data - Reload
|
1.0
|
Forms still available after deleting them using medic-conf - The forms deleted using medic-conf are still available and usable from the web app.
**Steps to reproduce**:
- Deleted an existing form that was visible in the web app under "New Action":
`medic-conf --url http://admin:<password>@localhost:5988 delete-forms -- report_id`
It says:
```
INFO Starting action: delete-forms…
INFO delete-forms complete.
INFO Deleted form: report_id
```
- Checked the form in "New Action", it is still visible and usable.
- Checked in Medic Mobile administration console - Forms: The form is not there.
- Checked in couchdb (medic-client/forms): The form is not there.
**What should happen**:
- The deleted form should no longer be visible in the web app.
**What actually happens**:
- The deleted form was available for use.
**Environment**:
- Instance: _(demo-cht.dev.medicmobile.org)_, also replicated in local instance
- Browser: _(Chrome)_
- Client platform: _(Windows)_
- App: _(webapp, medic-conf)_
- Version: _(3.0.0)_
**Other**:
I also tried reloading the web app several times and logging out and in again.
The form disappeared only after clearing the site data from (Chrome) Developer Tools - Application - Clear storage - Clear site data - Reload
|
non_process
|
forms still available after deleting them using medic conf the forms deleted using medic conf are still available and usable from the web app steps to reproduce deleted an existing form that was visible in the web app under new action medic conf url delete forms report id it says info starting action delete forms… info delete forms complete info deleted form report id checked the form in new action it is still visible and usable checked in medic mobile administration console forms the form is not there checked in couchdb medic client forms the form is not there what should happen the deleted form should no longer be visible in the web app what actually happens the deleted form was available for use environment instance demo cht dev medicmobile org also replicated in local instance browser chrome client platform windows app webapp medic conf version other i also tried by reloading the web app several times logging out and in again the form disappeared only after clearing the site data from chrome developer tools application clear storage clear site data reload
| 0
|
19,171
| 25,276,976,025
|
IssuesEvent
|
2022-11-16 13:19:55
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
"Week of Year" and "Week" differs
|
Type:Bug Querying/Processor .Correctness Misc/Timezones
|
- Your browser and the version: Chrome 57.0
- Your databases: Redshift
- Metabase version: 0.23.1
- Metabase internal database: Postgres
I followed several issue discussions
https://github.com/metabase/metabase/issues/1987
https://github.com/metabase/metabase/issues/1635
https://github.com/metabase/metabase/issues/1779
http://discourse.metabase.com/t/how-are-mysql-dates-queried-for-this-week-and-last-week/185/2
It seems at some point "add 1 day" solution was added. When I do Group By "Week of Year" in the Question interface the produced SQL expression is as follows:
`CAST(extract(week from (<datecolumn> + INTERVAL '1 day')) AS integer) AS "date"`
What is the reason for this added 1 day? It doesn't make sense, as it means that the week number is returned for the following day and not the current day.
Similarly Day of Week produces `(CAST(extract(dow from <datecolumn>) AS integer) + 1) AS "date"` which adds 1 again.
`select CAST(extract(dow from '2017-04-28'::date) AS integer)` returns 5
but above expression returns 6. Again Sunday based but seems to be forced/hacked way of setting date.
I feel like ISO standards define timezones, timestamps and calendars pretty clearly, and these are fundamental concepts expected to be consistent during any analysis. Why not use the source database defaults? Why try to hack standards like calendars and times?
|
1.0
|
"Week of Year" and "Week" differs - - Your browser and the version: Chrome 57.0
- Your databases: Redshift
- Metabase version: 0.23.1
- Metabase internal database: Postgres
I followed several issue discussions
https://github.com/metabase/metabase/issues/1987
https://github.com/metabase/metabase/issues/1635
https://github.com/metabase/metabase/issues/1779
http://discourse.metabase.com/t/how-are-mysql-dates-queried-for-this-week-and-last-week/185/2
It seems at some point "add 1 day" solution was added. When I do Group By "Week of Year" in the Question interface the produced SQL expression is as follows:
`CAST(extract(week from (<datecolumn> + INTERVAL '1 day')) AS integer) AS "date"`
What is the reason for this added 1 day? It doesn't make sense, as it means that the week number is returned for the following day and not the current day.
Similarly Day of Week produces `(CAST(extract(dow from <datecolumn>) AS integer) + 1) AS "date"` which adds 1 again.
`select CAST(extract(dow from '2017-04-28'::date) AS integer)` returns 5
but above expression returns 6. Again Sunday based but seems to be forced/hacked way of setting date.
I feel like ISO standards define timezones, timestamps and calendars pretty clearly, and these are fundamental concepts expected to be consistent during any analysis. Why not use the source database defaults? Why try to hack standards like calendars and times?
|
process
|
week of year and week differs your browser and the version chrome your databases redshift metabase version metabase internal database postgres i followed several issue discussions it seems at some point add day solution was added when i do group by week of year in the question interface the produced sql expression is as follows cast extract week from interval day as integer as date what is the reason for this add day it doesn t make sense as it means that the week number is returned for following day and not current day similarly day of week produces cast extract dow from as integer as date which adds again select cast extract dow from date as integer returns but above expression returns again sunday based but seems to be forced hacked way of setting date i feel like iso standards define the timezone timestamp and calendar definitions pretty clearly and are fundamental concepts expected to be consistent during any analysis why not use the source database defaults why try to hack standards like calendars and times
| 1
|
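The week/day-of-week behavior quoted in the Metabase issue above can be reproduced in plain Python, assuming PostgreSQL's `extract(week ...)` follows ISO-8601 week numbering (which `date.isocalendar()` also implements) and `extract(dow ...)` returns Sunday=0; `pg_week` and `pg_dow` are illustrative helper names:

```python
from datetime import date, timedelta

def pg_week(d: date) -> int:
    """ISO-8601 week number, matching PostgreSQL's extract(week from ...)."""
    return d.isocalendar()[1]

def pg_dow(d: date) -> int:
    """Day of week with Sunday=0, matching PostgreSQL's extract(dow from ...)."""
    return d.isoweekday() % 7

d = date(2017, 4, 28)                              # a Friday
plain_week   = pg_week(d)                          # ISO week (Monday-based)
shifted_week = pg_week(d + timedelta(days=1))      # Metabase's "+ INTERVAL '1 day'"

# For a Sunday the one-day shift bumps the week number: that is exactly
# how a Monday-based ISO week is converted into a Sunday-based one.
sunday = date(2017, 4, 30)
sunday_plain   = pg_week(sunday)
sunday_shifted = pg_week(sunday + timedelta(days=1))

dow_plain    = pg_dow(d)        # 5 (Friday), as quoted in the report
dow_metabase = pg_dow(d) + 1    # 6: Metabase's shifted, 1-based day of week
```

So the "+1 day" in the generated SQL is plausibly an intentional conversion from ISO Monday-start weeks to US-style Sunday-start weeks, not an off-by-one bug, though the issue is right that it surprises anyone comparing against raw `extract()` output.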