Columns (name, dtype, min/max values or string-length range):

Unnamed: 0    int64          min 0, max 832k
id            float64        min 2.49B, max 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  7 to 112
repo_url      stringlengths  36 to 141
action        stringclasses  3 values
title         stringlengths  1 to 744
labels        stringlengths  4 to 574
body          stringlengths  9 to 211k
index         stringclasses  10 values
text_combine  stringlengths  96 to 211k
label         stringclasses  2 values
text          stringlengths  96 to 188k
binary_label  int64          min 0, max 1
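The `label` column's two classes appear to map one-to-one onto `binary_label` in the rows below (`process` maps to 1, `non_process` maps to 0). A minimal sketch of that mapping, assuming those are the only two classes; the helper name `to_binary_label` is hypothetical, not part of the dataset:

```python
def to_binary_label(label: str) -> int:
    """Map the two-class string label to the integer target.

    Assumes the only classes are "process" and "non_process",
    as seen in the `label` column of the rows below.
    """
    return 1 if label == "process" else 0
```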
Unnamed: 0: 221
id: 2,649,432,275
type: IssuesEvent
created_at: 2015-03-14 22:21:53
repo: eiskaltdcpp/eiskaltdcpp
repo_url: https://api.github.com/repos/eiskaltdcpp/eiskaltdcpp
action: opened
title: gcc -Wall + Qt UI
labels: imported Type-DevProcess
body:
_From [egikpetrov](https://code.google.com/u/egikpetrov/) on March 04, 2012 02:18:49_ было бы неплохо пофиксить все варнинги для -Wall в qt ui [English: it would be nice to fix all the -Wall warnings in the qt ui] **Attachment:** [-Wall.list](http://code.google.com/p/eiskaltdc/issues/detail?id=1295) _Original issue: http://code.google.com/p/eiskaltdc/issues/detail?id=1295_
index: 1.0
text_combine:
gcc -Wall + Qt UI - _From [egikpetrov](https://code.google.com/u/egikpetrov/) on March 04, 2012 02:18:49_ было бы неплохо пофиксить все варнинги для -Wall в qt ui **Attachment:** [-Wall.list](http://code.google.com/p/eiskaltdc/issues/detail?id=1295) _Original issue: http://code.google.com/p/eiskaltdc/issues/detail?id=1295_
label: process
text:
gcc wall qt ui from on march было бы неплохо пофиксить все варнинги для wall в qt ui attachment original issue
binary_label: 1
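Comparing `text_combine` with `text` in the first record above suggests a cleaning pass along the lines of: lowercase, strip URLs, drop digits and punctuation, collapse whitespace. A rough sketch of such a pass — this is an inference from the visible columns, not the dataset's actual pipeline, and the name `clean_text` is hypothetical:

```python
import re

def clean_text(raw: str) -> str:
    """Approximate the text_combine -> text cleaning seen above."""
    text = raw.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"[^a-zа-я\s]", " ", text)   # keep only letters (Latin and Cyrillic survive)
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```

On the title above, `clean_text("gcc -Wall + Qt UI")` yields `gcc wall qt ui`, matching the start of the `text` column; the real pipeline evidently also drops some tokens (usernames, link targets) that this sketch keeps.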
Unnamed: 0: 4,692
id: 7,527,852,472
type: IssuesEvent
created_at: 2018-04-13 18:33:24
repo: w3c/distributed-tracing
repo_url: https://api.github.com/repos/w3c/distributed-tracing
action: closed
title: What is the milestone of this spec?
labels: process
body:
I am aware there are definitely plenty details to discuss for this spec. But, I suggest we can do some works in implementation level. For do that, we should set a milestone, like 0.1 or 1.0, as you like. So the tracer/APM-agents can try to implement the spec, and find out what is next for everyone. @adriancole @bogdandrutu @SergeyKanzhelev
index: 1.0
text_combine:
What is the milestone of this spec? - I am aware there are definitely plenty details to discuss for this spec. But, I suggest we can do some works in implementation level. For do that, we should set a milestone, like 0.1 or 1.0, as you like. So the tracer/APM-agents can try to implement the spec, and find out what is next for everyone. @adriancole @bogdandrutu @SergeyKanzhelev
label: process
text:
what is the milestone of this spec i am aware there are definitely plenty details to discuss for this spec but i suggest we can do some works in implementation level for do that we should set a milestone like or as you like so the tracer apm agents can try to implement the spec and find out what is next for everyone adriancole bogdandrutu sergeykanzhelev
binary_label: 1
Unnamed: 0: 12,497
id: 14,961,147,116
type: IssuesEvent
created_at: 2021-01-27 07:15:01
repo: GoogleCloudPlatform/fda-mystudies
repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action: closed
title: [iOS] [Audit Logs] "participantId" is displayed null for the events
labels: Bug P2 Process: Fixed Process: Tested dev iOS
body:
**Events:** Module: **Consent Service(Participant Datastore):** 1. USER_ENROLLED_INTO_STUDY 2. READ_OPERATION_SUCCEEDED_FOR_SIGNED_CONSENT_DOCUMENT 3. SIGNED_CONSENT_DOCUMENT_SAVED Module: **Enroll(Participant Datastore)** 4. STUDY_STATE_SAVED_OR_UPDATED_FOR_PARTICIPANT 5. READ_OPERATION_SUCCEEDED_FOR_STUDY_INFO Module: **Response Datastore** 6. PARTICIPANT_ID_GENERATED Sample snippet for event `PARTICIPANT_ID_GENERATED` event ``` { "insertId": "1s3rqdog25k19og", "jsonPayload": { "occurred": 1608726951718, "userIp": "35.222.67.4", "sourceApplicationVersion": "1.0", "source": "PARTICIPANT USER DATASTORE", "studyId": null, "siteId": null, "appVersion": "1.0.146", "mobilePlatform": "IOS", "eventCode": "PARTICIPANT_ID_GENERATED", "userAccessLevel": null, "resourceServer": null, "userId": "ad4b3ff0n3178t484bl8bdch9bd1ba58d9f8", "correlationId": "EC2192BC-A5B3-4040-AD48-EC1530795682", "participantId": null, "destinationApplicationVersion": "1.0", "studyVersion": null, "platformVersion": "1.0", "appId": "", "destination": "RESPONSE DATASTORE", "description": null }, "resource": { "type": "global", "labels": { "project_id": "mystudies-open-impl-track1-dev" } }, "timestamp": "2020-12-23T12:35:51.718Z", "severity": "INFO", "logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log", "receiveTimestamp": "2020-12-23T12:35:51.864217708Z" } ```
index: 2.0
text_combine:
[iOS] [Audit Logs] "participantId" is displayed null for the events - **Events:** Module: **Consent Service(Participant Datastore):** 1. USER_ENROLLED_INTO_STUDY 2. READ_OPERATION_SUCCEEDED_FOR_SIGNED_CONSENT_DOCUMENT 3. SIGNED_CONSENT_DOCUMENT_SAVED Module: **Enroll(Participant Datastore)** 4. STUDY_STATE_SAVED_OR_UPDATED_FOR_PARTICIPANT 5. READ_OPERATION_SUCCEEDED_FOR_STUDY_INFO Module: **Response Datastore** 6. PARTICIPANT_ID_GENERATED Sample snippet for event `PARTICIPANT_ID_GENERATED` event ``` { "insertId": "1s3rqdog25k19og", "jsonPayload": { "occurred": 1608726951718, "userIp": "35.222.67.4", "sourceApplicationVersion": "1.0", "source": "PARTICIPANT USER DATASTORE", "studyId": null, "siteId": null, "appVersion": "1.0.146", "mobilePlatform": "IOS", "eventCode": "PARTICIPANT_ID_GENERATED", "userAccessLevel": null, "resourceServer": null, "userId": "ad4b3ff0n3178t484bl8bdch9bd1ba58d9f8", "correlationId": "EC2192BC-A5B3-4040-AD48-EC1530795682", "participantId": null, "destinationApplicationVersion": "1.0", "studyVersion": null, "platformVersion": "1.0", "appId": "", "destination": "RESPONSE DATASTORE", "description": null }, "resource": { "type": "global", "labels": { "project_id": "mystudies-open-impl-track1-dev" } }, "timestamp": "2020-12-23T12:35:51.718Z", "severity": "INFO", "logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log", "receiveTimestamp": "2020-12-23T12:35:51.864217708Z" } ```
label: process
text:
participantid is displayed null for the events events module consent service participant datastore user enrolled into study read operation succeeded for signed consent document signed consent document saved module enroll participant datastore study state saved or updated for participant read operation succeeded for study info module response datastore participant id generated sample snippet for event participant id generated event insertid jsonpayload occurred userip sourceapplicationversion source participant user datastore studyid null siteid null appversion mobileplatform ios eventcode participant id generated useraccesslevel null resourceserver null userid correlationid participantid null destinationapplicationversion studyversion null platformversion appid destination response datastore description null resource type global labels project id mystudies open impl dev timestamp severity info logname projects mystudies open impl dev logs application audit log receivetimestamp
binary_label: 1
Unnamed: 0: 276,916
id: 24,031,682,295
type: IssuesEvent
created_at: 2022-09-15 15:29:16
repo: runtimeverification/k
repo_url: https://api.github.com/repos/runtimeverification/k
action: closed
title: Bisimulation
labels: testing
body:
The [bisimulation tests](https://github.com/kframework/k/tree/master/k-distribution/tests/regression-new/bisimulation) use features from `KREFLECTION` which will not be supported, but it should be simple to rewrite the test to use pattern matching.
index: 1.0
text_combine:
Bisimulation - The [bisimulation tests](https://github.com/kframework/k/tree/master/k-distribution/tests/regression-new/bisimulation) use features from `KREFLECTION` which will not be supported, but it should be simple to rewrite the test to use pattern matching.
label: non_process
text:
bisimulation the use features from kreflection which will not be supported but it should be simple to rewrite the test to use pattern matching
binary_label: 0
Unnamed: 0: 7,836
id: 11,011,726,085
type: IssuesEvent
created_at: 2019-12-04 16:49:02
repo: 90301/TextReplace
repo_url: https://api.github.com/repos/90301/TextReplace
action: closed
title: Text Log Filterer / Stats
labels: Log Processor
body:
Have something that can handle logs Process line by line. Able to find text insdie quotes in line filtering code program creation capability. Extra functionality: - [x] Multi-Line-Output Block --- Documentation ORDER OF OPERATIONS MATTERS A LOT Settings can be changed in the middle of processing text. for debugging, all outputs of intermediate steps can be traced. ## Processor Settings TEXTUNIT([LINE,FULLDOC]) - turns this from a line my line processor to a full text processor ## Filtering MustContain("SomeText". [Optional] REGEX Support) - the finds all rows that match IncludeExtra(int above,int below) - must be above the operation / op group NotContain("SomeText". [Optional] REGEX Support) ## Collection Inside(:) - Gets the text past (to the right of) the first instance of : ContainedInside(") - Gets all text inside of a specific tag / character. EX toasy was a "marko" toaster would return "marko"
index: 1.0
text_combine:
Text Log Filterer / Stats - Have something that can handle logs Process line by line. Able to find text insdie quotes in line filtering code program creation capability. Extra functionality: - [x] Multi-Line-Output Block --- Documentation ORDER OF OPERATIONS MATTERS A LOT Settings can be changed in the middle of processing text. for debugging, all outputs of intermediate steps can be traced. ## Processor Settings TEXTUNIT([LINE,FULLDOC]) - turns this from a line my line processor to a full text processor ## Filtering MustContain("SomeText". [Optional] REGEX Support) - the finds all rows that match IncludeExtra(int above,int below) - must be above the operation / op group NotContain("SomeText". [Optional] REGEX Support) ## Collection Inside(:) - Gets the text past (to the right of) the first instance of : ContainedInside(") - Gets all text inside of a specific tag / character. EX toasy was a "marko" toaster would return "marko"
label: process
text:
text log filterer stats have something that can handle logs process line by line able to find text insdie quotes in line filtering code program creation capability extra functionality multi line output block documentation order of operations matters a lot settings can be changed in the middle of processing text for debugging all outputs of intermediate steps can be traced processor settings textunit turns this from a line my line processor to a full text processor filtering mustcontain sometext regex support the finds all rows that match includeextra int above int below must be above the operation op group notcontain sometext regex support collection inside gets the text past to the right of the first instance of containedinside gets all text inside of a specific tag character ex toasy was a marko toaster would return marko
binary_label: 1
Unnamed: 0: 645,039
id: 20,992,915,437
type: IssuesEvent
created_at: 2022-03-29 10:58:55
repo: AY2122S2-CS2103T-W15-2/tp
repo_url: https://api.github.com/repos/AY2122S2-CS2103T-W15-2/tp
action: closed
title: As a user, I can choose which client to delete if there are clients with similar names
labels: type.Story priority.High
body:
So I can accurately delete clients from my HustleBook
index: 1.0
text_combine:
As a user, I can choose which client to delete if there are clients with similar names - So I can accurately delete clients from my HustleBook
label: non_process
text:
as a user i can choose which client to delete if there are clients with similar names so i can accurately delete clients from my hustlebook
binary_label: 0
Unnamed: 0: 43,561
id: 13,020,407,562
type: IssuesEvent
created_at: 2020-07-27 02:59:20
repo: LightC0der/arunbhandari.github.io
repo_url: https://api.github.com/repos/LightC0der/arunbhandari.github.io
action: opened
title: CVE-2015-9251 (Medium) detected in jquery-1.9.1.min.js
labels: security vulnerability
body:
## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.9.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js</a></p> <p>Path to vulnerable library: /arunbhandari.github.io/assets/js/vendor/jquery-1.9.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.9.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LightC0der/arunbhandari.github.io/commit/241096b6dd14739925eca764bd8ab9a25a8003c6">241096b6dd14739925eca764bd8ab9a25a8003c6</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. <p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2015-9251 (Medium) detected in jquery-1.9.1.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.9.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js</a></p> <p>Path to vulnerable library: /arunbhandari.github.io/assets/js/vendor/jquery-1.9.1.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.9.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LightC0der/arunbhandari.github.io/commit/241096b6dd14739925eca764bd8ab9a25a8003c6">241096b6dd14739925eca764bd8ab9a25a8003c6</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. <p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library arunbhandari github io assets js vendor jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
binary_label: 0
Unnamed: 0: 16,938
id: 22,286,954,483
type: IssuesEvent
created_at: 2022-06-11 19:40:16
repo: sparc4-dev/astropop
repo_url: https://api.github.com/repos/sparc4-dev/astropop
action: closed
title: Routine to compute only shifts in image registration
labels: enhancement image-processing
body:
Make a function to compute the shifts separated in image registration.
index: 1.0
text_combine:
Routine to compute only shifts in image registration - Make a function to compute the shifts separated in image registration.
label: process
text:
routine to compute only shifts in image registration make a function to compute the shifts separated in image registration
binary_label: 1
Unnamed: 0: 8,737
id: 11,866,330,340
type: IssuesEvent
created_at: 2020-03-26 03:20:35
repo: trynmaps/metrics-mvp
repo_url: https://api.github.com/repos/trynmaps/metrics-mvp
action: closed
title: Make our contribution process more obvious
labels: Process
body:
At the top of the readme, link to our onboarding docs: * https://docs.google.com/document/d/1qzzBKpQPkcKkz9b47yJHAB95ip8d3HjYV1wPIYaUlBU/edit#heading=h.reybhpkqyi34 * https://docs.google.com/document/d/1WZt0LkGS6AsotYjF7VSMIWD1dXrzsKX4lRYKE_uhJWE/edit We need to reduce the number of new people who start working on projects and never finish them, and don't introduce themselves to us. People who jump on one issue, start it, and never finish really slow us down. So we should be more explicit about asking people to join our Slack and call into meetings, etc. before we let them contribute -- and a good first step is to put that more obviously in the readme.
index: 1.0
text_combine:
Make our contribution process more obvious - At the top of the readme, link to our onboarding docs: * https://docs.google.com/document/d/1qzzBKpQPkcKkz9b47yJHAB95ip8d3HjYV1wPIYaUlBU/edit#heading=h.reybhpkqyi34 * https://docs.google.com/document/d/1WZt0LkGS6AsotYjF7VSMIWD1dXrzsKX4lRYKE_uhJWE/edit We need to reduce the number of new people who start working on projects and never finish them, and don't introduce themselves to us. People who jump on one issue, start it, and never finish really slow us down. So we should be more explicit about asking people to join our Slack and call into meetings, etc. before we let them contribute -- and a good first step is to put that more obviously in the readme.
label: process
text:
make our contribution process more obvious at the top of the readme link to our onboarding docs we need to reduce the number of new people who start working on projects and never finish them and don t introduce themselves to us people who jump on one issue start it and never finish really slow us down so we should be more explicit about asking people to join our slack and call into meetings etc before we let them contribute and a good first step is to put that more obviously in the readme
binary_label: 1
Unnamed: 0: 42,087
id: 12,876,211,977
type: IssuesEvent
created_at: 2020-07-11 03:17:44
repo: tt9133github/zkui
repo_url: https://api.github.com/repos/tt9133github/zkui
action: opened
title: CVE-2017-3586 (Medium) detected in mysql-connector-java-5.1.31.jar
labels: security vulnerability
body:
## CVE-2017-3586 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.31.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to dependency file: /tmp/ws-scm/zkui/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.31/mysql-connector-java-5.1.31.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.31.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tt9133github/zkui/commit/68cd62cead44c0c462f92c8bc9fc2b02708bab32">68cd62cead44c0c462f92c8bc9fc2b02708bab32</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N). 
<p>Publish Date: 2017-04-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586>CVE-2017-3586</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1444406">https://bugzilla.redhat.com/show_bug.cgi?id=1444406</a></p> <p>Release Date: 2017-04-24</p> <p>Fix Resolution: 5.1.42</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2017-3586 (Medium) detected in mysql-connector-java-5.1.31.jar - ## CVE-2017-3586 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.31.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to dependency file: /tmp/ws-scm/zkui/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.31/mysql-connector-java-5.1.31.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.31.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tt9133github/zkui/commit/68cd62cead44c0c462f92c8bc9fc2b02708bab32">68cd62cead44c0c462f92c8bc9fc2b02708bab32</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N). 
<p>Publish Date: 2017-04-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586>CVE-2017-3586</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1444406">https://bugzilla.redhat.com/show_bug.cgi?id=1444406</a></p> <p>Release Date: 2017-04-24</p> <p>Fix Resolution: 5.1.42</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve medium detected in mysql connector java jar cve medium severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file tmp ws scm zkui pom xml path to vulnerable library canner repository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data cvss base score confidentiality and integrity impacts cvss vector cvss av n ac l pr l ui n s c c l i l a n publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
binary_label: 0
Unnamed: 0: 57,889
id: 14,239,349,916
type: IssuesEvent
created_at: 2020-11-18 20:00:05
repo: elastic/elasticsearch
repo_url: https://api.github.com/repos/elastic/elasticsearch
action: closed
title: Can't run tests in Eclipse
labels: :Delivery/Build >bug Team:Delivery needs:triage
body:
In #64841 started validating the codebases existed in our security policy. This broke running tests in Eclipse because we include a special policy for IntelliJ if we're not running in gradle. Eclipse doesn't run in gradle and doesn't have the IntelliJ policy.
index: 1.0
text_combine:
Can't run tests in Eclipse - In #64841 started validating the codebases existed in our security policy. This broke running tests in Eclipse because we include a special policy for IntelliJ if we're not running in gradle. Eclipse doesn't run in gradle and doesn't have the IntelliJ policy.
label: non_process
text:
can t run tests in eclipse in started validating the codebases existed in our security policy this broke running tests in eclipse because we include a special policy for intellij if we re not running in gradle eclipse doesn t run in gradle and doesn t have the intellij policy
binary_label: 0
Unnamed: 0: 8,320
id: 11,487,336,665
type: IssuesEvent
created_at: 2020-02-11 11:44:50
repo: geneontology/go-ontology
repo_url: https://api.github.com/repos/geneontology/go-ontology
action: closed
title: PAMP-triggered immunity
labels: high priority multi-species process
body:
PAMP- triggered immunity (PTI) usually first line of defence 
Upon association with PAMPs, the pattern-recognition receptors activate a downstream mitogen-activated protein kinase signaling cascade that culminates in transcriptional activation and generation of the innate immune responses. For PTI I find . GO:0002221 pattern recognition receptor signaling pathway Any series of molecular signals generated as a consequence of a pattern recognition receptor (PRR) binding to one of its physiological ligands. PRRs bind pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species, or damage-associated molecular pattern (DAMPs), endogenous molecules released from damaged cells. PMID:15199967 so NTR could be a child of GO:0002221 NTR PAMP-triggered immunity signalling pathway Any series of molecular signals generated as a consequence of a pattern recognition receptor (PRR) binding to one of its physiological ligands to activate a plant innate immune response. PAMP-triggered immunity PRRs bind pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species. exact synonym PTI signalling ~I would also be happy for this to be a synonym. BUT I'm a bit concerned about the mention of damage-associated molecular pattern (DAMPs) because these AFAIK are not innate immune response, they are inflammatory response. I really think these should be split.~ the DAMPs are also regulating the immune response via damage detection Note that I could also use: GO:0035420 MAPK cascade involved in innate immune response (0 annotations) it might be best to get rid of this ?
index: 1.0
text_combine:
PAMP-triggered immunity - PAMP- triggered immunity (PTI) usually first line of defence 
Upon association with PAMPs, the pattern-recognition receptors activate a downstream mitogen-activated protein kinase signaling cascade that culminates in transcriptional activation and generation of the innate immune responses. For PTI I find . GO:0002221 pattern recognition receptor signaling pathway Any series of molecular signals generated as a consequence of a pattern recognition receptor (PRR) binding to one of its physiological ligands. PRRs bind pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species, or damage-associated molecular pattern (DAMPs), endogenous molecules released from damaged cells. PMID:15199967 so NTR could be a child of GO:0002221 NTR PAMP-triggered immunity signalling pathway Any series of molecular signals generated as a consequence of a pattern recognition receptor (PRR) binding to one of its physiological ligands to activate a plant innate immune response. PAMP-triggered immunity PRRs bind pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species. exact synonym PTI signalling ~I would also be happy for this to be a synonym. BUT I'm a bit concerned about the mention of damage-associated molecular pattern (DAMPs) because these AFAIK are not innate immune response, they are inflammatory response. I really think these should be split.~ the DAMPs are also regulating the immune response via damage detection Note that I could also use: GO:0035420 MAPK cascade involved in innate immune response (0 annotations) it might be best to get rid of this ?
process
pamp triggered immunity pamp triggered immunity pti usually first line of defence 
upon association with pamps the pattern recognition receptors activate a downstream mitogen activated protein kinase signaling cascade that culminates in transcriptional activation and generation of the innate immune responses for pti i find go pattern recognition receptor signaling pathway any series of molecular signals generated as a consequence of a pattern recognition receptor prr binding to one of its physiological ligands prrs bind pathogen associated molecular pattern pamps structures conserved among microbial species or damage associated molecular pattern damps endogenous molecules released from damaged cells pmid so ntr could be a child of go ntr pamp triggered immunity signalling pathway any series of molecular signals generated as a consequence of a pattern recognition receptor prr binding to one of its physiological ligands to activate a plant innate immune response pamp triggered immunity prrs bind pathogen associated molecular pattern pamps structures conserved among microbial species exact synonym pti signalling i would also be happy for this to be a synonym but i m a bit concerned about the mention of damage associated molecular pattern damps because these afaik are not innate immune response they are inflammatory response i really think these should be split the damps are also regulating the immune response via damage detection note that i could also use go mapk cascade involved in innate immune response annotations it might be best to get rid of this
1
16,077
20,249,040,126
IssuesEvent
2022-02-14 16:11:53
Bone008/orbiteye
https://api.github.com/repos/Bone008/orbiteye
opened
Create TypeScript interface for data import
data processing
- [ ] Define and document TypeScript interface for the data - [ ] Create mappings from shortcodes to full strings (e.g. Owner/Country, Launch Site)
1.0
Create TypeScript interface for data import - - [ ] Define and document TypeScript interface for the data - [ ] Create mappings from shortcodes to full strings (e.g. Owner/Country, Launch Site)
process
create typescript interface for data import define and document typescript interface for the data create mappings from shortcodes to full strings e g owner country launch site
1
38,459
19,276,165,560
IssuesEvent
2021-12-10 12:07:01
grafana/grafana
https://api.github.com/repos/grafana/grafana
closed
Grafana 8.1.3 Web UI GPU usage is too high
needs more info type/performance bot/no new info
<!-- Please use this template to create your bug report. By providing as much info as possible you help us understand the issue, reproduce it and resolve it for you quicker. Therefore take a couple of extra minutes to make sure you have provided all info needed. PROTIP: record your screen and attach it as a gif to showcase the issue. - Questions should be posted to: https://community.grafana.com - Use query inspector to troubleshoot issues: https://bit.ly/2XNF6YS - How to record and attach gif: https://bit.ly/2Mi8T6K --> **What happened**: ![image](https://user-images.githubusercontent.com/3882312/141237217-ba8a4af5-58b8-4772-9391-068efc6d1b01.png) **What you expected to happen**: When the new window that pops up in the webUI is displayed in the foreground, it sometimes causes the Grafana tab to fill up the CPU. Through the analysis of the task manager of the Chrome browser, the GPU operation is too frequent at this time. When closing the new window that Grafana pops up, or closing the Grafana tab, or letting the browser work in the background, the CPU utilization will drop to a normal level. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - Grafana version: 8.1.3 - Data source type & version: mysql8 - OS Grafana is installed on: MacOs 10.14.6 - User OS & Browser: Chrome 95.0.4638.69(正式版本) (x86_64) - Grafana plugins: - vertamedia-clickhouse-plugin_linux_amd64 - grafadruid-druid-datasource_linux_amd64 - Others:
True
Grafana 8.1.3 Web UI GPU usage is too high - <!-- Please use this template to create your bug report. By providing as much info as possible you help us understand the issue, reproduce it and resolve it for you quicker. Therefore take a couple of extra minutes to make sure you have provided all info needed. PROTIP: record your screen and attach it as a gif to showcase the issue. - Questions should be posted to: https://community.grafana.com - Use query inspector to troubleshoot issues: https://bit.ly/2XNF6YS - How to record and attach gif: https://bit.ly/2Mi8T6K --> **What happened**: ![image](https://user-images.githubusercontent.com/3882312/141237217-ba8a4af5-58b8-4772-9391-068efc6d1b01.png) **What you expected to happen**: When the new window that pops up in the webUI is displayed in the foreground, it sometimes causes the Grafana tab to fill up the CPU. Through the analysis of the task manager of the Chrome browser, the GPU operation is too frequent at this time. When closing the new window that Grafana pops up, or closing the Grafana tab, or letting the browser work in the background, the CPU utilization will drop to a normal level. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - Grafana version: 8.1.3 - Data source type & version: mysql8 - OS Grafana is installed on: MacOs 10.14.6 - User OS & Browser: Chrome 95.0.4638.69(正式版本) (x86_64) - Grafana plugins: - vertamedia-clickhouse-plugin_linux_amd64 - grafadruid-druid-datasource_linux_amd64 - Others:
non_process
grafana web ui gpu usage is too high please use this template to create your bug report by providing as much info as possible you help us understand the issue reproduce it and resolve it for you quicker therefore take a couple of extra minutes to make sure you have provided all info needed protip record your screen and attach it as a gif to showcase the issue questions should be posted to use query inspector to troubleshoot issues how to record and attach gif what happened what you expected to happen when the new window that pops up in the webui is displayed in the foreground it sometimes causes the grafana tab to fill up the cpu through the analysis of the task manager of the chrome browser the gpu operation is too frequent at this time when closing the new window that grafana pops up or closing the grafana tab or letting the browser work in the background the cpu utilization will drop to a normal level how to reproduce it as minimally and precisely as possible anything else we need to know environment grafana version data source type version os grafana is installed on macos user os browser chrome (正式版本) grafana plugins vertamedia clickhouse plugin linux grafadruid druid datasource linux others
0
1,881
4,712,135,815
IssuesEvent
2016-10-14 15:49:33
CERNDocumentServer/cds
https://api.github.com/repos/CERNDocumentServer/cds
closed
Process: Initialise webhooks on cds
avc_processing
Make sure that the webhooks the commits - <https://github.com/CERNDocumentServer/cds/commit/1375c895e3c92c0fec428333e2e4a53c8b968cc8> - <https://github.com/CERNDocumentServer/cds/commit/3fa7d068acc6e98b3742bd29fa132a1b0ed9e4e6> - <https://github.com/CERNDocumentServer/cds/commit/9b95232438d5f8d0d2a459b4cacfa880ab71819c> are well integrated on CDS and the endpoints are working as designed
1.0
Process: Initialise webhooks on cds - Make sure that the webhooks the commits - <https://github.com/CERNDocumentServer/cds/commit/1375c895e3c92c0fec428333e2e4a53c8b968cc8> - <https://github.com/CERNDocumentServer/cds/commit/3fa7d068acc6e98b3742bd29fa132a1b0ed9e4e6> - <https://github.com/CERNDocumentServer/cds/commit/9b95232438d5f8d0d2a459b4cacfa880ab71819c> are well integrated on CDS and the endpoints are working as designed
process
process initialise webhooks on cds make sure that the webhooks the commits are well integrated on cds and the endpoints are working as designed
1
16,215
20,742,598,443
IssuesEvent
2022-03-14 19:13:06
kitzeslab/opensoundscape
https://api.github.com/repos/kitzeslab/opensoundscape
opened
ImgToTensor and ImgToTensorGreyscale should be one class
preprocessing 0.7.0
can have a "mode" parameter for "RGB" or "L"
1.0
ImgToTensor and ImgToTensorGreyscale should be one class - can have a "mode" parameter for "RGB" or "L"
process
imgtotensor and imgtotensorgreyscale should be one class can have a mode parameter for rgb or l
1
2,078
4,892,266,247
IssuesEvent
2016-11-18 19:11:36
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Stack Trace on Depreciation Warnings
process question
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v7.0.0 * **Platform**: Darwin bret-mbr.local 16.1.0 Darwin Kernel Version 16.1.0: Thu Oct 13 21:26:57 PDT 2016; root:xnu-3789.21.3~60/RELEASE_X86_64 x86_64 * **Subsystem**: <!-- Enter your issue details below this comment. --> It would be handy if warnings provided a stack trace when thrown, so that I can track down the source of the warning and possibly PR upstream fixes. Right now, a lot of modules are throwing the missing `new Buffer` warning, but don't provide any hints at where to look to make the fix. @mikeal mentioned that this should be pretty easy to do possibly: https://twitter.com/mikeal/status/796072579563851776
1.0
Stack Trace on Depreciation Warnings - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v7.0.0 * **Platform**: Darwin bret-mbr.local 16.1.0 Darwin Kernel Version 16.1.0: Thu Oct 13 21:26:57 PDT 2016; root:xnu-3789.21.3~60/RELEASE_X86_64 x86_64 * **Subsystem**: <!-- Enter your issue details below this comment. --> It would be handy if warnings provided a stack trace when thrown, so that I can track down the source of the warning and possibly PR upstream fixes. Right now, a lot of modules are throwing the missing `new Buffer` warning, but don't provide any hints at where to look to make the fix. @mikeal mentioned that this should be pretty easy to do possibly: https://twitter.com/mikeal/status/796072579563851776
process
stack trace on depreciation warnings thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform darwin bret mbr local darwin kernel version thu oct pdt root xnu release subsystem it would be handy if warnings provided a stack trace when thrown so that i can track down the source of the warning and possibly pr upstream fixes right now a lot of modules are throwing the missing new buffer warning but don t provide any hints at where to look to make the fix mikeal mentioned that this should be pretty easy to do possibly
1
20,035
26,520,107,612
IssuesEvent
2023-01-19 01:17:59
CSE201-project/PaperFriend-desktop-app
https://api.github.com/repos/CSE201-project/PaperFriend-desktop-app
closed
Code the add friend functionality
file processing frontend
Same as for activities. Once it's done for activities it should be easy to do it with friends. You can use function templates or overloading ;-)
1.0
Code the add friend functionality - Same as for activities. Once it's done for activities it should be easy to do it with friends. You can use function templates or overloading ;-)
process
code the add friend functionality same as for activities once it s done for activities it should be easy to do it with friends you can use function templates or overloading
1
19,454
25,737,029,083
IssuesEvent
2022-12-08 01:50:54
GoogleCloudPlatform/emblem
https://api.github.com/repos/GoogleCloudPlatform/emblem
closed
Design: Determine trace span creation, collection, and propagation tooling
type: process priority: p2 component: website component: content-api persona: developer component: delivery
Identify the appropriate tracing toolchain for this application. We need to balance library ecosystem maturity with longevity. OpenTelemetry is recommended but not necessarily ready, OpenCensus is mature but is expected to be replaced by OpenTelemetry. Our use cases include: * Propagating trace between all systems that are part of the flow in processing a request * Easy creation of custom spans around long-running or potentially complex operations * Injection of trace details into application logs (something structured logging may provide separately) * Ensuring telemetry data reaches Cloud Tracing before container termination Part of #43
1.0
Design: Determine trace span creation, collection, and propagation tooling - Identify the appropriate tracing toolchain for this application. We need to balance library ecosystem maturity with longevity. OpenTelemetry is recommended but not necessarily ready, OpenCensus is mature but is expected to be replaced by OpenTelemetry. Our use cases include: * Propagating trace between all systems that are part of the flow in processing a request * Easy creation of custom spans around long-running or potentially complex operations * Injection of trace details into application logs (something structured logging may provide separately) * Ensuring telemetry data reaches Cloud Tracing before container termination Part of #43
process
design determine trace span creation collection and propagation tooling identify the appropriate tracing toolchain for this application we need to balance library ecosystem maturity with longevity opentelemetry is recommended but not necessarily ready opencensus is mature but is expected to be replaced by opentelemetry our use cases include propagating trace between all systems that are part of the flow in processing a request easy creation of custom spans around long running or potentially complex operations injection of trace details into application logs something structured logging may provide separately ensuring telemetry data reaches cloud tracing before container termination part of
1
20,010
26,483,791,225
IssuesEvent
2023-01-17 16:27:06
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
qgis_process: bypass the need to manually create QGIS project for algorithms with expressions
Processing Feature Request
### Feature description Certain algorithms center on the use of QGIS expressions to generate new values or new geometries, e.g. `native:fieldcalculator` and `native:geometrybyexpression`. The functions in these expressions refer to layers and layer fields that must be present in the QGIS project, e.g. in `get_feature('test_line', 'fid', '1')` or in `$geometry` (referring the selected input layer). A similar case is the use of expressions in [data-defined overrides](https://docs.qgis.org/testing/en/docs/user_manual/introduction/general_tools.html#the-data-defined-override-widget) for parameters of a processing algorithm (e.g. for `DISTANCE` in `native:buffer`). In reproducible scripting outside of QGIS and Python, `qgis_process` is called in specific steps (e.g. in bash, or in R using package [qgisprocess](https://github.com/paleolimbot/qgisprocess)). When calling `qgis_process` in this context to process geospatial layers that just live in a **file** (e.g. GeoPackage), the above algorithms will only work when a QGIS project is first created interactively that contains these layers. This is not ideal from the scripting perspective (see _Additional context_). Would one of the following **solutions** be sensible and feasible to add, in order to bypass the manual creation of a QGIS project for the algorithms that use expressions: - providing an algorithm (that can be run with `qgis_process`) that creates a simple QGIS project. As input it would need filepath+layer references, a CRS and a project file name. Perhaps restricted (if needed) to the case of layers with the same explicit CRS (or none, assuming the same). - extending the QGIS expressions framework so that layers and fields can be referred from filepath+layer. ### Additional context From the perspective of reproducible scripting outside of QGIS and Python, the need for a QGIS project in above algorithms poses a problem compared to the many algorithms not needing a QGIS project. 
AFAIK such project creation cannot be scripted using a `qgis_process` algorithm. Hence (irreproducible) manual intervention is needed – creating a QGIS project. AFAIK the only current solution is to store a manually crafted QGIS project file next to the script, but it feels like overkill for scripted geospatial processing. Also, the processing step that uses QGIS expressions will often be used on intermediate results from earlier scripting, which will in general be different when the same script is used with new or updated input data. The latter implies that care must be taken that the QGIS project is referring the updated intermediate results. With relation to data-defined overrides, it's currently unclear how to pass them to `qgis_process`; see #50482 for that. This idea originated in https://github.com/paleolimbot/qgisprocess/issues/104.
1.0
qgis_process: bypass the need to manually create QGIS project for algorithms with expressions - ### Feature description Certain algorithms center on the use of QGIS expressions to generate new values or new geometries, e.g. `native:fieldcalculator` and `native:geometrybyexpression`. The functions in these expressions refer to layers and layer fields that must be present in the QGIS project, e.g. in `get_feature('test_line', 'fid', '1')` or in `$geometry` (referring the selected input layer). A similar case is the use of expressions in [data-defined overrides](https://docs.qgis.org/testing/en/docs/user_manual/introduction/general_tools.html#the-data-defined-override-widget) for parameters of a processing algorithm (e.g. for `DISTANCE` in `native:buffer`). In reproducible scripting outside of QGIS and Python, `qgis_process` is called in specific steps (e.g. in bash, or in R using package [qgisprocess](https://github.com/paleolimbot/qgisprocess)). When calling `qgis_process` in this context to process geospatial layers that just live in a **file** (e.g. GeoPackage), the above algorithms will only work when a QGIS project is first created interactively that contains these layers. This is not ideal from the scripting perspective (see _Additional context_). Would one of the following **solutions** be sensible and feasible to add, in order to bypass the manual creation of a QGIS project for the algorithms that use expressions: - providing an algorithm (that can be run with `qgis_process`) that creates a simple QGIS project. As input it would need filepath+layer references, a CRS and a project file name. Perhaps restricted (if needed) to the case of layers with the same explicit CRS (or none, assuming the same). - extending the QGIS expressions framework so that layers and fields can be referred from filepath+layer. 
### Additional context From the perspective of reproducible scripting outside of QGIS and Python, the need for a QGIS project in above algorithms poses a problem compared to the many algorithms not needing a QGIS project. AFAIK such project creation cannot be scripted using a `qgis_process` algorithm. Hence (irreproducible) manual intervention is needed – creating a QGIS project. AFAIK the only current solution is to store a manually crafted QGIS project file next to the script, but it feels like overkill for scripted geospatial processing. Also, the processing step that uses QGIS expressions will often be used on intermediate results from earlier scripting, which will in general be different when the same script is used with new or updated input data. The latter implies that care must be taken that the QGIS project is referring the updated intermediate results. With relation to data-defined overrides, it's currently unclear how to pass them to `qgis_process`; see #50482 for that. This idea originated in https://github.com/paleolimbot/qgisprocess/issues/104.
process
qgis process bypass the need to manually create qgis project for algorithms with expressions feature description certain algorithms center on the use of qgis expressions to generate new values or new geometries e g native fieldcalculator and native geometrybyexpression the functions in these expressions refer to layers and layer fields that must be present in the qgis project e g in get feature test line fid or in geometry referring the selected input layer a similar case is the use of expressions in for parameters of a processing algorithm e g for distance in native buffer in reproducible scripting outside of qgis and python qgis process is called in specific steps e g in bash or in r using package when calling qgis process in this context to process geospatial layers that just live in a file e g geopackage the above algorithms will only work when a qgis project is first created interactively that contains these layers this is not ideal from the scripting perspective see additional context would one of the following solutions be sensible and feasible to add in order to bypass the manual creation of a qgis project for the algorithms that use expressions providing an algorithm that can be run with qgis process that creates a simple qgis project as input it would need filepath layer references a crs and a project file name perhaps restricted if needed to the case of layers with the same explicit crs or none assuming the same extending the qgis expressions framework so that layers and fields can be referred from filepath layer additional context from the perspective of reproducible scripting outside of qgis and python the need for a qgis project in above algorithms poses a problem compared to the many algorithms not needing a qgis project afaik such project creation cannot be scripted using a qgis process algorithm hence irreproducible manual intervention is needed – creating a qgis project afaik the only current solution is to store a manually crafted qgis project 
file next to the script but it feels like overkill for scripted geospatial processing also the processing step that uses qgis expressions will often be used on intermediate results from earlier scripting which will in general be different when the same script is used with new or updated input data the latter implies that care must be taken that the qgis project is referring the updated intermediate results with relation to data defined overrides it s currently unclear how to pass them to qgis process see for that this idea originated in
1
94,979
27,348,342,422
IssuesEvent
2023-02-27 07:34:13
expo/expo
https://api.github.com/repos/expo/expo
closed
[EAS][Android][SDK 46] Android .apk crashes after launch
needs validation incomplete issue: missing or invalid repro Development Builds
### Summary Hi everyone. Runs fine on emulator, but .apk builds made with `eas build` keep crashing, looks like there's no js linked to them. Same result after running `./gradlew assembleDebug` locally. Need help. No issues with iOS. Thanks! ### Managed or bare workflow? bare ### What platform(s) does this occur on? Android ### Package versions _No response_ ### Environment ``` expo-env-info 1.0.5 environment info: System: OS: macOS 13.2.1 Shell: 5.8.1 - /bin/zsh Binaries: Node: 18.9.1 - /usr/local/bin/node Yarn: 1.22.17 - /usr/local/bin/yarn npm: 8.19.1 - /usr/local/bin/npm Watchman: 2022.09.19.00 - /usr/local/bin/watchman Managers: CocoaPods: 1.11.3 - /Users/usr/.rbenv/shims/pod SDKs: iOS SDK: Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1 Android SDK: API Levels: 23, 24, 25, 26, 27, 28, 29, 30, 31, 33 Build Tools: 28.0.3, 29.0.2, 29.0.3, 30.0.2, 30.0.3, 31.0.0, 32.1.0, 33.0.0 System Images: android-30 | Google APIs Intel x86 Atom, android-30 | Google Play Intel x86 Atom, android-32 | Google APIs Intel x86 Atom_64 IDEs: Android Studio: 2022.1 AI-221.6008.13.2211.9514443 Xcode: 14.2/14C18 - /usr/bin/xcodebuild npmGlobalPackages: expo-cli: 6.1.0 Expo Workflow: managed ``` ### Reproducible demo eas.json config for apk (actually, tried multiple different configurations): ``` "development": { "channel": "production", "android": { "autoIncrement": "versionCode", "buildType": "apk" }, "ios": { "resourceClass": "m1-medium" } } ``` ### Stacktrace (if a crash is involved) What I'm getting via logcat when debugging .apk: `java.lang.RuntimeException: Unable to load script. Make sure you're either running Metro (run 'npx react-native start') or that your bundle 'index.android.bundle' is packaged correctly for release. `
1.0
[EAS][Android][SDK 46] Android .apk crashes after launch - ### Summary Hi everyone. Runs fine on emulator, but .apk builds made with `eas build` keep crashing, looks like there's no js linked to them. Same result after running `./gradlew assembleDebug` locally. Need help. No issues with iOS. Thanks! ### Managed or bare workflow? bare ### What platform(s) does this occur on? Android ### Package versions _No response_ ### Environment ``` expo-env-info 1.0.5 environment info: System: OS: macOS 13.2.1 Shell: 5.8.1 - /bin/zsh Binaries: Node: 18.9.1 - /usr/local/bin/node Yarn: 1.22.17 - /usr/local/bin/yarn npm: 8.19.1 - /usr/local/bin/npm Watchman: 2022.09.19.00 - /usr/local/bin/watchman Managers: CocoaPods: 1.11.3 - /Users/usr/.rbenv/shims/pod SDKs: iOS SDK: Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1 Android SDK: API Levels: 23, 24, 25, 26, 27, 28, 29, 30, 31, 33 Build Tools: 28.0.3, 29.0.2, 29.0.3, 30.0.2, 30.0.3, 31.0.0, 32.1.0, 33.0.0 System Images: android-30 | Google APIs Intel x86 Atom, android-30 | Google Play Intel x86 Atom, android-32 | Google APIs Intel x86 Atom_64 IDEs: Android Studio: 2022.1 AI-221.6008.13.2211.9514443 Xcode: 14.2/14C18 - /usr/bin/xcodebuild npmGlobalPackages: expo-cli: 6.1.0 Expo Workflow: managed ``` ### Reproducible demo eas.json config for apk (actually, tried multiple different configurations): ``` "development": { "channel": "production", "android": { "autoIncrement": "versionCode", "buildType": "apk" }, "ios": { "resourceClass": "m1-medium" } } ``` ### Stacktrace (if a crash is involved) What I'm getting via logcat when debugging .apk: `java.lang.RuntimeException: Unable to load script. Make sure you're either running Metro (run 'npx react-native start') or that your bundle 'index.android.bundle' is packaged correctly for release. `
non_process
android apk crashes after launch summary hi everyone runs fine on emulator but apk builds made with eas build keep crashing looks like there s no js linked to them same result after running gradlew assembledebug locally need help no issues with ios thanks managed or bare workflow bare what platform s does this occur on android package versions no response environment expo env info environment info system os macos shell bin zsh binaries node usr local bin node yarn usr local bin yarn npm usr local bin npm watchman usr local bin watchman managers cocoapods users usr rbenv shims pod sdks ios sdk platforms driverkit ios macos tvos watchos android sdk api levels build tools system images android google apis intel atom android google play intel atom android google apis intel atom ides android studio ai xcode usr bin xcodebuild npmglobalpackages expo cli expo workflow managed reproducible demo eas json config for apk actually tried multiple different configurations development channel production android autoincrement versioncode buildtype apk ios resourceclass medium stacktrace if a crash is involved what i m getting via logcat when debugging apk java lang runtimeexception unable to load script make sure you re either running metro run npx react native start or that your bundle index android bundle is packaged correctly for release
0
208,663
16,133,611,264
IssuesEvent
2021-04-29 08:55:41
mikecao/umami
https://api.github.com/repos/mikecao/umami
closed
API Docs
documentation
Would you consider having API Docs? I just started using Umami and love it already, though for me I'd need to send custom requests if someone visited `example.com/twitter`, which is currently handled by a express `res.redirect()` and a rendered page. If those'd exist, I'd consider writing a express middleware so that it could be used similar to: ```js let umami = require('express-umami') let stats = new umami({ host: "url", website_id: "id" }) app.use(stats) ``` IMO this'd make it easier to implement it into more websites and quite possibly even a lot easier for someone to collect usage stats for their API
1.0
API Docs - Would you consider having API Docs? I just started using Umami and love it already, though for me I'd need to send custom requests if someone visited `example.com/twitter`, which is currently handled by a express `res.redirect()` and a rendered page. If those'd exist, I'd consider writing a express middleware so that it could be used similar to: ```js let umami = require('express-umami') let stats = new umami({ host: "url", website_id: "id" }) app.use(stats) ``` IMO this'd make it easier to implement it into more websites and quite possibly even a lot easier for someone to collect usage stats for their API
non_process
api docs would you consider having api docs i just started using umami and love it already though for me i d need to send custom requests if someone visited example com twitter which is currently handled by a express res redirect and a rendered page if those d exist i d consider writing a express middleware so that it could be used similar to js let umami require express umami let stats new umami host url website id id app use stats imo this d make it easier to implement it into more websites and quite possibly even a lot easier for someone to collect usage stats for their api
0
6,113
8,971,968,753
IssuesEvent
2019-01-29 17:05:07
prusa3d/Slic3r
https://api.github.com/repos/prusa3d/Slic3r
closed
"Send to printer" does not wait for post-processing scripts to exit before sending file to printer.
OctoPrint post processing scripts
### Version Slic3r-1.39.1-beta-prusa3d-linux64-full-201802131106 ### Operating system type + version lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.5 LTS Release: 14.04 Codename: trusty uname -a Linux betlog 3.13.0-142-generic #191-Ubuntu SMP Fri Feb 2 12:13:35 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux ### Behavior * _Describe the problem_ "Send to printer" does not wait for post-processing scripts to exit before sending file to printer. * _Steps needed to reproduce the problem_ Add a script requiring user input to the post-processing scripts field. Send a file to printer. * _If this is a command-line slicing issue, include the options used_ * _Is this a new feature request?_ Maybe. I am not sure if this is expected behaviour or an oversight. * _Expected Results_ Post-processing scripts should wait for errorcode/exit status 0 before sending the gcode to printer. Optionally each post-processing script in the postporcessing-script field could be followed by "&&" or similar to indicate that a clean exit is required before continuing, and if not then an error message to appear, and sending to printer is aborted. * _Actual Results_ Press "Send to printer". Script terminal appears, user enters input, script completes and exits. Script effect is not present in the gcode on printer. #### STL/Config (.ZIP) where problem occurs Example M25 insertion script that requires user input. [M25-at-layer-postprocessor.zip](https://github.com/prusa3d/Slic3r/files/1765037/M25-at-layer-postprocessor.zip)
index: 1.0
text_combine:
"Send to printer" does not wait for post-processing scripts to exit before sending file to printer. - ### Version Slic3r-1.39.1-beta-prusa3d-linux64-full-201802131106 ### Operating system type + version lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.5 LTS Release: 14.04 Codename: trusty uname -a Linux betlog 3.13.0-142-generic #191-Ubuntu SMP Fri Feb 2 12:13:35 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux ### Behavior * _Describe the problem_ "Send to printer" does not wait for post-processing scripts to exit before sending file to printer. * _Steps needed to reproduce the problem_ Add a script requiring user input to the post-processing scripts field. Send a file to printer. * _If this is a command-line slicing issue, include the options used_ * _Is this a new feature request?_ Maybe. I am not sure if this is expected behaviour or an oversight. * _Expected Results_ Post-processing scripts should wait for errorcode/exit status 0 before sending the gcode to printer. Optionally each post-processing script in the postporcessing-script field could be followed by "&&" or similar to indicate that a clean exit is required before continuing, and if not then an error message to appear, and sending to printer is aborted. * _Actual Results_ Press "Send to printer". Script terminal appears, user enters input, script completes and exits. Script effect is not present in the gcode on printer. #### STL/Config (.ZIP) where problem occurs Example M25 insertion script that requires user input. [M25-at-layer-postprocessor.zip](https://github.com/prusa3d/Slic3r/files/1765037/M25-at-layer-postprocessor.zip)
label: process
text:
send to printer does not wait for post processing scripts to exit before sending file to printer version beta full operating system type version lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename trusty uname a linux betlog generic ubuntu smp fri feb utc gnu linux behavior describe the problem send to printer does not wait for post processing scripts to exit before sending file to printer steps needed to reproduce the problem add a script requiring user input to the post processing scripts field send a file to printer if this is a command line slicing issue include the options used is this a new feature request maybe i am not sure if this is expected behaviour or an oversight expected results post processing scripts should wait for errorcode exit status before sending the gcode to printer optionally each post processing script in the postporcessing script field could be followed by or similar to indicate that a clean exit is required before continuing and if not then an error message to appear and sending to printer is aborted actual results press send to printer script terminal appears user enters input script completes and exits script effect is not present in the gcode on printer stl config zip where problem occurs example insertion script that requires user input
binary_label: 1

Unnamed: 0: 16,931
id: 22,274,747,077
type: IssuesEvent
created_at: 2022-06-10 15:30:17
repo: googleapis/google-cloud-python
repo_url: https://api.github.com/repos/googleapis/google-cloud-python
action: closed
title: Client libraries need out-of-band tests against prereleases of third party dependencies
labels: type: process
body:
/cc @busunkim96, @tswast, @crwilcox I had originally worked to add `--pre` to the `testing/constraints-3.9.txt` files of the "manual" libraries: - https://github.com/googleapis/python-api-core/pull/247 - https://github.com/googleapis/python-cloud-core/pull/129 - https://github.com/googleapis/google-resumable-media-python/pull/250 - https://github.com/googleapis/python-firestore/pull/415 - https://github.com/googleapis/python-bigtable/pull/402 - https://github.com/googleapis/python-datastore/pull/207 - https://github.com/googleapis/python-storage/pull/534 - https://github.com/googleapis/python-spanner/pull/479 However, in today's meeting the consensus was that such testing needs to happend out-of-band from the normal PR presubmit testing. I have therefore reverted the already-merged PRs which had only that change: - https://github.com/googleapis/python-firestore/pull/426 - https://github.com/googleapis/python-bigtable/pull/406 - https://github.com/googleapis/python-datastore/pull/213 For the already-merged PRs which contained more changes, I have created PRs which back out only the constraint change: - https://github.com/googleapis/python-cloud-core/pull/132 - https://github.com/googleapis/python-spanner/pull/527 For the not-yet-merged PRs, I have backed out the constraint change, and re-titled them to signal the remaining changes: - https://github.com/googleapis/python-api-core/pull/247 - https://github.com/googleapis/google-resumable-media-python/pull/250 - https://github.com/googleapis/python-storage/pull/534 Going forward, we need to work out how to do this testing out-of-band. - https://github.com/googleapis/python-bigquery/pull/449 added a `prerelease_deps` session to `noxfile.py` (not run by default, because `noxfile.py` lists [explicit default sessions](https://github.com/tswast/python-bigquery/blob/97eb986001b2fbe13b3ffcbf1a8241e1302f2948/noxfile.py#L33-L45) That PR sets up a separate `presubmit` job to run that specific `nox` session. 
- Should we instead run such tests in nightly (`continuous`) builds?
index: 1.0
text_combine:
Client libraries need out-of-band tests against prereleases of third party dependencies - /cc @busunkim96, @tswast, @crwilcox I had originally worked to add `--pre` to the `testing/constraints-3.9.txt` files of the "manual" libraries: - https://github.com/googleapis/python-api-core/pull/247 - https://github.com/googleapis/python-cloud-core/pull/129 - https://github.com/googleapis/google-resumable-media-python/pull/250 - https://github.com/googleapis/python-firestore/pull/415 - https://github.com/googleapis/python-bigtable/pull/402 - https://github.com/googleapis/python-datastore/pull/207 - https://github.com/googleapis/python-storage/pull/534 - https://github.com/googleapis/python-spanner/pull/479 However, in today's meeting the consensus was that such testing needs to happend out-of-band from the normal PR presubmit testing. I have therefore reverted the already-merged PRs which had only that change: - https://github.com/googleapis/python-firestore/pull/426 - https://github.com/googleapis/python-bigtable/pull/406 - https://github.com/googleapis/python-datastore/pull/213 For the already-merged PRs which contained more changes, I have created PRs which back out only the constraint change: - https://github.com/googleapis/python-cloud-core/pull/132 - https://github.com/googleapis/python-spanner/pull/527 For the not-yet-merged PRs, I have backed out the constraint change, and re-titled them to signal the remaining changes: - https://github.com/googleapis/python-api-core/pull/247 - https://github.com/googleapis/google-resumable-media-python/pull/250 - https://github.com/googleapis/python-storage/pull/534 Going forward, we need to work out how to do this testing out-of-band. 
- https://github.com/googleapis/python-bigquery/pull/449 added a `prerelease_deps` session to `noxfile.py` (not run by default, because `noxfile.py` lists [explicit default sessions](https://github.com/tswast/python-bigquery/blob/97eb986001b2fbe13b3ffcbf1a8241e1302f2948/noxfile.py#L33-L45) That PR sets up a separate `presubmit` job to run that specific `nox` session. - Should we instead run such tests in nightly (`continuous`) builds?
label: process
text:
client libraries need out of band tests against prereleases of third party dependencies cc tswast crwilcox i had originally worked to add pre to the testing constraints txt files of the manual libraries however in today s meeting the consensus was that such testing needs to happend out of band from the normal pr presubmit testing i have therefore reverted the already merged prs which had only that change for the already merged prs which contained more changes i have created prs which back out only the constraint change for the not yet merged prs i have backed out the constraint change and re titled them to signal the remaining changes going forward we need to work out how to do this testing out of band added a prerelease deps session to noxfile py not run by default because noxfile py lists that pr sets up a separate presubmit job to run that specific nox session should we instead run such tests in nightly continuous builds
binary_label: 1

Unnamed: 0: 17,399
id: 23,217,991,394
type: IssuesEvent
created_at: 2022-08-02 15:32:21
repo: MPMG-DCC-UFMG/C01
repo_url: https://api.github.com/repos/MPMG-DCC-UFMG/C01
action: opened
title: Renderização Cross-Browser
labels: [2] Alta Prioridade [0] Desenvolvimento [1] Investigação [3] Processamento Dinâmico
body:
## Comportamento Esperado É necessário investigar os motivos pelos quais, apesar da adição de novos navegadores no sistema, as coletas que pareciam ser desbloqueadas por essa funcionalidade não foram possíveis. > Essa issue é uma continuação da issue https://github.com/MPMG-DCC-UFMG/C01/issues/3528. ## Comportamento Atual Hoje temos duas issues (https://github.com/MPMG-DCC-UFMG/C01/issues/635 e https://github.com/MPMG-DCC-UFMG/C01/issues/3528) cujos impedimentos se devem às diferenças entre a renderização manual e a do sistema, comprovada pela comparação entre screenshots manuais e os do processamento dinâmico. Ao investigar essas issues, foram criados scripts Playwright com navegadores diferentes e a renderização usando o Firefox foi bem sucedida, indicando que as coletas poderiam ser desbloqueadas com a resolução da issue #3055, que adiciona opções de navegadores. Essas investigações estão registradas [aqui](https://github.com/MPMG-DCC-UFMG/C01/issues/806#issuecomment-1117945072) e [aqui](https://github.com/MPMG-DCC-UFMG/C01/issues/3528#issuecomment-1151640812). O problema, porém, não foi resolvido. Aparentemente, a feature era necessária, mas não suficiente para desbloquear essas coletas. ## Passos para reproduzir o erro Seguir as instruções das issues #635 e #3528 ## Especificações da Coleta Coletores informados nas issues #635 e #3528 ## Sistema (caso necessário) - MP ou local: ambos - Branch específica: 3055 --> dev - Sistema diferente: não ## Screenshots (caso necessário) N.A.
index: 1.0
text_combine:
Renderização Cross-Browser - ## Comportamento Esperado É necessário investigar os motivos pelos quais, apesar da adição de novos navegadores no sistema, as coletas que pareciam ser desbloqueadas por essa funcionalidade não foram possíveis. > Essa issue é uma continuação da issue https://github.com/MPMG-DCC-UFMG/C01/issues/3528. ## Comportamento Atual Hoje temos duas issues (https://github.com/MPMG-DCC-UFMG/C01/issues/635 e https://github.com/MPMG-DCC-UFMG/C01/issues/3528) cujos impedimentos se devem às diferenças entre a renderização manual e a do sistema, comprovada pela comparação entre screenshots manuais e os do processamento dinâmico. Ao investigar essas issues, foram criados scripts Playwright com navegadores diferentes e a renderização usando o Firefox foi bem sucedida, indicando que as coletas poderiam ser desbloqueadas com a resolução da issue #3055, que adiciona opções de navegadores. Essas investigações estão registradas [aqui](https://github.com/MPMG-DCC-UFMG/C01/issues/806#issuecomment-1117945072) e [aqui](https://github.com/MPMG-DCC-UFMG/C01/issues/3528#issuecomment-1151640812). O problema, porém, não foi resolvido. Aparentemente, a feature era necessária, mas não suficiente para desbloquear essas coletas. ## Passos para reproduzir o erro Seguir as instruções das issues #635 e #3528 ## Especificações da Coleta Coletores informados nas issues #635 e #3528 ## Sistema (caso necessário) - MP ou local: ambos - Branch específica: 3055 --> dev - Sistema diferente: não ## Screenshots (caso necessário) N.A.
label: process
text:
renderização cross browser comportamento esperado é necessário investigar os motivos pelos quais apesar da adição de novos navegadores no sistema as coletas que pareciam ser desbloqueadas por essa funcionalidade não foram possíveis essa issue é uma continuação da issue comportamento atual hoje temos duas issues e cujos impedimentos se devem às diferenças entre a renderização manual e a do sistema comprovada pela comparação entre screenshots manuais e os do processamento dinâmico ao investigar essas issues foram criados scripts playwright com navegadores diferentes e a renderização usando o firefox foi bem sucedida indicando que as coletas poderiam ser desbloqueadas com a resolução da issue que adiciona opções de navegadores essas investigações estão registradas e o problema porém não foi resolvido aparentemente a feature era necessária mas não suficiente para desbloquear essas coletas passos para reproduzir o erro seguir as instruções das issues e especificações da coleta coletores informados nas issues e sistema caso necessário mp ou local ambos branch específica dev sistema diferente não screenshots caso necessário n a
binary_label: 1

Unnamed: 0: 10,087
id: 13,044,161,994
type: IssuesEvent
created_at: 2020-07-29 03:47:28
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `SubStringAndDuration` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description Port the scalar function `SubStringAndDuration` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `SubStringAndDuration` from TiDB - ## Description Port the scalar function `SubStringAndDuration` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function substringandduration from tidb description port the scalar function substringandduration from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1

Unnamed: 0: 6,850
id: 9,992,042,523
type: IssuesEvent
created_at: 2019-07-11 12:37:27
repo: yodaos-project/ShadowNode
repo_url: https://api.github.com/repos/yodaos-project/ShadowNode
action: reopened
title: process: process.exit should immediately exit
labels: bug process
body:
<!-- Thank you for reporting a possible bug in ShadowNode. Please fill in as much of the template below as you can. Version: output of `iotjs -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 0.10.x * **Platform**: all * **Subsystem**: process <!-- Please provide more details below this comment. --> As Node.js Documentation shows the correct behavior: > Calling `process.exit()` will force the process to exit as quickly as possible even if there are still asynchronous operations pending that have not yet completed fully, including I/O operations to process.stdout and process.stderr. Referenced at #372.
index: 1.0
text_combine:
process: process.exit should immediately exit - <!-- Thank you for reporting a possible bug in ShadowNode. Please fill in as much of the template below as you can. Version: output of `iotjs -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 0.10.x * **Platform**: all * **Subsystem**: process <!-- Please provide more details below this comment. --> As Node.js Documentation shows the correct behavior: > Calling `process.exit()` will force the process to exit as quickly as possible even if there are still asynchronous operations pending that have not yet completed fully, including I/O operations to process.stdout and process.stderr. Referenced at #372.
label: process
text:
process process exit should immediately exit thank you for reporting a possible bug in shadownode please fill in as much of the template below as you can version output of iotjs v platform output of uname a unix or version and or bit windows subsystem if known please specify the affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you can version x platform all subsystem process as node js documentation shows the correct behavior calling process exit will force the process to exit as quickly as possible even if there are still asynchronous operations pending that have not yet completed fully including i o operations to process stdout and process stderr referenced at
binary_label: 1

Unnamed: 0: 6,170
id: 9,082,083,561
type: IssuesEvent
created_at: 2019-02-17 08:54:32
repo: linnovate/root
repo_url: https://api.github.com/repos/linnovate/root
action: opened
title: in search, in tasks and project no sub tasks/projects are shown
labels: 2.0.6 Process bug
body:
search for a task/project with sub tasks/projects no sub tasks are shown(like in #1484 )
index: 1.0
text_combine:
in search, in tasks and project no sub tasks/projects are shown - search for a task/project with sub tasks/projects no sub tasks are shown(like in #1484 )
label: process
text:
in search in tasks and project no sub tasks projects are shown search for a task project with sub tasks projects no sub tasks are shown like in
binary_label: 1

Unnamed: 0: 4,738
id: 7,595,641,917
type: IssuesEvent
created_at: 2018-04-27 06:37:57
repo: SafeNetConsulting-Milwaukee/feedbot
repo_url: https://api.github.com/repos/SafeNetConsulting-Milwaukee/feedbot
action: closed
title: Rename repository & associated artifacts to indicate new "Feedbot" app name
labels: in progress process task
body:
After initial votes in #7, we're going w/ **Feedbot** as the Slack App name. It's fixed in Slack, but needs to be changed on this repo, as well as in various Azure resources
index: 1.0
text_combine:
Rename repository & associated artifacts to indicate new "Feedbot" app name - After initial votes in #7, we're going w/ **Feedbot** as the Slack App name. It's fixed in Slack, but needs to be changed on this repo, as well as in various Azure resources
label: process
text:
rename repository associated artifacts to indicate new feedbot app name after initial votes in we re going w feedbot as the slack app name it s fixed in slack but needs to be changed on this repo as well as in various azure resources
binary_label: 1

Unnamed: 0: 20,248
id: 26,866,371,971
type: IssuesEvent
created_at: 2023-02-04 00:34:25
repo: devssa/onde-codar-em-salvador
repo_url: https://api.github.com/repos/devssa/onde-codar-em-salvador
action: closed
title: [ESPECIALISTA] [SCRUM] [KANBAN] [SALVADOR] [REMOTO] Scrum Master na [KINVO]
labels: SALVADOR DESENVOLVIMENTO DE SOFTWARE GESTÃO DE PROJETOS SCRUM REMOTO KANBAN HELP WANTED ESPECIALISTA GESTAO POR PROCESSOS SCRUM MASTER GESTAO DE PESSOAS Stale
body:
<!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Descrição da vaga - Atribuição principal/missão - Você irá atuar com três papéis fundamentais: Protetor do Scrum, Líder Servidor e Comunicador e Integrador. - Protetor do Scrum - Deve garantir que todos os participantes do processo também conheçam os princípios do scrum, atuando muitas vezes como um professor, e deve assegurar que todos respeitem e sigam o que é definido na metodologia. - Líder Servidor - Você não é um gerente, um chefe, deve agir principalmente como um líder servidor, um facilitador dentro do time. Deve estar sempre à disposição tanto do time de desenvolvimento quanto do Product Owner, apoiando o andamento de todo o processo. - Comunicador e Integração - Deve facilitar toda essa comunicação, facilitar e moderar as discussões e ajudar para que as tomadas de decisões sejam realizadas em conjunto e de forma assertiva, o mais ágil possível. 
### Atividades/responsabilidade - Ter autoridade e autonomia sobre o processo; - Orientar o time e empresa sobre melhores práticas ágeis e de Scrum; - Guiar o time a se auto-organizar; - Remover impedimentos que poderiam potencialmente atrasar ou prejudicar a entrega e produtividade do time; - Construir um ambiente seguro e de confiança para que problemas possam ser revelados sem implicações negativas, retribuição ou julgamento; com ênfase na resolução do problema; - Facilitar o processo com o objetivo de entregar as atividades, sem coagir, atribuir ou ditar a atividade; - Facilitar discussões, tomadas de decisão e resolução de conflitos; - Liderar e trabalhar com múltiplos times ágeis; - Assistir o time com comunicações internas e externas, garantindo transparência e acessibilidade a informações; - Garantir que os papéis do Scrum estejam claros, bem definidos e performando bem; - Organizar a facilitar cerimônias scrum; - Acompanhar e organizar métricas ágeis, assistindo a performance e auto-organização do time. ## Local - Salvador ## Benefícios - Informações diretamente com o responsável/ recrutador da vaga ## Requisitos **Obrigatórios:** - Comunicação clara e eficiente; - Experiência como Scrum Master ou atividades afins; - Domínio sobre princípios fundamentais do Kanban; - Conhecimento profundo sobre princípios e processo Scrum; - Vivência em desenvolvimento de software; - Experiência com processo e gestão de times; **Diferenciais:** - Experiência com times distribuídos; - Certificação de Scrum Master; ## Contratação - a combinar ## Nossa empresa - O Kinvo é uma fintech da área de investimentos criada em 2017 com a missão de empoderar o investidor. Desenvolvemos uma plataforma que permite o investidor cadastrar seus investimentos, independente de qual instituição financeira ele está, consolidar esses dados como uma carteira de investimentos, e a partir daí extrair métricas sobre a saúde dos seus investimentos. 
- No Kinvo (como empresa), um dos maiores investimentos são nas pessoas. Trabalhamos em um ambiente leve, que tem como base a confiança. Somos um time que tem muita sede em atingir os objetivo e que tem em seus sócios o maior exemplo disso. Nossos maiores aprendizados vem dos desafios! Venha crescer e evoluir com a gente! ## Como se candidatar - [Clique aqui para se candidatar](https://kinvo.abler.com.br/vagas/scrum-master-771971)
index: 1.0
text_combine:
[ESPECIALISTA] [SCRUM] [KANBAN] [SALVADOR] [REMOTO] Scrum Master na [KINVO] - <!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Descrição da vaga - Atribuição principal/missão - Você irá atuar com três papéis fundamentais: Protetor do Scrum, Líder Servidor e Comunicador e Integrador. - Protetor do Scrum - Deve garantir que todos os participantes do processo também conheçam os princípios do scrum, atuando muitas vezes como um professor, e deve assegurar que todos respeitem e sigam o que é definido na metodologia. - Líder Servidor - Você não é um gerente, um chefe, deve agir principalmente como um líder servidor, um facilitador dentro do time. Deve estar sempre à disposição tanto do time de desenvolvimento quanto do Product Owner, apoiando o andamento de todo o processo. - Comunicador e Integração - Deve facilitar toda essa comunicação, facilitar e moderar as discussões e ajudar para que as tomadas de decisões sejam realizadas em conjunto e de forma assertiva, o mais ágil possível. 
### Atividades/responsabilidade - Ter autoridade e autonomia sobre o processo; - Orientar o time e empresa sobre melhores práticas ágeis e de Scrum; - Guiar o time a se auto-organizar; - Remover impedimentos que poderiam potencialmente atrasar ou prejudicar a entrega e produtividade do time; - Construir um ambiente seguro e de confiança para que problemas possam ser revelados sem implicações negativas, retribuição ou julgamento; com ênfase na resolução do problema; - Facilitar o processo com o objetivo de entregar as atividades, sem coagir, atribuir ou ditar a atividade; - Facilitar discussões, tomadas de decisão e resolução de conflitos; - Liderar e trabalhar com múltiplos times ágeis; - Assistir o time com comunicações internas e externas, garantindo transparência e acessibilidade a informações; - Garantir que os papéis do Scrum estejam claros, bem definidos e performando bem; - Organizar a facilitar cerimônias scrum; - Acompanhar e organizar métricas ágeis, assistindo a performance e auto-organização do time. ## Local - Salvador ## Benefícios - Informações diretamente com o responsável/ recrutador da vaga ## Requisitos **Obrigatórios:** - Comunicação clara e eficiente; - Experiência como Scrum Master ou atividades afins; - Domínio sobre princípios fundamentais do Kanban; - Conhecimento profundo sobre princípios e processo Scrum; - Vivência em desenvolvimento de software; - Experiência com processo e gestão de times; **Diferenciais:** - Experiência com times distribuídos; - Certificação de Scrum Master; ## Contratação - a combinar ## Nossa empresa - O Kinvo é uma fintech da área de investimentos criada em 2017 com a missão de empoderar o investidor. Desenvolvemos uma plataforma que permite o investidor cadastrar seus investimentos, independente de qual instituição financeira ele está, consolidar esses dados como uma carteira de investimentos, e a partir daí extrair métricas sobre a saúde dos seus investimentos. 
- No Kinvo (como empresa), um dos maiores investimentos são nas pessoas. Trabalhamos em um ambiente leve, que tem como base a confiança. Somos um time que tem muita sede em atingir os objetivo e que tem em seus sócios o maior exemplo disso. Nossos maiores aprendizados vem dos desafios! Venha crescer e evoluir com a gente! ## Como se candidatar - [Clique aqui para se candidatar](https://kinvo.abler.com.br/vagas/scrum-master-771971)
label: process
text:
scrum master na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga atribuição principal missão você irá atuar com três papéis fundamentais protetor do scrum líder servidor e comunicador e integrador protetor do scrum deve garantir que todos os participantes do processo também conheçam os princípios do scrum atuando muitas vezes como um professor e deve assegurar que todos respeitem e sigam o que é definido na metodologia líder servidor você não é um gerente um chefe deve agir principalmente como um líder servidor um facilitador dentro do time deve estar sempre à disposição tanto do time de desenvolvimento quanto do product owner apoiando o andamento de todo o processo comunicador e integração deve facilitar toda essa comunicação facilitar e moderar as discussões e ajudar para que as tomadas de decisões sejam realizadas em conjunto e de forma assertiva o mais ágil possível atividades responsabilidade ter autoridade e autonomia sobre o processo orientar o time e empresa sobre melhores práticas ágeis e de scrum guiar o time a se auto organizar remover impedimentos que poderiam potencialmente atrasar ou prejudicar a entrega e produtividade do time construir um ambiente seguro e de confiança para que problemas possam ser revelados sem implicações negativas retribuição ou julgamento com ênfase na resolução do problema facilitar o processo com o objetivo de entregar as atividades sem coagir atribuir ou ditar a atividade facilitar discussões tomadas de decisão e resolução de conflitos liderar e trabalhar com múltiplos times ágeis assistir o time com comunicações internas e externas garantindo transparência e acessibilidade a informações garantir que os papéis do scrum estejam claros bem definidos e performando bem organizar a facilitar cerimônias scrum acompanhar e organizar métricas ágeis assistindo a performance e auto organização do time 
local salvador benefícios informações diretamente com o responsável recrutador da vaga requisitos obrigatórios comunicação clara e eficiente experiência como scrum master ou atividades afins domínio sobre princípios fundamentais do kanban conhecimento profundo sobre princípios e processo scrum vivência em desenvolvimento de software experiência com processo e gestão de times diferenciais experiência com times distribuídos certificação de scrum master contratação a combinar nossa empresa o kinvo é uma fintech da área de investimentos criada em com a missão de empoderar o investidor desenvolvemos uma plataforma que permite o investidor cadastrar seus investimentos independente de qual instituição financeira ele está consolidar esses dados como uma carteira de investimentos e a partir daí extrair métricas sobre a saúde dos seus investimentos no kinvo como empresa um dos maiores investimentos são nas pessoas trabalhamos em um ambiente leve que tem como base a confiança somos um time que tem muita sede em atingir os objetivo e que tem em seus sócios o maior exemplo disso nossos maiores aprendizados vem dos desafios venha crescer e evoluir com a gente como se candidatar
binary_label: 1

Unnamed: 0: 219,533
id: 24,501,338,608
type: IssuesEvent
created_at: 2022-10-10 13:02:33
repo: nidhi7598/linux-3.0.35
repo_url: https://api.github.com/repos/nidhi7598/linux-3.0.35
action: opened
title: CVE-2019-15222 (Medium) detected in linuxlinux-3.0.40
labels: security vulnerability
body:
## CVE-2019-15222 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/usb/helper.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/usb/helper.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/usb/helper.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.2.8. There is a NULL pointer dereference caused by a malicious USB device in the sound/usb/helper.c (motu_microbookii) driver. 
<p>Publish Date: 2019-08-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15222>CVE-2019-15222</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15222">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15222</a></p> <p>Release Date: 2019-09-06</p> <p>Fix Resolution: v5.3-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-15222 (Medium) detected in linuxlinux-3.0.40 - ## CVE-2019-15222 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/usb/helper.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/usb/helper.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/usb/helper.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.2.8. There is a NULL pointer dereference caused by a malicious USB device in the sound/usb/helper.c (motu_microbookii) driver. 
<p>Publish Date: 2019-08-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15222>CVE-2019-15222</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15222">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15222</a></p> <p>Release Date: 2019-09-06</p> <p>Fix Resolution: v5.3-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files sound usb helper c sound usb helper c sound usb helper c vulnerability details an issue was discovered in the linux kernel before there is a null pointer dereference caused by a malicious usb device in the sound usb helper c motu microbookii driver publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
171,600
20,979,675,091
IssuesEvent
2022-03-28 18:35:06
opensearch-project/OpenSearch-Dashboards
https://api.github.com/repos/opensearch-project/OpenSearch-Dashboards
closed
CVE-2022-0686 (High) detected in url-parse-1.5.7.tgz
security vulnerability medium severity cve
## CVE-2022-0686 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.7.tgz</b></p></summary> <p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p> <p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz</a></p> <p> Dependency Hierarchy: - eui-29.3.2.tgz (Root Library) - :x: **url-parse-1.5.7.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.8. <p>Publish Date: 2022-02-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0686>CVE-2022-0686</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0686">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0686</a></p> <p>Release Date: 2022-02-20</p> <p>Fix Resolution (url-parse): 1.5.8</p> <p>Direct dependency fix Resolution (@elastic/eui): 29.4.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"@elastic/eui","packageVersion":"29.3.2","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"@elastic/eui:29.3.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"29.4.0","isBinary":true}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0686","vulnerabilityDetails":"Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0686","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2022-0686 (High) detected in url-parse-1.5.7.tgz - ## CVE-2022-0686 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.7.tgz</b></p></summary> <p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p> <p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz</a></p> <p> Dependency Hierarchy: - eui-29.3.2.tgz (Root Library) - :x: **url-parse-1.5.7.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.8. <p>Publish Date: 2022-02-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0686>CVE-2022-0686</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0686">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0686</a></p> <p>Release Date: 2022-02-20</p> <p>Fix Resolution (url-parse): 1.5.8</p> <p>Direct dependency fix Resolution (@elastic/eui): 29.4.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"@elastic/eui","packageVersion":"29.3.2","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"@elastic/eui:29.3.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"29.4.0","isBinary":true}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0686","vulnerabilityDetails":"Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0686","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in url parse tgz cve high severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href dependency hierarchy eui tgz root library x url parse tgz vulnerable library found in base branch main vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution elastic eui isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree elastic eui isminimumfixversionavailable true minimumfixversion isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails authorization bypass through user controlled key in npm url parse prior to vulnerabilityurl
0
99,474
30,469,576,722
IssuesEvent
2023-07-17 12:49:34
wellcomecollection/wellcomecollection.org
https://api.github.com/repos/wellcomecollection/wellcomecollection.org
closed
What's up with the "deploy updown" job in Buildkite?
Builds and CI :recycle:
**What** This sure looks like something's not going right: ``` $ docker-compose -f docker-compose.yml -p buildkitea7d64b9492984244856ac3b53c1ab1d6 run --name buildkitea7d64b9492984244856ac3b53c1ab1d6_updown_build_4207 -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN --rm updown yarn checks --   | yarn run v1.22.5   | $ node checks.js   | (node:29) UnhandledPromiseRejectionWarning: Error: Request failed with status code 400   | at createError (/usr/src/app/webapp/node_modules/axios/lib/core/createError.js:16:15)   | at settle (/usr/src/app/webapp/node_modules/axios/lib/core/settle.js:17:12)   | at IncomingMessage.handleStreamEnd (/usr/src/app/webapp/node_modules/axios/lib/adapters/http.js:244:11)   | at IncomingMessage.emit (events.js:327:22)   | at IncomingMessage.EventEmitter.emit (domain.js:486:12)   | at endReadableNT (_stream_readable.js:1327:12)   | at processTicksAndRejections (internal/process/task_queues.js:80:21)   | (Use `node --trace-warnings ...` to show where the warning was created)   | (node:29) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)   | (node:29) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.   | Done in 0.69s.   |   ``` **Where** e.g. https://buildkite.com/wellcomecollection/experience/builds/4207#4912763d-c77a-496e-8940-f4f29fcf5cc2
1.0
What's up with the "deploy updown" job in Buildkite? - **What** This sure looks like something's not going right: ``` $ docker-compose -f docker-compose.yml -p buildkitea7d64b9492984244856ac3b53c1ab1d6 run --name buildkitea7d64b9492984244856ac3b53c1ab1d6_updown_build_4207 -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN --rm updown yarn checks --   | yarn run v1.22.5   | $ node checks.js   | (node:29) UnhandledPromiseRejectionWarning: Error: Request failed with status code 400   | at createError (/usr/src/app/webapp/node_modules/axios/lib/core/createError.js:16:15)   | at settle (/usr/src/app/webapp/node_modules/axios/lib/core/settle.js:17:12)   | at IncomingMessage.handleStreamEnd (/usr/src/app/webapp/node_modules/axios/lib/adapters/http.js:244:11)   | at IncomingMessage.emit (events.js:327:22)   | at IncomingMessage.EventEmitter.emit (domain.js:486:12)   | at endReadableNT (_stream_readable.js:1327:12)   | at processTicksAndRejections (internal/process/task_queues.js:80:21)   | (Use `node --trace-warnings ...` to show where the warning was created)   | (node:29) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)   | (node:29) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.   | Done in 0.69s.   |   ``` **Where** e.g. https://buildkite.com/wellcomecollection/experience/builds/4207#4912763d-c77a-496e-8940-f4f29fcf5cc2
non_process
what s up with the deploy updown job in buildkite what this sure looks like something s not going right docker compose f docker compose yml p run name updown build e aws access key id e aws secret access key e aws session token rm updown yarn checks   yarn run   node checks js   node unhandledpromiserejectionwarning error request failed with status code   at createerror usr src app webapp node modules axios lib core createerror js   at settle usr src app webapp node modules axios lib core settle js   at incomingmessage handlestreamend usr src app webapp node modules axios lib adapters http js   at incomingmessage emit events js   at incomingmessage eventemitter emit domain js   at endreadablent stream readable js   at processticksandrejections internal process task queues js   use node trace warnings to show where the warning was created   node unhandledpromiserejectionwarning unhandled promise rejection this error originated either by throwing inside of an async function without a catch block or by rejecting a promise which was not handled with catch to terminate the node process on unhandled promise rejection use the cli flag unhandled rejections strict see rejection id   node deprecationwarning unhandled promise rejections are deprecated in the future promise rejections that are not handled will terminate the node js process with a non zero exit code   done in     where e g
0
220
2,649,316,348
IssuesEvent
2015-03-14 20:00:09
eiskaltdcpp/eiskaltdcpp
https://api.github.com/repos/eiskaltdcpp/eiskaltdcpp
opened
Пользовательские скрипты
imported Type-DevProcess
_From [tehnic...@yandex.ru](https://code.google.com/u/102614106087232079264/) on April 04, 2010 17:58:16_ В этот issue предлагаю кидать ссылки на качественные пользовательские скрипты, которые можно использовать командой /sh в нашем клиенте. Требования к скриптам: 1) Сверху шапка с указанием копирайта и лицензии. Предпочтительная лицензия: GPLv3 и выше. 2) Краткое описание и/или пример аутпута. 3) Собственно код. Хорошие скрипты будут добавлены в проект (каталог ./eiskaltdcpp/examples/), а после установки они будут доступны в каталоге программы, например: /usr/share/eiskaltdcpp/examples/ Первым претендентом на добавление, является скрипт от пользователя nS.microsoft: http://pastebin.com/wrP4hZUt Его желательно дополнить: см. пункт 2. Название же тоже полезно давать разумное. В данной случае можно назвать его: xmms2_audacious2.ru_RU.UTF-8.php Жду одобрения от dein.negativ перед добавлением. _Original issue: http://code.google.com/p/eiskaltdc/issues/detail?id=344_
1.0
Пользовательские скрипты - _From [tehnic...@yandex.ru](https://code.google.com/u/102614106087232079264/) on April 04, 2010 17:58:16_ В этот issue предлагаю кидать ссылки на качественные пользовательские скрипты, которые можно использовать командой /sh в нашем клиенте. Требования к скриптам: 1) Сверху шапка с указанием копирайта и лицензии. Предпочтительная лицензия: GPLv3 и выше. 2) Краткое описание и/или пример аутпута. 3) Собственно код. Хорошие скрипты будут добавлены в проект (каталог ./eiskaltdcpp/examples/), а после установки они будут доступны в каталоге программы, например: /usr/share/eiskaltdcpp/examples/ Первым претендентом на добавление, является скрипт от пользователя nS.microsoft: http://pastebin.com/wrP4hZUt Его желательно дополнить: см. пункт 2. Название же тоже полезно давать разумное. В данной случае можно назвать его: xmms2_audacious2.ru_RU.UTF-8.php Жду одобрения от dein.negativ перед добавлением. _Original issue: http://code.google.com/p/eiskaltdc/issues/detail?id=344_
process
пользовательские скрипты from on april в этот issue предлагаю кидать ссылки на качественные пользовательские скрипты которые можно использовать командой sh в нашем клиенте требования к скриптам сверху шапка с указанием копирайта и лицензии предпочтительная лицензия и выше краткое описание и или пример аутпута собственно код хорошие скрипты будут добавлены в проект каталог eiskaltdcpp examples а после установки они будут доступны в каталоге программы например usr share eiskaltdcpp examples первым претендентом на добавление является скрипт от пользователя ns microsoft его желательно дополнить см пункт название же тоже полезно давать разумное в данной случае можно назвать его ru ru utf php жду одобрения от dein negativ перед добавлением original issue
1
7,125
10,271,600,464
IssuesEvent
2019-08-23 14:28:54
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
not applicable to v10
assigned-to-author doc-enhancement machine-learning/svc team-data-science-process/subsvc triaged
This is not the latest command line azcopy. v10 is the latest and these instructions don't apply to v10. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 010df970-ef2f-eeaf-ebd6-eb9f6542519e * Version Independent ID: 0ab93f97-c3b8-dca9-f328-a28066c3122a * Content: [Copy Blob storage data with AzCopy - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy#feedback) * Content Source: [articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy.md) * Service: **machine-learning** * Sub-service: **team-data-science-process** * GitHub Login: @marktab * Microsoft Alias: **tdsp**
1.0
not applicable to v10 - This is not the latest command line azcopy. v10 is the latest and these instructions don't apply to v10. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 010df970-ef2f-eeaf-ebd6-eb9f6542519e * Version Independent ID: 0ab93f97-c3b8-dca9-f328-a28066c3122a * Content: [Copy Blob storage data with AzCopy - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy#feedback) * Content Source: [articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy.md) * Service: **machine-learning** * Sub-service: **team-data-science-process** * GitHub Login: @marktab * Microsoft Alias: **tdsp**
process
not applicable to this is not the latest command line azcopy is the latest and these instructions don t apply to document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id eeaf version independent id content content source service machine learning sub service team data science process github login marktab microsoft alias tdsp
1
3,282
2,537,622,571
IssuesEvent
2015-01-26 21:50:51
inboundnow/leads
https://api.github.com/repos/inboundnow/leads
opened
Email Response UI
Medium Priority
In the Email Response Setup UI it asks the user to enable or disable Email Confirmation. Since our autoresponder doesn't prompt user for any sort of confirmation this language is confusing. It should be changed. When marketing automation is complete we should remove this entire section and instead replace it with a call to action to join as a premium member.
1.0
Email Response UI - In the Email Response Setup UI it asks the user to enable or disable Email Confirmation. Since our autoresponder doesn't prompt user for any sort of confirmation this language is confusing. It should be changed. When marketing automation is complete we should remove this entire section and instead replace it with a call to action to join as a premium member.
non_process
email response ui in the email response setup ui it asks the user to enable or disable email confirmation since our autoresponder doesn t prompt user for any sort of confirmation this language is confusing it should be changed when marketing automation is complete we should remove this entire section and instead replace it with a call to action to join as a premium member
0
271,606
8,485,731,090
IssuesEvent
2018-10-26 08:45:41
robot-lab/judyst-web-crawler
https://api.github.com/repos/robot-lab/judyst-web-crawler
closed
Добавление главного атрибута supertype в заголовки при их сборке в web_crawler
priority/top type/feature type/task
# Task request ## Цель задачи Необходимо будет добавить еще один главный атрибут: надтип (supertype). Примеры значений: КСРФ для решений Конституционного суда, НКРФ для налогового кодекса и т.п. Потребуется для поиска ссылок на определенные надтипы в базе (смотрите задачу 2 в ТЗ, цитата: "указать общее количество совместного применения ГК и НК"), обработки документа в соответствии с заданным надтипом в link_analysis. ## Решение задачи В веб-краулере сейчас реализован только парсинг ksrf, поэтому для решения задачи пока что достаточно записывать префикс ID решения в словарь заголовка с ключом 'supertype'. ## Дополнительный контекст или ссылки на связанные с данной задачей issues https://github.com/robot-lab/judyst-web-crawler/issues/9
1.0
Добавление главного атрибута supertype в заголовки при их сборке в web_crawler - # Task request ## Цель задачи Необходимо будет добавить еще один главный атрибут: надтип (supertype). Примеры значений: КСРФ для решений Конституционного суда, НКРФ для налогового кодекса и т.п. Потребуется для поиска ссылок на определенные надтипы в базе (смотрите задачу 2 в ТЗ, цитата: "указать общее количество совместного применения ГК и НК"), обработки документа в соответствии с заданным надтипом в link_analysis. ## Решение задачи В веб-краулере сейчас реализован только парсинг ksrf, поэтому для решения задачи пока что достаточно записывать префикс ID решения в словарь заголовка с ключом 'supertype'. ## Дополнительный контекст или ссылки на связанные с данной задачей issues https://github.com/robot-lab/judyst-web-crawler/issues/9
non_process
добавление главного атрибута supertype в заголовки при их сборке в web crawler task request цель задачи необходимо будет добавить еще один главный атрибут надтип supertype примеры значений ксрф для решений конституционного суда нкрф для налогового кодекса и т п потребуется для поиска ссылок на определенные надтипы в базе смотрите задачу в тз цитата указать общее количество совместного применения гк и нк обработки документа в соответствии с заданным надтипом в link analysis решение задачи в веб краулере сейчас реализован только парсинг ksrf поэтому для решения задачи пока что достаточно записывать префикс id решения в словарь заголовка с ключом supertype дополнительный контекст или ссылки на связанные с данной задачей issues
0
21,198
28,215,381,246
IssuesEvent
2023-04-05 08:34:33
nodejs/node
https://api.github.com/repos/nodejs/node
closed
stdout/stderr buffering considerations
net process libuv stdio
_I tried to discuss this some time ago at IRC, but postponed it for quite a long time. Also I started the discussion of this in #1741, but I would like to extract the more specific discussion to a separate issue._ I could miss some details, but will try to give a quick overview here. Several issues here: 1. Many calls to `console.log` (e.g. calling it in a loop) could chew up all the memory and die — #1741, #2970, #3171, #18013. 2. `console.log` has different behavior while printing to a terminal and being redirected to a file. — https://github.com/nodejs/node/issues/1741#issuecomment-105333932. 3. Output is sometimes truncated — #6297, there were other ones as far as I remember. 4. The behaviour seems to differ across platforms. As I understand it — the output has an implicit write buffer (as it's non-blocking) of unlimited size. One approach to fixing this would be to: 1. Introduce an explicit cyclic write buffer. 2. Make writes to that cyclic buffer blocking. 3. Make writes from the buffer to the actual output non blocking. 4. When the cyclic buffer reaches it's maximum size (e.g. 10 MiB) — block further writes to the buffer until a corresponding part of it is freed. 5. On (normal) exit, make sure the buffer is flushed. For almost all cases, except for the ones that are currently broken, this would behave as a non-blocking buffer (because writes to the buffer are considerably faster than writes from the buffer to file/terminal). For cases when the data is being piped to the output too quickly and when the output file/terminal does not manage to output it at the same rate — the write would turn into a blocking operation. It would also be blocking at the exit until all the data is written. ## Another approach would be to monitor (and limit) the size of data that is contained in the implicit buffer coming from the async queue, and make the operations block when that limit is reached.
1.0
stdout/stderr buffering considerations - _I tried to discuss this some time ago at IRC, but postponed it for quite a long time. Also I started the discussion of this in #1741, but I would like to extract the more specific discussion to a separate issue._ I could miss some details, but will try to give a quick overview here. Several issues here: 1. Many calls to `console.log` (e.g. calling it in a loop) could chew up all the memory and die — #1741, #2970, #3171, #18013. 2. `console.log` has different behavior while printing to a terminal and being redirected to a file. — https://github.com/nodejs/node/issues/1741#issuecomment-105333932. 3. Output is sometimes truncated — #6297, there were other ones as far as I remember. 4. The behaviour seems to differ across platforms. As I understand it — the output has an implicit write buffer (as it's non-blocking) of unlimited size. One approach to fixing this would be to: 1. Introduce an explicit cyclic write buffer. 2. Make writes to that cyclic buffer blocking. 3. Make writes from the buffer to the actual output non blocking. 4. When the cyclic buffer reaches it's maximum size (e.g. 10 MiB) — block further writes to the buffer until a corresponding part of it is freed. 5. On (normal) exit, make sure the buffer is flushed. For almost all cases, except for the ones that are currently broken, this would behave as a non-blocking buffer (because writes to the buffer are considerably faster than writes from the buffer to file/terminal). For cases when the data is being piped to the output too quickly and when the output file/terminal does not manage to output it at the same rate — the write would turn into a blocking operation. It would also be blocking at the exit until all the data is written. ## Another approach would be to monitor (and limit) the size of data that is contained in the implicit buffer coming from the async queue, and make the operations block when that limit is reached.
process
stdout stderr buffering considerations i tried to discuss this some time ago at irc but postponed it for quite a long time also i started the discussion of this in but i would like to extract the more specific discussion to a separate issue i could miss some details but will try to give a quick overview here several issues here many calls to console log e g calling it in a loop could chew up all the memory and die —   console log has different behavior while printing to a terminal and being redirected to a file — output is sometimes truncated — there were other ones as far as i remember the behaviour seems to differ across platforms as i understand it — the output has an implicit write buffer as it s non blocking of unlimited size one approach to fixing this would be to introduce an explicit cyclic write buffer make writes to that cyclic buffer blocking make writes from the buffer to the actual output non blocking when the cyclic buffer reaches it s maximum size e g mib — block further writes to the buffer until a corresponding part of it is freed on normal exit make sure the buffer is flushed for almost all cases except for the ones that are currently broken this would behave as a non blocking buffer because writes to the buffer are considerably faster than writes from the buffer to file terminal for cases when the data is being piped to the output too quickly and when the output file terminal does not manage to output it at the same rate — the write would turn into a blocking operation it would also be blocking at the exit until all the data is written another approach would be to monitor and limit the size of data that is contained in the implicit buffer coming from the async queue and make the operations block when that limit is reached
1
296,404
22,297,734,670
IssuesEvent
2022-06-13 04:58:55
supabase/supabase
https://api.github.com/repos/supabase/supabase
opened
How to add extra data in google signIn in user_metadata or even If we can add it or not
documentation
# Improve documentation ## Link https://supabase.com/docs/reference/javascript/auth-signin#examples ## Describe the problem Here I can see you have described multiple options and I am using redirectTo and it is working fine but now I want to send extra data just like in email signUp how can we do this there I can see "queryParams" but cannot verify the use of it ## Describe the improvement detailed description of what is and use of each option and how to use it ## Additional context ![image](https://user-images.githubusercontent.com/99470670/173281336-e17bdbb6-3282-4abc-8ace-37157c74ec6a.png)
1.0
How to add extra data in google signIn in user_metadata or even If we can add it or not - # Improve documentation ## Link https://supabase.com/docs/reference/javascript/auth-signin#examples ## Describe the problem Here I can see you have described multiple options and I am using redirectTo and it is working fine but now I want to send extra data just like in email signUp how can we do this there I can see "queryParams" but cannot verify the use of it ## Describe the improvement detailed description of what is and use of each option and how to use it ## Additional context ![image](https://user-images.githubusercontent.com/99470670/173281336-e17bdbb6-3282-4abc-8ace-37157c74ec6a.png)
non_process
how to add extra data in google signin in user metadata or even if we can add it or not improve documentation link describe the problem here i can see you have described multiple options and i am using redirectto and it is working fine but now i want to send extra data just like in email signup how can we do this there i can see queryparams but cannot verify the use of it describe the improvement detailed description of what is and use of each option and how to use it additional context
0
195
2,658,617,965
IssuesEvent
2015-03-18 16:30:06
YePpHa/YouTubeCenter
https://api.github.com/repos/YePpHa/YouTubeCenter
closed
Tabs created by AlienTube aren't dimmed by YTC's "Lights Off" feature
Compatibility Request
Hi: I love YTC's "Light's Off" feature and have it enabled by default. I also like AlienTube as it replaces YouTube's awful comment section with the much-better Reddit comments linked to the same video. http://alientube.co The only problem is, AlienTube's tabs are bright white and stand out amidst the sea of soothing darkness. Can these two add-ons play nicer together?
True
Tabs created by AlienTube aren't dimmed by YTC's "Lights Off" feature - Hi: I love YTC's "Light's Off" feature and have it enabled by default. I also like AlienTube as it replaces YouTube's awful comment section with the much-better Reddit comments linked to the same video. http://alientube.co The only problem is, AlienTube's tabs are bright white and stand out amidst the sea of soothing darkness. Can these two add-ons play nicer together?
non_process
tabs created by alientube aren t dimmed by ytc s lights off feature hi i love ytc s light s off feature and have it enabled by default i also like alientube as it replaces youtube s awful comment section with the much better reddit comments linked to the same video the only problem is alientube s tabs are bright white and stand out amidst the sea of soothing darkness can these two add ons play nicer together
0
9,954
12,978,998,890
IssuesEvent
2020-07-22 00:39:23
knative/serving
https://api.github.com/repos/knative/serving
closed
Couldn't get knative serving installed with kubeadm
area/networking kind/process kind/question lifecycle/stale
<!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. --> <!-- ## In what area(s)? Remove the '> ' to select: > /area API > /area autoscale > /area build > /area monitoring /area networking > /area test-and-release Other classifications: > /kind good-first-issue /kind process > /kind spec --> ## Ask your question here: Hi Knative experts, I am trying to setup a knative cluster using kubeadm, however, i couldn't get this installed. The kubenetes setup is using vagrant with libvirtd with private networking, and kubenetes cluster is applied with flannel network plugin. And I am following the guide in https://knative.dev/docs/install/any-kubernetes-cluster/, and stucked in the step of "Configure DNS" as i don't have a external DNS as well as a static public ip address, all I need is to setup the knative in a virual enviroment so that I can play with it. I have tried with the approach with the cluster-local-gateway as well as manually configure the configmap using xip.io domain according to the document. I always stuck with the following error: "webhook.serving.knative.dev": Post https://webhook.knative-serving.svc:443/defaulting?timeout=30s Following are the steps i have done to try to make this work. 
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-crds.yaml kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-core.yaml curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 chmod 700 get_helm.sh ./get_helm.sh Download and unpack Istio export ISTIO_VERSION=1.3.6 curl -L https://git.io/getLatestIstio | sh - cd istio-${ISTIO_VERSION} for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Namespace metadata: name: istio-system labels: istio-injection: disabled EOF helm template --namespace=istio-system \ --set prometheus.enabled=false \ --set mixer.enabled=false \ --set mixer.policy.enabled=false \ --set mixer.telemetry.enabled=false \ `# Pilot doesn't need a sidecar.` \ --set pilot.sidecar=false \ --set pilot.resources.requests.memory=128Mi \ `# Disable galley (and things requiring galley).` \ --set galley.enabled=false \ --set global.useMCP=false \ `# Disable security / policy.` \ --set security.enabled=false \ --set global.disablePolicyChecks=true \ `# Disable sidecar injection.` \ --set sidecarInjectorWebhook.enabled=false \ --set global.proxy.autoInject=disabled \ --set global.omitSidecarInjectorConfigMap=true \ --set gateways.istio-ingressgateway.autoscaleMin=1 \ --set gateways.istio-ingressgateway.autoscaleMax=2 \ `# Set pilot trace sampling to 100%` \ --set pilot.traceSampling=100 \ --set global.mtls.auto=false \ install/kubernetes/helm/istio \ > ./istio-lean.yaml kubectl apply -f istio-lean.yaml kubectl get pod -n istio-system helm template --namespace=istio-system \ --set gateways.custom-gateway.autoscaleMin=1 \ --set gateways.custom-gateway.autoscaleMax=2 \ --set gateways.custom-gateway.cpu.targetAverageUtilization=60 \ --set gateways.custom-gateway.labels.app='cluster-local-gateway' \ --set 
gateways.custom-gateway.labels.istio='cluster-local-gateway' \ --set gateways.custom-gateway.type='ClusterIP' \ --set gateways.istio-ingressgateway.enabled=false \ --set gateways.istio-egressgateway.enabled=false \ --set gateways.istio-ilbgateway.enabled=false \ --set global.mtls.auto=false \ install/kubernetes/helm/istio \ -f install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \ | sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \ > ./istio-local-gateway.yaml kubectl apply -f istio-local-gateway.yaml kubectl get pods --namespace istio-system kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-istio.yaml <stuck here> Please help me understand how the knative networking works with istios, a good reference to help me to resolve this issue will sufficient. Thanks
1.0
Couldn't get knative serving installed with kubeadm - <!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. --> <!-- ## In what area(s)? Remove the '> ' to select: > /area API > /area autoscale > /area build > /area monitoring /area networking > /area test-and-release Other classifications: > /kind good-first-issue /kind process > /kind spec --> ## Ask your question here: Hi Knative experts, I am trying to setup a knative cluster using kubeadm, however, i couldn't get this installed. The kubenetes setup is using vagrant with libvirtd with private networking, and kubenetes cluster is applied with flannel network plugin. And I am following the guide in https://knative.dev/docs/install/any-kubernetes-cluster/, and stucked in the step of "Configure DNS" as i don't have a external DNS as well as a static public ip address, all I need is to setup the knative in a virual enviroment so that I can play with it. I have tried with the approach with the cluster-local-gateway as well as manually configure the configmap using xip.io domain according to the document. I always stuck with the following error: "webhook.serving.knative.dev": Post https://webhook.knative-serving.svc:443/defaulting?timeout=30s Following are the steps i have done to try to make this work. 
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-crds.yaml kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-core.yaml curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 chmod 700 get_helm.sh ./get_helm.sh Download and unpack Istio export ISTIO_VERSION=1.3.6 curl -L https://git.io/getLatestIstio | sh - cd istio-${ISTIO_VERSION} for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Namespace metadata: name: istio-system labels: istio-injection: disabled EOF helm template --namespace=istio-system \ --set prometheus.enabled=false \ --set mixer.enabled=false \ --set mixer.policy.enabled=false \ --set mixer.telemetry.enabled=false \ `# Pilot doesn't need a sidecar.` \ --set pilot.sidecar=false \ --set pilot.resources.requests.memory=128Mi \ `# Disable galley (and things requiring galley).` \ --set galley.enabled=false \ --set global.useMCP=false \ `# Disable security / policy.` \ --set security.enabled=false \ --set global.disablePolicyChecks=true \ `# Disable sidecar injection.` \ --set sidecarInjectorWebhook.enabled=false \ --set global.proxy.autoInject=disabled \ --set global.omitSidecarInjectorConfigMap=true \ --set gateways.istio-ingressgateway.autoscaleMin=1 \ --set gateways.istio-ingressgateway.autoscaleMax=2 \ `# Set pilot trace sampling to 100%` \ --set pilot.traceSampling=100 \ --set global.mtls.auto=false \ install/kubernetes/helm/istio \ > ./istio-lean.yaml kubectl apply -f istio-lean.yaml kubectl get pod -n istio-system helm template --namespace=istio-system \ --set gateways.custom-gateway.autoscaleMin=1 \ --set gateways.custom-gateway.autoscaleMax=2 \ --set gateways.custom-gateway.cpu.targetAverageUtilization=60 \ --set gateways.custom-gateway.labels.app='cluster-local-gateway' \ --set 
gateways.custom-gateway.labels.istio='cluster-local-gateway' \ --set gateways.custom-gateway.type='ClusterIP' \ --set gateways.istio-ingressgateway.enabled=false \ --set gateways.istio-egressgateway.enabled=false \ --set gateways.istio-ilbgateway.enabled=false \ --set global.mtls.auto=false \ install/kubernetes/helm/istio \ -f install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml \ | sed -e "s/custom-gateway/cluster-local-gateway/g" -e "s/customgateway/clusterlocalgateway/g" \ > ./istio-local-gateway.yaml kubectl apply -f istio-local-gateway.yaml kubectl get pods --namespace istio-system kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/serving-istio.yaml <stuck here> Please help me understand how the knative networking works with istios, a good reference to help me to resolve this issue will sufficient. Thanks
process
couldn t get knative serving installed with kubeadm in what area s remove the to select area api area autoscale area build area monitoring area networking area test and release other classifications kind good first issue kind process kind spec ask your question here hi knative experts i am trying to setup a knative cluster using kubeadm however i couldn t get this installed the kubenetes setup is using vagrant with libvirtd with private networking and kubenetes cluster is applied with flannel network plugin and i am following the guide in and stucked in the step of configure dns as i don t have a external dns as well as a static public ip address all i need is to setup the knative in a virual enviroment so that i can play with it i have tried with the approach with the cluster local gateway as well as manually configure the configmap using xip io domain according to the document i always stuck with the following error webhook serving knative dev post following are the steps i have done to try to make this work kubectl apply filename kubectl apply filename curl fssl o get helm sh chmod get helm sh get helm sh download and unpack istio export istio version curl l sh cd istio istio version for i in install kubernetes helm istio init files crd yaml do kubectl apply f i done cat eof kubectl apply f apiversion kind namespace metadata name istio system labels istio injection disabled eof helm template namespace istio system set prometheus enabled false set mixer enabled false set mixer policy enabled false set mixer telemetry enabled false pilot doesn t need a sidecar set pilot sidecar false set pilot resources requests memory disable galley and things requiring galley set galley enabled false set global usemcp false disable security policy set security enabled false set global disablepolicychecks true disable sidecar injection set sidecarinjectorwebhook enabled false set global proxy autoinject disabled set global omitsidecarinjectorconfigmap true set gateways istio 
ingressgateway autoscalemin set gateways istio ingressgateway autoscalemax set pilot trace sampling to set pilot tracesampling set global mtls auto false install kubernetes helm istio istio lean yaml kubectl apply f istio lean yaml kubectl get pod n istio system helm template namespace istio system set gateways custom gateway autoscalemin set gateways custom gateway autoscalemax set gateways custom gateway cpu targetaverageutilization set gateways custom gateway labels app cluster local gateway set gateways custom gateway labels istio cluster local gateway set gateways custom gateway type clusterip set gateways istio ingressgateway enabled false set gateways istio egressgateway enabled false set gateways istio ilbgateway enabled false set global mtls auto false install kubernetes helm istio f install kubernetes helm istio example values values istio gateways yaml sed e s custom gateway cluster local gateway g e s customgateway clusterlocalgateway g istio local gateway yaml kubectl apply f istio local gateway yaml kubectl get pods namespace istio system kubectl apply filename please help me understand how the knative networking works with istios a good reference to help me to resolve this issue will sufficient thanks
1
21,771
30,287,428,303
IssuesEvent
2023-07-08 21:34:07
winter-telescope/mirar
https://api.github.com/repos/winter-telescope/mirar
closed
[BUG] Separate masks and weights
bug processors
Mask image should be used as weight if no weight is available, afterwards should be independent.
1.0
[BUG] Separate masks and weights - Mask image should be used as weight if no weight is available, afterwards should be independent.
process
separate masks and weights mask image should be used as weight if no weight is available afterwards should be independent
1
16,984
22,347,950,986
IssuesEvent
2022-06-15 09:27:03
UserOfficeProject/user-office-project-issue-tracker
https://api.github.com/repos/UserOfficeProject/user-office-project-issue-tracker
closed
Train the LSF FAP secretary to make template changes
origin: project type: process area: uop/stfc
These changes include: - [ ] #270 - [ ] #315 - [ ] #250
1.0
Train the LSF FAP secretary to make template changes - These changes include: - [ ] #270 - [ ] #315 - [ ] #250
process
train the lsf fap secretary to make template changes these changes include
1
580
3,060,127,956
IssuesEvent
2015-08-14 18:50:41
Microsoft/poshtools
https://api.github.com/repos/Microsoft/poshtools
closed
Disable Attaching to the PowerShell Tools Host
Process Attaching task
We should not allow users to attach to our own host service. PowerShell does let users attach to the original host process, and doing so produces negative/unexpected/bad behavior.
1.0
Disable Attaching to the PowerShell Tools Host - We should not allow users to attach to our own host service. PowerShell does let users attach to the original host process, and doing so produces negative/unexpected/bad behavior.
process
disable attaching to the powershell tools host we should not allow users to attach to our own host service powershell does let users attach to the original host process and doing so produces negative unexpected bad behavior
1
222,743
24,711,286,449
IssuesEvent
2022-10-20 01:10:18
hiucimon/react-hooks-redux-template
https://api.github.com/repos/hiucimon/react-hooks-redux-template
closed
CVE-2022-37599 (Medium) detected in loader-utils-1.2.3.tgz - autoclosed
security vulnerability
## CVE-2022-37599 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-1.2.3.tgz</b></p></summary> <p>utils for webpack loaders</p> <p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p> <p>Path to dependency file: /react-hooks-redux-template/package.json</p> <p>Path to vulnerable library: /node_modules/loader-utils/package.json</p> <p> Dependency Hierarchy: - react-scripts-2.1.3.tgz (Root Library) - webpack-2.4.1.tgz - :x: **loader-utils-1.2.3.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js. <p>Publish Date: 2022-10-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37599>CVE-2022-37599</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-37599 (Medium) detected in loader-utils-1.2.3.tgz - autoclosed - ## CVE-2022-37599 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-1.2.3.tgz</b></p></summary> <p>utils for webpack loaders</p> <p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz</a></p> <p>Path to dependency file: /react-hooks-redux-template/package.json</p> <p>Path to vulnerable library: /node_modules/loader-utils/package.json</p> <p> Dependency Hierarchy: - react-scripts-2.1.3.tgz (Root Library) - webpack-2.4.1.tgz - :x: **loader-utils-1.2.3.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the resourcePath variable in interpolateName.js. <p>Publish Date: 2022-10-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37599>CVE-2022-37599</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in loader utils tgz autoclosed cve medium severity vulnerability vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file react hooks redux template package json path to vulnerable library node modules loader utils package json dependency hierarchy react scripts tgz root library webpack tgz x loader utils tgz vulnerable library vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the resourcepath variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
0
25,523
25,339,843,041
IssuesEvent
2022-11-18 20:23:31
HydrologicEngineeringCenter/DSSExcelPlugin
https://api.github.com/repos/HydrologicEngineeringCenter/DSSExcelPlugin
closed
Some reasonable defaults should be created for DSS Parts (B and C)
usability
B part will be name of excel file C Part will be the column name if that exists (or 'value 1', 'value 2', ... )
True
Some reasonable defaults should be created for DSS Parts (B and C) - B part will be name of excel file C Part will be the column name if that exists (or 'value 1', 'value 2', ... )
non_process
some reasonable defaults should be created for dss parts b and c b part will be name of excel file c part will be the column name if that exists or value value
0
2,803
5,735,580,951
IssuesEvent
2017-04-22 00:04:48
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
TestCommonPriorityAndTimeProperties failed in CI on macOS
area-System.Diagnostics.Process blocking-clean-ci os-mac-os-x test-run-core
https://ci.dot.net/job/dotnet_corefx/job/master/job/osx10.12_debug_prtest/123/consoleText ``` System.Diagnostics.Tests.ProcessThreadTests.TestCommonPriorityAndTimeProperties [FAIL] Assert.True() Failure Expected: True Actual: False Stack Trace: /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx10.12_debug_prtest/src/System.Diagnostics.Process/tests/ProcessThreadTests.cs(26,0): at System.Diagnostics.Tests.ProcessThreadTests.TestCommonPriorityAndTimeProperties() ```
1.0
TestCommonPriorityAndTimeProperties failed in CI on macOS - https://ci.dot.net/job/dotnet_corefx/job/master/job/osx10.12_debug_prtest/123/consoleText ``` System.Diagnostics.Tests.ProcessThreadTests.TestCommonPriorityAndTimeProperties [FAIL] Assert.True() Failure Expected: True Actual: False Stack Trace: /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx10.12_debug_prtest/src/System.Diagnostics.Process/tests/ProcessThreadTests.cs(26,0): at System.Diagnostics.Tests.ProcessThreadTests.TestCommonPriorityAndTimeProperties() ```
process
testcommonpriorityandtimeproperties failed in ci on macos system diagnostics tests processthreadtests testcommonpriorityandtimeproperties assert true failure expected true actual false stack trace users dotnet bot j workspace dotnet corefx master debug prtest src system diagnostics process tests processthreadtests cs at system diagnostics tests processthreadtests testcommonpriorityandtimeproperties
1
1,356
3,910,711,845
IssuesEvent
2016-04-20 00:24:33
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
opened
Support LOCK TABLE
CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
When disabling multiplexing, ProxySQL doesn't support `LOCK TABLE` but only `LOCK TABLES`
1.0
Support LOCK TABLE - When disabling multiplexing, ProxySQL doesn't support `LOCK TABLE` but only `LOCK TABLES`
process
support lock table when disabling multiplexing proxysql doesn t support lock table but only lock tables
1
37,429
15,294,622,884
IssuesEvent
2021-02-24 02:54:29
Azure/azure-sdk-for-js
https://api.github.com/repos/Azure/azure-sdk-for-js
closed
Breaking OTEL API change: getCurrentSpan() -> api.getSpan(api.context.active())
Client Event Hubs Service Bus customer-reported needs-team-attention question
- **Package Name**: @azure/core-tracing - [x] **nodejs** - **version**: 14 **Describe the bug** A breaking API change in Open Telemetry 0.15.0x (previous version, current is 0.16.0x) seems to have caused Service Bus and Event Hub to stop sending traces. **The breaking API change is described here:** https://github.com/open-telemetry/opentelemetry-js#0140-to-0150 **Summary:** The Open Telemetry API has changed and there is no longer a `getCurrentSpan()` the (see notice above). The usage has changed to: `api.getSpan(api.context.active())`. **To Reproduce** Steps to reproduce the behavior: 1. Use `setTracer()` as documented **Expected behavior** Service Bus and Event Hub to be instrumented
1.0
Breaking OTEL API change: getCurrentSpan() -> api.getSpan(api.context.active()) - - **Package Name**: @azure/core-tracing - [x] **nodejs** - **version**: 14 **Describe the bug** A breaking API change in Open Telemetry 0.15.0x (previous version, current is 0.16.0x) seems to have caused Service Bus and Event Hub to stop sending traces. **The breaking API change is described here:** https://github.com/open-telemetry/opentelemetry-js#0140-to-0150 **Summary:** The Open Telemetry API has changed and there is no longer a `getCurrentSpan()` the (see notice above). The usage has changed to: `api.getSpan(api.context.active())`. **To Reproduce** Steps to reproduce the behavior: 1. Use `setTracer()` as documented **Expected behavior** Service Bus and Event Hub to be instrumented
non_process
breaking otel api change getcurrentspan api getspan api context active package name azure core tracing nodejs version describe the bug a breaking api change in open telemetry previous version current is seems to have caused service bus and event hub to stop sending traces the breaking api change is described here summary the open telemetry api has changed and there is no longer a getcurrentspan the see notice above the usage has changed to api getspan api context active to reproduce steps to reproduce the behavior use settracer as documented expected behavior service bus and event hub to be instrumented
0
28,683
13,782,973,429
IssuesEvent
2020-10-08 18:28:20
InfectedLibraries/Biohazrd
https://api.github.com/repos/InfectedLibraries/Biohazrd
opened
Try to translate simple inline methods literally
Area-OutputGeneration Area-Translation Concept-OutputPerformance
Inline methods in C++ are frequently used for totally trivial methods that could be translated to C# mechanically. For example, `ImColor` in ImGui: https://github.com/ocornut/imgui/blob/e5cb04b132cba94f902beb6186cb58b864777012/imgui.h#L1953-L1968 This should be implemented to cover simple cases, and if the method is too complex it just falls back to invoking the method via P/Invoke.
True
Try to translate simple inline methods literally - Inline methods in C++ are frequently used for totally trivial methods that could be translated to C# mechanically. For example, `ImColor` in ImGui: https://github.com/ocornut/imgui/blob/e5cb04b132cba94f902beb6186cb58b864777012/imgui.h#L1953-L1968 This should be implemented to cover simple cases, and if the method is too complex it just falls back to invoking the method via P/Invoke.
non_process
try to translate simple inline methods literally inline methods in c are frequently used for totally trivial methods that could be translated to c mechanically for example imcolor in imgui this should be implemented to cover simple cases and if the method is too complex it just falls back to invoking the method via p invoke
0
5,674
29,507,751,979
IssuesEvent
2023-06-03 14:24:45
jupyter-naas/awesome-notebooks
https://api.github.com/repos/jupyter-naas/awesome-notebooks
closed
Naas - Send image asset to Notion page
templates maintainer
This notebook sends an naas image asset to a Notion page. If your page is in a notion database, you will be able to vizualise the chart in Gallery (display page content). The image asset will be updated (deleted and added) to make sure the graph display is always up to date in Notion.
True
Naas - Send image asset to Notion page - This notebook sends an naas image asset to a Notion page. If your page is in a notion database, you will be able to vizualise the chart in Gallery (display page content). The image asset will be updated (deleted and added) to make sure the graph display is always up to date in Notion.
non_process
naas send image asset to notion page this notebook sends an naas image asset to a notion page if your page is in a notion database you will be able to vizualise the chart in gallery display page content the image asset will be updated deleted and added to make sure the graph display is always up to date in notion
0
9,479
12,477,733,822
IssuesEvent
2020-05-29 15:26:33
MHRA/products
https://api.github.com/repos/MHRA/products
closed
PARs - Authentication
EPIC - PARs process HIGH PRIORITY :arrow_double_up:
**User want** As a developer I would like to know what technical solution will be needed to authenticate the PARs process so that I can plan the work required Technical acceptance criteria The best method is identified that will provide authenticated, logged, write-only access to submit PAR metadata and files.
1.0
PARs - Authentication - **User want** As a developer I would like to know what technical solution will be needed to authenticate the PARs process so that I can plan the work required Technical acceptance criteria The best method is identified that will provide authenticated, logged, write-only access to submit PAR metadata and files.
process
pars authentication user want as a developer i would like to know what technical solution will be needed to authenticate the pars process so that i can plan the work required technical acceptance criteria the best method is identified that will provide authenticated logged write only access to submit par metadata and files
1
665,649
22,324,598,286
IssuesEvent
2022-06-14 09:30:30
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.apkmirror.com - see bug description
browser-firefox priority-normal engine-gecko
<!-- @browser: Firefox 102.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101 Firefox/102.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/105849 --> **URL**: https://www.apkmirror.com/ **Browser / Version**: Firefox 102.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: File Extension for apk bundles is not being added when you go to download them **Steps to Reproduce**: I had noticed that when you go to download the bundled app from the site which are apps with the files for multiple dpis and other changes common among android that it was not adding the file extension apkm which would allow it to be opened by the apmirror app on the phone but would instead it would have the extension com because of how the filenames for files on the site are. I tried downloading the same file using Firefox and Chrome Dev on my computer and was able to confirm this was the issue. When downloaded with Firefox, the filename for the file I tested this with was saved as 1066546622_4arch_3dpi_14lang_7f91929d98363ea23ad89f996a731b41_apkmirror.com while on Chrome its saved as 1066546622_4arch_3dpi_14lang_7f91929d98363ea23ad89f996a731b41_apkmirror.com.apkm . The apk files themselves download just fine, it just seems to be the bundle files that are having the issue, and I'm not sure why that is. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/6/cbb46fb3-8b81-4eaf-b8b3-66dea8854e78.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.apkmirror.com - see bug description - <!-- @browser: Firefox 102.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101 Firefox/102.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/105849 --> **URL**: https://www.apkmirror.com/ **Browser / Version**: Firefox 102.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: File Extension for apk bundles is not being added when you go to download them **Steps to Reproduce**: I had noticed that when you go to download the bundled app from the site which are apps with the files for multiple dpis and other changes common among android that it was not adding the file extension apkm which would allow it to be opened by the apmirror app on the phone but would instead it would have the extension com because of how the filenames for files on the site are. I tried downloading the same file using Firefox and Chrome Dev on my computer and was able to confirm this was the issue. When downloaded with Firefox, the filename for the file I tested this with was saved as 1066546622_4arch_3dpi_14lang_7f91929d98363ea23ad89f996a731b41_apkmirror.com while on Chrome its saved as 1066546622_4arch_3dpi_14lang_7f91929d98363ea23ad89f996a731b41_apkmirror.com.apkm . The apk files themselves download just fine, it just seems to be the bundle files that are having the issue, and I'm not sure why that is. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/6/cbb46fb3-8b81-4eaf-b8b3-66dea8854e78.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description file extension for apk bundles is not being added when you go to download them steps to reproduce i had noticed that when you go to download the bundled app from the site which are apps with the files for multiple dpis and other changes common among android that it was not adding the file extension apkm which would allow it to be opened by the apmirror app on the phone but would instead it would have the extension com because of how the filenames for files on the site are i tried downloading the same file using firefox and chrome dev on my computer and was able to confirm this was the issue when downloaded with firefox the filename for the file i tested this with was saved as apkmirror com while on chrome its saved as apkmirror com apkm the apk files themselves download just fine it just seems to be the bundle files that are having the issue and i m not sure why that is view the screenshot img alt screenshot src browser configuration none from with ❤️
0
130,438
10,615,979,456
IssuesEvent
2019-10-12 08:23:32
hwmrocker/smtplibaio
https://api.github.com/repos/hwmrocker/smtplibaio
closed
add an integration test
hacktoberfest tests
Plan is it to have a possibility to run automated tests. You should start an smtp server (could be with docker, or another python thread) Craft an email and send it. Verify that it was received.
1.0
add an integration test - Plan is it to have a possibility to run automated tests. You should start an smtp server (could be with docker, or another python thread) Craft an email and send it. Verify that it was received.
non_process
add an integration test plan is it to have a possibility to run automated tests you should start an smtp server could be with docker or another python thread craft an email and send it verify that it was received
0
2,345
5,148,701,726
IssuesEvent
2017-01-13 12:16:50
Alfresco/alfresco-ng2-components
https://api.github.com/repos/Alfresco/alfresco-ng2-components
opened
Inconsistent handling of process with `null` name
bug comp: activiti-processList
<!-- PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER. REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT: Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components --> **Type of issue:** (check with "[x]") ``` - [ ] New feature request - [x] Bug - [ ] Support request ``` **Current behavior:** A process instance which completes straight away never gets its supplied name set (see Alfresco/activiti-bpm-suite#3362. The processlist components handle this `null` value in different ways, but we should decide on the correct way and apply this consistently. <img width="1377" alt="screen shot 2017-01-13 at 12 05 33" src="https://cloud.githubusercontent.com/assets/1727487/21929672/eba68312-d989-11e6-8c13-38f2053ce3a7.png"> **Expected behavior:** <!-- Describe the expected behavior. --> A default name should be generated from the process definition name and the creation time, in the event of the `name` property being `null`, similar to the Angular1 app. This should be displayed in the process instance list and also as the content of the heading inside the process instance detail component. **Steps to reproduce the issue:** <!-- Describe the steps to reproduce the issue. --> 1. Create a new process instance that has no user tasks in it and which therefore will complete straight away and give it a name 2. Click the 'Completed' process filter and observe how the name is shown in the process instance list (*No name*) vs in the title of the process instance detail component (text *null* is displayed) **Component name and version:** <!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version --> **Browser and version:** <!-- [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] --> **Node version (for build issues):** <!-- To check the version: node --version --> **New feature request:** <!-- Describe the feature, motivation and the concrete use case (only in case of new feature request) -->
1.0
Inconsistent handling of process with `null` name - <!-- PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER. REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT: Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components --> **Type of issue:** (check with "[x]") ``` - [ ] New feature request - [x] Bug - [ ] Support request ``` **Current behavior:** A process instance which completes straight away never gets its supplied name set (see Alfresco/activiti-bpm-suite#3362. The processlist components handle this `null` value in different ways, but we should decide on the correct way and apply this consistently. <img width="1377" alt="screen shot 2017-01-13 at 12 05 33" src="https://cloud.githubusercontent.com/assets/1727487/21929672/eba68312-d989-11e6-8c13-38f2053ce3a7.png"> **Expected behavior:** <!-- Describe the expected behavior. --> A default name should be generated from the process definition name and the creation time, in the event of the `name` property being `null`, similar to the Angular1 app. This should be displayed in the process instance list and also as the content of the heading inside the process instance detail component. **Steps to reproduce the issue:** <!-- Describe the steps to reproduce the issue. --> 1. Create a new process instance that has no user tasks in it and which therefore will complete straight away and give it a name 2. Click the 'Completed' process filter and observe how the name is shown in the process instance list (*No name*) vs in the title of the process instance detail component (text *null* is displayed) **Component name and version:** <!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version --> **Browser and version:** <!-- [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] --> **Node version (for build issues):** <!-- To check the version: node --version --> **New feature request:** <!-- Describe the feature, motivation and the concrete use case (only in case of new feature request) -->
process
inconsistent handling of process with null name please fill out the following information this will help us to resolve your problem faster remember for support requests you can also ask on our gitter chat please ask before on our gitter channel type of issue check with new feature request bug support request current behavior a process instance which completes straight away never gets its supplied name set see alfresco activiti bpm suite the processlist components handle this null value in different ways but we should decide on the correct way and apply this consistently img width alt screen shot at src expected behavior a default name should be generated from the process definition name and the creation time in the event of the name property being null similar to the app this should be displayed in the process instance list and also as the content of the heading inside the process instance detail component steps to reproduce the issue create a new process instance that has no user tasks in it and which therefore will complete straight away and give it a name click the completed process filter and observe how the name is shown in the process instance list no name vs in the title of the process instance detail component text null is displayed component name and version browser and version node version for build issues new feature request
1
13,178
15,605,790,293
IssuesEvent
2021-03-19 06:55:02
bitpal/bitpal_umbrella
https://api.github.com/repos/bitpal/bitpal_umbrella
opened
Examine volatility control
Payment processor enhancement
Would be nice to have some way of minimizing the volatility. Some ideas: * Accept stablecoins (not a big fan of them, but it's an option) * Have a service that immediately sends the coins to an exchange and sells them for fiat (should run your own server for this to make sense I guess) * Use something like detoken/anyhedge
1.0
Examine volatility control - Would be nice to have some way of minimizing the volatility. Some ideas: * Accept stablecoins (not a big fan of them, but it's an option) * Have a service that immediately sends the coins to an exchange and sells them for fiat (should run your own server for this to make sense I guess) * Use something like detoken/anyhedge
process
examine volatility control would be nice to have some way of minimizing the volatility some ideas accept stablecoins not a big fan of them but it s an option have a service that immediately sends the coins to an exchange and sells them for fiat should run your own server for this to make sense i guess use something like detoken anyhedge
1
248,662
18,858,103,565
IssuesEvent
2021-11-12 09:23:20
pragyan01/pe
https://api.github.com/repos/pragyan01/pe
opened
typo in UG
type.DocumentationBug severity.High
![10.JPG](https://raw.githubusercontent.com/pragyan01/pe/main/files/ddd1eec1-7cbb-42cd-8d80-8e94b75e2ad7.JPG) Point 2 says "...using the command java -jar tp.jar". However, filename of jar file was set to "[CS2113T-W11-2][Mint]". New CLI users wont know and this instruction will not let them even run the program. <!--session: 1636701567948-9bec3971-ad00-438d-8b93-15f11e19afff--> <!--Version: Web v3.4.1-->
1.0
typo in UG - ![10.JPG](https://raw.githubusercontent.com/pragyan01/pe/main/files/ddd1eec1-7cbb-42cd-8d80-8e94b75e2ad7.JPG) Point 2 says "...using the command java -jar tp.jar". However, filename of jar file was set to "[CS2113T-W11-2][Mint]". New CLI users wont know and this instruction will not let them even run the program. <!--session: 1636701567948-9bec3971-ad00-438d-8b93-15f11e19afff--> <!--Version: Web v3.4.1-->
non_process
typo in ug point says using the command java jar tp jar however filename of jar file was set to new cli users wont know and this instruction will not let them even run the program
0
403,344
11,839,453,808
IssuesEvent
2020-03-23 17:10:56
dmwm/CRABServer
https://api.github.com/repos/dmwm/CRABServer
opened
reorganize TW's recurring.log
Priority: Medium Status: Available Type: Enhancement
- [ ] different logs for different actions, at least tape recall and renewproxy should be separated - [ ] some kind of rotation, since now the file just grows forever
1.0
reorganize TW's recurring.log - - [ ] different logs for different actions, at least tape recall and renewproxy should be separated - [ ] some kind of rotation, since now the file just grows forever
non_process
reorganize tw s recurring log different logs for different actions at least tape recall and renewproxy should be separated some kind of rotation since now the file just grows forever
0
414
2,852,274,039
IssuesEvent
2015-06-01 12:46:13
genomizer/genomizer-server
https://api.github.com/repos/genomizer/genomizer-server
closed
Test suite broken by recent hotfix merge
bug High priority Processing
I've temporarily disabled a bunch of tests in c1b0418bd01fb8e403808cd3564c7f118e5f1fa9 broken by the recent rushed merge. We should fix and reenable them.
1.0
Test suite broken by recent hotfix merge - I've temporarily disabled a bunch of tests in c1b0418bd01fb8e403808cd3564c7f118e5f1fa9 broken by the recent rushed merge. We should fix and reenable them.
process
test suite broken by recent hotfix merge i ve temporarily disabled a bunch of tests in broken by the recent rushed merge we should fix and reenable them
1
676,593
23,130,726,192
IssuesEvent
2022-07-28 10:06:51
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
Invalid `map<anydata>` to `json` cast doesn't panic
Type/Bug Priority/Blocker Team/jBallerina Points/2
**Description:** $title. **Steps to reproduce:** ```ballerina import ballerina/io; public function main() { record {| anydata body; |} badRequest = { body: { code: "ERROR_CODE", details: "ERROR_DETAILS" } }; io:println(badRequest?.body is json); // false // So the following cast should panic, but doesn't atm. json j = <json> badRequest?.body; } ``` **Affected Versions:** 2201.2.0-SNAPSHOT
1.0
Invalid `map<anydata>` to `json` cast doesn't panic - **Description:** $title. **Steps to reproduce:** ```ballerina import ballerina/io; public function main() { record {| anydata body; |} badRequest = { body: { code: "ERROR_CODE", details: "ERROR_DETAILS" } }; io:println(badRequest?.body is json); // false // So the following cast should panic, but doesn't atm. json j = <json> badRequest?.body; } ``` **Affected Versions:** 2201.2.0-SNAPSHOT
non_process
invalid map to json cast doesn t panic description title steps to reproduce ballerina import ballerina io public function main record anydata body badrequest body code error code details error details io println badrequest body is json false so the following cast should panic but doesn t atm json j badrequest body affected versions snapshot
0
20,311
26,952,782,675
IssuesEvent
2023-02-08 12:57:36
helmholtz-analytics/heat
https://api.github.com/repos/helmholtz-analytics/heat
opened
Provide Fast Fourier Transform (FFT)
enhancement signal processing linalg GSoC2023
**Feature functionality** Provide routines for Fast Fourier Transform (FFT) and its inverse for $n$-dimensional `DNDarray`'s. Hereby, Fourier transform along a single axis or along all axes should be possible. **Additional context** Add any other context or screenshots about the feature request here.
1.0
Provide Fast Fourier Transform (FFT) - **Feature functionality** Provide routines for Fast Fourier Transform (FFT) and its inverse for $n$-dimensional `DNDarray`'s. Hereby, Fourier transform along a single axis or along all axes should be possible. **Additional context** Add any other context or screenshots about the feature request here.
process
provide fast fourier transform fft feature functionality provide routines for fast fourier transform fft and its inverse for n dimensional dndarray s hereby fourier transform along a single axis or along all axes should be possible additional context add any other context or screenshots about the feature request here
1
21,378
29,202,228,594
IssuesEvent
2023-05-21 00:36:56
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Remoto] Product Manager na Coodesh
SALVADOR GESTÃO DE PROJETOS JIRA REQUISITOS REMOTO PROCESSOS GITHUB UMA POWER BI APIs NEGÓCIOS PRODUCT MANAGER Stale
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/product-manager-152815535?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Techsocial</strong> está em busca de <strong><ins>Product Manager</ins></strong> para compor seu time!</p> <p></p> <p>Somos uma empresa de Soluções Tecnológicas, que busca transformar os dados e informações de nossos clientes em resultados. Evoluímos a partir de consultoria em Gestão Empresarial, somando as múltiplas competências e experiência de nossos profissionais às inovações tecnológicas. A Tech é uma empresa inovadora! Desenvolvemos e aportamos inteligência em softwares, aplicativos, RPAs, APIs entre outras soluções digitais.&nbsp;</p> <p><strong>Responsabilidades:</strong></p> <ul> <li>Entendimento dos módulos do nossa Plataforma;&nbsp;</li> <li>Conhecimento dos nossos negócios e suas peculiaridades;</li> <li>Identificação de gaps e oportunidades de melhoria;&nbsp;</li> <li>Auxílio no mapeamento do processo e levantamento de requisitos de mudanças;</li> <li>Confecção de tickets de melhoria alinhado ao time de processos;&nbsp;</li> <li>Apresentação e revisão de tickets junto ao time de desenvolvimento;&nbsp;</li> <li>Definição de prioridades de implementação;&nbsp;</li> <li>Identificação de interfaces do(s) módulo(s) que representa para que requisitos e regras não sejam alterados sem alinhamento e definição com outros módulos, etc.</li> </ul> ## Techsocial: <p>Somos uma empresa de Soluções Tecnológicas, que busca transformar os dados e informações de nossos clientes em resultados. Evoluímos a partir de consultoria em Gestão Empresarial, somando as múltiplas competências e experiência de nossos profissionais às inovações tecnológicas.</p> <p>A Tech é uma empresa inovadora! Desenvolvemos e aportamos inteligência em softwares, aplicativos, RPAs, APIs entre outras soluções digitais. Nossa missão é simplificar os processos de nossos clientes por meio da tecnologia e estruturar grandes bancos de dados para garimparmos e lapidarmos as melhores informações para as empresas.</p><a href='https://coodesh.com/empresas/techsocial-tecnologia-e-consultoria-ltda'>Veja mais no site</a> ## Habilidades: - JIRA - Análise de requisitos - Gestão e Negociação com Cliente ## Local: 100% Remoto ## Requisitos: - Conhecimento em Levantamento de requisitos; - Mapeamento de Processos; - Experiência em Gestão de Projetos; - Experiência em Gestão de Produtos. ## Diferenciais: - Power BI; - Experíência na Plataforma Jira. ## Benefícios: - Convênio Médico; - Trabalho Remoto. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Product Manager na Techsocial](https://coodesh.com/vagas/product-manager-152815535?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Categoria Gestão em TI
1.0
[Remoto] Product Manager na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/product-manager-152815535?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Techsocial</strong> está em busca de <strong><ins>Product Manager</ins></strong> para compor seu time!</p> <p></p> <p>Somos uma empresa de Soluções Tecnológicas, que busca transformar os dados e informações de nossos clientes em resultados. Evoluímos a partir de consultoria em Gestão Empresarial, somando as múltiplas competências e experiência de nossos profissionais às inovações tecnológicas. A Tech é uma empresa inovadora! Desenvolvemos e aportamos inteligência em softwares, aplicativos, RPAs, APIs entre outras soluções digitais.&nbsp;</p> <p><strong>Responsabilidades:</strong></p> <ul> <li>Entendimento dos módulos do nossa Plataforma;&nbsp;</li> <li>Conhecimento dos nossos negócios e suas peculiaridades;</li> <li>Identificação de gaps e oportunidades de melhoria;&nbsp;</li> <li>Auxílio no mapeamento do processo e levantamento de requisitos de mudanças;</li> <li>Confecção de tickets de melhoria alinhado ao time de processos;&nbsp;</li> <li>Apresentação e revisão de tickets junto ao time de desenvolvimento;&nbsp;</li> <li>Definição de prioridades de implementação;&nbsp;</li> <li>Identificação de interfaces do(s) módulo(s) que representa para que requisitos e regras não sejam alterados sem alinhamento e definição com outros módulos, etc.</li> </ul> ## Techsocial: <p>Somos uma empresa de Soluções Tecnológicas, que busca transformar os dados e informações de nossos clientes em resultados. Evoluímos a partir de consultoria em Gestão Empresarial, somando as múltiplas competências e experiência de nossos profissionais às inovações tecnológicas.</p> <p>A Tech é uma empresa inovadora! Desenvolvemos e aportamos inteligência em softwares, aplicativos, RPAs, APIs entre outras soluções digitais. Nossa missão é simplificar os processos de nossos clientes por meio da tecnologia e estruturar grandes bancos de dados para garimparmos e lapidarmos as melhores informações para as empresas.</p><a href='https://coodesh.com/empresas/techsocial-tecnologia-e-consultoria-ltda'>Veja mais no site</a> ## Habilidades: - JIRA - Análise de requisitos - Gestão e Negociação com Cliente ## Local: 100% Remoto ## Requisitos: - Conhecimento em Levantamento de requisitos; - Mapeamento de Processos; - Experiência em Gestão de Projetos; - Experiência em Gestão de Produtos. ## Diferenciais: - Power BI; - Experíência na Plataforma Jira. ## Benefícios: - Convênio Médico; - Trabalho Remoto. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Product Manager na Techsocial](https://coodesh.com/vagas/product-manager-152815535?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Categoria Gestão em TI
process
product manager na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a techsocial está em busca de product manager para compor seu time somos uma empresa de soluções tecnológicas que busca transformar os dados e informações de nossos clientes em resultados evoluímos a partir de consultoria em gestão empresarial somando as múltiplas competências e experiência de nossos profissionais às inovações tecnológicas a tech é uma empresa inovadora desenvolvemos e aportamos inteligência em softwares aplicativos rpas apis entre outras soluções digitais nbsp responsabilidades entendimento dos módulos do nossa plataforma nbsp conhecimento dos nossos negócios e suas peculiaridades identificação de gaps e oportunidades de melhoria nbsp auxílio no mapeamento do processo e levantamento de requisitos de mudanças confecção de tickets de melhoria alinhado ao time de processos nbsp apresentação e revisão de tickets junto ao time de desenvolvimento nbsp definição de prioridades de implementação nbsp identificação de interfaces do s módulo s que representa para que requisitos e regras não sejam alterados sem alinhamento e definição com outros módulos etc techsocial somos uma empresa de soluções tecnológicas que busca transformar os dados e informações de nossos clientes em resultados evoluímos a partir de consultoria em gestão empresarial somando as múltiplas competências e experiência de nossos profissionais às inovações tecnológicas a tech é uma empresa inovadora desenvolvemos e aportamos inteligência em softwares aplicativos rpas apis entre outras soluções digitais nossa missão é simplificar os processos de nossos clientes por meio da tecnologia e estruturar grandes bancos de dados para garimparmos e lapidarmos as melhores informações para as empresas habilidades jira análise de requisitos gestão e negociação com cliente local remoto requisitos conhecimento em levantamento de requisitos mapeamento de processos experiência em gestão de projetos experiência em gestão de produtos diferenciais power bi experíência na plataforma jira benefícios convênio médico trabalho remoto como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto categoria gestão em ti
1
1,567
4,165,248,802
IssuesEvent
2016-06-19 11:03:11
ryankeefe92/Episodes
https://api.github.com/repos/ryankeefe92/Episodes
opened
When episodes are in a queue (for downloading with aria or to be processed), they should be prioritized chronologically (does this already happen because everything goes alphabetically?) (see comments)
download: feature process:
* If S02E01 and S02E03 are added at the same time, S02E01 would go first, and then if S02E02 is added while S02E01 is still downloading/processing, it should go next even though S02E03 has been there for longer
1.0
When episodes are in a queue (for downloading with aria or to be processed), they should be prioritized chronologically (does this already happen because everything goes alphabetically?) (see comments) - * If S02E01 and S02E03 are added at the same time, S02E01 would go first, and then if S02E02 is added while S02E01 is still downloading/processing, it should go next even though S02E03 has been there for longer
process
when episodes are in a queue for downloading with aria or to be processed they should be prioritized chronologically does this already happen because everything goes alphabetically see comments if and are added at the same time would go first and then if is added while is still downloading processing it should go next even though has been there for longer
1
209,186
7,166,117,066
IssuesEvent
2018-01-29 16:18:50
dreamfly-io/foundation-grpc
https://api.github.com/repos/dreamfly-io/foundation-grpc
closed
implement common name resolver for grpc
component/core kind/feature priority/P1 status/done
Refactory name resolver from foundation-etcd project.
1.0
implement common name resolver for grpc - Refactory name resolver from foundation-etcd project.
non_process
implement common name resolver for grpc refactory name resolver from foundation etcd project
0
9,017
12,125,199,824
IssuesEvent
2020-04-22 15:12:32
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
[Processing]Last used format is not proposed by default in file selector dialog when running in batch
Feature Request Processing
Author Name: **Harrissou Santanna** (@DelazJ) Original Redmine Issue: [20128](https://issues.qgis.org/issues/20128) Redmine category:processing/gui --- Open an algorithm dialog switch to batch mode click the browser icon for input layer in the select file dialog, browse to a folder and use the drop-down menu of the formats (let's say esri shapefile) to filter layer. Select any and add. (You may also run the alg at this step) Click on another row in the batch dialog to add another file; in the file selector, it would be nice to have the previously selected format (esri shapefile) instead of having to scroll down to again select the proper format. People will likely use the same format than the default one and this will, if i'm not wrong, align it with other file selector widgets.
1.0
[Processing]Last used format is not proposed by default in file selector dialog when running in batch - Author Name: **Harrissou Santanna** (@DelazJ) Original Redmine Issue: [20128](https://issues.qgis.org/issues/20128) Redmine category:processing/gui --- Open an algorithm dialog switch to batch mode click the browser icon for input layer in the select file dialog, browse to a folder and use the drop-down menu of the formats (let's say esri shapefile) to filter layer. Select any and add. (You may also run the alg at this step) Click on another row in the batch dialog to add another file; in the file selector, it would be nice to have the previously selected format (esri shapefile) instead of having to scroll down to again select the proper format. People will likely use the same format than the default one and this will, if i'm not wrong, align it with other file selector widgets.
process
last used format is not proposed by default in file selector dialog when running in batch author name harrissou santanna delazj original redmine issue redmine category processing gui open an algorithm dialog switch to batch mode click the browser icon for input layer in the select file dialog browse to a folder and use the drop down menu of the formats let s say esri shapefile to filter layer select any and add you may also run the alg at this step click on another row in the batch dialog to add another file in the file selector it would be nice to have the previously selected format esri shapefile instead of having to scroll down to again select the proper format people will likely use the same format than the default one and this will if i m not wrong align it with other file selector widgets
1
7,922
11,098,833,387
IssuesEvent
2019-12-16 15:54:15
GoogleCloudPlatform/java-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples
opened
Fix ignored test FhirResourceTests.test_FhirResourceConditionalPatch()
type: process
## In which file did you encounter the issue? https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/healthcare/v1beta1/src/test/java/snippets/healthcare/FhirResourceTests.java#L174 ### Did you change the file? If so, how? No ## Describe the issue The test is ignored.
1.0
Fix ignored test FhirResourceTests.test_FhirResourceConditionalPatch() - ## In which file did you encounter the issue? https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/healthcare/v1beta1/src/test/java/snippets/healthcare/FhirResourceTests.java#L174 ### Did you change the file? If so, how? No ## Describe the issue The test is ignored.
process
fix ignored test fhirresourcetests test fhirresourceconditionalpatch in which file did you encounter the issue did you change the file if so how no describe the issue the test is ignored
1
78,070
22,107,205,542
IssuesEvent
2022-06-01 17:57:10
dotnet/arcade
https://api.github.com/repos/dotnet/arcade
closed
Build failed: dotnet-arcade-validation-official/main #20220531.6
Build Failed
Build [#20220531.6](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1799192) partiallySucceeded ## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded ### Summary **Finished** - Wed, 01 Jun 2022 17:55:09 GMT **Duration** - 132 minutes **Requested for** - Microsoft.VisualStudio.Services.TFS **Reason** - schedule ### Details #### Promote Arcade to '.NET Eng - Latest' channel - :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1799192/logs/400) - The latest build on 'main' branch for the 'runtime' repository was not successful. ### Changes
1.0
Build failed: dotnet-arcade-validation-official/main #20220531.6 - Build [#20220531.6](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=1799192) partiallySucceeded ## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded ### Summary **Finished** - Wed, 01 Jun 2022 17:55:09 GMT **Duration** - 132 minutes **Requested for** - Microsoft.VisualStudio.Services.TFS **Reason** - schedule ### Details #### Promote Arcade to '.NET Eng - Latest' channel - :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/1799192/logs/400) - The latest build on 'main' branch for the 'runtime' repository was not successful. ### Changes
non_process
build failed dotnet arcade validation official main build partiallysucceeded warning internal dotnet arcade validation official partiallysucceeded summary finished wed jun gmt duration minutes requested for microsoft visualstudio services tfs reason schedule details promote arcade to net eng latest channel warning the latest build on main branch for the runtime repository was not successful changes
0
10,109
13,044,162,175
IssuesEvent
2020-07-29 03:47:30
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `CurrentTime0Arg` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `CurrentTime0Arg` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `CurrentTime0Arg` from TiDB - ## Description Port the scalar function `CurrentTime0Arg` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function from tidb description port the scalar function from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
1
242,576
20,254,110,441
IssuesEvent
2022-02-14 21:04:04
rspott/WAF-test02
https://api.github.com/repos/rspott/WAF-test02
opened
Periodically perform external and/or internal workload security audits
WARP-Import test1 Security Security & Compliance Compliance
<a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-audit#review-critical-access">Periodically perform external and/or internal workload security audits</a> <p><b>Why Consider This?</b></p> Compliance is important for several reasons. Aside from signifying levels of standards, like ISO 27001 and others, noncompliance with regulatory guidelines may bring sanctions and penalties. <p><b>Context</b></p> <p><b>Suggested Actions</b></p> <p><span>Use Azure Defender (Azure Security Center) to"nbsp; continuously assess and monitor your compliance score. </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance" target="_blank"><span>https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance</span></a><span /></p>
1.0
Periodically perform external and/or internal workload security audits - <a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-audit#review-critical-access">Periodically perform external and/or internal workload security audits</a> <p><b>Why Consider This?</b></p> Compliance is important for several reasons. Aside from signifying levels of standards, like ISO 27001 and others, noncompliance with regulatory guidelines may bring sanctions and penalties. <p><b>Context</b></p> <p><b>Suggested Actions</b></p> <p><span>Use Azure Defender (Azure Security Center) to"nbsp; continuously assess and monitor your compliance score. </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance" target="_blank"><span>https://docs.microsoft.com/en-us/azure/security-center/security-center-compliance-dashboard#assess-your-regulatory-compliance</span></a><span /></p>
non_process
periodically perform external and or internal workload security audits why consider this compliance is important for several reasons aside from signifying levels of standards like iso and others noncompliance with regulatory guidelines may bring sanctions and penalties context suggested actions use azure defender azure security center to nbsp continuously assess and monitor your compliance score learn more
0
16,923
22,268,396,739
IssuesEvent
2022-06-10 09:44:32
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Flaky test statistics reports nonsensical names
kind/bug team/distributed team/process-automation area/test
**Describe the bug** It happens sometimes, looking at the flaky test dashboard, that some builds report nonsensical names for their failures. The top reported failing test is simply `No`, for example. It's unclear if the error is in the Jenkins pipeline or in the plugin, but in order to properly understand how often a test fails, we need to collect the right data. **To Reproduce** I unfortunately couldn't find an existing build with this as we only keep 10 builds, but run a branch, or just develop, and it will eventually happen. **Expected behavior** We report the right flaky tests in order to collect statistics properly.
1.0
Flaky test statistics reports nonsensical names - **Describe the bug** It happens sometimes, looking at the flaky test dashboard, that some builds report nonsensical names for their failures. The top reported failing test is simply `No`, for example. It's unclear if the error is in the Jenkins pipeline or in the plugin, but in order to properly understand how often a test fails, we need to collect the right data. **To Reproduce** I unfortunately couldn't find an existing build with this as we only keep 10 builds, but run a branch, or just develop, and it will eventually happen. **Expected behavior** We report the right flaky tests in order to collect statistics properly.
process
flaky test statistics reports nonsensical names describe the bug it happens sometimes looking at the flaky test dashboard that some builds report nonsensical names for their failures the top reported failing test is simply no for example it s unclear if the error is in the jenkins pipeline or in the plugin but in order to properly understand how often a test fails we need to collect the right data to reproduce i unfortunately couldn t find an existing build with this as we only keep builds but run a branch or just develop and it will eventually happen expected behavior we report the right flaky tests in order to collect statistics properly
1
236
2,685,585,401
IssuesEvent
2015-03-30 03:05:51
MoonStorm/trNgGrid
https://api.github.com/repos/MoonStorm/trNgGrid
closed
Angular 1.3.15 breaks 3.1.0 RC
incompatibility released
There is a change from Angular 1.3.14 to 1.3.15 (released 17 March 2015) that breaks trNgGrid 3.1.0 RC. The expected behavior is that the full grid should render; instead, `<thead>` and `<tfoot>` render, but `<tbody>` does not. All `debugMode` log items register as they should. See [plunker](http://plnkr.co/edit/uRK4aybUSUNbClgdsXpn?p=preview). If you just change `1.3.15` to `1.3.14`, trNgGrid will render correctly.
True
Angular 1.3.15 breaks 3.1.0 RC - There is a change from Angular 1.3.14 to 1.3.15 (released 17 March 2015) that breaks trNgGrid 3.1.0 RC. The expected behavior is that the full grid should render; instead, `<thead>` and `<tfoot>` render, but `<tbody>` does not. All `debugMode` log items register as they should. See [plunker](http://plnkr.co/edit/uRK4aybUSUNbClgdsXpn?p=preview). If you just change `1.3.15` to `1.3.14`, trNgGrid will render correctly.
non_process
angular breaks rc there is a change from angular to released march that breaks trnggrid rc the expected behavior is that the full grid should render instead and render but does not all debugmode log items register as they should see if you just change to trnggrid will render correctly
0
470,814
13,546,639,104
IssuesEvent
2020-09-17 01:52:30
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
Possible use-after-free of rx_msg->tx_block in kernel/mailbox.c
bug priority: low
**Describe the bug** In function mbox_message_dispose(), rx_msg->tx_block is freed on line 186: k_mem_pool_free(&rx_msg->tx_block); But it is used on line 187: rx_msg->tx_block.data = NULL;
1.0
Possible use-after-free of rx_msg->tx_block in kernel/mailbox.c - **Describe the bug** In function mbox_message_dispose(), rx_msg->tx_block is freed on line 186: k_mem_pool_free(&rx_msg->tx_block); But it is used on line 187: rx_msg->tx_block.data = NULL;
non_process
possible use after free of rx msg tx block in kernel mailbox c describe the bug in function mbox message dispose rx msg tx block is freed on line k mem pool free rx msg tx block but it is used on line rx msg tx block data null
0
46,114
5,785,056,194
IssuesEvent
2017-05-01 00:39:00
eriq-augustine/psl
https://api.github.com/repos/eriq-augustine/psl
opened
Reasoner Re-Architecture Testing Megathread
Refactor Testing
Put things here that need to be tested and possible reworked with the new reasoner re-architecture. - ExecutableReasoner (#68) - BooleanMCSat - BooleanMaxWalkSat - ADMMReasoner.getDualIncompatibility() - This first updates the global variables with the local values, but this doesn't sound right, since it doesn't set them back.
1.0
Reasoner Re-Architecture Testing Megathread - Put things here that need to be tested and possible reworked with the new reasoner re-architecture. - ExecutableReasoner (#68) - BooleanMCSat - BooleanMaxWalkSat - ADMMReasoner.getDualIncompatibility() - This first updates the global variables with the local values, but this doesn't sound right, since it doesn't set them back.
non_process
reasoner re architecture testing megathread put things here that need to be tested and possible reworked with the new reasoner re architecture executablereasoner booleanmcsat booleanmaxwalksat admmreasoner getdualincompatibility this first updates the global variables with the local values but this doesn t sound right since it doesn t set them back
0
19,788
26,169,968,046
IssuesEvent
2023-01-01 19:53:55
AbdElAziz333/Canary
https://api.github.com/repos/AbdElAziz333/Canary
closed
Canary prevents Immersive Engineering from showing graphic multiblock building instructions.
in processing
### Version Information Canary 0.0.10 for 1.18.2 ### Expected Behavior The first page of any multiblock machine in the engineer's manual shows an animation on how to build the structure step-by-step. ### Actual Behavior The first page of any multiblock machine in the engineer's manual doesn't show said animation. ### Reproduction Steps 1. Install both Canary and Immersive Engineering. 2. Get the Engineer's Manual. 3. Open any page about any multiblock machine (Arc furnace or Sawmill for example) 4. There is supposed to be an animation on the top of the page, but it doesn't show. Leaving the top of the page blank. ### Other Information An example of what Engineer's manual looks like without Canary : https://i.imgur.com/GLKMKQB.png With canary, it looks the same expect there isn't the multiblock structure showing.
1.0
Canary prevents Immersive Engineering from showing graphic multiblock building instructions. - ### Version Information Canary 0.0.10 for 1.18.2 ### Expected Behavior The first page of any multiblock machine in the engineer's manual shows an animation on how to build the structure step-by-step. ### Actual Behavior The first page of any multiblock machine in the engineer's manual doesn't show said animation. ### Reproduction Steps 1. Install both Canary and Immersive Engineering. 2. Get the Engineer's Manual. 3. Open any page about any multiblock machine (Arc furnace or Sawmill for example) 4. There is supposed to be an animation on the top of the page, but it doesn't show. Leaving the top of the page blank. ### Other Information An example of what Engineer's manual looks like without Canary : https://i.imgur.com/GLKMKQB.png With canary, it looks the same expect there isn't the multiblock structure showing.
process
canary prevents immersive engineering from showing graphic multiblock building instructions version information canary for expected behavior the first page of any multiblock machine in the engineer s manual shows an animation on how to build the structure step by step actual behavior the first page of any multiblock machine in the engineer s manual doesn t show said animation reproduction steps install both canary and immersive engineering get the engineer s manual open any page about any multiblock machine arc furnace or sawmill for example there is supposed to be an animation on the top of the page but it doesn t show leaving the top of the page blank other information an example of what engineer s manual looks like without canary with canary it looks the same expect there isn t the multiblock structure showing
1
161,512
12,547,166,950
IssuesEvent
2020-06-05 22:12:44
molpopgen/fwdpp
https://api.github.com/repos/molpopgen/fwdpp
closed
Test runs too long
unit testing
One of the integration tests runs a simulation for far too long, and it is not really necessary.
1.0
Test runs too long - One of the integration tests runs a simulation for far too long, and it is not really necessary.
non_process
test runs too long one of the integration tests runs a simulation for far too long and it is not really necessary
0
397,502
27,168,114,356
IssuesEvent
2023-02-17 16:55:10
Xolvez/DD2480-JavaScript
https://api.github.com/repos/Xolvez/DD2480-JavaScript
opened
Refactor the function `FlashSort.flashSort`
documentation
According to criteria 4.2.3, refactor the function `FlashSort.flashSort` (15 CCN) in order to reduce its complexity by at least 35% (9 CCN).
1.0
Refactor the function `FlashSort.flashSort` - According to criteria 4.2.3, refactor the function `FlashSort.flashSort` (15 CCN) in order to reduce its complexity by at least 35% (9 CCN).
non_process
refactor the function flashsort flashsort according to criteria refactor the function flashsort flashsort ccn in order to reduce its complexity by at least ccn
0
153,842
19,708,624,923
IssuesEvent
2022-01-13 01:46:16
artsking/linux-4.19.72_CVE-2020-14386
https://api.github.com/repos/artsking/linux-4.19.72_CVE-2020-14386
opened
CVE-2020-15436 (Medium) detected in linux-yoctov5.4.51
security vulnerability
## CVE-2020-15436 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/block_dev.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/block_dev.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Use-after-free vulnerability in fs/block_dev.c in the Linux kernel before 5.8 allows local users to gain privileges or cause a denial of service by leveraging improper access to a certain error field. <p>Publish Date: 2020-11-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15436>CVE-2020-15436</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-15436">https://www.linuxkernelcves.com/cves/CVE-2020-15436</a></p> <p>Release Date: 2020-11-23</p> <p>Fix Resolution: v4.4.229, v4.9.229, v4.14.186, v4.19.130, v5.4.49, v5.7.6, v5.8-rc2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15436 (Medium) detected in linux-yoctov5.4.51 - ## CVE-2020-15436 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/block_dev.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/block_dev.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Use-after-free vulnerability in fs/block_dev.c in the Linux kernel before 5.8 allows local users to gain privileges or cause a denial of service by leveraging improper access to a certain error field. 
<p>Publish Date: 2020-11-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15436>CVE-2020-15436</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-15436">https://www.linuxkernelcves.com/cves/CVE-2020-15436</a></p> <p>Release Date: 2020-11-23</p> <p>Fix Resolution: v4.4.229, v4.9.229, v4.14.186, v4.19.130, v5.4.49, v5.7.6, v5.8-rc2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linux cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files fs block dev c fs block dev c vulnerability details use after free vulnerability in fs block dev c in the linux kernel before allows local users to gain privileges or cause a denial of service by leveraging improper access to a certain error field publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
9,106
12,191,183,575
IssuesEvent
2020-04-29 10:39:10
didi/mpx
https://api.github.com/repos/didi/mpx
closed
[Bug report]TS模版创建项目,输出Web,不能运行,mpx-keep-alive、router-view未注册
processing
**问题描述** TS模版创建项目,输出Web,不能运行,mpx-keep-alive、router-view未注册 ![image](https://user-images.githubusercontent.com/687757/80399141-4edcb280-88eb-11ea-8890-31519fdef11b.png) **复现步骤** ![image](https://user-images.githubusercontent.com/687757/80399031-22c13180-88eb-11ea-8c7b-432a206e2c12.png) ![image](https://user-images.githubusercontent.com/687757/80399451-c7dc0a00-88eb-11ea-9e44-4fe5f72c2ee6.png)
1.0
[Bug report]TS模版创建项目,输出Web,不能运行,mpx-keep-alive、router-view未注册 - **问题描述** TS模版创建项目,输出Web,不能运行,mpx-keep-alive、router-view未注册 ![image](https://user-images.githubusercontent.com/687757/80399141-4edcb280-88eb-11ea-8890-31519fdef11b.png) **复现步骤** ![image](https://user-images.githubusercontent.com/687757/80399031-22c13180-88eb-11ea-8c7b-432a206e2c12.png) ![image](https://user-images.githubusercontent.com/687757/80399451-c7dc0a00-88eb-11ea-9e44-4fe5f72c2ee6.png)
process
ts模版创建项目,输出web,不能运行,mpx keep alive、router view未注册 问题描述 ts模版创建项目,输出web,不能运行,mpx keep alive、router view未注册 复现步骤
1
91,604
18,669,010,335
IssuesEvent
2021-10-30 10:41:58
Onelinerhub/onelinerhub
https://api.github.com/repos/Onelinerhub/onelinerhub
closed
Write shortest possible code: python how to take screenshot (python)
help wanted good first issue code python
Please write shortest code example for this question: **python how to take screenshot** in python ### How to do it: 1. Go to [python codes](https://github.com/Onelinerhub/onelinerhub/tree/main/python) 2. Create new file (named in underscore case, should contain key words from title) with `md` extension (markdown file). 3. Propose new file with following content (please use all three blocks if possible - title, code itself and explanations list): ~~~ # python how to take screenshot ```python code part1 part2 part3 ... ``` - part1 - explain code part 1 - part2 - explain code part 2 - ... ~~~ More [advanced template](https://github.com/Onelinerhub/onelinerhub/blob/main/template.md) for examples and linked solutions. More [docs here](https://github.com/Onelinerhub/onelinerhub#onelinerhub).
1.0
Write shortest possible code: python how to take screenshot (python) - Please write shortest code example for this question: **python how to take screenshot** in python ### How to do it: 1. Go to [python codes](https://github.com/Onelinerhub/onelinerhub/tree/main/python) 2. Create new file (named in underscore case, should contain key words from title) with `md` extension (markdown file). 3. Propose new file with following content (please use all three blocks if possible - title, code itself and explanations list): ~~~ # python how to take screenshot ```python code part1 part2 part3 ... ``` - part1 - explain code part 1 - part2 - explain code part 2 - ... ~~~ More [advanced template](https://github.com/Onelinerhub/onelinerhub/blob/main/template.md) for examples and linked solutions. More [docs here](https://github.com/Onelinerhub/onelinerhub#onelinerhub).
non_process
write shortest possible code python how to take screenshot python please write shortest code example for this question python how to take screenshot in python how to do it go to create new file named in underscore case should contain key words from title with md extension markdown file propose new file with following content please use all three blocks if possible title code itself and explanations list python how to take screenshot python code explain code part explain code part more for examples and linked solutions more
0
344,948
10,350,617,784
IssuesEvent
2019-09-05 03:34:38
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
Check commands for spawning default instruments
Fixed Low Priority QA
Default instruments are odd and doesn't work. Change them for so,ething reliable.
1.0
Check commands for spawning default instruments - Default instruments are odd and doesn't work. Change them for so,ething reliable.
non_process
check commands for spawning default instruments default instruments are odd and doesn t work change them for so ething reliable
0
219,570
24,501,520,861
IssuesEvent
2022-10-10 13:10:18
kyverno/kyverno
https://api.github.com/repos/kyverno/kyverno
closed
[Feature] Periodic image scanning
enhancement security
### Problem Statement Need to scan the Kyverno images periodically for vulnerabilities and be notified. ### Solution Description Create a GitHub Action which does this using Trivy and/or Grype. Open an issue if any are found. ### Alternatives _No response_ ### Additional Context https://github.com/marketplace/actions/create-an-issue ### Slack discussion _No response_ ### Research - [X] I have read and followed the documentation AND the [troubleshooting guide](https://kyverno.io/docs/troubleshooting/). - [X] I have searched other issues in this repository and mine is not recorded.
True
[Feature] Periodic image scanning - ### Problem Statement Need to scan the Kyverno images periodically for vulnerabilities and be notified. ### Solution Description Create a GitHub Action which does this using Trivy and/or Grype. Open an issue if any are found. ### Alternatives _No response_ ### Additional Context https://github.com/marketplace/actions/create-an-issue ### Slack discussion _No response_ ### Research - [X] I have read and followed the documentation AND the [troubleshooting guide](https://kyverno.io/docs/troubleshooting/). - [X] I have searched other issues in this repository and mine is not recorded.
non_process
periodic image scanning problem statement need to scan the kyverno images periodically for vulnerabilities and be notified solution description create a github action which does this using trivy and or grype open an issue if any are found alternatives no response additional context slack discussion no response research i have read and followed the documentation and the i have searched other issues in this repository and mine is not recorded
0
29,295
5,622,810,749
IssuesEvent
2017-04-04 13:41:52
JGCRI/gcamdata
https://api.github.com/repos/JGCRI/gcamdata
opened
Ethiopia FAO data
aglu documentation question
Ethiopia appears twice in the FAO data used in LA101.ag_FAO_R_C_Y. Both "countries" have the same iso code, but different country names. The old data system (and now the DSR) aggregate the two countries together, but more documentation/checking on the problem would be good.
1.0
Ethiopia FAO data - Ethiopia appears twice in the FAO data used in LA101.ag_FAO_R_C_Y. Both "countries" have the same iso code, but different country names. The old data system (and now the DSR) aggregate the two countries together, but more documentation/checking on the problem would be good.
non_process
ethiopia fao data ethiopia appears twice in the fao data used in ag fao r c y both countries have the same iso code but different country names the old data system and now the dsr aggregate the two countries together but more documentation checking on the problem would be good
0
4,889
7,763,754,409
IssuesEvent
2018-06-01 17:43:19
StrikeNP/trac_test
https://api.github.com/repos/StrikeNP/trac_test
closed
Link Trac site to internal group page (Trac #3)
Migrated from Trac post_processing senkbeil@uwm.edu task
In order for this CLUBB Trac site to be useful to the group, it must be readily accessible. A link should be placed on the internal group page. Also there is no component for internal site issues. Perhaps this means I am using this trac site wrong. Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/3 ```json { "status": "closed", "changetime": "2009-05-13T18:09:47", "description": "In order for this CLUBB Trac site to be useful to the group, it must be readily accessible. A link should be placed on the internal group page.\n\nAlso there is no component for internal site issues. Perhaps this means I am using this trac site wrong.", "reporter": "fasching@uwm.edu", "cc": "", "resolution": "Verified by V. Larson", "_ts": "1242238187000000", "component": "post_processing", "summary": "Link Trac site to internal group page", "priority": "minor", "keywords": "", "time": "2009-05-01T21:17:44", "milestone": "", "owner": "senkbeil@uwm.edu", "type": "task" } ```
1.0
Link Trac site to internal group page (Trac #3) - In order for this CLUBB Trac site to be useful to the group, it must be readily accessible. A link should be placed on the internal group page. Also there is no component for internal site issues. Perhaps this means I am using this trac site wrong. Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/3 ```json { "status": "closed", "changetime": "2009-05-13T18:09:47", "description": "In order for this CLUBB Trac site to be useful to the group, it must be readily accessible. A link should be placed on the internal group page.\n\nAlso there is no component for internal site issues. Perhaps this means I am using this trac site wrong.", "reporter": "fasching@uwm.edu", "cc": "", "resolution": "Verified by V. Larson", "_ts": "1242238187000000", "component": "post_processing", "summary": "Link Trac site to internal group page", "priority": "minor", "keywords": "", "time": "2009-05-01T21:17:44", "milestone": "", "owner": "senkbeil@uwm.edu", "type": "task" } ```
process
link trac site to internal group page trac in order for this clubb trac site to be useful to the group it must be readily accessible a link should be placed on the internal group page also there is no component for internal site issues perhaps this means i am using this trac site wrong migrated from json status closed changetime description in order for this clubb trac site to be useful to the group it must be readily accessible a link should be placed on the internal group page n nalso there is no component for internal site issues perhaps this means i am using this trac site wrong reporter fasching uwm edu cc resolution verified by v larson ts component post processing summary link trac site to internal group page priority minor keywords time milestone owner senkbeil uwm edu type task
1
160
2,582,968,595
IssuesEvent
2015-02-15 20:59:29
dalehenrich/metacello-work
https://api.github.com/repos/dalehenrich/metacello-work
closed
Pharo4.0: Class category name 'Metacello-TestsPharo20MC' for the class 'MetacelloTestsPackageSet' is inconsistent with the package name 'Metacello-TestsCommonMC.pharo20'
in process
https://travis-ci.org/dalehenrich/metacello-work/jobs/50861935#L309
1.0
Pharo4.0: Class category name 'Metacello-TestsPharo20MC' for the class 'MetacelloTestsPackageSet' is inconsistent with the package name 'Metacello-TestsCommonMC.pharo20' - https://travis-ci.org/dalehenrich/metacello-work/jobs/50861935#L309
process
class category name metacello for the class metacellotestspackageset is inconsistent with the package name metacello testscommonmc
1
21,375
29,202,228,165
IssuesEvent
2023-05-21 00:36:50
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Remoto] Fullstack Developer ASP.NET Core na Coodesh
SALVADOR PJ FULL-STACK SQL REACT REQUISITOS REMOTO ASP.NET BACKEND GITHUB AZURE SEGURANÇA UMA C ERP AUTOMAÇÃO DE PROCESSOS Stale
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-developer-aspnet-core-202940490?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Guardian RH </strong>está em busca de <strong><em>Fullstack Developer ASP.NET</em></strong> para compor seu time!</p> <p></p> <p>O <strong>Guardian RH</strong> é um sistema de gerenciamento e segurança de dados que te auxilia nas rotinas diárias de folha de pagamento. Somos uma equipe especializada com mais de 20 anos de atuação em RH e folha de pagamento, em empresas dos mais variados segmentos. Juntamos tecnologia, experiência em folha de pagamento e e-social para descomplicar seu dia a dia.</p> ## Guardian RH: <p>Se você busca saber como a automação de processos tem impactado nas rotinas de folha de pagamento, compliance e no atendimento ao E-Social, conheça o Guardian! Somos uma solução completa de automação para conferência, gestão e otimização da folha de pagamento integrada ao seu ERP.</p> <p>Garantimos para você máxima segurança, assertividade e eficiência nas rotinas de fechamento de folha, ponto, jornada trabalhada, horas extras, férias, sindicato, SST, comparativos de arquivos de retorno e vários outros!&nbsp;</p><a href='https://coodesh.com/empresas/guardian-tecnologia-da-informacao-ltda'>Veja mais no site</a> ## Habilidades: - Asp.Net - .NET Core - React.js ## Local: 100% Remoto ## Requisitos: - Experiência em ASP.NET Core e React; - Sólida experiência em backend com conhecimentos em frontend; - Experiência produzir códigos bem estruturados e fundamentados. ## Diferenciais: - Conhecimentos em SQL e Azure. 
## Benefícios: - 15 Dias de recesso remunerado - 100% Remoto ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Fullstack Developer ASP.NET Core na Guardian RH](https://coodesh.com/vagas/fullstack-developer-aspnet-core-202940490?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Regime PJ #### Categoria Full-Stack
1.0
[Remoto] Fullstack Developer ASP.NET Core na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-developer-aspnet-core-202940490?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>Guardian RH </strong>está em busca de <strong><em>Fullstack Developer ASP.NET</em></strong> para compor seu time!</p> <p></p> <p>O <strong>Guardian RH</strong> é um sistema de gerenciamento e segurança de dados que te auxilia nas rotinas diárias de folha de pagamento. Somos uma equipe especializada com mais de 20 anos de atuação em RH e folha de pagamento, em empresas dos mais variados segmentos. Juntamos tecnologia, experiência em folha de pagamento e e-social para descomplicar seu dia a dia.</p> ## Guardian RH: <p>Se você busca saber como a automação de processos tem impactado nas rotinas de folha de pagamento, compliance e no atendimento ao E-Social, conheça o Guardian! Somos uma solução completa de automação para conferência, gestão e otimização da folha de pagamento integrada ao seu ERP.</p> <p>Garantimos para você máxima segurança, assertividade e eficiência nas rotinas de fechamento de folha, ponto, jornada trabalhada, horas extras, férias, sindicato, SST, comparativos de arquivos de retorno e vários outros!&nbsp;</p><a href='https://coodesh.com/empresas/guardian-tecnologia-da-informacao-ltda'>Veja mais no site</a> ## Habilidades: - Asp.Net - .NET Core - React.js ## Local: 100% Remoto ## Requisitos: - Experiência em ASP.NET Core e React; - Sólida experiência em backend com conhecimentos em frontend; - Experiência produzir códigos bem estruturados e fundamentados. ## Diferenciais: - Conhecimentos em SQL e Azure. 
## Benefícios: - 15 Dias de recesso remunerado - 100% Remoto ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Fullstack Developer ASP.NET Core na Guardian RH](https://coodesh.com/vagas/fullstack-developer-aspnet-core-202940490?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Regime PJ #### Categoria Full-Stack
process
fullstack developer asp net core na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a guardian rh está em busca de fullstack developer asp net para compor seu time o guardian rh é um sistema de gerenciamento e segurança de dados que te auxilia nas rotinas diárias de folha de pagamento somos uma equipe especializada com mais de anos de atuação em rh e folha de pagamento em empresas dos mais variados segmentos juntamos tecnologia experiência em folha de pagamento e e social para descomplicar seu dia a dia guardian rh se você busca saber como a automação de processos tem impactado nas rotinas de folha de pagamento compliance e no atendimento ao e social conheça o guardian somos uma solução completa de automação para conferência gestão e otimização da folha de pagamento integrada ao seu erp garantimos para você máxima segurança assertividade e eficiência nas rotinas de fechamento de folha ponto jornada trabalhada horas extras férias sindicato sst comparativos de arquivos de retorno e vários outros nbsp habilidades asp net net core react js local remoto requisitos experiência em asp net core e react sólida experiência em backend com conhecimentos em frontend experiência produzir códigos bem estruturados e fundamentados diferenciais conhecimentos em sql e azure benefícios dias de recesso remunerado remoto como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria 
full stack
1
5,827
8,664,542,663
IssuesEvent
2018-11-28 20:29:33
lightningWhite/weatherLearning
https://api.github.com/repos/lightningWhite/weatherLearning
closed
Bin the targets
dataProcessing to do
After selecting the binning categories, we need to implement the actual binning of the "weather_description.csv" data (targets).
1.0
Bin the targets - After selecting the binning categories, we need to implement the actual binning of the "weather_description.csv" data (targets).
process
bin the targets after selecting the binning categories we need to implement the actual binning of the weather description csv data targets
1
74,598
25,202,080,338
IssuesEvent
2022-11-13 08:22:03
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
Button: Button href atribute rendering can be wrong when generate window.open
:lady_beetle: defect :bangbang: needs-triage
### Describe the bug When set href on button component like: <p:button href="http://primefaces.com/teste"> they render this: ... onclick="window.open('http:\\/\\/primefaces.com\\/test','_parent')" .. Thats not a problem in modern browser because they can handle it, but in a webview on a a flutter app, thats return a http error code -6 Maybe EscapeUtils.forJavaScript can be removed when targetURL != null thanks ### Reproducer _No response_ ### Expected behavior _No response_ ### PrimeFaces edition _No response_ ### PrimeFaces version 12.0.1 ### Theme saga ### JSF implementation Mojarra ### JSF version 2.4.8.Final ### Java version 1.8 ### Browser(s) webview on flutter
1.0
Button: Button href atribute rendering can be wrong when generate window.open - ### Describe the bug When set href on button component like: <p:button href="http://primefaces.com/teste"> they render this: ... onclick="window.open('http:\\/\\/primefaces.com\\/test','_parent')" .. Thats not a problem in modern browser because they can handle it, but in a webview on a a flutter app, thats return a http error code -6 Maybe EscapeUtils.forJavaScript can be removed when targetURL != null thanks ### Reproducer _No response_ ### Expected behavior _No response_ ### PrimeFaces edition _No response_ ### PrimeFaces version 12.0.1 ### Theme saga ### JSF implementation Mojarra ### JSF version 2.4.8.Final ### Java version 1.8 ### Browser(s) webview on flutter
non_process
button button href atribute rendering can be wrong when generate window open describe the bug when set href on button component like p button href they render this onclick window open http primefaces com test parent thats not a problem in modern browser because they can handle it but in a webview on a a flutter app thats return a http error code maybe escapeutils forjavascript can be removed when targeturl null thanks reproducer no response expected behavior no response primefaces edition no response primefaces version theme saga jsf implementation mojarra jsf version final java version browser s webview on flutter
0
16,715
21,873,031,490
IssuesEvent
2022-05-19 07:39:49
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
opened
Evaluate benefits of analyzers
type: process
Currently we have four analyzers in Google.Cloud.Tools.Analyzers: - One about default literals, which is documented as probably redundant now - Optional parameters must be specified internally - Every file must have a copyright notice - We can't expose particular dependencies in our public API We removed the analyzer for Diagnostics a while ago. It's not clear whether the above analyzer is actually running - that's testable, of course, but it probably *isn't* running in CI, which is where it would be useful. The downside is that there's quite a lot of complexity in terms of project generation and build etc. Let's work out whether the pros and cons balance.
1.0
Evaluate benefits of analyzers - Currently we have four analyzers in Google.Cloud.Tools.Analyzers: - One about default literals, which is documented as probably redundant now - Optional parameters must be specified internally - Every file must have a copyright notice - We can't expose particular dependencies in our public API We removed the analyzer for Diagnostics a while ago. It's not clear whether the above analyzer is actually running - that's testable, of course, but it probably *isn't* running in CI, which is where it would be useful. The downside is that there's quite a lot of complexity in terms of project generation and build etc. Let's work out whether the pros and cons balance.
process
evaluate benefits of analyzers currently we have four analyzers in google cloud tools analyzers one about default literals which is documented as probably redundant now optional parameters must be specified internally every file must have a copyright notice we can t expose particular dependencies in our public api we removed the analyzer for diagnostics a while ago it s not clear whether the above analyzer is actually running that s testable of course but it probably isn t running in ci which is where it would be useful the downside is that there s quite a lot of complexity in terms of project generation and build etc let s work out whether the pros and cons balance
1
5,086
7,876,062,827
IssuesEvent
2018-06-25 22:53:00
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Statistics needed for binary data upgrade
libs-all status-inprocess type-enhancement
- [ ] #158 How many bytes are currently stored in bloom filters in blocks vs. transactions? - [ ] #199 Number of One Hit Wonders - [ ] Stats: -- nTrans per block -- nError trans per block -- nTraces per trans -- nLogs per receipt -- nFiles -- nBytes -- nBloom files -- nBloom bytes - [x] ~~Preserve old data until all testing is done. Documented in #226.~~ - [x] ~~Surround as much of the data with tests and statistics as possible before starting. Documented in #226.~~
1.0
Statistics needed for binary data upgrade - - [ ] #158 How many bytes are currently stored in bloom filters in blocks vs. transactions? - [ ] #199 Number of One Hit Wonders - [ ] Stats: -- nTrans per block -- nError trans per block -- nTraces per trans -- nLogs per receipt -- nFiles -- nBytes -- nBloom files -- nBloom bytes - [x] ~~Preserve old data until all testing is done. Documented in #226.~~ - [x] ~~Surround as much of the data with tests and statistics as possible before starting. Documented in #226.~~
process
statistics needed for binary data upgrade how many bytes are currently stored in bloom filters in blocks vs transactions number of one hit wonders stats ntrans per block nerror trans per block ntraces per trans nlogs per receipt nfiles nbytes nbloom files nbloom bytes preserve old data until all testing is done documented in surround as much of the data with tests and statistics as possible before starting documented in
1
168,249
14,143,693,132
IssuesEvent
2020-11-10 15:34:16
emory-libraries/avalon
https://api.github.com/repos/emory-libraries/avalon
closed
Determine workflows associated with transcoding (ripping file, uploading to server)
documentation requirements service team
Need to determine workflows associated with transcoding (ripping file, uploading to server) - Nina Rao has done this previously. James will discuss with her on specifics of digitization workflows and tools used for this. Need details around how files are temporarily stored/shared until moved to Avalon. Avalon allows for flexibility in file types but James will work on decision around standard type (MP4).
1.0
Determine workflows associated with transcoding (ripping file, uploading to server) - Need to determine workflows associated with transcoding (ripping file, uploading to server) - Nina Rao has done this previously. James will discuss with her on specifics of digitization workflows and tools used for this. Need details around how files are temporarily stored/shared until moved to Avalon. Avalon allows for flexibility in file types but James will work on decision around standard type (MP4).
non_process
determine workflows associated with transcoding ripping file uploading to server need to determine workflows associated with transcoding ripping file uploading to server nina rao has done this previously james will discuss with her on specifics of digitization workflows and tools used for this need details around how files are temporarily stored shared until moved to avalon avalon allows for flexibility in file types but james will work on decision around standard type
0
392,363
11,590,492,026
IssuesEvent
2020-02-24 06:55:24
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.9.0 staging-1307] Missing error notification "can't place a block there"
Priority: Medium Status: Fixed Status: Reopen
Step to reproduce: - select place where you can't to place block: ![image](https://user-images.githubusercontent.com/45708377/71356683-06effc80-2594-11ea-8460-bc5d24fd644f.png) - try to place. I haven't notification. 8.3.2: ![image](https://user-images.githubusercontent.com/45708377/71356889-a8774e00-2594-11ea-9b3e-5917a63fcc65.png)
1.0
[0.9.0 staging-1307] Missing error notification "can't place a block there" - Step to reproduce: - select place where you can't to place block: ![image](https://user-images.githubusercontent.com/45708377/71356683-06effc80-2594-11ea-8460-bc5d24fd644f.png) - try to place. I haven't notification. 8.3.2: ![image](https://user-images.githubusercontent.com/45708377/71356889-a8774e00-2594-11ea-9b3e-5917a63fcc65.png)
non_process
missing error notification can t place a block there step to reproduce select place where you can t to place block try to place i haven t notification
0
2,387
5,187,642,306
IssuesEvent
2017-01-20 17:24:51
Alfresco/alfresco-ng2-components
https://api.github.com/repos/Alfresco/alfresco-ng2-components
closed
After creating a new process first item in process list is shown not newly created process
browser: all bug comp: activiti-processList
Create a new process **Expected results** Process details of process just created is displayed **Actual results** First item in process list is displayed [process list.mp4.zip](https://github.com/Alfresco/alfresco-ng2-components/files/657225/process.list.mp4.zip)
1.0
After creating a new process first item in process list is shown not newly created process - Create a new process **Expected results** Process details of process just created is displayed **Actual results** First item in process list is displayed [process list.mp4.zip](https://github.com/Alfresco/alfresco-ng2-components/files/657225/process.list.mp4.zip)
process
after creating a new process first item in process list is shown not newly created process create a new process expected results process details of process just created is displayed actual results first item in process list is displayed
1
15,195
18,982,213,435
IssuesEvent
2021-11-21 04:13:57
ethereum/EIPs
https://api.github.com/repos/ethereum/EIPs
closed
"Optional EIPs" field in header
type: EIP1 (Process) stale
Hey, I am writing EIP-2470. At security considerations, I suggest the use of other EIPs, but only for that specific cases, so that would be optional EIPs linked to this. Should we have a dedicated field at header or I just link it at references?
1.0
"Optional EIPs" field in header - Hey, I am writing EIP-2470. At security considerations, I suggest the use of other EIPs, but only for that specific cases, so that would be optional EIPs linked to this. Should we have a dedicated field at header or I just link it at references?
process
optional eips field in header hey i am writing eip at security considerations i suggest the use of other eips but only for that specific cases so that would be optional eips linked to this should we have a dedicated field at header or i just link it at references
1
145,190
5,560,105,423
IssuesEvent
2017-03-24 18:34:30
careteditor/caret
https://api.github.com/repos/careteditor/caret
closed
Strange cursor positioning
bug priority
Unsure if this is a sub-issue of #261 but I'm experiencing strange cursor positioning in v2.0.1. I also notice that a cursor continues to be shown after "For example" in the gif. ![Imgur](http://i.imgur.com/ebGvEcc.gif) Using default settings on Ubuntu 16.10.
1.0
Strange cursor positioning - Unsure if this is a sub-issue of #261 but I'm experiencing strange cursor positioning in v2.0.1. I also notice that a cursor continues to be shown after "For example" in the gif. ![Imgur](http://i.imgur.com/ebGvEcc.gif) Using default settings on Ubuntu 16.10.
non_process
strange cursor positioning unsure if this is a sub issue of but i m experiencing strange cursor positioning in i also notice that a cursor continues to be shown after for example in the gif using default settings on ubuntu
0
510,392
14,789,796,654
IssuesEvent
2021-01-12 11:04:26
kubernetes-sigs/cluster-api
https://api.github.com/repos/kubernetes-sigs/cluster-api
closed
Remove embedded metadata from clusterctl
area/clusterctl good first issue help wanted kind/feature lifecycle/active priority/important-soon
<!-- NOTE: ⚠️ For larger proposals, we follow the CAEP process as outlined in https://sigs.k8s.io/cluster-api/CONTRIBUTING.md. --> **User Story** As a user I would like to remove the embedded metadata within clusterctl so that I don't need to rely on using the latest version of clusterctl in order to pull newer provider releases. **Detailed Description** The [embedded metadata provided within clusterctl](https://github.com/kubernetes-sigs/cluster-api/blob/eb07294ddebe8099c359c35a1e42f79244e7d33f/cmd/clusterctl/client/repository/metadata_client.go#L102) was very helpful initially when trying to provide a seamless Day 0 experience. However, recently there have been some issues (see #3418) regarding usage of a specific clusterctl version when trying to pull a latest provider release. However, it is encouraging that newer providers are following [the provider contract](https://cluster-api.sigs.k8s.io/clusterctl/provider-contract.html#provider-repositories) and are releasing a `metadata.yaml` as part of their release artifacts. IMO this would be beneficial for provider authors/developers because if they ship a `metadata.yaml` as part of their own repo release artifacts they won't have to remember or worry about clusterctl's version and backward compatibility. **Anything else you would like to add:** See issue https://github.com/kubernetes-sigs/cluster-api/issues/3418 for relevant info on the issue that raised reliance on embedded metadata. **Warning** One of the biggest drawbacks to removing the embedded metadata would be that, if a user uses the version of clusterctl without the embedded metadata AND they try to pull a provider release which also doesn't have a `metadata.yaml` (e.g. for CAPA, all releases <=v0.5.5) then they are not going to have a good time. A way to get around this is to include the appropriate `metadata.yaml` for all the older releases for that provider. /kind feature /area clusterctl
1.0
Remove embedded metadata from clusterctl - <!-- NOTE: ⚠️ For larger proposals, we follow the CAEP process as outlined in https://sigs.k8s.io/cluster-api/CONTRIBUTING.md. --> **User Story** As a user I would like to remove the embedded metadata within clusterctl so that I don't need to rely on using the latest version of clusterctl in order to pull newer provider releases. **Detailed Description** The [embedded metadata provided within clusterctl](https://github.com/kubernetes-sigs/cluster-api/blob/eb07294ddebe8099c359c35a1e42f79244e7d33f/cmd/clusterctl/client/repository/metadata_client.go#L102) was very helpful initially when trying to provide a seamless Day 0 experience. However, recently there have been some issues (see #3418) regarding usage of a specific clusterctl version when trying to pull a latest provider release. However, it is encouraging that newer providers are following [the provider contract](https://cluster-api.sigs.k8s.io/clusterctl/provider-contract.html#provider-repositories) and are releasing a `metadata.yaml` as part of their release artifacts. IMO this would be beneficial for provider authors/developers because if they ship a `metadata.yaml` as part of their own repo release artifacts they won't have to remember or worry about clusterctl's version and backward compatibility. **Anything else you would like to add:** See issue https://github.com/kubernetes-sigs/cluster-api/issues/3418 for relevant info on the issue that raised reliance on embedded metadata. **Warning** One of the biggest drawbacks to removing the embedded metadata would be that, if a user uses the version of clusterctl without the embedded metadata AND they try to pull a provider release which also doesn't have a `metadata.yaml` (e.g. for CAPA, all releases <=v0.5.5) then they are not going to have a good time. A way to get around this is to include the appropriate `metadata.yaml` for all the older releases for that provider. /kind feature /area clusterctl
non_process
remove embedded metadata from clusterctl user story as a user i would like to remove the embedded metadata within clusterctl so that i don t need to rely on using the latest version of clusterctl in order to pull newer provider releases detailed description the was very helpful initially when trying to provide a seamless day experience however recently there have been some issues see regarding usage of a specific clusterctl version when trying to pull a latest provider release however it is encouraging that newer providers are following and are releasing a metadata yaml as part of their release artifacts imo this would be beneficial for provider authors developers because if they ship a metadata yaml as part of their own repo release artifacts they won t have to remember or worry about clusterctl s version and backward compatibility anything else you would like to add see issue for relevant info on the issue that raised reliance on embedded metadata warning one of the biggest drawbacks to removing the embedded metadata would be that if a user uses the version of clusterctl without the embedded metadata and they try to pull a provider release which also doesn t have a metadata yaml e g for capa all releases then they are not going to have a good time a way to get around this is to include the appropriate metadata yaml for all the older releases for that provider kind feature area clusterctl
0
97,691
20,376,332,149
IssuesEvent
2022-02-21 15:58:38
WordPress/openverse-frontend
https://api.github.com/repos/WordPress/openverse-frontend
opened
Add types to `utils/resampling.js`
good first issue help wanted 🟩 priority: low 🚦 status: awaiting triage ✨ goal: improvement 💻 aspect: code
## Description <!-- Describe the feature and how it solves the problem. --> Add type checking to `resampling.js`. This module has zero dependencies. Make sure to add it to `tsconfig.json`'s `include` list. ## Additional context <!-- Add any other context about the feature here; or delete the section entirely. --> Part of an ongoing effort to add type checking to parts of the project that can be type checked. ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
1.0
Add types to `utils/resampling.js` - ## Description <!-- Describe the feature and how it solves the problem. --> Add type checking to `resampling.js`. This module has zero dependencies. Make sure to add it to `tsconfig.json`'s `include` list. ## Additional context <!-- Add any other context about the feature here; or delete the section entirely. --> Part of an ongoing effort to add type checking to parts of the project that can be type checked. ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
non_process
add types to utils resampling js description add type checking to resampling js this module has zero dependencies make sure to add it to tsconfig json s include list additional context part of an ongoing effort to add type checking to parts of the project that can be type checked implementation 🙋 i would be interested in implementing this feature
0
6,649
9,769,867,763
IssuesEvent
2019-06-06 09:34:37
ESMValGroup/ESMValTool
https://api.github.com/repos/ESMValGroup/ESMValTool
closed
Is there any reason why the area_average and volume_average preprocessors need to receive the coord1 and coord2 arguments?
preprocessor
Is there any reason why the `area_average` and `volume_average` preprocessors need to receive the `coord1` and `coord2` arguments? At the moment, I think we might as are hardwire these to `latitude` and `longitude`. I don't think that anyone is using `area_average` to calculate the transect-weighted average, it's always lat and lon. Alternatively, instead of hard-wiring `latitude` and `longitude`, we could do something clever and get ESMValTool or iris to figure out the names of the dimensions in the i and j directions. At the same time that we change this, we need to edit table 1 in @mattiarighi's documentation paper. Also, we could also rename these preprocessors as described in issue #963.
1.0
Is there any reason why the area_average and volume_average preprocessors need to receive the coord1 and coord2 arguments? - Is there any reason why the `area_average` and `volume_average` preprocessors need to receive the `coord1` and `coord2` arguments? At the moment, I think we might as are hardwire these to `latitude` and `longitude`. I don't think that anyone is using `area_average` to calculate the transect-weighted average, it's always lat and lon. Alternatively, instead of hard-wiring `latitude` and `longitude`, we could do something clever and get ESMValTool or iris to figure out the names of the dimensions in the i and j directions. At the same time that we change this, we need to edit table 1 in @mattiarighi's documentation paper. Also, we could also rename these preprocessors as described in issue #963.
process
is there any reason why the area average and volume average preprocessors need to receive the and arguments is there any reason why the area average and volume average preprocessors need to receive the and arguments at the moment i think we might as are hardwire these to latitude and longitude i don t think that anyone is using area average to calculate the transect weighted average it s always lat and lon alternatively instead of hard wiring latitude and longitude we could do something clever and get esmvaltool or iris to figure out the names of the dimensions in the i and j directions at the same time that we change this we need to edit table in mattiarighi s documentation paper also we could also rename these preprocessors as described in issue
1
17,180
22,761,622,971
IssuesEvent
2022-07-07 21:53:39
Carlosmtp/DomuzSGI
https://api.github.com/repos/Carlosmtp/DomuzSGI
opened
Actualizar Indicadores de Proceso (Backend)
Enhancement High Process Management
- [x] Crear las rutas dónde se recibirán los datos del frontend para la actualización de un indicador de proceso. - [x] Crear la función que permita actualizar un indicador de proceso en la base de datos.
1.0
Actualizar Indicadores de Proceso (Backend) - - [x] Crear las rutas dónde se recibirán los datos del frontend para la actualización de un indicador de proceso. - [x] Crear la función que permita actualizar un indicador de proceso en la base de datos.
process
actualizar indicadores de proceso backend crear las rutas dónde se recibirán los datos del frontend para la actualización de un indicador de proceso crear la función que permita actualizar un indicador de proceso en la base de datos
1
20,422
27,082,550,840
IssuesEvent
2023-02-14 14:56:59
barrycumbie/studious-engine-feb2023-githubPd
https://api.github.com/repos/barrycumbie/studious-engine-feb2023-githubPd
closed
super quick to local dev box demo
process
add this stuff in via `git clone` make our `README.md` into a real live web page deployed via GitHub Pages. 😮 remember `<zero-md>`? let's go find it. couple other goodies to include: https://gist.github.com/barrycumbie/b47f2b7ad78a231d9b8fe4bc539b441f & see day 01 maybe too? this: https://coggle.it/diagram/XfeRbWj7xy3dsEX8/t/web-development-in-2020
1.0
super quick to local dev box demo - add this stuff in via `git clone` make our `README.md` into a real live web page deployed via GitHub Pages. 😮 remember `<zero-md>`? let's go find it. couple other goodies to include: https://gist.github.com/barrycumbie/b47f2b7ad78a231d9b8fe4bc539b441f & see day 01 maybe too? this: https://coggle.it/diagram/XfeRbWj7xy3dsEX8/t/web-development-in-2020
process
super quick to local dev box demo add this stuff in via git clone make our readme md into a real live web page deployed via github pages 😮 remember let s go find it couple other goodies to include see day maybe too this
1
1,706
4,350,187,680
IssuesEvent
2016-07-31 03:09:39
P0cL4bs/WiFi-Pumpkin
https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin
closed
DNS Spoof with Rougue AP
enhancement in process
Is it possible to route all HTTP to the spoofed update page when AP is enabled.
1.0
DNS Spoof with Rougue AP - Is it possible to route all HTTP to the spoofed update page when AP is enabled.
process
dns spoof with rougue ap is it possible to route all http to the spoofed update page when ap is enabled
1