Dataset schema (column, kind, and value stats as reported by the dataset viewer):

| column | kind | stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 757 |
| labels | stringlengths | 4 to 664 |
| body | stringlengths | 3 to 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 232k |
| binary_label | int64 | 0 to 1 |
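The stats above follow a dataset-viewer convention: `stringclasses` columns report a distinct-value count, `stringlengths` columns report min/max character length. A minimal sketch of how those two stats are computed (toy rows, not the real 832k-row dump):

```python
# Toy rows mirroring a few of the columns above (illustrative values only).
rows = [
    {"type": "IssuesEvent", "created_at": "2015-05-11 22:07:52", "label": "defect"},
    {"type": "IssuesEvent", "created_at": "2020-09-08 11:51:16", "label": "non_defect"},
]

def stringclasses(rows, col):
    """Number of distinct values in a string column ('N values' in the schema)."""
    return len({r[col] for r in rows})

def stringlengths(rows, col):
    """(min, max) character length of a string column."""
    lens = [len(r[col]) for r in rows]
    return min(lens), max(lens)

print(stringclasses(rows, "type"))        # 1, matching 'type: 1 value'
print(stringlengths(rows, "created_at"))  # (19, 19), matching the schema
print(stringclasses(rows, "label"))       # 2, matching 'label: 2 values'
```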
Unnamed: 0: 14,242
id: 2,795,456,165
type: IssuesEvent
created_at: 2015-05-11 22:07:52
repo: revelc/formatter-maven-plugin
repo_url: https://api.github.com/repos/revelc/formatter-maven-plugin
action: closed
title: use_on_off_tags to true
labels: auto-migrated Priority-Medium Type-Defect
body:
``` Hi guys, Does your useful plugin support "org.eclipse.jdt.core.formatter.use_on_off_tags" with true value ? it is about @formatter:off - @formatter:on flags.. I'm not able having this working ... may it is my error or ... ? Expected output: code between @formatter:off and @formatter:on not formatted Actual output: code between @formatter:off and @formatter:on not formatted Steps to reproduce: 1. add use_on_off_tags to true in formatter.xml 2. put // @formatter:off and // @formatter:on before a bad formatted portion of code 3. mvn java-formatter:format 4. check code in flags is not formatted Plugin version: 0.3.1 Maven version:3.0.4 Java version:1.7.0_05, vendor: Oracle Eclipse version:Juno OS:Linux Ubuntu Additional details here my formatter.xml ``` Original issue reported on code.google.com by `stefano....@gmail.com` on 24 Jun 2013 at 2:28 Attachments: * [eclipse_formatter.xml](https://storage.googleapis.com/google-code-attachments/maven-java-formatter-plugin/issue-28/comment-0/eclipse_formatter.xml)
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
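Comparing the raw title and body of this record with its cleaned `text` field suggests the normalization applied: lower-casing, replacing digits and punctuation with spaces, and collapsing whitespace. The sketch below is a reverse-engineered approximation, not the dataset authors' actual pipeline (it also strips non-ASCII characters, which the real `text` column sometimes keeps):

```python
import re

def normalize(raw: str) -> str:
    """Approximate the 'text' column: lower-case, replace digits and
    punctuation with spaces, collapse runs of whitespace.
    (Reconstructed from the records; the real pipeline may differ.)"""
    lowered = raw.lower()
    letters_only = re.sub(r"[^a-z\s]", " ", lowered)
    return re.sub(r"\s+", " ", letters_only).strip()

print(normalize("use_on_off_tags to true"))  # use on off tags to true
print(normalize("f->close missing ()"))      # f close missing
```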
Unnamed: 0: 49,202
id: 13,445,705,080
type: IssuesEvent
created_at: 2020-09-08 11:51:16
repo: chaitanya00/aem-wknd
repo_url: https://api.github.com/repos/chaitanya00/aem-wknd
action: opened
title: CVE-2020-15366 (Medium) detected in ajv-6.10.2.tgz
labels: security vulnerability
body:
## CVE-2020-15366 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.10.2.tgz</b></p></summary> <p>Another JSON Schema Validator</p> <p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/aem-wknd/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/aem-wknd/.scannerwork/css-bundle/node_modules/ajv/package.json</p> <p> Dependency Hierarchy: - tap-11.1.5.tgz (Root Library) - coveralls-3.0.9.tgz - request-2.88.0.tgz - har-validator-5.1.3.tgz - :x: **ajv-6.10.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/chaitanya00/aem-wknd/commit/3f4c2902a45eb04bc7915c408df14545aa90511c">3f4c2902a45eb04bc7915c408df14545aa90511c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.) 
<p>Publish Date: 2020-07-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p> <p>Release Date: 2020-07-15</p> <p>Fix Resolution: ajv - 6.12.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: non_defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 0
Unnamed: 0: 72,245
id: 24,013,645,636
type: IssuesEvent
created_at: 2022-09-14 21:21:51
repo: department-of-veterans-affairs/va.gov-cms
repo_url: https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
action: closed
title: "All Day" event checkbox is off screen rather than hidden when creating a VAMC event
labels: Defect Drupal engineering ⭐️ Public Websites
body:
## Describe the defect When editor creates a new event, there is a checkbox "All Day" associated with the date and time area that is off screen/hard to find, but should actually be hidden until All Day events are supported. ## To Reproduce Steps to reproduce the behavior: 1. Go to any event in the VHA section of the CMS 2. Click on edit, then zoom out, making the text smaller until you see a little checkbox appear out to the left. ## AC / Expected behavior - [ ] All Day events field should be hidden - [ ] Add PR link to #3250 so how to unhide is clear when the time comes. ## Screenshots If applicable, add screenshots to help explain your problem. ![Screen Shot 2022-08-18 at 10.24.23 AM.png](https://images.zenhubusercontent.com/629a1db0e70457da479e69ae/8e85d2d0-9fb7-4205-9138-bf36bf5aecc1) ![Screen Shot 2022-08-25 at 3 24 11 PM](https://user-images.githubusercontent.com/85581471/186779319-23e78412-aea7-4970-9004-799a82fa8819.png) ## Labels (You can delete this section once it's complete) - [x] Issue type (red) (defaults to "Defect") - [ ] CMS subsystem (green) - [ ] CMS practice area (blue) - [x] CMS workstream (orange) (not needed for bug tickets) - [ ] CMS-supported product (black) ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [x] `⭐️ Public Websites` - [ ] `⭐️ Facilities` - [ ] `⭐️ User support`
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
Unnamed: 0: 12,124
id: 2,685,014,921
type: IssuesEvent
created_at: 2015-03-29 16:21:49
repo: IssueMigrationTest/Test5
repo_url: https://api.github.com/repos/IssueMigrationTest/Test5
action: closed
title: f->close missing ()
labels: auto-migrated Priority-Medium Type-Defect
body:
**Issue by benjamin...@gmail.com** _11 Apr 2009 at 1:58 GMT_ _Originally opened on Google Code_ ---- ``` ben@bds2:/tmp$ shedskin dxfpurgeblocks *** SHED SKIN Python-to-C++ Compiler 0.1 *** Copyright 2005-2008 Mark Dufour; License GNU GPL version 3 (See LICENSE) [iterative type analysis..] *** iterations: 3 templates: 312 [generating c++ code..] ben@bds2:/tmp$ make g++ -O2 -pipe -Wno-deprecated -I. -I/home/ben/shedskin/shedskin-0.1/lib /home/ben/shedskin/shedskin-0.1/lib/stat.cpp /home/ben/shedskin/shedskin-0.1/lib/sys.cpp /home/ben/shedskin/shedskin-0.1/lib/builtin.cpp dxfpurgeblocks.cpp /home/ben/shedskin/shedskin-0.1/lib/os/__init__.cpp /home/ben/shedskin/shedskin-0.1/lib/os/path.cpp /home/ben/shedskin/shedskin-0.1/lib/re.cpp -lgc -lpcre -lutil -o dxfpurgeblocks dxfpurgeblocks.cpp: In function ‘void __dxfpurgeblocks__::__init()’: dxfpurgeblocks.cpp:216: error: statement cannot resolve address of overloaded function /home/ben/shedskin/shedskin-0.1/lib/os/__init__.cpp: In function ‘int __os__::tcgetpgrp(int)’: /home/ben/shedskin/shedskin-0.1/lib/os/__init__.cpp:630: warning: converting to non-pointer type ‘int’ from NULL make: *** [dxfpurgeblocks] Error 1 As you can see, shedskin completed without error. The resulting code failed to compile. I fixed this by changing f->close to f->close() in the generated dxfpurgeblocks.cpp. ```
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
Unnamed: 0: 21,679
id: 3,541,727,703
type: IssuesEvent
created_at: 2016-01-19 03:19:48
repo: cakephp/cakephp
repo_url: https://api.github.com/repos/cakephp/cakephp
action: closed
title: Cache limiters normally set by PHP's session.cache_limiter option are never sent
labels: Defect
body:
I've been playing around with a project that I've inherited, and one of the things that aggregated me a lot was pages are always being cached on the browser, requiring me to refresh every time after I change some entry to see the changes. The original project on production (running on the ancient CakePHP version 2.1.1) behaved fine, while my local copy (upgraded to CakePHP version 2.7.3) was not. Looking at the difference in headers, I noticed that the production version was sending headers that were the result of PHP's `nocache` cache limiter, while no such thing was being sent for my local version. Despite trying different values for the option in my php.ini, nothing happened. Filing through CakePHP's source code, I noticed in `CakeSession::_startSession()`, it was setting the cache limiter to `must-revalidate` (an invalid value, mind you, as pointed out by commit 4a6159c), and hence overriding what I have set in php.ini. The reason for overriding the cache limiter was apparently for IE <= 8 to work properly, but I don't use that old of a version of IE (and ideally, neither should anyone else). I think this overriding should be configurable rather than enforced, and if needed, perhaps some smart detection for the IE version and any possible workarounds so other browsers' caching behavior aren't affected. (On the topic of `must-revalidate`, it really isn't a valid value, at least not one that does what people think it's doing. Looking through PHP's source code, there is no reference to it for setting cache limiters, and the value behaves the same as any invalid value (such as `none`), by not sending any cache limiter headers at all.)
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
Unnamed: 0: 145,026
id: 5,557,240,397
type: IssuesEvent
created_at: 2017-03-24 11:26:59
repo: k0shk0sh/FastHub
repo_url: https://api.github.com/repos/k0shk0sh/FastHub
action: closed
title: Commit diffs don't use monospace
labels: Priority: Medium Status: Accepted Type: Bug
body:
In the latest version, the diffs of commits get displayed as a normal font and not in monospace, which I think is a bit confusing. Syntax highlighting would be cool there too, but I think a simple monospace would be appropriate in my opinion. I would have attached a screenshot but that's not possible with FastHub by selecting a file (yet?). Really good work on the app in general though, I love it!
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: non_defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 0
Unnamed: 0: 22,695
id: 3,687,580,616
type: IssuesEvent
created_at: 2016-02-25 09:06:33
repo: jOOQ/jOOL
repo_url: https://api.github.com/repos/jOOQ/jOOL
action: opened
title: Bad generic variance on Seq.unfold()
labels: P: Medium T: Defect T: Incompatible Change
body:
the jOOλ signature is probably incorrect with respect to variance. At least, it should be `Function<? super U, ...>`. Will check about variance within the tuple... See: https://github.com/aol/cyclops-react/issues/102#issuecomment-188677112
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
Unnamed: 0: 40,023
id: 9,798,580,461
type: IssuesEvent
created_at: 2019-06-11 12:43:37
repo: jOOQ/jOOQ
repo_url: https://api.github.com/repos/jOOQ/jOOQ
action: closed
title: No LIMIT ... OFFSET ... WITH TIES support in Redshift
labels: C: DB: Redshift C: Functionality E: Enterprise Edition P: Medium R: Fixed T: Defect
body:
The `@Support` annotation on `SelectWithTiesAfterOffsetStep#withTies()` is wrong as it also includes `REDSHIFT`, which doesn't support this clause.
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
Unnamed: 0: 69,113
id: 22,162,411,404
type: IssuesEvent
created_at: 2022-06-04 17:52:02
repo: primefaces/primefaces
repo_url: https://api.github.com/repos/primefaces/primefaces
action: closed
title: Selenium: TimeoutException in complex mask with many update="@form"
labels: :lady_beetle: defect integration_test
body:
### Describe the bug Let's try this (maybe) difficult one. Running a Selenium test on Firefox featuring a complex mask with many update="@form" fails with TimeoutException. It does not happen always so it might be a timing issue or race condition or concurrency problem. I have seen it with org.primefaces.selenium.component.InputText org.primefaces.selenium.component.Calendar org.primefaces.selenium.component.InputTextarea but interestingly NOT with org.primefaces.selenium.component.InputNumber In case you cannot reproduce it, here is some more information what happens as far as I understand it. The PrimeSelenium Guard class sets this indicator before executing the command: "pfselenium.xhr = 'somethingJustNotNull';" In waitUntilAjaxCompletes it checks among other stuff: pfselenium.xhr == null In the error case it seems that pfselenium.xhr is not reset to null like this debug output shows: Guard#ajax; ajaxDebugInfo **before methode.invoke**: document.readyState=complete, !window.jQuery=false, jQuery.active=0, !window.PrimeFaces=false, PrimeFaces.ajax.Queue.isEmpty()=true, PrimeFaces.animationActive=false, !window.pfselenium=false, **pfselenium.xhr=somethingJustNotNull**, pfselenium.anyXhrStarted=null, pfselenium.navigating=false Guard#ajax; ajaxDebugInfo **in error catch clause**: document.readyState=complete, !window.jQuery=false, jQuery.active=0, !window.PrimeFaces=false, PrimeFaces.ajax.Queue.isEmpty()=true, PrimeFaces.animationActive=false, !window.pfselenium=false, **pfselenium.xhr=somethingJustNotNull**, pfselenium.anyXhrStarted=null, pfselenium.navigating=false Therefore the condition for "Ajax complete" is never fulfilled and a TimeoutException occurs. ### Reproducer Will provide a pull request with an appropriate integration test (hopefully) producing the error. ### Expected behavior Selenium test runs fine without TimeoutException. 
### PrimeFaces edition Community ### PrimeFaces version 11 ### Theme Any ### JSF implementation MyFaces ### JSF version 2.2, 2.3 ### Browser(s) Firefox 100
index: 1.0
text_combine: (title and body joined with " - "; verbatim repeat of the fields above, omitted)
label: defect
text: (lower-cased, punctuation-stripped normalization of title and body, omitted)
binary_label: 1
61,712
17,023,762,179
IssuesEvent
2021-07-03 03:42:43
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
non-ascii UTF-8 symbols in GPX traces names are converted to '_' on upload
Component: website Priority: minor Resolution: invalid Type: defect
**[Submitted to the original trac issue database at 7.00pm, Sunday, 4th December 2011]** I'm uploading a couple of gpx traces with russian names. All those symbols are converted to underscores on upload, and I have to duplicate it in description (where utf-8 chars seem to be ok). Should be no problem support utf-8 in filenames, I guess.
1.0
non-ascii UTF-8 symbols in GPX traces names are converted to '_' on upload - **[Submitted to the original trac issue database at 7.00pm, Sunday, 4th December 2011]** I'm uploading a couple of gpx traces with russian names. All those symbols are converted to underscores on upload, and I have to duplicate it in description (where utf-8 chars seem to be ok). Should be no problem support utf-8 in filenames, I guess.
defect
non ascii utf symbols in gpx traces names are converted to on upload i m uploading a couple of gpx traces with russian names all those symbols are converted to underscores on upload and i have to duplicate it in description where utf chars seem to be ok should be no problem support utf in filenames i guess
1
457,571
13,158,456,176
IssuesEvent
2020-08-10 14:21:39
openshift/origin-web-console
https://api.github.com/repos/openshift/origin-web-console
closed
Increase DeploymentConfig name maximum length to 63 characters in Web Console
kind/bug priority/P3
In Openshift Web Console, the DeploymentConfig name is limited to 24 characters. But in `oc` command line the maximum length is 63 characters. It's needed to increase the Web Console limit to match the `oc` command line. Verified in Openshift version 3.7.
1.0
Increase DeploymentConfig name maximum length to 63 characters in Web Console - In Openshift Web Console, the DeploymentConfig name is limited to 24 characters. But in `oc` command line the maximum length is 63 characters. It's needed to increase the Web Console limit to match the `oc` command line. Verified in Openshift version 3.7.
non_defect
increase deploymentconfig name maximum length to characters in web console in openshift web console the deploymentconfig name is limited to characters but in oc command line the maximum length is characters it s needed to increase the web console limit to match the oc command line verified in openshift version
0
31,316
11,907,444,841
IssuesEvent
2020-03-30 22:19:07
MicrosoftDocs/microsoft-365-docs
https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs
closed
update ToC
security
The Table of Content (ToC) does still state twice 'What is EOP?' while the article is titled as 'What is Exchange Online Protection (EOP)'. The ToC should be updated so it's clear what it is about without opening the article. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 14130f97-cb6e-999d-c2c8-47ef1d7bfff8 * Version Independent ID: 4511a0e0-4616-c882-a420-13f4b59ce88b * Content: [What is EOP - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/what-is-eop?view=o365-worldwide#feedback) * Content Source: [microsoft-365/security/office-365-security/what-is-eop.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/what-is-eop.md) * Service: **o365-seccomp** * GitHub Login: @MSFTTracyP * Microsoft Alias: **tracyp**
True
update ToC - The Table of Content (ToC) does still state twice 'What is EOP?' while the article is titled as 'What is Exchange Online Protection (EOP)'. The ToC should be updated so it's clear what it is about without opening the article. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 14130f97-cb6e-999d-c2c8-47ef1d7bfff8 * Version Independent ID: 4511a0e0-4616-c882-a420-13f4b59ce88b * Content: [What is EOP - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/what-is-eop?view=o365-worldwide#feedback) * Content Source: [microsoft-365/security/office-365-security/what-is-eop.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/what-is-eop.md) * Service: **o365-seccomp** * GitHub Login: @MSFTTracyP * Microsoft Alias: **tracyp**
non_defect
update toc the table of content toc does still state twice what is eop while the article is titled as what is exchange online protection eop the toc should be updated so it s clear what it is about without opening the article document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service seccomp github login msfttracyp microsoft alias tracyp
0
61,713
17,023,762,338
IssuesEvent
2021-07-03 03:42:46
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Printing from browsers no longer prints map
Component: website Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 7.32am, Wednesday, 7th December 2011]** Something has changed recently which has affected printing (I have tested this with latest Firefox and Opera on Windows 7). I used to get the area I wanted to print a map of visible on screen and use the browser option to print to print it (specifying just page 1, as that was always the bit with the map I wanted on). Recently this now only has some text about the licence and the openstreetmap.org url on the first page (and in Firefox a blank page 2). No map. As a workaround I've been exporting the image and printing that, though it takes longer and at least on Windows export only seems to have ever worked in Firefox, but that's an issue I'm not particularly bothered about...
1.0
Printing from browsers no longer prints map - **[Submitted to the original trac issue database at 7.32am, Wednesday, 7th December 2011]** Something has changed recently which has affected printing (I have tested this with latest Firefox and Opera on Windows 7). I used to get the area I wanted to print a map of visible on screen and use the browser option to print to print it (specifying just page 1, as that was always the bit with the map I wanted on). Recently this now only has some text about the licence and the openstreetmap.org url on the first page (and in Firefox a blank page 2). No map. As a workaround I've been exporting the image and printing that, though it takes longer and at least on Windows export only seems to have ever worked in Firefox, but that's an issue I'm not particularly bothered about...
defect
printing from browsers no longer prints map something has changed recently which has affected printing i have tested this with latest firefox and opera on windows i used to get the area i wanted to print a map of visible on screen and use the browser option to print to print it specifying just page as that was always the bit with the map i wanted on recently this now only has some text about the licence and the openstreetmap org url on the first page and in firefox a blank page no map as a workaround i ve been exporting the image and printing that though it takes longer and at least on windows export only seems to have ever worked in firefox but that s an issue i m not particularly bothered about
1
138,234
12,810,475,723
IssuesEvent
2020-07-03 18:47:44
ijelliti/Deeplearning.ai-Natural-Language-Processing-Specialization
https://api.github.com/repos/ijelliti/Deeplearning.ai-Natural-Language-Processing-Specialization
closed
add course summary and notes of week 4 from Course 1
documentation good first issue help wanted
We would like to share the summary and notes on week 4 of Course 1 : Word Translation
1.0
add course summary and notes of week 4 from Course 1 - We would like to share the summary and notes on week 4 of Course 1 : Word Translation
non_defect
add course summary and notes of week from course we would like to share the summary and notes on week of course word translation
0
36,238
7,869,080,587
IssuesEvent
2018-06-24 09:11:57
StrikeNP/trac_test
https://api.github.com/repos/StrikeNP/trac_test
closed
CLUBB standalone deallocation error on empty statistics lists (Trac #765)
Migrated from Trac betlej@uwm.edu clubb_src defect
**Introduction** If I put no variables in a given statistics list, CLUBB runs to the end of the simulation but then crashes when attempting to clean up (deallocate) the statistics variables. **Steps to reproduce** The error was produced in version r7525 of CLUBB. 1. Compile CLUBB 2. Modify `standard_stats.in`. Change the declaration of, e.g., `vars_zm` to: ```text vars_zm = '' ``` 3. Run a case, e.g., `arm`. **Expected result** CLUBB runs to completion and exits successfully. No `<case>_zm.ctl/dat` file is produced. **Actual result** CLUBB runs to the end of the simulation and crashes with an error related to deallocating an unallocated variable. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/765 
```json { "status": "closed", "changetime": "2015-05-06T20:11:57", "description": "'''Introduction'''\n\nIf I put no variables in a given statistics list, CLUBB runs to the end of the simulation but then crashes when attempting to clean up (deallocate) the statistics variables.\n\n'''Steps to reproduce'''\n\nThe error was produced in version r7525 of CLUBB.\n\n1. Compile CLUBB\n2. Modify `standard_stats.in`. Change the declaration of, e.g., `vars_zm` to:\n{{{\nvars_zm = ''\n}}}\n3. Run a case, e.g., `arm`.\n\n'''Expected result'''\n\nCLUBB runs to completion and exits successfully. No `<case>_zm.ctl/dat` file is produced.\n\n'''Actual result'''\n\nCLUBB runs to the end of the simulation and crashes with an error related to deallocating an unallocated variable.", "reporter": "raut@uwm.edu", "cc": "vlarson@uwm.edu", "resolution": "Verified by V. Larson", "_ts": "1430943117612893", "component": "clubb_src", "summary": "CLUBB standalone deallocation error on empty statistics lists", "priority": "minor", "keywords": "", "time": "2015-03-06T22:04:26", "milestone": "4. Fix bugs", "owner": "betlej@uwm.edu", "type": "defect" } ```
1.0
CLUBB standalone deallocation error on empty statistics lists (Trac #765) - **Introduction** If I put no variables in a given statistics list, CLUBB runs to the end of the simulation but then crashes when attempting to clean up (deallocate) the statistics variables. **Steps to reproduce** The error was produced in version r7525 of CLUBB. 1. Compile CLUBB 2. Modify `standard_stats.in`. Change the declaration of, e.g., `vars_zm` to: ```text vars_zm = '' ``` 3. Run a case, e.g., `arm`. **Expected result** CLUBB runs to completion and exits successfully. No `<case>_zm.ctl/dat` file is produced. **Actual result** CLUBB runs to the end of the simulation and crashes with an error related to deallocating an unallocated variable. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) 
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/765 ```json { "status": "closed", "changetime": "2015-05-06T20:11:57", "description": "'''Introduction'''\n\nIf I put no variables in a given statistics list, CLUBB runs to the end of the simulation but then crashes when attempting to clean up (deallocate) the statistics variables.\n\n'''Steps to reproduce'''\n\nThe error was produced in version r7525 of CLUBB.\n\n1. Compile CLUBB\n2. Modify `standard_stats.in`. Change the declaration of, e.g., `vars_zm` to:\n{{{\nvars_zm = ''\n}}}\n3. Run a case, e.g., `arm`.\n\n'''Expected result'''\n\nCLUBB runs to completion and exits successfully. No `<case>_zm.ctl/dat` file is produced.\n\n'''Actual result'''\n\nCLUBB runs to the end of the simulation and crashes with an error related to deallocating an unallocated variable.", "reporter": "raut@uwm.edu", "cc": "vlarson@uwm.edu", "resolution": "Verified by V. Larson", "_ts": "1430943117612893", "component": "clubb_src", "summary": "CLUBB standalone deallocation error on empty statistics lists", "priority": "minor", "keywords": "", "time": "2015-03-06T22:04:26", "milestone": "4. Fix bugs", "owner": "betlej@uwm.edu", "type": "defect" } ```
defect
clubb standalone deallocation error on empty statistics lists trac introduction if i put no variables in a given statistics list clubb runs to the end of the simulation but then crashes when attempting to clean up deallocate the statistics variables steps to reproduce the error was produced in version of clubb compile clubb modify standard stats in change the declaration of e g vars zm to text vars zm run a case e g arm expected result clubb runs to completion and exits successfully no zm ctl dat file is produced actual result clubb runs to the end of the simulation and crashes with an error related to deallocating an unallocated variable attachments migrated from json status closed changetime description introduction n nif i put no variables in a given statistics list clubb runs to the end of the simulation but then crashes when attempting to clean up deallocate the statistics variables n n steps to reproduce n nthe error was produced in version of clubb n compile clubb modify standard stats in change the declaration of e g vars zm to n nvars zm n run a case e g arm n n expected result n nclubb runs to completion and exits successfully no zm ctl dat file is produced n n actual result n nclubb runs to the end of the simulation and crashes with an error related to deallocating an unallocated variable reporter raut uwm edu cc vlarson uwm edu resolution verified by v larson ts component clubb src summary clubb standalone deallocation error on empty statistics lists priority minor keywords time milestone fix bugs owner betlej uwm edu type defect
1
40,132
6,800,743,355
IssuesEvent
2017-11-02 14:52:16
usdot-jpo-ode/jpo-ode
https://api.github.com/repos/usdot-jpo-ode/jpo-ode
closed
JSON on Metadata page does not parse.
Documentation Problem
Attempting to use the JSON examples for TIM and BSM and it looks like there is a bracket mismatch in both of them.
1.0
JSON on Metadata page does not parse. - Attempting to use the JSON examples for TIM and BSM and it looks like there is a bracket mismatch in both of them.
non_defect
json on metadata page does not parse attempting to use the json examples for tim and bsm and it looks like there is a bracket mismatch in both of them
0
116,505
11,914,893,954
IssuesEvent
2020-03-31 14:14:32
ISPP-LinkedPet/backend_isppet
https://api.github.com/repos/ISPP-LinkedPet/backend_isppet
closed
Anuncio LinkedPet
documentation
- Realizacion del anuncio de la aplicaicion con los videos dados en google drive - Duracion entre 20 y 30 segundos como MAXIMO
1.0
Anuncio LinkedPet - - Realizacion del anuncio de la aplicaicion con los videos dados en google drive - Duracion entre 20 y 30 segundos como MAXIMO
non_defect
anuncio linkedpet realizacion del anuncio de la aplicaicion con los videos dados en google drive duracion entre y segundos como maximo
0
53,907
13,262,485,629
IssuesEvent
2020-08-20 21:53:52
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
[icetray] Weird behavior from I3Frame.merge (Trac #2294)
Migrated from Trac combo core defect
Using frame.merge doesn't appear to behave correctly in python. It seems to temporarily add the objects from the second frame, but they are not propagated down to subsequent modules in the processing chain. I have provided a trivial example: ```text from icecube import icetray,dataclasses from I3Tray import I3Tray def test_module(frame): frame["particle"]=dataclasses.I3Particle() frame2=icetray.I3Frame() frame2["ASDF"]=dataclasses.I3String("ASDF") frame.merge(frame2) print("\n\nThe Merged Frame looks like this:") print(frame) def print_frame(frame): print("But What gets saved looks like this:") print(frame) tray = I3Tray() tray.AddModule('BottomlessSource') tray.AddModule(test_module) tray.AddModule(print_frame) tray.Execute(2) ``` The output is ```text The Merged Frame looks like this: [ I3Frame (Physics): 'ASDF' [None] ==> I3PODHolder<string > (unk) 'particle' [Physics] ==> I3Particle (unk) ] But What gets saved looks like this: [ I3Frame (Physics): 'particle' [Physics] ==> I3Particle (unk) ] ``` both `ASDF` and `particle` appear in the frame when accessed in the same function. But `ASDF` isn't propagated to subsequent modules. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2294">https://code.icecube.wisc.edu/projects/icecube/ticket/2294</a>, reported by kjmeagher</summary> <p> ```json { "status": "closed", "changetime": "2019-05-23T17:15:54", "_ts": "1558631754875482", "description": "Using frame.merge doesn't appear to behave correctly in python. It seems to temporarily add the objects from the second frame, but they are not propagated down to subsequent modules in the processing chain. 
I have provided a trivial example:\n\n\n{{{\nfrom icecube import icetray,dataclasses\nfrom I3Tray import I3Tray\n\ndef test_module(frame): \n frame[\"particle\"]=dataclasses.I3Particle() \n frame2=icetray.I3Frame()\n frame2[\"ASDF\"]=dataclasses.I3String(\"ASDF\")\n frame.merge(frame2)\n \n print(\"\\n\\nThe Merged Frame looks like this:\")\n print(frame)\n\n\ndef print_frame(frame):\n print(\"But What gets saved looks like this:\")\n print(frame)\n\ntray = I3Tray()\ntray.AddModule('BottomlessSource')\ntray.AddModule(test_module)\ntray.AddModule(print_frame)\ntray.Execute(2)\n\n}}}\n \nThe output is \n\n\n{{{\nThe Merged Frame looks like this:\n[ I3Frame (Physics):\n 'ASDF' [None] ==> I3PODHolder<string > (unk)\n 'particle' [Physics] ==> I3Particle (unk)\n]\n\nBut What gets saved looks like this:\n[ I3Frame (Physics):\n 'particle' [Physics] ==> I3Particle (unk)\n]\n}}}\n\nboth `ASDF` and `particle` appear in the frame when accessed in the same function. But `ASDF` isn't propagated to subsequent modules.", "reporter": "kjmeagher", "cc": "", "resolution": "invalid", "time": "2019-05-23T16:45:09", "component": "combo core", "summary": "[icetray] Weird behavior from I3Frame.merge", "priority": "normal", "keywords": "", "milestone": "Summer Solstice 2019", "owner": "", "type": "defect" } ``` </p> </details>
1.0
[icetray] Weird behavior from I3Frame.merge (Trac #2294) - Using frame.merge doesn't appear to behave correctly in python. It seems to temporarily add the objects from the second frame, but they are not propagated down to subsequent modules in the processing chain. I have provided a trivial example: ```text from icecube import icetray,dataclasses from I3Tray import I3Tray def test_module(frame): frame["particle"]=dataclasses.I3Particle() frame2=icetray.I3Frame() frame2["ASDF"]=dataclasses.I3String("ASDF") frame.merge(frame2) print("\n\nThe Merged Frame looks like this:") print(frame) def print_frame(frame): print("But What gets saved looks like this:") print(frame) tray = I3Tray() tray.AddModule('BottomlessSource') tray.AddModule(test_module) tray.AddModule(print_frame) tray.Execute(2) ``` The output is ```text The Merged Frame looks like this: [ I3Frame (Physics): 'ASDF' [None] ==> I3PODHolder<string > (unk) 'particle' [Physics] ==> I3Particle (unk) ] But What gets saved looks like this: [ I3Frame (Physics): 'particle' [Physics] ==> I3Particle (unk) ] ``` both `ASDF` and `particle` appear in the frame when accessed in the same function. But `ASDF` isn't propagated to subsequent modules. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2294">https://code.icecube.wisc.edu/projects/icecube/ticket/2294</a>, reported by kjmeagher</summary> <p> ```json { "status": "closed", "changetime": "2019-05-23T17:15:54", "_ts": "1558631754875482", "description": "Using frame.merge doesn't appear to behave correctly in python. It seems to temporarily add the objects from the second frame, but they are not propagated down to subsequent modules in the processing chain. 
I have provided a trivial example:\n\n\n{{{\nfrom icecube import icetray,dataclasses\nfrom I3Tray import I3Tray\n\ndef test_module(frame): \n frame[\"particle\"]=dataclasses.I3Particle() \n frame2=icetray.I3Frame()\n frame2[\"ASDF\"]=dataclasses.I3String(\"ASDF\")\n frame.merge(frame2)\n \n print(\"\\n\\nThe Merged Frame looks like this:\")\n print(frame)\n\n\ndef print_frame(frame):\n print(\"But What gets saved looks like this:\")\n print(frame)\n\ntray = I3Tray()\ntray.AddModule('BottomlessSource')\ntray.AddModule(test_module)\ntray.AddModule(print_frame)\ntray.Execute(2)\n\n}}}\n \nThe output is \n\n\n{{{\nThe Merged Frame looks like this:\n[ I3Frame (Physics):\n 'ASDF' [None] ==> I3PODHolder<string > (unk)\n 'particle' [Physics] ==> I3Particle (unk)\n]\n\nBut What gets saved looks like this:\n[ I3Frame (Physics):\n 'particle' [Physics] ==> I3Particle (unk)\n]\n}}}\n\nboth `ASDF` and `particle` appear in the frame when accessed in the same function. But `ASDF` isn't propagated to subsequent modules.", "reporter": "kjmeagher", "cc": "", "resolution": "invalid", "time": "2019-05-23T16:45:09", "component": "combo core", "summary": "[icetray] Weird behavior from I3Frame.merge", "priority": "normal", "keywords": "", "milestone": "Summer Solstice 2019", "owner": "", "type": "defect" } ``` </p> </details>
defect
weird behavior from merge trac using frame merge doesn t appear to behave correctly in python it seems to temporarily add the objects from the second frame but they are not propagated down to subsequent modules in the processing chain i have provided a trivial example text from icecube import icetray dataclasses from import def test module frame frame dataclasses icetray dataclasses asdf frame merge print n nthe merged frame looks like this print frame def print frame frame print but what gets saved looks like this print frame tray tray addmodule bottomlesssource tray addmodule test module tray addmodule print frame tray execute the output is text the merged frame looks like this physics asdf unk particle unk but what gets saved looks like this physics particle unk both asdf and particle appear in the frame when accessed in the same function but asdf isn t propagated to subsequent modules migrated from json status closed changetime ts description using frame merge doesn t appear to behave correctly in python it seems to temporarily add the objects from the second frame but they are not propagated down to subsequent modules in the processing chain i have provided a trivial example n n n nfrom icecube import icetray dataclasses nfrom import n ndef test module frame n frame dataclasses n icetray n dataclasses asdf n frame merge n n print n nthe merged frame looks like this n print frame n n ndef print frame frame n print but what gets saved looks like this n print frame n ntray ntray addmodule bottomlesssource ntray addmodule test module ntray addmodule print frame ntray execute n n n nthe output is n n n nthe merged frame looks like this n unk n particle unk n n nbut what gets saved looks like this n unk n n n nboth asdf and particle appear in the frame when accessed in the same function but asdf isn t propagated to subsequent modules reporter kjmeagher cc resolution invalid time component combo core summary weird behavior from merge priority normal keywords 
milestone summer solstice owner type defect
1
6,631
2,610,258,118
IssuesEvent
2015-02-26 19:22:20
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳激光怎么样治痤疮
auto-migrated Priority-Medium Type-Defect
``` 深圳激光怎么样治痤疮【深圳韩方科颜全国热线400-869-1818,24 小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:47
1.0
深圳激光怎么样治痤疮 - ``` 深圳激光怎么样治痤疮【深圳韩方科颜全国热线400-869-1818,24 小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:47
defect
深圳激光怎么样治痤疮 深圳激光怎么样治痤疮【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at
1
230,872
25,482,788,676
IssuesEvent
2022-11-26 01:31:55
Satheesh575555/linux-4.1.15
https://api.github.com/repos/Satheesh575555/linux-4.1.15
reopened
CVE-2016-5829 (High) detected in linuxlinux-4.6
security vulnerability
## CVE-2016-5829 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-4.1.15/commit/951a6fe29b85bb7a6493c21ded9c3151b6a6c8f1">951a6fe29b85bb7a6493c21ded9c3151b6a6c8f1</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Multiple heap-based buffer overflows in the hiddev_ioctl_usage function in drivers/hid/usbhid/hiddev.c in the Linux kernel through 4.6.3 allow local users to cause a denial of service or possibly have unspecified other impact via a crafted (1) HIDIOCGUSAGES or (2) HIDIOCSUSAGES ioctl call. 
<p>Publish Date: 2016-06-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-5829>CVE-2016-5829</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2016-5829">https://www.linuxkernelcves.com/cves/CVE-2016-5829</a></p> <p>Release Date: 2016-06-27</p> <p>Fix Resolution: v4.7-rc5,v3.12.62,v3.14.74,v3.16.37,v3.18.37,v3.2.82,v4.1.28,v4.4.16,v4.6.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-5829 (High) detected in linuxlinux-4.6 - ## CVE-2016-5829 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-4.1.15/commit/951a6fe29b85bb7a6493c21ded9c3151b6a6c8f1">951a6fe29b85bb7a6493c21ded9c3151b6a6c8f1</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/hid/usbhid/hiddev.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Multiple heap-based buffer overflows in the hiddev_ioctl_usage function in drivers/hid/usbhid/hiddev.c in the Linux kernel through 4.6.3 allow local users to cause a denial of service or possibly have unspecified other impact via a crafted (1) HIDIOCGUSAGES or (2) HIDIOCSUSAGES ioctl call. 
<p>Publish Date: 2016-06-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-5829>CVE-2016-5829</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2016-5829">https://www.linuxkernelcves.com/cves/CVE-2016-5829</a></p> <p>Release Date: 2016-06-27</p> <p>Fix Resolution: v4.7-rc5,v3.12.62,v3.14.74,v3.16.37,v3.18.37,v3.2.82,v4.1.28,v4.4.16,v4.6.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers hid usbhid hiddev c drivers hid usbhid hiddev c vulnerability details multiple heap based buffer overflows in the hiddev ioctl usage function in drivers hid usbhid hiddev c in the linux kernel through allow local users to cause a denial of service or possibly have unspecified other impact via a crafted hidiocgusages or hidiocsusages ioctl call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
16,114
2,872,590,987
IssuesEvent
2015-06-08 12:56:27
netty/netty
https://api.github.com/repos/netty/netty
closed
Fix build stability issues with the epoll transport
defect
- http://clinker.netty.io/jenkins/job/netty-oraclejdk7/branch/4.1/412/ - http://clinker.netty.io/jenkins/job/netty-oraclejdk8/branch/4.1/412/ We might have more. Please go through the failures and fix the epoll-related ones.
1.0
Fix build stability issues with the epoll transport - - http://clinker.netty.io/jenkins/job/netty-oraclejdk7/branch/4.1/412/ - http://clinker.netty.io/jenkins/job/netty-oraclejdk8/branch/4.1/412/ We might have more. Please go through the failures and fix the epoll-related ones.
defect
fix build stability issues with the epoll transport we might have more please go through the failures and fix the epoll related ones
1
273,553
20,797,236,661
IssuesEvent
2022-03-17 10:28:41
rcgsheffield/sheffield_hpc
https://api.github.com/repos/rcgsheffield/sheffield_hpc
closed
it-servicedesk@sheffield.ac.uk email address is going away...
Documentation Error
From 16 February, service users can use the following support channels: * IT Self-service portal * Telephone * Online chat * Face-to-face - TechBar
1.0
it-servicedesk@sheffield.ac.uk email address is going away... - From 16 February, service users can use the following support channels: * IT Self-service portal * Telephone * Online chat * Face-to-face - TechBar
non_defect
it servicedesk sheffield ac uk email address is going away from february service users can use the following support channels it self service portal telephone online chat face to face techbar
0
4,066
2,610,086,835
IssuesEvent
2015-02-26 18:26:23
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳除去痘痘
auto-migrated Priority-Medium Type-Defect
``` 深圳除去痘痘【深圳韩方科颜全国热线400-869-1818,24小时QQ4008 691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方—�� �韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜� ��业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛 痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗�� �刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:15
1.0
深圳除去痘痘 - ``` 深圳除去痘痘【深圳韩方科颜全国热线400-869-1818,24小时QQ4008 691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方—�� �韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜� ��业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛 痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗�� �刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:15
defect
深圳除去痘痘 深圳除去痘痘【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方—�� �韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜� ��业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛 痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗�� �刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 original issue reported on code google com by szft com on may at
1
375,158
26,149,764,932
IssuesEvent
2022-12-30 11:43:32
wazuh/wazuh-documentation
https://api.github.com/repos/wazuh/wazuh-documentation
closed
Filebeat configuration should be corrected in Deployment with Ansible
Deployment guide Documentation improvements
While executing tests for [ Release 4.4.0 - Alpha 2 - E2E UX Tests - Deployment with Ansible](https://github.com/wazuh/wazuh/issues/15768), an issue was discovered relating filebeat port configuration. [Symptom 1 - Alerts index error](https://github.com/wazuh/wazuh/issues/15768#issuecomment-1366760607) [Symptom 2 - No events are visible](https://github.com/wazuh/wazuh/issues/15768#issuecomment-1366766958) [Resolution](https://github.com/wazuh/wazuh/issues/15768#issuecomment-1367413826) The documentation shows the following configuration for `wazuh-manage r-oss.yml`: ```console --- - hosts: managers roles: - role: ../roles/wazuh/ansible-wazuh-manager - role: ../roles/wazuh/ansible-filebeat-oss filebeat_output_indexer_hosts: - "<indexer-node-1>:9200" - "<indexer-node-2>:9200" - "<indexer-node-2>:9200" ``` The port should not be specified because it is added in the `filebeat.yml.j2` file: ```console # Send events directly to Wazuh indexer output.elasticsearch: hosts: {% for item in filebeat_output_indexer_hosts %} - {{ item }}:9200 {% endfor %} ``` So, the documentation should propose the following content for `wazuh-manager-oss.yml`: ```console --- - hosts: managers roles: - role: ../roles/wazuh/ansible-wazuh-manager - role: ../roles/wazuh/ansible-filebeat-oss filebeat_output_indexer_hosts: - "<indexer-node-1>" - "<indexer-node-2>" - "<indexer-node-2>" ```
1.0
Filebeat configuration should be corrected in Deployment with Ansible - While executing tests for [ Release 4.4.0 - Alpha 2 - E2E UX Tests - Deployment with Ansible](https://github.com/wazuh/wazuh/issues/15768), an issue was discovered relating filebeat port configuration. [Symptom 1 - Alerts index error](https://github.com/wazuh/wazuh/issues/15768#issuecomment-1366760607) [Symptom 2 - No events are visible](https://github.com/wazuh/wazuh/issues/15768#issuecomment-1366766958) [Resolution](https://github.com/wazuh/wazuh/issues/15768#issuecomment-1367413826) The documentation shows the following configuration for `wazuh-manage r-oss.yml`: ```console --- - hosts: managers roles: - role: ../roles/wazuh/ansible-wazuh-manager - role: ../roles/wazuh/ansible-filebeat-oss filebeat_output_indexer_hosts: - "<indexer-node-1>:9200" - "<indexer-node-2>:9200" - "<indexer-node-2>:9200" ``` The port should not be specified because it is added in the `filebeat.yml.j2` file: ```console # Send events directly to Wazuh indexer output.elasticsearch: hosts: {% for item in filebeat_output_indexer_hosts %} - {{ item }}:9200 {% endfor %} ``` So, the documentation should propose the following content for `wazuh-manager-oss.yml`: ```console --- - hosts: managers roles: - role: ../roles/wazuh/ansible-wazuh-manager - role: ../roles/wazuh/ansible-filebeat-oss filebeat_output_indexer_hosts: - "<indexer-node-1>" - "<indexer-node-2>" - "<indexer-node-2>" ```
non_defect
filebeat configuration should be corrected in deployment with ansible while executing tests for release alpha ux tests deployment with ansible an issue was discovered relating filebeat port configuration the documentation shows the following configuration for wazuh manage r oss yml console hosts managers roles role roles wazuh ansible wazuh manager role roles wazuh ansible filebeat oss filebeat output indexer hosts the port should not be specified because it is added in the filebeat yml file console send events directly to wazuh indexer output elasticsearch hosts for item in filebeat output indexer hosts item endfor so the documentation should propose the following content for wazuh manager oss yml console hosts managers roles role roles wazuh ansible wazuh manager role roles wazuh ansible filebeat oss filebeat output indexer hosts
0
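The Wazuh record above describes a doubled-port misconfiguration: the Ansible variable already carries `:9200`, and the `filebeat.yml.j2` template appends `:9200` again during rendering. A minimal Python sketch (hypothetical helper name; this is not Wazuh's or Ansible's actual code, just a simulation of the template's `- {{ item }}:9200` line) shows why the bare-hostname form proposed in the record is the correct one:

```python
# Simulate the '- {{ item }}:9200' line of the filebeat.yml.j2 template:
# the template appends ":9200" to every host, so a host that already
# contains ":9200" in the Ansible variable renders with a doubled port.

def render_filebeat_hosts(indexer_hosts):
    """Mimic how the Jinja2 loop renders each indexer host entry."""
    return ["- {}:9200".format(host) for host in indexer_hosts]

# Misconfigured: ports already included in the variable.
bad = render_filebeat_hosts(["indexer-node-1:9200", "indexer-node-2:9200"])

# Corrected: bare hostnames, as the documentation fix proposes.
good = render_filebeat_hosts(["indexer-node-1", "indexer-node-2"])

print(bad)   # ['- indexer-node-1:9200:9200', '- indexer-node-2:9200:9200']
print(good)  # ['- indexer-node-1:9200', '- indexer-node-2:9200']
```

The misconfigured list yields `host:9200:9200` entries, which is not a valid `host:port` pair for Filebeat's Elasticsearch output; the bare-hostname list renders as intended.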
48,621
20,198,426,895
IssuesEvent
2022-02-11 12:56:52
Azure/azure-cli
https://api.github.com/repos/Azure/azure-cli
closed
Getting list of Service Fabric applications
Service Attention Service Fabric customer-reported needs-author-feedback
### **This is autogenerated. Please review and update as needed.** ## Describe the bug Trying to fetch list of deployed applications in SFC always returns empty result even though there are applications deployed to a cluster **Command Name** `az sf application list --resource-group MyTestRG01 --cluster-name testsfcclust01` **Errors:** N/A ## To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. - Create new SFC (from portal or using CLI) - Deploy sample voting app (https://github.com/Azure-Samples/service-fabric-dotnet-quickstart) - `az sf application list --resource-group {} --cluster-name {}` ## Expected Behavior - Return the list of applications deployed on SF cluster ## Environment Summary ``` Windows-10-10.0.17763-SP0 Python 3.6.6 Installer: MSI azure-cli 2.3.1 ``` ## Additional Context <!--Please don't remove this:--> <!--auto-generated-->
2.0
Getting list of Service Fabric applications - ### **This is autogenerated. Please review and update as needed.** ## Describe the bug Trying to fetch list of deployed applications in SFC always returns empty result even though there are applications deployed to a cluster **Command Name** `az sf application list --resource-group MyTestRG01 --cluster-name testsfcclust01` **Errors:** N/A ## To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. - Create new SFC (from portal or using CLI) - Deploy sample voting app (https://github.com/Azure-Samples/service-fabric-dotnet-quickstart) - `az sf application list --resource-group {} --cluster-name {}` ## Expected Behavior - Return the list of applications deployed on SF cluster ## Environment Summary ``` Windows-10-10.0.17763-SP0 Python 3.6.6 Installer: MSI azure-cli 2.3.1 ``` ## Additional Context <!--Please don't remove this:--> <!--auto-generated-->
non_defect
getting list of service fabric applications this is autogenerated please review and update as needed describe the bug trying to fetch list of deployed applications in sfc always returns empty result even though there are applications deployed to a cluster command name az sf application list resource group cluster name errors n a to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information create new sfc from portal or using cli deploy sample voting app az sf application list resource group cluster name expected behavior return the list of applications deployed on sf cluster environment summary windows python installer msi azure cli additional context
0
35,997
12,395,885,266
IssuesEvent
2020-05-20 19:28:56
rammatzkvosky/654321
https://api.github.com/repos/rammatzkvosky/654321
closed
CVE-2019-16943 (High) detected in jackson-databind-2.8.8.jar
security vulnerability
## CVE-2019-16943 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/654321/pom.xml</p> <p>Path to vulnerable library: epository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/654321/commit/704b452b67a3a42aa1a78fdfa86a05a9dba7e1c6">704b452b67a3a42aa1a78fdfa86a05a9dba7e1c6</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the p6spy (3.8.6) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of com.p6spy.engine.spy.P6DataSource mishandling. 
<p>Publish Date: 2019-10-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16943>CVE-2019-16943</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2478">https://github.com/FasterXML/jackson-databind/issues/2478</a></p> <p>Release Date: 2019-10-01</p> <p>Fix Resolution: 2.9.10.1</p> </p> </details> <p></p>
True
CVE-2019-16943 (High) detected in jackson-databind-2.8.8.jar - ## CVE-2019-16943 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.8.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/654321/pom.xml</p> <p>Path to vulnerable library: epository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/654321/commit/704b452b67a3a42aa1a78fdfa86a05a9dba7e1c6">704b452b67a3a42aa1a78fdfa86a05a9dba7e1c6</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the p6spy (3.8.6) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of com.p6spy.engine.spy.P6DataSource mishandling. 
<p>Publish Date: 2019-10-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16943>CVE-2019-16943</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2478">https://github.com/FasterXML/jackson-databind/issues/2478</a></p> <p>Release Date: 2019-10-01</p> <p>Fix Resolution: 2.9.10.1</p> </p> </details> <p></p>
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm pom xml path to vulnerable library epository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of com engine spy mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
179,078
6,621,392,832
IssuesEvent
2017-09-21 18:59:47
Lunik/tcloud
https://api.github.com/repos/Lunik/tcloud
closed
Rename file / folder
backend feature frontend priority/p3
User should be able to rename his files and folders. - Missing REST API - Frontend support
1.0
Rename file / folder - User should be able to rename his files and folders. - Missing REST API - Frontend support
non_defect
rename file folder user should be able to rename his files and folders missing rest api frontend support
0
158,629
12,421,053,172
IssuesEvent
2020-05-23 15:03:14
mAAdhaTTah/brookjs
https://api.github.com/repos/mAAdhaTTah/brookjs
opened
Add mock emitting component
enhancement scope:testing
As you compose components up, the lower-level "apps" you're building can be quite complex. It might not be worth interacting with them directly but instead replace them with mock components that simulate the events they emit into the Central Observable. `brookjs-desalinate` should provide a mock component to use in this way.
1.0
Add mock emitting component - As you compose components up, the lower-level "apps" you're building can be quite complex. It might not be worth interacting with them directly but instead replace them with mock components that simulate the events they emit into the Central Observable. `brookjs-desalinate` should provide a mock component to use in this way.
non_defect
add mock emitting component as you compose components up the lower level apps you re building can be quite complex it might not be worth interacting with them directly but instead replace them with mock components that simulate the events they emit into the central observable brookjs desalinate should provide a mock component to use in this way
0
75,770
26,039,916,730
IssuesEvent
2022-12-22 09:30:46
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Taskbar icons don't show up any more
T-Defect
### Steps to reproduce 1. Updated to 1.11.17 2. Started Element Desktop ### Outcome #### What did you expect? Icon in taskbar #### What happened instead? No icon in taskbar ### Operating system Ubuntu 22.04.1 LTS ### Application version Element version: 1.11.17 Olm version: 3.2.12 ### How did you install the app? official deb packages ### Homeserver _No response_ ### Will you send logs? Yes
1.0
Taskbar icons don't show up any more - ### Steps to reproduce 1. Updated to 1.11.17 2. Started Element Desktop ### Outcome #### What did you expect? Icon in taskbar #### What happened instead? No icon in taskbar ### Operating system Ubuntu 22.04.1 LTS ### Application version Element version: 1.11.17 Olm version: 3.2.12 ### How did you install the app? official deb packages ### Homeserver _No response_ ### Will you send logs? Yes
defect
taskbar icons don t show up any more steps to reproduce updated to started element desktop outcome what did you expect icon in taskbar what happened instead no icon in taskbar operating system ubuntu lts application version element version olm version how did you install the app official deb packages homeserver no response will you send logs yes
1
73,878
8,951,527,313
IssuesEvent
2019-01-25 14:13:05
double-double/web-app
https://api.github.com/repos/double-double/web-app
closed
Wireframe 2
design
Sketching low fidelity wireframes. - [x] Sketch some ideas - [x] Sketch Task Flow 1 - [x] Sketch Task Flow 2 - [x] Sketch Task Flow 3
1.0
Wireframe 2 - Sketching low fidelity wireframes. - [x] Sketch some ideas - [x] Sketch Task Flow 1 - [x] Sketch Task Flow 2 - [x] Sketch Task Flow 3
non_defect
wireframe sketching low fidelity wireframes sketch some ideas sketch task flow sketch task flow sketch task flow
0
45,708
13,041,795,152
IssuesEvent
2020-07-28 21:04:27
jccastillo0007/eFacturaT
https://api.github.com/repos/jccastillo0007/eFacturaT
opened
FI - Eliminar el campo de captura de RFC emisor y Fecha
defect
Right now they are not used, and they only cause confusion. It is very likely that we will be asked for the DT, customized, that is, only for them.
1.0
FI - Eliminar el campo de captura de RFC emisor y Fecha - Right now they are not used, and they only cause confusion. It is very likely that we will be asked for the DT, customized, that is, only for them.
defect
fi eliminar el campo de captura de rfc emisor y fecha ahora mismo no se ocupan y solo confunden es muy probable que nos pidan el dt customizado es decir solo para ellos
1
23,906
3,869,679,505
IssuesEvent
2016-04-10 19:00:18
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
opened
DateTime.MinValue cannot be converted to UTC in DST time zones
defect
Tested with Eastern Standard TimeZone. ### Expected 480 ### Actual 0 ### Steps To Reproduce [Live](http://live.bridge.net/#6cad8e2f400c1889be8c9f79bc69f67a) ```csharp public class App { [Ready] public static void Main() { var start = DateTime.MinValue; var end = start.AddHours(8); var difference = (end - start).TotalMinutes; Console.WriteLine(difference); } } ```
1.0
DateTime.MinValue cannot be converted to UTC in DST time zones - Tested with Eastern Standard TimeZone. ### Expected 480 ### Actual 0 ### Steps To Reproduce [Live](http://live.bridge.net/#6cad8e2f400c1889be8c9f79bc69f67a) ```csharp public class App { [Ready] public static void Main() { var start = DateTime.MinValue; var end = start.AddHours(8); var difference = (end - start).TotalMinutes; Console.WriteLine(difference); } } ```
defect
datetime minvalue cannot be converted to utc in dst time zones tested with eastern standard timezone expected actual steps to reproduce csharp public class app public static void main var start datetime minvalue var end start addhours var difference end start totalminutes console writeline difference
1
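The Bridge record above expects `(start.AddHours(8) - start).TotalMinutes` to equal 480 even when `start` is `DateTime.MinValue` in a DST-observing zone. As a hedged cross-check (written in Python rather than the C#/Bridge.NET of the report, so the names are only analogues of the originals), plain datetime arithmetic on the minimum representable timestamp gives the expected value:

```python
# Analogue of the C# snippet from the bug report, using Python's datetime.
# datetime.min stands in for DateTime.MinValue; subtracting the two values
# should give 480 minutes regardless of any local-time/UTC concerns.
from datetime import datetime, timedelta

start = datetime.min               # 0001-01-01 00:00:00, like DateTime.MinValue
end = start + timedelta(hours=8)   # analogue of start.AddHours(8)

difference = (end - start).total_seconds() / 60  # analogue of TotalMinutes
print(difference)  # 480.0
```

Bridge returning 0 here matches the record's diagnosis: the failure is in converting `DateTime.MinValue` to UTC in DST zones, not in the interval arithmetic itself.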
211,908
16,375,844,189
IssuesEvent
2021-05-16 04:03:51
DeFiCh/jellyfish
https://api.github.com/repos/DeFiCh/jellyfish
closed
`@defichain/testing` `sendtokenstoaddress` utility method
area/testing kind/feature triage/accepted
<!-- Please only use this template for submitting enhancement/feature requests --> #### What would you like to be added: Currently in `@defichain/testing` only `accountToAccount` is supported. That function requires manual selection of tokens to send for testing. We need `sendtokenstoaddress` for testing for a better/easier setup process.
1.0
`@defichain/testing` `sendtokenstoaddress` utility method - <!-- Please only use this template for submitting enhancement/feature requests --> #### What would you like to be added: Currently in `@defichain/testing` only `accountToAccount` is supported. That function requires manual selection of tokens to send for testing. We need `sendtokenstoaddress` for testing for a better/easier setup process.
non_defect
defichain testing sendtokenstoaddress utility method what would you like to be added currently in defichain testing only accounttoaccount is supported that function requires manual selection of tokens to send for testing we need sendtokenstoaddress for testing for a better easier setup process
0
350,779
10,508,478,731
IssuesEvent
2019-09-27 08:45:04
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
accounts.firefox.com - see bug description
browser-fenix engine-gecko priority-normal
<!-- @browser: Firefox Preview Mobile 69.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://accounts.firefox.com/oauth/signin?action=email **Browser / Version**: Firefox Preview Mobile 69.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: Sync doesn't work **Steps to Reproduce**: On Android, on Firefox Preview, I went to Parameters, Sync. On PC, on Firefox Nightly, I went on firefox.com/pair and displayed the QR code. I tried to scan the QR code with my smartphone but it didn't do anything. So I tried with my e-mail, but after I put my password or the code from Google Authenticator, an error always occurred and the connection is impossible. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
accounts.firefox.com - see bug description - <!-- @browser: Firefox Preview Mobile 69.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://accounts.firefox.com/oauth/signin?action=email **Browser / Version**: Firefox Preview Mobile 69.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Something else **Description**: Sync doesn't work **Steps to Reproduce**: On Android, on Firefox Preview, I went to Parameters, Sync. On PC, on Firefox Nightly, I went on firefox.com/pair and displayed the QR code. I tried to scan the QR code with my smartphone but it didn't do anything. So I tried with my e-mail, but after I put my password or the code from Google Authenticator, an error always occurred and the connection is impossible. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
accounts firefox com see bug description url browser version firefox preview mobile operating system android tested another browser no problem type something else description sync doesn t work steps to reproduce on android on firefox preview i went to parameters sync on pc on firefox nightly i went on firefox com pair and displayed the qr code i tried to scan the qr code with my smartphone but it didn t do anything so i tried with my e mail but after i put my password or the code from google authenticator an error always occurred and the connection is impossible browser configuration none from with ❤️
0
257,376
8,136,526,518
IssuesEvent
2018-08-20 08:43:20
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
Export diagram fails on firefox on windows 7
Component/Composer Priority/High Type/Bug
**Description:** Export diagram fails on firefox on windows 7 **Steps to reproduce:** 1.open a sample ex:hello-world.bal 2.click on export diagram 3.enter the path 4.enter the file name **Affected Versions:** ballerina-tools-0.970.1-SNAPSHOT **OS, DB, other environment details and versions:** Firefox-59.0.2 Windows 7 Note: the given error message is also invalid and not clear ![export_diagram](https://user-images.githubusercontent.com/6552100/39469334-4d377254-4d55-11e8-8e14-de36e9a15ceb.png)
1.0
Export diagram fails on firefox on windows 7 - **Description:** Export diagram fails on firefox on windows 7 **Steps to reproduce:** 1.open a sample ex:hello-world.bal 2.click on export diagram 3.enter the path 4.enter the file name **Affected Versions:** ballerina-tools-0.970.1-SNAPSHOT **OS, DB, other environment details and versions:** Firefox-59.0.2 Windows 7 Note: the given error message is also invalid and not clear ![export_diagram](https://user-images.githubusercontent.com/6552100/39469334-4d377254-4d55-11e8-8e14-de36e9a15ceb.png)
non_defect
export diagram fails on firefox on windows description export diagram fails on firefox on windows steps to reproduce open a sample ex hello world bal click on export diagram enter the path enter the file name affected versions ballerina tools snapshot os db other environment details and versions firefox windows note the given error message is also invalid and not clear
0
4,419
2,610,093,725
IssuesEvent
2015-02-26 18:28:18
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳韩方科颜祛痘要多少钱
auto-migrated Priority-Medium Type-Defect
``` 深圳韩方科颜祛痘要多少钱【深圳韩方科颜全国热线400-869-181 8,24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构�� �韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳� ��,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不 反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创�� �内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客� ��上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:10
1.0
深圳韩方科颜祛痘要多少钱 - ``` 深圳韩方科颜祛痘要多少钱【深圳韩方科颜全国热线400-869-181 8,24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构�� �韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳� ��,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不 反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创�� �内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客� ��上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:10
defect
深圳韩方科颜祛痘要多少钱 深圳韩方科颜祛痘要多少钱【 , 】深圳韩方科颜专业祛痘连锁机构,机构�� �韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳� ��,韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不 反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创�� �内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客� ��上的痘痘。 original issue reported on code google com by szft com on may at
1
120,817
4,794,751,431
IssuesEvent
2016-10-31 22:01:04
stillmuseum/worksonpaper
https://api.github.com/repos/stillmuseum/worksonpaper
closed
need to send a link to the dev site to the author
A priority
@halfempty Good morning. I've made a few updates to the essay. Patricia Failing would like a link to be able to review the changes this morning. I believe it's now pw protected. Is that simple enough to remove for a few hours? I'd rather not create a user account for her if possible.
1.0
need to send a link to the dev site to the author - @halfempty Good morning. I've made a few updates to the essay. Patricia Failing would like a link to be able to review the changes this morning. I believe it's now pw protected. Is that simple enough to remove for a few hours? I'd rather not create a user account for her if possible.
non_defect
need to send a link to the dev site to the author halfempty good morning i ve made a few updates to the essay patricia failing would like a link to be able to review the changes this morning i believe it s now pw protected is that simple enough to remove for a few hours i d rather not create a user account for her if possible
0
182,745
6,673,335,387
IssuesEvent
2017-10-04 14:46:46
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
m.facebook.com - site is not usable
browser-firefox-mobile priority-critical status-needstriage
<!-- @browser: Firefox Mobile 58.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:58.0) Gecko/58.0 Firefox/58.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://m.facebook.com/home.php?_rdr **Browser / Version**: Firefox Mobile 58.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Unknown **Problem type**: Site is not usable **Description**: f **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
m.facebook.com - site is not usable - <!-- @browser: Firefox Mobile 58.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:58.0) Gecko/58.0 Firefox/58.0 --> <!-- @reported_with: mobile-reporter --> **URL**: https://m.facebook.com/home.php?_rdr **Browser / Version**: Firefox Mobile 58.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: Unknown **Problem type**: Site is not usable **Description**: f **Steps to Reproduce**: _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
m facebook com site is not usable url browser version firefox mobile operating system android tested another browser unknown problem type site is not usable description f steps to reproduce from with ❤️
0
56,483
15,108,944,508
IssuesEvent
2021-02-08 17:13:38
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
When I reproduce the queue items when a node leaves the cluster, items are delivered more than once.
Module: IQueue Source: Community Team: Core Type: Defect
[test.zip](https://github.com/hazelcast/hazelcast/files/5815012/test.zip) <!-- Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently. --> **Describe the bug** I expect to receive only the items that I add to the queue, but I get the previous items when the consumer goes up. **Expected behavior** Hello, I'm implementing a producer-consumer example. Both the producer and the consumer are members of a cluster. The producer puts an item into the queue(IQueue) and the consumer takes it and puts the item into a cluster-wide map(IMap). When the consumer leaves the cluster, the producer catches this event and reproduces the items that are on the map into the queue. So in this way, if a consumer goes down, its items will be re-produced. In my implementation, the producer adds an item when it reads "ADD" in the command line, for testing purposes. When I close the consumer application, it reproduces the item and when the consumer goes up again, it receives the re-produced item. But after repeating three or more times, I see that the data are doubled at each going down and up. I would expect that the items that are produced by the producer exist in the system but in my tests, I see that the items are delivered more and more. I could not figure out the reason actually. Any help would be appreciated. Thanks. Here's my small example which consists of 3 simple classes only. **To Reproduce** Steps to reproduce the behavior: 1. Run Producer.java 2. Run Consumer.java 3. Go to Producer's console, and type ADD 4. See that the item is received by the Consumer 5. Close the Consumer 6. See that the Producer handles the consumer's leave event and reproduce the items 7. Start the Consumer 8. See that the Consumer receives the reproduced items 9. Repeat Step 5 to 9 four or five times 10. You will see the Consumer is receiving more and more items although it is not produced by the Producer My reproducing example is attached(test.zip) [test.zip](https://github.com/hazelcast/hazelcast/files/5815017/test.zip) . Ahmet Mircik's simplified code(written in Slack channel) to reproduce the issue ``` @Test public void test_continues_ownership_changes_does_not_leak_backup_memory() throws InterruptedException { Config config = getConfig(); TestHazelcastInstanceFactory factory = createHazelcastInstanceFactory(2); // ownership changes happen on stable // instance and it shouldn't leak memory HazelcastInstance stableInstance = factory.newHazelcastInstance(config); String queueName = "itemQueue"; IQueue<String> producer = stableInstance.getQueue(queueName); // initial offer producer.offer("item"); for (int j = 0; j < 5; j++) { // start unreliable instance HazelcastInstance unreliableInstance = factory.newHazelcastInstance(config); // consume data in queue IQueue<String> consumer = unreliableInstance.getQueue(queueName); consumer.take(); // intentional termination, we are not testing graceful shutdown. unreliableInstance.getLifecycleService().terminate(); producer.offer("item"); assertEquals("Failed at step :" + j + " (0 is first step)", 1, producer.size()); } } ``` **Additional context** <!-- Add any other context about the problem here. Common details that we're often interested in: - Detailed description of the steps to reproduce your issue - Logs and stack traces, if available - Hazelcast 4.1.1 - Cluster size 2 - 11. default jvm params - Linux 4.15.0-24-generic -->
1.0
When I reproduce the queue items when a node leaves the cluster, items are delivered more than once. - [test.zip](https://github.com/hazelcast/hazelcast/files/5815012/test.zip) <!-- Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently. --> **Describe the bug** I expect to receive only the items that I add to the queue, but I get the previous items when the consumer goes up. **Expected behavior** Hello, I'm implementing a producer-consumer example. Both the producer and the consumer are members of a cluster. The producer puts an item into the queue(IQueue) and the consumer takes it and puts the item into a cluster-wide map(IMap). When the consumer leaves the cluster, the producer catches this event and reproduces the items that are on the map into the queue. So in this way, if a consumer goes down, its items will be re-produced. In my implementation, the producer adds an item when it reads "ADD" in the command line, for testing purposes. When I close the consumer application, it reproduces the item and when the consumer goes up again, it receives the re-produced item. But after repeating three or more times, I see that the data are doubled at each going down and up. I would expect that the items that are produced by the producer exist in the system but in my tests, I see that the items are delivered more and more. I could not figure out the reason actually. Any help would be appreciated. Thanks. Here's my small example which consists of 3 simple classes only. **To Reproduce** Steps to reproduce the behavior: 1. Run Producer.java 2. Run Consumer.java 3. Go to Producer's console, and type ADD 4. See that the item is received by the Consumer 5. Close the Consumer 6. See that the Producer handles the consumer's leave event and reproduce the items 7. Start the Consumer 8. See that the Consumer receives the reproduced items 9. Repeat Step 5 to 9 four or five times 10. You will see the Consumer is receiving more and more items although it is not produced by the Producer My reproducing example is attached(test.zip) [test.zip](https://github.com/hazelcast/hazelcast/files/5815017/test.zip) . Ahmet Mircik's simplified code(written in Slack channel) to reproduce the issue ``` @Test public void test_continues_ownership_changes_does_not_leak_backup_memory() throws InterruptedException { Config config = getConfig(); TestHazelcastInstanceFactory factory = createHazelcastInstanceFactory(2); // ownership changes happen on stable // instance and it shouldn't leak memory HazelcastInstance stableInstance = factory.newHazelcastInstance(config); String queueName = "itemQueue"; IQueue<String> producer = stableInstance.getQueue(queueName); // initial offer producer.offer("item"); for (int j = 0; j < 5; j++) { // start unreliable instance HazelcastInstance unreliableInstance = factory.newHazelcastInstance(config); // consume data in queue IQueue<String> consumer = unreliableInstance.getQueue(queueName); consumer.take(); // intentional termination, we are not testing graceful shutdown. unreliableInstance.getLifecycleService().terminate(); producer.offer("item"); assertEquals("Failed at step :" + j + " (0 is first step)", 1, producer.size()); } } ``` **Additional context** <!-- Add any other context about the problem here. Common details that we're often interested in: - Detailed description of the steps to reproduce your issue - Logs and stack traces, if available - Hazelcast 4.1.1 - Cluster size 2 - 11. default jvm params - Linux 4.15.0-24-generic -->
defect
when i reproduce the queue items when a node leaves the cluster items are delivered more than once thanks for reporting your issue please share with us the following information to help us resolve your issue quickly and efficiently describe the bug i expect to receive only the items that i add to the queue but i get the previous items when the consumer goes up expected behavior hello i m implementing a producer consumer example both the producer and the consumer are members of a cluster the producer puts an item into the queue iqueue and the consumer takes it and puts the item into a cluster wide map imap when the consumer leaves the cluster the producer catches this event and reproduces the items that are on the map into the queue so in this way if a consumer goes down its items will be re produced in my implementation the producer adds an item when it reads add in the command line for testing purposes when i close the consumer application it reproduces the item and when the consumer goes up again it receives the re produced item but after repeating three or more times i see that the data are doubled at each going down and up i would expect that the items that are produced by the producer exist in the system but in my tests i see that the items are delivered more and more i could not figure out the reason actually any help would be appreciated thanks here s my small example which consists of simple classes only to reproduce steps to reproduce the behavior run producer java run consumer java go to producer s console and type add see that the item is received by the consumer close the consumer see that the producer handles the consumer s leave event and reproduce the items start the consumer see that the consumer receives the reproduced items repeat step to four or five times you will see the consumer is receiving more and more items although it is not produced by the producer my reproducing example is attached test zip ahmet mircik s simplified code written in slack channel to reproduce the issue test public void test continues ownership changes does not leak backup memory throws interruptedexception config config getconfig testhazelcastinstancefactory factory createhazelcastinstancefactory ownership changes happen on stable instance and it shouldn t leak memory hazelcastinstance stableinstance factory newhazelcastinstance config string queuename itemqueue iqueue producer stableinstance getqueue queuename initial offer producer offer item for int j j j start unreliable instance hazelcastinstance unreliableinstance factory newhazelcastinstance config consume data in queue iqueue consumer unreliableinstance getqueue queuename consumer take intentional termination we are not testing graceful shutdown unreliableinstance getlifecycleservice terminate producer offer item assertequals failed at step j is first step producer size additional context add any other context about the problem here common details that we re often interested in detailed description of the steps to reproduce your issue logs and stack traces if available hazelcast cluster size default jvm params linux generic
1
399,491
11,755,799,864
IssuesEvent
2020-03-13 10:15:04
execom-eu/hawaii
https://api.github.com/repos/execom-eu/hawaii
opened
Security Alert - X-Content-Type-Options Header Missing
backend fix low priority question
The Anti-MIME-Sniffing header X-Content-Type-Options was not set to 'nosniff'. This allows older versions of Internet Explorer and Chrome to perform MIME-sniffing on the response body, potentially causing the response body to be interpreted and displayed as a content type other than the declared content type. Current (early 2014) and legacy versions of Firefox will use the declared content type (if one is set), rather than performing MIME-sniffing. URL: 1. https://hawaii2.execom.eu/service-worker.js 2. https://hawaii2.execom.eu/ 3. https://hawaii2.execom.eu/?token= 4. https://hawaii2.execom.eu/static/js/main.c1104239.js 5. https://hawaii2.execom.eu/index.html?_sw-precache=9d2bcbe94988931035a113f86c1f5f4e 6. https://hawaii2.execom.eu/static/css/main.a0b1ecc0.css
1.0
Security Alert - X-Content-Type-Options Header Missing - The Anti-MIME-Sniffing header X-Content-Type-Options was not set to 'nosniff'. This allows older versions of Internet Explorer and Chrome to perform MIME-sniffing on the response body, potentially causing the response body to be interpreted and displayed as a content type other than the declared content type. Current (early 2014) and legacy versions of Firefox will use the declared content type (if one is set), rather than performing MIME-sniffing. URL: 1. https://hawaii2.execom.eu/service-worker.js 2. https://hawaii2.execom.eu/ 3. https://hawaii2.execom.eu/?token= 4. https://hawaii2.execom.eu/static/js/main.c1104239.js 5. https://hawaii2.execom.eu/index.html?_sw-precache=9d2bcbe94988931035a113f86c1f5f4e 6. https://hawaii2.execom.eu/static/css/main.a0b1ecc0.css
non_defect
security alert x content type options header missing the anti mime sniffing header x content type options was not set to nosniff this allows older versions of internet explorer and chrome to perform mime sniffing on the response body potentially causing the response body to be interpreted and displayed as a content type other than the declared content type current early and legacy versions of firefox will use the declared content type if one is set rather than performing mime sniffing url
0
119,787
12,042,844,243
IssuesEvent
2020-04-14 11:20:21
NLnetLabs/krillmanager
https://api.github.com/repos/NLnetLabs/krillmanager
closed
Incorrect OWN certificate guidance
bug documentation wizard
The wizard says that an OWN certificate must be in `/tmp` but IIRC this restriction no longer applies.
1.0
Incorrect OWN certificate guidance - The wizard says that an OWN certificate must be in `/tmp` but IIRC this restriction no longer applies.
non_defect
incorrect own certificate guidance the wizard says that an own certificate must be in tmp but iirc this restriction no longer applies
0
144,297
11,610,952,819
IssuesEvent
2020-02-26 04:55:39
pyinstaller/pyinstaller
https://api.github.com/repos/pyinstaller/pyinstaller
closed
Rearrange test-cases
@low / cleanup area:test-suite good first issue pull-request wanted
While converting from the old test-suite to the new one and after that some new tests have been put into `test_basic` while they are related to import or even to specific libraries. This should be cleaned up.
1.0
Rearrange test-cases - While converting from the old test-suite to the new one and after that some new tests have been put into `test_basic` while they are related to import or even to specific libraries. This should be cleaned up.
non_defect
rearrange test cases while converting from the old test suite to the new one and after that some new tests have been put into test basic while they are related to import or even to specific libraries this should be cleaned up
0
55,770
14,675,502,215
IssuesEvent
2020-12-30 17:45:34
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
DataTable: virtualScroll + scrollable broken
defect
**Describe the defect** No data displayed: ![image](https://user-images.githubusercontent.com/7521311/103370121-4832ba80-4acc-11eb-9689-995b0566f603.png) Regression due to this commit d4be4df857550c1e42eec6d180299ab20ebe31e3. As a consequence JS error: > Uncaught TypeError: Cannot read property 'length' of undefined at Class.cloneHead (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:6190) at Class.setupScrolling (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:6046) at Class._render (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4956) at Class.renderDeferred (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4730) at Class.init (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4942) at Class.prototype.<computed> [as init] (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4104) at new Class (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4117) at Object.createWidget (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:951) at Object.cw (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:916) at scroll.xhtml?jfwid=4d2dd:951
1.0
DataTable: virtualScroll + scrollable broken - **Describe the defect** No data displayed: ![image](https://user-images.githubusercontent.com/7521311/103370121-4832ba80-4acc-11eb-9689-995b0566f603.png) Regression due to this commit d4be4df857550c1e42eec6d180299ab20ebe31e3. As a consequence JS error: > Uncaught TypeError: Cannot read property 'length' of undefined at Class.cloneHead (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:6190) at Class.setupScrolling (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:6046) at Class._render (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4956) at Class.renderDeferred (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4730) at Class.init (components.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4942) at Class.prototype.<computed> [as init] (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4104) at new Class (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:4117) at Object.createWidget (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:951) at Object.cw (core.js.xhtml?ln=primefaces&v=9.0-SNAPSHOT:916) at scroll.xhtml?jfwid=4d2dd:951
defect
datatable virtualscroll scrollable broken describe the defect no data displayed regression due to this commit as a consequence js error uncaught typeerror cannot read property length of undefined at class clonehead components js xhtml ln primefaces v snapshot at class setupscrolling components js xhtml ln primefaces v snapshot at class render components js xhtml ln primefaces v snapshot at class renderdeferred core js xhtml ln primefaces v snapshot at class init components js xhtml ln primefaces v snapshot at class prototype core js xhtml ln primefaces v snapshot at new class core js xhtml ln primefaces v snapshot at object createwidget core js xhtml ln primefaces v snapshot at object cw core js xhtml ln primefaces v snapshot at scroll xhtml jfwid
1
676,346
23,123,158,315
IssuesEvent
2022-07-28 00:53:01
rpitv/glimpse-ui
https://api.github.com/repos/rpitv/glimpse-ui
opened
"More" dropdown in navbar can't be opened/navigated with keyboard
bug Priority: MED accessibility
- The "More" dropdown doesn't appear to work with a keyboard. - By switching it's trigger mechanism from `hover` to `click`, it's possible to open the menu, but you can't select individual items. - This probably needs to be fixed in [TuSimple/naive-ui](https://github.com/TuSimple/naive-ui/). See: https://www.naiveui.com/en-US/os-theme/components/dropdown https://github.com/TuSimple/naive-ui/issues/41
1.0
"More" dropdown in navbar can't be opened/navigated with keyboard - - The "More" dropdown doesn't appear to work with a keyboard. - By switching it's trigger mechanism from `hover` to `click`, it's possible to open the menu, but you can't select individual items. - This probably needs to be fixed in [TuSimple/naive-ui](https://github.com/TuSimple/naive-ui/). See: https://www.naiveui.com/en-US/os-theme/components/dropdown https://github.com/TuSimple/naive-ui/issues/41
non_defect
more dropdown in navbar can t be opened navigated with keyboard the more dropdown doesn t appear to work with a keyboard by switching it s trigger mechanism from hover to click it s possible to open the menu but you can t select individual items this probably needs to be fixed in see
0
23,397
2,658,979,583
IssuesEvent
2015-03-18 18:21:39
phetsims/tasks
https://api.github.com/repos/phetsims/tasks
opened
Solid, liquid, gas icons for States of Matter
Artwork High Priority
I need icons depicting a solid, liquid, and gas drawn for States of Matter. Currently the artwork looks like: ![image](https://cloud.githubusercontent.com/assets/8419308/6715891/484a5840-cd67-11e4-9f8e-6570120e8a51.png) Ideally, these icons would look like a (non-cartoonish) ice block, water droplet, and gas cloud. These icons should also be drawn in 2D. Once your PhET gmail account has been set up, I will share the original .ai files with you on Google Drive.
1.0
Solid, liquid, gas icons for States of Matter - I need icons depicting a solid, liquid, and gas drawn for States of Matter. Currently the artwork looks like: ![image](https://cloud.githubusercontent.com/assets/8419308/6715891/484a5840-cd67-11e4-9f8e-6570120e8a51.png) Ideally, these icons would look like a (non-cartoonish) ice block, water droplet, and gas cloud. These icons should also be drawn in 2D. Once your PhET gmail account has been set up, I will share the original .ai files with you on Google Drive.
non_defect
solid liquid gas icons for states of matter i need icons depicting a solid liquid and gas drawn for states of matter currently the artwork looks like ideally these icons would look like a non cartoonish ice block water droplet and gas cloud these icons should also be drawn in once your phet gmail account has been set up i will share the original ai files with you on google drive
0
44,917
9,659,415,332
IssuesEvent
2019-05-20 13:24:55
theia-ide/theia
https://api.github.com/repos/theia-ide/theia
closed
The PluginHot crash in latest master for SQL-Tools VSCode extension
bug vscode
### Description VSCode extension SQL-Tools crashes the pluginHosts. It worked about two weeks ago on master, and now on master it does not. ### Reproduction Steps Download the vsix: https://marketplace.visualstudio.com/_apis/public/gallery/publishers/mtxr/vsextensions/sqltools/0.17.18/vspackage Copy it to the plugins folder Start Theia server Access theia from browser - wait untill fully loaded, and observe server output: **OS and Theia version:** master, MAC-OS **Diagnostics:** Logs look like this: Before loading the browser (only start server): ``` yarn run v1.13.0 $ theia start --plugins=local-dir:../../plugins Failed to resolve module: getmac root INFO Theia app listening on http://localhost:3000. root INFO unzipping the plugin ProxyPluginDeployerEntry { deployer: PluginVsCodeFileHandler { unpackedFolder: '/private/var/folders/hm/zvsqr3ns2d98t7__1v64hk580000gn/T/vscode-unpacked' }, delegate: PluginDeployerEntryImpl { originId: 'local-dir:../../plugins', pluginId: 'mtxr.sqltools-0.17.18.vsix', map: Map {}, changes: [], acceptedTypes: [], currentPath: '/Users/i022021/dev/node/theia/plugins/mtxr.sqltools-0.17.18.vsix', initPath: '/Users/i022021/dev/node/theia/plugins/mtxr.sqltools-0.17.18.vsix', resolved: true, resolvedByName: 'LocalDirectoryPluginDeployerResolver' }, deployerName: 'PluginVsCodeFileHandler' } root INFO PluginTheiaDirectoryHandler: accepting plugin with path /Users/i022021/dev/node/theia/plugins/.DS_Store root INFO PluginTheiaDirectoryHandler: accepting plugin with path /private/var/folders/hm/zvsqr3ns2d98t7__1v64hk580000gn/T/vscode-unpacked/mtxr.sqltools-0.17.18.vsix root INFO Resolved "mtxr.sqltools-0.17.18.vsix" to a VS Code extension "sqltools@0.17.18" with engines: { vscode: '^1.30.0' } root INFO Deploying backend plugin "sqltools@0.17.18" from "/private/var/folders/hm/zvsqr3ns2d98t7__1v64hk580000gn/T/vscode-unpacked/mtxr.sqltools-0.17.18.vsix/extension/extension.js" ``` After loading browser ``` root INFO Using Git [2.15.0] from the PATH. (/usr/local/bin/git) root INFO Detected keyboard layout: US (Mac) root WARN LanguagesFrontendContribution.onStart is slow, took: 266.00000000325963 ms root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/.theia root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/dev/node/dist root WARN EditorNavigationContribution.onStart is slow, took: 628.2950000022538 ms root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/dev/node/dist Started watching: /Users/i022021/dev/node/dist root WARN Couldn't restore widget state for debug-console. Error: TypeError: Cannot read property 'getControl' of undefined root WARN "sqltools.connections" preference is not supported root ERROR [hosted-plugin: 81235] internal/modules/cjs/loader.js:584 throw err; ^ Error: Cannot find module 'getmac' at Function.Module._resolveFilename (internal/modules/cjs/loader.js:582:15) at Function.Module._load (internal/modules/cjs/loader.js:508:25) at Module.require (internal/modules/cjs/loader.js:637:17) at require (internal/modules/cjs/helpers.js:22:18) at Object.<anonymous> (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/plugin/node/env-node-ext.js:33:16) at Module._compile (internal/modules/cjs/loader.js:701:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:712:10) at Module.load (internal/modules/cjs/loader.js:600:32) at tryModuleLoad (internal/modules/cjs/loader.js:539:12) at Function.Module._load (internal/modules/cjs/loader.js:531:3) root ERROR Error: schema is invalid: data.properties['sqltools.autoConnectTo'].required should be array at Ajv.validateSchema (http://localhost:3000/bundle.js:102759:16) at Ajv._addSchema (http://localhost:3000/bundle.js:102888:10) at Ajv.compile (http://localhost:3000/bundle.js:102688:24) at PreferenceSchemaProvider.../../packages/core/lib/browser/preferences/preference-contribution.js.PreferenceSchemaProvider.updateValidate (http://localhost:3000/bundle.js:84275:43) at http://localhost:3000/bundle.js:84133:65 at http://localhost:3000/bundle.js:98771:33 at CallbackList.../../packages/core/lib/common/event.js.CallbackList.invoke (http://localhost:3000/bundle.js:98786:39) at Emitter.../../packages/core/lib/common/event.js.Emitter.fire (http://localhost:3000/bundle.js:98874:29) at PreferenceSchemaProvider.../../packages/core/lib/browser/preferences/preference-provider.js.PreferenceProvider.emitPreferencesChangedEvent (http://localhost:3000/bundle.js:84532:49) at PreferenceSchemaProvider.../../packages/core/lib/browser/preferences/preference-contribution.js.PreferenceSchemaProvider.setSchema (http://localhost:3000/bundle.js:84298:14) root WARN Error: a registered grammar provider for 'source.sql' scope is overridden at TextmateRegistry.push.../../packages/monaco/lib/browser/textmate/textmate-registry.js.TextmateRegistry.registerTextmateGrammarScope (http://localhost:3000/22.bundle.js:814:26) at _loop_1 (http://localhost:3000/82.bundle.js:9129:41) at PluginContributionHandler.push.../../packages/plugin-ext/lib/main/browser/plugin-contribution-handler.js.PluginContributionHandler.handleContributions (http://localhost:3000/82.bundle.js:9157:21) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.initContributions (http://localhost:3000/82.bundle.js:3206:46) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.loadPlugins (http://localhost:3000/82.bundle.js:3134:30) at http://localhost:3000/82.bundle.js:3120:19 root WARN Error: 'sql' language is remapped from 'source.sql' to 'source.sql' scope at TextmateRegistry.push.../../packages/monaco/lib/browser/textmate/textmate-registry.js.TextmateRegistry.mapLanguageIdToTextmateGrammar (http://localhost:3000/22.bundle.js:824:26) at _loop_1 (http://localhost:3000/82.bundle.js:9145:45) at PluginContributionHandler.push.../../packages/plugin-ext/lib/main/browser/plugin-contribution-handler.js.PluginContributionHandler.handleContributions (http://localhost:3000/82.bundle.js:9157:21) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.initContributions (http://localhost:3000/82.bundle.js:3206:46) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.loadPlugins (http://localhost:3000/82.bundle.js:3134:30) at http://localhost:3000/82.bundle.js:3120:19 root WARN Plugin contributes items to a menu with invalid identifier: view/title root INFO Config file tasks.json does not exist under file:///Users/i022021/dev/node/dist root ERROR Uncaught Exception: Error [ERR_IPC_CHANNEL_CLOSED]: Channel closed root ERROR Error [ERR_IPC_CHANNEL_CLOSED]: Channel closed at ChildProcess.target.send (internal/child_process.js:636:16) at HostedPluginProcess.onMessage (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/hosted/node/hosted-plugin-process.js:105:31) at HostedPluginSupport.onMessage (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/hosted/node/hosted-plugin.js:104:38) at HostedPluginServerImpl.onMessage (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/hosted/node/plugin-service.js:168:27) at JsonRpcProxyFactory.onNotification (/Users/i022021/dev/node/theia/packages/core/lib/common/messaging/proxy-factory.js:246:36) at /Users/i022021/dev/node/theia/packages/core/lib/common/messaging/proxy-factory.js:181:53 at handleNotification (/Users/i022021/dev/node/theia/node_modules/vscode-jsonrpc/lib/main.js:483:43) at processMessageQueue (/Users/i022021/dev/node/theia/node_modules/vscode-jsonrpc/lib/main.js:255:17) at Immediate.<anonymous> (/Users/i022021/dev/node/theia/node_modules/vscode-jsonrpc/lib/main.js:242:13) at runCallback (timers.js:705:18) root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/dev/node/dist/extension.js ``` the last part of Uncaught exception is repeating many times.
1.0
The PluginHot crash in latest master for SQL-Tools VSCode extension - ### Description VSCode extension SQL-Tools crashes the pluginHosts. It worked about two weeks ago on master, and now on master it does not. ### Reproduction Steps Download the vsix: https://marketplace.visualstudio.com/_apis/public/gallery/publishers/mtxr/vsextensions/sqltools/0.17.18/vspackage Copy it to the plugins folder Start Theia server Access theia from browser - wait untill fully loaded, and observe server output: **OS and Theia version:** master, MAC-OS **Diagnostics:** Logs look like this: Before loading the browser (only start server): ``` yarn run v1.13.0 $ theia start --plugins=local-dir:../../plugins Failed to resolve module: getmac root INFO Theia app listening on http://localhost:3000. root INFO unzipping the plugin ProxyPluginDeployerEntry { deployer: PluginVsCodeFileHandler { unpackedFolder: '/private/var/folders/hm/zvsqr3ns2d98t7__1v64hk580000gn/T/vscode-unpacked' }, delegate: PluginDeployerEntryImpl { originId: 'local-dir:../../plugins', pluginId: 'mtxr.sqltools-0.17.18.vsix', map: Map {}, changes: [], acceptedTypes: [], currentPath: '/Users/i022021/dev/node/theia/plugins/mtxr.sqltools-0.17.18.vsix', initPath: '/Users/i022021/dev/node/theia/plugins/mtxr.sqltools-0.17.18.vsix', resolved: true, resolvedByName: 'LocalDirectoryPluginDeployerResolver' }, deployerName: 'PluginVsCodeFileHandler' } root INFO PluginTheiaDirectoryHandler: accepting plugin with path /Users/i022021/dev/node/theia/plugins/.DS_Store root INFO PluginTheiaDirectoryHandler: accepting plugin with path /private/var/folders/hm/zvsqr3ns2d98t7__1v64hk580000gn/T/vscode-unpacked/mtxr.sqltools-0.17.18.vsix root INFO Resolved "mtxr.sqltools-0.17.18.vsix" to a VS Code extension "sqltools@0.17.18" with engines: { vscode: '^1.30.0' } root INFO Deploying backend plugin "sqltools@0.17.18" from "/private/var/folders/hm/zvsqr3ns2d98t7__1v64hk580000gn/T/vscode-unpacked/mtxr.sqltools-0.17.18.vsix/extension/extension.js" ``` After loading browser ``` root INFO Using Git [2.15.0] from the PATH. (/usr/local/bin/git) root INFO Detected keyboard layout: US (Mac) root WARN LanguagesFrontendContribution.onStart is slow, took: 266.00000000325963 ms root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/.theia root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/dev/node/dist root WARN EditorNavigationContribution.onStart is slow, took: 628.2950000022538 ms root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/dev/node/dist Started watching: /Users/i022021/dev/node/dist root WARN Couldn't restore widget state for debug-console. Error: TypeError: Cannot read property 'getControl' of undefined root WARN "sqltools.connections" preference is not supported root ERROR [hosted-plugin: 81235] internal/modules/cjs/loader.js:584 throw err; ^ Error: Cannot find module 'getmac' at Function.Module._resolveFilename (internal/modules/cjs/loader.js:582:15) at Function.Module._load (internal/modules/cjs/loader.js:508:25) at Module.require (internal/modules/cjs/loader.js:637:17) at require (internal/modules/cjs/helpers.js:22:18) at Object.<anonymous> (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/plugin/node/env-node-ext.js:33:16) at Module._compile (internal/modules/cjs/loader.js:701:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:712:10) at Module.load (internal/modules/cjs/loader.js:600:32) at tryModuleLoad (internal/modules/cjs/loader.js:539:12) at Function.Module._load (internal/modules/cjs/loader.js:531:3) root ERROR Error: schema is invalid: data.properties['sqltools.autoConnectTo'].required should be array at Ajv.validateSchema (http://localhost:3000/bundle.js:102759:16) at Ajv._addSchema (http://localhost:3000/bundle.js:102888:10) at Ajv.compile (http://localhost:3000/bundle.js:102688:24) at PreferenceSchemaProvider.../../packages/core/lib/browser/preferences/preference-contribution.js.PreferenceSchemaProvider.updateValidate (http://localhost:3000/bundle.js:84275:43) at http://localhost:3000/bundle.js:84133:65 at http://localhost:3000/bundle.js:98771:33 at CallbackList.../../packages/core/lib/common/event.js.CallbackList.invoke (http://localhost:3000/bundle.js:98786:39) at Emitter.../../packages/core/lib/common/event.js.Emitter.fire (http://localhost:3000/bundle.js:98874:29) at PreferenceSchemaProvider.../../packages/core/lib/browser/preferences/preference-provider.js.PreferenceProvider.emitPreferencesChangedEvent (http://localhost:3000/bundle.js:84532:49) at PreferenceSchemaProvider.../../packages/core/lib/browser/preferences/preference-contribution.js.PreferenceSchemaProvider.setSchema (http://localhost:3000/bundle.js:84298:14) root WARN Error: a registered grammar provider for 'source.sql' scope is overridden at TextmateRegistry.push.../../packages/monaco/lib/browser/textmate/textmate-registry.js.TextmateRegistry.registerTextmateGrammarScope (http://localhost:3000/22.bundle.js:814:26) at _loop_1 (http://localhost:3000/82.bundle.js:9129:41) at PluginContributionHandler.push.../../packages/plugin-ext/lib/main/browser/plugin-contribution-handler.js.PluginContributionHandler.handleContributions (http://localhost:3000/82.bundle.js:9157:21) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.initContributions (http://localhost:3000/82.bundle.js:3206:46) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.loadPlugins (http://localhost:3000/82.bundle.js:3134:30) at http://localhost:3000/82.bundle.js:3120:19 root WARN Error: 'sql' language is remapped from 'source.sql' to 'source.sql' scope at TextmateRegistry.push.../../packages/monaco/lib/browser/textmate/textmate-registry.js.TextmateRegistry.mapLanguageIdToTextmateGrammar (http://localhost:3000/22.bundle.js:824:26) at _loop_1 (http://localhost:3000/82.bundle.js:9145:45) at PluginContributionHandler.push.../../packages/plugin-ext/lib/main/browser/plugin-contribution-handler.js.PluginContributionHandler.handleContributions (http://localhost:3000/82.bundle.js:9157:21) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.initContributions (http://localhost:3000/82.bundle.js:3206:46) at HostedPluginSupport.push.../../packages/plugin-ext/lib/hosted/browser/hosted-plugin.js.HostedPluginSupport.loadPlugins (http://localhost:3000/82.bundle.js:3134:30) at http://localhost:3000/82.bundle.js:3120:19 root WARN Plugin contributes items to a menu with invalid identifier: view/title root INFO Config file tasks.json does not exist under file:///Users/i022021/dev/node/dist root ERROR Uncaught Exception: Error [ERR_IPC_CHANNEL_CLOSED]: Channel closed root ERROR Error [ERR_IPC_CHANNEL_CLOSED]: Channel closed at ChildProcess.target.send (internal/child_process.js:636:16) at HostedPluginProcess.onMessage (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/hosted/node/hosted-plugin-process.js:105:31) at HostedPluginSupport.onMessage (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/hosted/node/hosted-plugin.js:104:38) at HostedPluginServerImpl.onMessage (/Users/i022021/dev/node/theia/packages/plugin-ext/lib/hosted/node/plugin-service.js:168:27) at JsonRpcProxyFactory.onNotification (/Users/i022021/dev/node/theia/packages/core/lib/common/messaging/proxy-factory.js:246:36) at /Users/i022021/dev/node/theia/packages/core/lib/common/messaging/proxy-factory.js:181:53 at handleNotification (/Users/i022021/dev/node/theia/node_modules/vscode-jsonrpc/lib/main.js:483:43) at processMessageQueue (/Users/i022021/dev/node/theia/node_modules/vscode-jsonrpc/lib/main.js:255:17) at Immediate.<anonymous> (/Users/i022021/dev/node/theia/node_modules/vscode-jsonrpc/lib/main.js:242:13) at runCallback (timers.js:705:18) root INFO [nsfw-watcher: 81231] Started watching: /Users/i022021/dev/node/dist/extension.js ```
the last part of Uncaught exception is repeating many times.
non_defect
the pluginhot crash in latest master for sql tools vscode extension description vscode extension sql tools crashes the pluginhosts it worked about two weeks ago on master and now on master it does not reproduction steps download the vsix copy it to the plugins folder start theia server access theia from browser wait untill fully loaded and observe server output os and theia version master mac os diagnostics logs look like this before loading the browser only start server yarn run theia start plugins local dir plugins failed to resolve module getmac root info theia app listening on root info unzipping the plugin proxyplugindeployerentry deployer pluginvscodefilehandler unpackedfolder private var folders hm t vscode unpacked delegate plugindeployerentryimpl originid local dir plugins pluginid mtxr sqltools vsix map map changes acceptedtypes currentpath users dev node theia plugins mtxr sqltools vsix initpath users dev node theia plugins mtxr sqltools vsix resolved true resolvedbyname localdirectoryplugindeployerresolver deployername pluginvscodefilehandler root info plugintheiadirectoryhandler accepting plugin with path users dev node theia plugins ds store root info plugintheiadirectoryhandler accepting plugin with path private var folders hm t vscode unpacked mtxr sqltools vsix root info resolved mtxr sqltools vsix to a vs code extension sqltools with engines vscode root info deploying backend plugin sqltools from private var folders hm t vscode unpacked mtxr sqltools vsix extension extension js after loading browser root info using git from the path usr local bin git root info detected keyboard layout us mac root warn languagesfrontendcontribution onstart is slow took ms root info started watching users theia root info started watching users dev node dist root warn editornavigationcontribution onstart is slow took ms root info started watching users dev node dist started watching users dev node dist root warn couldn t restore widget state for debug console error 
typeerror cannot read property getcontrol of undefined root warn sqltools connections preference is not supported root error internal modules cjs loader js throw err error cannot find module getmac at function module resolvefilename internal modules cjs loader js at function module load internal modules cjs loader js at module require internal modules cjs loader js at require internal modules cjs helpers js at object users dev node theia packages plugin ext lib plugin node env node ext js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js at trymoduleload internal modules cjs loader js at function module load internal modules cjs loader js root error error schema is invalid data properties required should be array at ajv validateschema at ajv addschema at ajv compile at preferenceschemaprovider packages core lib browser preferences preference contribution js preferenceschemaprovider updatevalidate at at at callbacklist packages core lib common event js callbacklist invoke at emitter packages core lib common event js emitter fire at preferenceschemaprovider packages core lib browser preferences preference provider js preferenceprovider emitpreferenceschangedevent at preferenceschemaprovider packages core lib browser preferences preference contribution js preferenceschemaprovider setschema root warn error a registered grammar provider for source sql scope is overridden at textmateregistry push packages monaco lib browser textmate textmate registry js textmateregistry registertextmategrammarscope at loop at plugincontributionhandler push packages plugin ext lib main browser plugin contribution handler js plugincontributionhandler handlecontributions at hostedpluginsupport push packages plugin ext lib hosted browser hosted plugin js hostedpluginsupport initcontributions at hostedpluginsupport push packages plugin ext lib hosted browser hosted plugin js 
hostedpluginsupport loadplugins at root warn error sql language is remapped from source sql to source sql scope at textmateregistry push packages monaco lib browser textmate textmate registry js textmateregistry maplanguageidtotextmategrammar at loop at plugincontributionhandler push packages plugin ext lib main browser plugin contribution handler js plugincontributionhandler handlecontributions at hostedpluginsupport push packages plugin ext lib hosted browser hosted plugin js hostedpluginsupport initcontributions at hostedpluginsupport push packages plugin ext lib hosted browser hosted plugin js hostedpluginsupport loadplugins at root warn plugin contributes items to a menu with invalid identifier view title root info config file tasks json does not exist under file users dev node dist root error uncaught exception error channel closed root error error channel closed at childprocess target send internal child process js at hostedpluginprocess onmessage users dev node theia packages plugin ext lib hosted node hosted plugin process js at hostedpluginsupport onmessage users dev node theia packages plugin ext lib hosted node hosted plugin js at hostedpluginserverimpl onmessage users dev node theia packages plugin ext lib hosted node plugin service js at jsonrpcproxyfactory onnotification users dev node theia packages core lib common messaging proxy factory js at users dev node theia packages core lib common messaging proxy factory js at handlenotification users dev node theia node modules vscode jsonrpc lib main js at processmessagequeue users dev node theia node modules vscode jsonrpc lib main js at immediate users dev node theia node modules vscode jsonrpc lib main js at runcallback timers js root info started watching users dev node dist extension js the last part of uncaught exception is repeating many times
0
8,673
2,611,535,310
IssuesEvent
2015-02-27 06:05:26
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
opened
If you are missing a custom map/theme when visiting a room, these values persist on creation of a new room.
auto-migrated Priority-Medium Type-Defect
``` 0.9.20.3 etc etc. Join a room with a custom theme, e.g. INFOBOT ROOM SERVICE exit, create a new room. The newly created room will have the SAME map, and when you start the game you will notice an error like "failed to load the cfg theme" ``` Original issue reported on code.google.com by `costamag...@gmail.com` on 2 Jan 2014 at 6:37
1.0
If you are missing a custom map/theme when visiting a room, these values persist on creation of a new room. - ``` 0.9.20.3 etc etc. Join a room with a custom theme, e.g. INFOBOT ROOM SERVICE exit, create a new room. The newly created room will have the SAME map, and when you start the game you will notice an error like "failed to load the cfg theme" ``` Original issue reported on code.google.com by `costamag...@gmail.com` on 2 Jan 2014 at 6:37
defect
if you are missing a custom map theme when visiting a room these values persist on creation of a new room etc etc join a room with a custom theme e g infobot room service exit create a new room the newly created room will have the same map and when you start the game you will notice an error like failed to load the cfg theme original issue reported on code google com by costamag gmail com on jan at
1
103,273
8,893,695,990
IssuesEvent
2019-01-16 00:31:27
ESMCI/cime
https://api.github.com/repos/ESMCI/cime
opened
Baseline namelists for failed tests are not world-readable
tp: system tests ty: Bug
For most tests, the generated namelists (i.e., in the baseline directory) have permissions `-rw-rw-r--` (at least on cheyenne). However, for at least some tests that fail, the generated namelists have permissions `-rw-rw----`. This leads to incorrect NLCOMP failures when someone tries to do baseline comparisons but isn't in the appropriate group. This is an issue for tests that are expected to fail some later phase (say, the RUN phase), but for which namelists should still have been generated / compared. It is also an issue for the FUNIT test.
1.0
Baseline namelists for failed tests are not world-readable - For most tests, the generated namelists (i.e., in the baseline directory) have permissions `-rw-rw-r--` (at least on cheyenne). However, for at least some tests that fail, the generated namelists have permissions `-rw-rw----`. This leads to incorrect NLCOMP failures when someone tries to do baseline comparisons but isn't in the appropriate group. This is an issue for tests that are expected to fail some later phase (say, the RUN phase), but for which namelists should still have been generated / compared. It is also an issue for the FUNIT test.
non_defect
baseline namelists for failed tests are not world readable for most tests the generated namelists i e in the baseline directory have permissions rw rw r at least on cheyenne however for at least some tests that fail the generated namelists have permissions rw rw this leads to incorrect nlcomp failures when someone tries to do baseline comparisons but isn t in the appropriate group this is an issue for tests that are expected to fail some later phase say the run phase but for which namelists should still have been generated compared it is also an issue for the funit test
0
638,118
20,713,139,607
IssuesEvent
2022-03-12 07:38:17
oceanprotocol/contracts
https://api.github.com/repos/oceanprotocol/contracts
opened
Vesting sends tokens to original owner when NFT is transferred
Type: Bug Priority: Critical
Given: - NFT created - Pool created - Vesting created: value: 1.000.000 tokens, vestingDuration: 2 years When the original owner transfers the NFT to owner2, future vesting amounts should be sent to owner2. Status now: the original owner still gets the vesting
1.0
Vesting sends tokens to original owner when NFT is transferred - Given: - NFT created - Pool created - Vesting created: value: 1.000.000 tokens, vestingDuration: 2 years When the original owner transfers the NFT to owner2, future vesting amounts should be sent to owner2. Status now: the original owner still gets the vesting
non_defect
vesting send token to original owner when nft is transferred given nft created pool created vesting created value tokens vestingduration years when original owner transfers nft to future vesting amounts should be send to status now owner still gets the vesting
0
62,998
6,822,546,004
IssuesEvent
2017-11-07 20:27:51
Edurate/edurate
https://api.github.com/repos/Edurate/edurate
closed
Test Cases
testing
Need to make a test suite to make sure all of the functions are working as desired. To start off we need cases to check that the Google forms are downloaded correctly and then turned into a spread sheet to be used.
1.0
Test Cases - Need to make a test suite to make sure all of the functions are working as desired. To start off we need cases to check that the Google forms are downloaded correctly and then turned into a spread sheet to be used.
non_defect
test cases need to make a test suite to make sure all of the functions are working as desired to start off we need cases to check that the google forms are downloaded correctly and then turned into a spread sheet to be used
0
45,383
12,753,738,211
IssuesEvent
2020-06-28 00:22:55
jlaffaye/ftp
https://api.github.com/repos/jlaffaye/ftp
opened
Better docs on List and
defect
**Describe the bug** I found it very surprising that List() was returning timestamps with an incorrect year. More digging found that the root cause was that if this library encounters an FTP server that returns timestamps without a year in the timestamp it just guesses. https://github.com/jlaffaye/ftp/blob/master/parse.go#L235 I think two things should happen and would like to send two PRs: 1. The Godoc mentions this year guessing on List and Walk 2. MLST/MLSD should be supported as an alternative format to List where supported **To Reproduce** ``` Run List against an rclone ftp server ``` **Expected behavior** Dates to be set in 2019, 2018, etc **FTP server** - Name and version: rclone 1.52.1
1.0
Better docs on List and - **Describe the bug** I found it very surprising that List() was returning timestamps with an incorrect year. More digging found that the root cause was that if this library encounters an FTP server that returns timestamps without a year in the timestamp it just guesses. https://github.com/jlaffaye/ftp/blob/master/parse.go#L235 I think two things should happen and would like to send two PRs: 1. The Godoc mentions this year guessing on List and Walk 2. MLST/MLSD should be supported as an alternative format to List where supported **To Reproduce** ``` Run List against an rclone ftp server ``` **Expected behavior** Dates to be set in 2019, 2018, etc **FTP server** - Name and version: rclone 1.52.1
defect
better docs on list and describe the bug i found it very surprising that list was returning timestamps with an incorrect year more digging found that the root cause was that if this library encounters an ftp server that returns timestamps without a year in the timestamp it just guesses i think two things should happen and would like to send two prs the godoc mentions this year guessing on list and walk mlst mlsd should be supported as an alternative format to list where supported to reproduce fun list against an rclone ftp server expected behavior dates to be set in etc ftp server name and version rclone
1
40,536
2,868,926,316
IssuesEvent
2015-06-05 22:00:13
dart-lang/pub
https://api.github.com/repos/dart-lang/pub
closed
Don't use bare mailto links for author names on pub.dartlang.org
bug Fixed Priority-Medium
<a href="https://github.com/nex3"><img src="https://avatars.githubusercontent.com/u/188?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [nex3](https://github.com/nex3)** _Originally opened as dart-lang/sdk#5536_ ---- Instead, have the mailto link be separate from the name (perhaps an icon).
1.0
Don't use bare mailto links for author names on pub.dartlang.org - <a href="https://github.com/nex3"><img src="https://avatars.githubusercontent.com/u/188?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [nex3](https://github.com/nex3)** _Originally opened as dart-lang/sdk#5536_ ---- Instead, have the mailto link be separate from the name (perhaps an icon).
non_defect
don t use bare mailto links for author names on pub dartlang org issue by originally opened as dart lang sdk instead have the mailto link be separate from the name perhaps an icon
0
606,430
18,762,281,487
IssuesEvent
2021-11-05 17:57:34
RedstoneMedia/SussyLauncher
https://api.github.com/repos/RedstoneMedia/SussyLauncher
closed
Make solo dll mods addable
enhancement High Priority
**Describe the solution you'd like** Mods that consist only of DLL files should be installable, even if BepInEx is not already installed. This means that when BepInEx is not already installed, it would have to be downloaded and installed automatically. **Describe why your request is worth looking into** This feature is important because otherwise you need to install a mod that does include BepInEx first in order to install a solo DLL mod.
1.0
Make solo dll mods addable - **Describe the solution you'd like** Mods that consist only of DLL files should be installable, even if BepInEx is not already installed. This means that when BepInEx is not already installed, it would have to be downloaded and installed automatically. **Describe why your request is worth looking into** This feature is important because otherwise you need to install a mod that does include BepInEx first in order to install a solo DLL mod.
non_defect
make solo dll mods addeble describe the solution you d like mods that consist of only of dll files should be installable even if the bepinex is not installed already this means that in a case when bepinex is not installed already it would have to downloaded and installed automatically describe why your request is worth looking into this feature is important because otherwise you need to install a mod that does include bepinex first to install a solo dll mod
0
73,790
24,803,980,043
IssuesEvent
2022-10-25 01:39:50
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
Element Android does not display replies sent from weechat-matrix
T-Defect
### Steps to reproduce 1. Middle-click + R on a message in `weechat-matrix` or directly use the `/reply-matrix` command, and send a message. 2. Watch the message in Element Android. ### Outcome #### What did you expect? Replies being correctly displayed. #### What happened instead? Element does not display a reply. ![Screenshot_20221025_003437_im vector app](https://user-images.githubusercontent.com/2134486/197635227-13a0e546-3d00-4135-a682-9200708eb0e5.jpg) Though FluffyChat does: ![Screenshot_20221025_003446_chat fluffy fluffychat](https://user-images.githubusercontent.com/2134486/197635308-0c7f9c12-6c34-4c63-a945-e4d2836f2e7e.jpg) Element Web also does: ![2022-10-25-043747_1366x768_scrot](https://user-images.githubusercontent.com/2134486/197661881-656bd234-c09a-4652-8be2-9cc36491d0d5.png) Even if this is a `weechat-matrix` issue and should be fixed there, old messages sent by it should still be displayed correctly for historical purposes. Raw event: ``` { "content": { "body": "Деяких так.", "format": "org.matrix.custom.html", "formatted_body": "Деяких так.", "m.relates_to": { "m.in_reply_to": { "event_id": "$Lhz5w_FDIwSmLOq1kFUr-HCeRN22qYF_fx1DY4R9fjU" } }, "msgtype": "m.text" }, "origin_server_ts": 1666547560425, "room_id": "!ssKCAMXeLZoUGjdcyU:matrix.org", "sender": "@bodqhrohro:matrix.org", "type": "m.room.message", "unsigned": { "age": 114117264 }, "event_id": "$C9SDwU55bPrNVA6EIHQcW1TMc9_pMQwXUbgUC7solcM", "user_id": "@bodqhrohro:matrix.org", "age": 113616469 } ``` ### Your phone model HRY-LX1 ### Operating system version EMUI 9.1.0 ### Application version and app store Element version 1.4.36, olm version 3.2.12 from F-Droid ### Homeserver matrix.org ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
Element Android does not display replies sent from weechat-matrix - ### Steps to reproduce 1. Middle-click + R on a message in `weechat-matrix` or directly use the `/reply-matrix` command, and send a message. 2. Watch the message in Element Android. ### Outcome #### What did you expect? Replies being correctly displayed. #### What happened instead? Element does not display a reply. ![Screenshot_20221025_003437_im vector app](https://user-images.githubusercontent.com/2134486/197635227-13a0e546-3d00-4135-a682-9200708eb0e5.jpg) Though FluffyChat does: ![Screenshot_20221025_003446_chat fluffy fluffychat](https://user-images.githubusercontent.com/2134486/197635308-0c7f9c12-6c34-4c63-a945-e4d2836f2e7e.jpg) Element Web also does: ![2022-10-25-043747_1366x768_scrot](https://user-images.githubusercontent.com/2134486/197661881-656bd234-c09a-4652-8be2-9cc36491d0d5.png) Even if this is a `weechat-matrix` issue and should be fixed there, old messages sent by it should still be displayed correctly for historical purposes. Raw event: ``` { "content": { "body": "Деяких так.", "format": "org.matrix.custom.html", "formatted_body": "Деяких так.", "m.relates_to": { "m.in_reply_to": { "event_id": "$Lhz5w_FDIwSmLOq1kFUr-HCeRN22qYF_fx1DY4R9fjU" } }, "msgtype": "m.text" }, "origin_server_ts": 1666547560425, "room_id": "!ssKCAMXeLZoUGjdcyU:matrix.org", "sender": "@bodqhrohro:matrix.org", "type": "m.room.message", "unsigned": { "age": 114117264 }, "event_id": "$C9SDwU55bPrNVA6EIHQcW1TMc9_pMQwXUbgUC7solcM", "user_id": "@bodqhrohro:matrix.org", "age": 113616469 } ``` ### Your phone model HRY-LX1 ### Operating system version EMUI 9.1.0 ### Application version and app store Element version 1.4.36, olm version 3.2.12 from F-Droid ### Homeserver matrix.org ### Will you send logs? No ### Are you willing to provide a PR? No
defect
element android does not display replies sent from weechat matrix steps to reproduce middle click r on a message in weechat matrix or directly use the reply matrix command and send a message watch the message in element android outcome what did you expect replies being correctly displayed what happened instead element does not display a reply though fluffychat does element web also does even if this is a weechat matrix issue and should be fixed there old messages sent by it should still be displayed correctly for historical purposes raw event content body деяких так format org matrix custom html formatted body деяких так m relates to m in reply to event id msgtype m text origin server ts room id sskcamxelzougjdcyu matrix org sender bodqhrohro matrix org type m room message unsigned age event id user id bodqhrohro matrix org age your phone model hry operating system version emui application version and app store element version olm version from f droid homeserver matrix org will you send logs no are you willing to provide a pr no
1
29,778
5,889,974,324
IssuesEvent
2017-05-17 14:04:24
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
opened
Revise epupdate URL EP-Launch uses with hard reset
Defect EP-Launch
In EP-Launch a hard reset (VIEW .. OPTIONS .. RESET .. Reset All Options and Exit) erases the registry entries for EP-Launch including the epupdate page. When EP-Launch starts up again after that it gets set to the old url: http://energyplus.net/epupdate.htm and it should point to the new url: http://nrel.github.io/EnergyPlus/epupdate.htm ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
1.0
Revise epupdate URL EP-Launch uses with hard reset - In EP-Launch a hard reset (VIEW .. OPTIONS .. RESET .. Reset All Options and Exit) erases the registry entries for EP-Launch including the epupdate page. When EP-Launch starts up again after that it gets set to the old url: http://energyplus.net/epupdate.htm and it should point to the new url: http://nrel.github.io/EnergyPlus/epupdate.htm ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
defect
revise epupdate url ep launch uses with hard reset in ep launch a hard reset view options reset reset all options and exit erases the registry entries for ep launch including the epupdate page when ep launch starts up again after that it gets set to the old url and it should point to the new url checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
1
735
2,587,732,415
IssuesEvent
2015-02-17 20:19:09
chrsmith/codesearch
https://api.github.com/repos/chrsmith/codesearch
opened
regex->trigram has wrong behaviour when trigrams are question-marked
auto-migrated Priority-Medium Type-Defect
``` bash$ csearch -verbose "foo_(bar_)?" >/dev/null 2012/03/05 10:44:37 query: "_ba" "foo" "o_b" "oo_" 2012/03/05 10:44:37 post query identified 11 possible files bash$ csearch -verbose "foo_b(ar_)?" >/dev/null 2012/03/05 10:44:44 query: "foo" "o_b" "oo_" 2012/03/05 10:44:44 post query identified 12 possible files In the first example, "_ba" and "o_b"' should not be required trigrams for the file. In the second example, "ar_" is correctly _not_ included in the list of required trigrams. I'll take a look at this later tonight. It looks like an issue with precedence in index/regexp.go. (I.e., the question mark is only applying to the final trigram, and not all trigrams included in the grouping. ). ``` ----- Original issue reported on code.google.com by dgryski on 5 Mar 2012 at 10:07
1.0
regex->trigram has wrong behaviour when trigrams are question-marked - ``` bash$ csearch -verbose "foo_(bar_)?" >/dev/null 2012/03/05 10:44:37 query: "_ba" "foo" "o_b" "oo_" 2012/03/05 10:44:37 post query identified 11 possible files bash$ csearch -verbose "foo_b(ar_)?" >/dev/null 2012/03/05 10:44:44 query: "foo" "o_b" "oo_" 2012/03/05 10:44:44 post query identified 12 possible files In the first example, "_ba" and "o_b"' should not be required trigrams for the file. In the second example, "ar_" is correctly _not_ included in the list of required trigrams. I'll take a look at this later tonight. It looks like an issue with precedence in index/regexp.go. (I.e., the question mark is only applying to the final trigram, and not all trigrams included in the grouping. ). ``` ----- Original issue reported on code.google.com by dgryski on 5 Mar 2012 at 10:07
defect
regex trigram has wrong behaviour when trigrams are question marked bash csearch verbose foo bar dev null query ba foo o b oo post query identified possible files bash csearch verbose foo b ar dev null query foo o b oo post query identified possible files in the first example ba and o b should not be required trigrams for the file in the second example ar is correctly not included in the list of required trigrams i ll take a look at this later tonight it looks like an issue with precedence in index regexp go i e the question mark is only applying to the final trigram and not all trigrams included in the grouping original issue reported on code google com by dgryski on mar at
1
214,286
16,555,242,083
IssuesEvent
2021-05-28 13:16:36
microsoft/playwright-sharp
https://api.github.com/repos/microsoft/playwright-sharp
closed
docs: Show in the TOC only relevant classes
documentation
It doesn't make any sense to show `IBrowser` and `Browser` in the list. Or getting `BindingSource` as the first item.
1.0
docs: Show in the TOC only relevant classes - It doesn't make any sense to show `IBrowser` and `Browser` in the list. Or getting `BindingSource` as the first item.
non_defect
docs show in the toc only relevant classes it doesn t make any sense to show ibrowser and browser in the list or getting bindingsource as the first item
0
533,491
15,592,151,129
IssuesEvent
2021-03-18 11:19:52
OpenNebula/one
https://api.github.com/repos/OpenNebula/one
closed
Add Dockerfile support for image create
Category: Sunstone Priority: Normal Status: Accepted Type: Feature
**Description** Apart from DockerHub integration it will be useful to allow image creation from Dockerfiles. This will allow the user to create an image by passing a Dockerfile to OpenNebula, something similar to: ``` oneimage create --name <name> --type OS --datastore <ds_id> --dockerfile <path_to_dockerfile> ``` **Use case** Allow user to create images directly from Dockerfiles. **Interface Changes** CLI: probably new `--dockerfile` attribute should be added. Sunstone: image creation form should be modified to add the Dockerfile option. <!--////////////////////////////////////////////--> <!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM --> <!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS --> <!-- PROGRESS WILL BE REFLECTED HERE --> <!--////////////////////////////////////////////--> ## Progress Status - [ ] Branch created - [ ] Code committed to development branch - [ ] Testing - QA - [ ] Documentation - [ ] Release notes - resolved issues, compatibility, known issues - [ ] Code committed to upstream release/hotfix branches - [ ] Documentation committed to upstream release/hotfix branches
1.0
Add Dockerfile support for image create - **Description** Apart from DockerHub integration it will be useful to allow image creation from Dockerfiles. This will allow the user to create an image by passing a Dockerfile to OpenNebula, something similar to: ``` oneimage create --name <name> --type OS --datastore <ds_id> --dockerfile <path_to_dockerfile> ``` **Use case** Allow user to create images directly from Dockerfiles. **Interface Changes** CLI: probably new `--dockerfile` attribute should be added. Sunstone: image creation form should be modified to add the Dockerfile option. <!--////////////////////////////////////////////--> <!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM --> <!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS --> <!-- PROGRESS WILL BE REFLECTED HERE --> <!--////////////////////////////////////////////--> ## Progress Status - [ ] Branch created - [ ] Code committed to development branch - [ ] Testing - QA - [ ] Documentation - [ ] Release notes - resolved issues, compatibility, known issues - [ ] Code committed to upstream release/hotfix branches - [ ] Documentation committed to upstream release/hotfix branches
non_defect
add dockerfile support for image create description apart from dockerhub integration it will be useful to allow image creation from dockerfiles this will allow the user to create an image by passing a dockerfile to opennebula something similar to oneimage create name type os datastore dockerfile use case allow user to create images directly from dockerfiles interface changes cli probably new dockerfile attribute should be added sunstone image creation form should be modified to add the dockerfile option progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches
0
74,890
25,392,820,708
IssuesEvent
2022-11-22 05:57:06
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
start a single cluster node.after running for a while,node change to lite members.
Type: Defect
spring config code: ` private Config getConfigByMemberIps(List<String> hazelcastClusterMembers) { Config config = new Config(); config.setInstanceName(Constants.HAZELCAST_INSTANCE_NAME); config.setClusterName(Constants.CAPACITY_CLUSTER_NAME); TcpIpConfig tcpIpConfig = new TcpIpConfig() .setEnabled(true) .setMembers(hazelcastClusterMembers); JoinConfig joinConfig = new JoinConfig() .setMulticastConfig(new MulticastConfig().setEnabled(false)) .setTcpIpConfig(tcpIpConfig); NetworkConfig networkConfig = new NetworkConfig().setJoin(joinConfig); config.setNetworkConfig(networkConfig); return config; }` use hazelcast.getCPSubsystem().getAtomicReference error msg: `com.hazelcast.partition.NoDataMemberInClusterException: Target of invocation cannot be found! Partition owner is null but partitions can't be assigned since all nodes in the cluster are lite members.` ![image](https://user-images.githubusercontent.com/47474762/203235790-657ab04e-8520-4586-8919-806b66dd528c.png) and use hazelcast.getMap error msg: `com.hazelcast.partition.NoDataMemberInClusterException: Target of invocation cannot be found! Partition owner is null but partitions can't be assigned since all nodes in the cluster are lite members. ` ![image](https://user-images.githubusercontent.com/47474762/203236166-6f57d60d-bb20-420f-b617-e925d179682c.png)
1.0
start a single cluster node.after running for a while,node change to lite members. - spring config code: ` private Config getConfigByMemberIps(List<String> hazelcastClusterMembers) { Config config = new Config(); config.setInstanceName(Constants.HAZELCAST_INSTANCE_NAME); config.setClusterName(Constants.CAPACITY_CLUSTER_NAME); TcpIpConfig tcpIpConfig = new TcpIpConfig() .setEnabled(true) .setMembers(hazelcastClusterMembers); JoinConfig joinConfig = new JoinConfig() .setMulticastConfig(new MulticastConfig().setEnabled(false)) .setTcpIpConfig(tcpIpConfig); NetworkConfig networkConfig = new NetworkConfig().setJoin(joinConfig); config.setNetworkConfig(networkConfig); return config; }` use hazelcast.getCPSubsystem().getAtomicReference error msg: `com.hazelcast.partition.NoDataMemberInClusterException: Target of invocation cannot be found! Partition owner is null but partitions can't be assigned since all nodes in the cluster are lite members.` ![image](https://user-images.githubusercontent.com/47474762/203235790-657ab04e-8520-4586-8919-806b66dd528c.png) and use hazelcast.getMap error msg: `com.hazelcast.partition.NoDataMemberInClusterException: Target of invocation cannot be found! Partition owner is null but partitions can't be assigned since all nodes in the cluster are lite members. ` ![image](https://user-images.githubusercontent.com/47474762/203236166-6f57d60d-bb20-420f-b617-e925d179682c.png)
defect
start a single cluster node after running for a while node change to lite members spring config code private config getconfigbymemberips list hazelcastclustermembers config config new config config setinstancename constants hazelcast instance name config setclustername constants capacity cluster name tcpipconfig tcpipconfig new tcpipconfig setenabled true setmembers hazelcastclustermembers joinconfig joinconfig new joinconfig setmulticastconfig new multicastconfig setenabled false settcpipconfig tcpipconfig networkconfig networkconfig new networkconfig setjoin joinconfig config setnetworkconfig networkconfig return config use hazelcast getcpsubsystem getatomicreference error msg com hazelcast partition nodatamemberinclusterexception target of invocation cannot be found partition owner is null but partitions can t be assigned since all nodes in the cluster are lite members and use hazelcast getmap error msg com hazelcast partition nodatamemberinclusterexception target of invocation cannot be found partition owner is null but partitions can t be assigned since all nodes in the cluster are lite members
1
32,845
6,953,397,245
IssuesEvent
2017-12-06 20:52:48
Dzhuneyt/jquery-tubular
https://api.github.com/repos/Dzhuneyt/jquery-tubular
closed
file://www.youtube.com/iframe_api Failed to load resource: net::ERR_FILE_NOT_FOUND
auto-migrated Priority-Medium Type-Defect
``` In line 133 should be: tag.src = 'http://www.youtube.com/iframe_api'; instead of: tag.src = '//www.youtube.com/iframe_api'; Otherwise, the console complains "file://www.youtube.com/iframe_api Failed to load resource: net::ERR_FILE_NOT_FOUND" ``` Original issue reported on code.google.com by `Kirin1...@gmail.com` on 4 Dec 2014 at 10:54
1.0
file://www.youtube.com/iframe_api Failed to load resource: net::ERR_FILE_NOT_FOUND - ``` In line 133 should be: tag.src = 'http://www.youtube.com/iframe_api'; instead of: tag.src = '//www.youtube.com/iframe_api'; Otherwise, the console complains "file://www.youtube.com/iframe_api Failed to load resource: net::ERR_FILE_NOT_FOUND" ``` Original issue reported on code.google.com by `Kirin1...@gmail.com` on 4 Dec 2014 at 10:54
defect
file failed to load resource net err file not found in line should be tag src instead of tag src otherwise the console complains file failed to load resource net err file not found original issue reported on code google com by gmail com on dec at
1
266,799
20,165,007,181
IssuesEvent
2022-02-10 02:45:26
ConsumerDataStandardsAustralia/standards-maintenance
https://api.github.com/repos/ConsumerDataStandardsAustralia/standards-maintenance
opened
Add isQueryParamUnsupported to MetaPaginated for schema validation
documentation change request In Backlog energy banking
# Description A participant raised an issue on ZenDesk in that the `isQueryParamUnsupported` is not explicitly included in the `MetaPaginated` swagger which causes some implementations to fail schema validation. # Area Affected MetaPaginated # Change Proposed Update the `MetaPaginated` schema to include the conditional `isQueryParamUnsupported` so that schema validation works.
1.0
Add isQueryParamUnsupported to MetaPaginated for schema validation - # Description A participant raised an issue on ZenDesk in that the `isQueryParamUnsupported` is not explicitly included in the `MetaPaginated` swagger which causes some implementations to fail schema validation. # Area Affected MetaPaginated # Change Proposed Update the `MetaPaginated` schema to include the conditional `isQueryParamUnsupported` so that schema validation works.
non_defect
add isqueryparamunsupported to metapaginated for schema validation description a participant raised an issue on zendesk in that the isqueryparamunsupported is not explicitly included in the metapaginated swagger which causes some implementations to fail schema validation area affected metapaginated change proposed update the metapaginated schema to include the conditional isqueryparamunsupported so that schema validation works
0
192,649
15,354,840,906
IssuesEvent
2021-03-01 10:18:55
TheGreatCodeholio/adam-kikbot
https://api.github.com/repos/TheGreatCodeholio/adam-kikbot
opened
Smaller more manageable files
documentation help wanted
We keep adding more and more to each of the files and they are becoming convoluted and difficult to work with. I suggest we break each of them into smaller files. for instance group_message_verification, group_message_chatterbot, group_message_substitutions, etc. and just import/include those files. Ideas, suggestions, etc?
1.0
Smaller more manageable files - We keep adding more and more to each of the files and they are becoming convoluted and difficult to work with. I suggest we break each of them into smaller files. for instance group_message_verification, group_message_chatterbot, group_message_substitutions, etc. and just import/include those files. Ideas, suggestions, etc?
non_defect
smaller more manageable files we keep adding more and more to each of the files and they are becoming convoluted and difficult to work with i suggest we break each of them into smaller files for instance group message verification group message chatterbot group message substitutions etc and just import include those files ideas suggestions etc
0
13,503
10,300,417,592
IssuesEvent
2019-08-28 13:51:28
jenkins-x/jx
https://api.github.com/repos/jenkins-x/jx
closed
jx step e2e gc deletes legit running clusters for contexts that are substrings of other contexts
area/infrastructure kind/bug priority/critical
I'm seeing `jenkins-x-versions` PR builds for the `boot-lh` context getting deleted while running, and I believe this is due to https://github.com/abayer/jx/blob/01d273d5e6b64c5865ac1bcc010878234203c997/pkg/cmd/step/e2e/step_e2e_gc.go#L169 checking whether a cluster with a higher build number contains the context name. Since `boot-lh-ghe` kicks off after `boot-lh`, and `boot-lh` is a substring of `boot-lh-ghe`, it's always going to delete the `boot-lh` cluster, even though it shouldn't.
1.0
jx step e2e gc deletes legit running clusters for contexts that are substrings of other contexts - I'm seeing `jenkins-x-versions` PR builds for the `boot-lh` context getting deleted while running, and I believe this is due to https://github.com/abayer/jx/blob/01d273d5e6b64c5865ac1bcc010878234203c997/pkg/cmd/step/e2e/step_e2e_gc.go#L169 checking whether a cluster with a higher build number contains the context name. Since `boot-lh-ghe` kicks off after `boot-lh`, and `boot-lh` is a substring of `boot-lh-ghe`, it's always going to delete the `boot-lh` cluster, even though it shouldn't.
non_defect
jx step gc deletes legit running clusters for contexts that are substrings of other contexts i m seeing jenkins x versions pr builds for the boot lh context getting deleted while running and i believe this is due to checking whether a cluster with a higher build number contains the context name since boot lh ghe kicks off after boot lh and boot lh is a substring of boot lh ghe it s always going to delete the boot lh cluster even though it shouldn t
0
145,516
19,339,479,276
IssuesEvent
2021-12-15 01:36:12
billmcchesney1/concord-plugins
https://api.github.com/repos/billmcchesney1/concord-plugins
opened
CVE-2021-37136 (High) detected in netty-codec-4.1.17.Final.jar
security vulnerability
## CVE-2021-37136 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-4.1.17.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p> <p>Path to dependency file: concord-plugins/tasks/s3/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec/4.1.17.Final/netty-codec-4.1.17.Final.jar</p> <p> Dependency Hierarchy: - aws-java-sdk-1.11.641.jar (Root Library) - aws-java-sdk-kinesisvideo-1.11.641.jar - netty-codec-http-4.1.17.Final.jar - :x: **netty-codec-4.1.17.Final.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression). All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack <p>Publish Date: 2021-10-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37136>CVE-2021-37136</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv">https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv</a></p> <p>Release Date: 2021-10-19</p> <p>Fix Resolution: io.netty:netty-codec:4.1.68.Final;io.netty:netty-all::4.1.68.Final</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec","packageVersion":"4.1.17.Final","packageFilePaths":["/tasks/s3/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk:1.11.641;com.amazonaws:aws-java-sdk-kinesisvideo:1.11.641;io.netty:netty-codec-http:4.1.17.Final;io.netty:netty-codec:4.1.17.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec:4.1.68.Final;io.netty:netty-all::4.1.68.Final","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-37136","vulnerabilityDetails":"The Bzip2 decompression decoder function doesn\u0027t allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression). All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37136","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-37136 (High) detected in netty-codec-4.1.17.Final.jar - ## CVE-2021-37136 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-4.1.17.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p> <p>Path to dependency file: concord-plugins/tasks/s3/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec/4.1.17.Final/netty-codec-4.1.17.Final.jar</p> <p> Dependency Hierarchy: - aws-java-sdk-1.11.641.jar (Root Library) - aws-java-sdk-kinesisvideo-1.11.641.jar - netty-codec-http-4.1.17.Final.jar - :x: **netty-codec-4.1.17.Final.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression). All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack <p>Publish Date: 2021-10-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37136>CVE-2021-37136</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv">https://github.com/netty/netty/security/advisories/GHSA-grg4-wf29-r9vv</a></p> <p>Release Date: 2021-10-19</p> <p>Fix Resolution: io.netty:netty-codec:4.1.68.Final;io.netty:netty-all::4.1.68.Final</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec","packageVersion":"4.1.17.Final","packageFilePaths":["/tasks/s3/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk:1.11.641;com.amazonaws:aws-java-sdk-kinesisvideo:1.11.641;io.netty:netty-codec-http:4.1.17.Final;io.netty:netty-codec:4.1.17.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec:4.1.68.Final;io.netty:netty-all::4.1.68.Final","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-37136","vulnerabilityDetails":"The Bzip2 decompression decoder function doesn\u0027t allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression). All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37136","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_defect
cve high detected in netty codec final jar cve high severity vulnerability vulnerable library netty codec final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file concord plugins tasks pom xml path to vulnerable library home wss scanner repository io netty netty codec final netty codec final jar dependency hierarchy aws java sdk jar root library aws java sdk kinesisvideo jar netty codec http final jar x netty codec final jar vulnerable library found in base branch master vulnerability details the decompression decoder function doesn t allow setting size restrictions on the decompressed output data which affects the allocation size used during decompression all users of are affected the malicious input can trigger an oome and so a dos attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec final io netty netty all final isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com amazonaws aws java sdk com amazonaws aws java sdk kinesisvideo io netty netty codec http final io netty netty codec final isminimumfixversionavailable true minimumfixversion io netty netty codec final io netty netty all final isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the decompression decoder function doesn allow setting size restrictions on the decompressed output data which affects the allocation size used during decompression all users of are affected the malicious input can trigger an oome and so a dos attack vulnerabilityurl
0
43,952
11,883,972,437
IssuesEvent
2020-03-27 16:50:38
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
[CI/CD]: Review coverage of accessibility checks in 686/686c-674 end-to-end tests
508-defect-3 508/Accessibility testing
**Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description Applications **must** have thorough end-to-end tests that run in our continuous integration/continuous deployment (CI/CD) pipeline. These tests should include thorough axe checks. While auditing the `/disability-benefits/686` and `/disability-benefits/686c-674` app folders, I wasn't sure if there were modals or hidden content that needed axe checks. I'd like the front-end engineering team to review this application, and add end-to-end tests as needed. Definition of done in acceptance criteria below. In consult with Jason and Michah, it's possible these are versions of the same application, so I've bundled the review together. Please review the current version and discard the older one. ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** _Jennifer_ ## Environment * `vets-website/src/applications//disability-benefits/686/tests` * `vets-website/src/applications//disability-benefits/686c-674/tests` * `$ yarn test:e2e src/applications//disability-benefits/686/tests` * `$ yarn test:e2e src/applications//disability-benefits/686c-674/tests` ## Acceptance Criteria <!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. --> **Definition of done:** - [ ] Front-end team member(s) have reviewed end-to-end tests for axe checks - [ ] axe checks are run for hidden content like modal windows, accordions - [ ] FE team has consulted with accessibility specialist in cases where there are high numbers of modals, accordions, other hidden content that could slow down e2e test runs. ## WCAG or Vendor Guidance (optional) * [Custom axeCheck helper method](https://github.com/department-of-veterans-affairs/vets-website/blob/master/src/platform/testing/e2e/nightwatch-commands/axeCheck.js) * [VSP guidance on writing end-to-end tests](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/platform/quality-assurance/e2e-testing)
1.0
[CI/CD]: Review coverage of accessibility checks in 686/686c-674 end-to-end tests - **Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description Applications **must** have thorough end-to-end tests that run in our continuous integration/continuous deployment (CI/CD) pipeline. These tests should include thorough axe checks. While auditing the `/disability-benefits/686` and `/disability-benefits/686c-674` app folders, I wasn't sure if there were modals or hidden content that needed axe checks. I'd like the front-end engineering team to review this application, and add end-to-end tests as needed. Definition of done in acceptance criteria below. In consult with Jason and Michah, it's possible these are versions of the same application, so I've bundled the review together. Please review the current version and discard the older one. ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** _Jennifer_ ## Environment * `vets-website/src/applications//disability-benefits/686/tests` * `vets-website/src/applications//disability-benefits/686c-674/tests` * `$ yarn test:e2e src/applications//disability-benefits/686/tests` * `$ yarn test:e2e src/applications//disability-benefits/686c-674/tests` ## Acceptance Criteria <!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. --> **Definition of done:** - [ ] Front-end team member(s) have reviewed end-to-end tests for axe checks - [ ] axe checks are run for hidden content like modal windows, accordions - [ ] FE team has consulted with accessibility specialist in cases where there are high numbers of modals, accordions, other hidden content that could slow down e2e test runs. ## WCAG or Vendor Guidance (optional) * [Custom axeCheck helper method](https://github.com/department-of-veterans-affairs/vets-website/blob/master/src/platform/testing/e2e/nightwatch-commands/axeCheck.js) * [VSP guidance on writing end-to-end tests](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/platform/quality-assurance/e2e-testing)
defect
review coverage of accessibility checks in end to end tests feedback framework ❗️ must for if the feedback must be applied ⚠️should if the feedback is best practice ✔️ consider for suggestions enhancements description applications must have thorough end to end tests that run in our continuous integration continuous deployment ci cd pipeline these tests should include thorough axe checks while auditing the disability benefits and disability benefits app folders i wasn t sure if there were modals or hidden content that needed axe checks i d like the front end engineering team to review this application and add end to end tests as needed definition of done in acceptance criteria below in consult with jason and michah it s possible these are versions of the same application so i ve bundled the review together please review the current version and discard the older one point of contact if this issue is being opened by a vfs team member please add a point of contact usually this is the same person who enters the issue ticket vfs point of contact jennifer environment vets website src applications disability benefits tests vets website src applications disability benefits tests yarn test src applications disability benefits tests yarn test src applications disability benefits tests acceptance criteria definition of done front end team member s have reviewed end to end tests for axe checks axe checks are run for hidden content like modal windows accordions fe team has consulted with accessibility specialist in cases where there are high numbers of modals accordions other hidden content that could slow down test runs wcag or vendor guidance optional
1
144,903
22,585,479,991
IssuesEvent
2022-06-28 14:55:17
blockframes/blockframes
https://api.github.com/repos/blockframes/blockframes
closed
Import Movie/Contract - what is mandatory?
Design - UX July clean up
From Nicolas in [Release 4.2 - Import](https://github.com/blockframes/blockframes/issues/8307#issuecomment-1118017895) Should the original title/licensee be mandatory in the import or not? # Movie import - [ ] (C) `Original title` is treated as mandatory, whereas it is not in the movie tunnel ![image](https://user-images.githubusercontent.com/78768220/166839857-502c9120-1720-462a-ba41-7c925a19979e.png) vs ![image](https://user-images.githubusercontent.com/78768220/166839879-9b917d6d-8d9c-43a5-8fac-72e3c69cf6f8.png) # Contract Import - [ ] C) `Licensee` field is treated as required, but it is `complementary information` in the template ![image](https://user-images.githubusercontent.com/78768220/166840344-2dbccfe5-a45c-4243-9a40-12f808ab1460.png) # Movie - [ ] Production country : missing * if it's really required, or delete validator if not ![image](https://user-images.githubusercontent.com/78768220/166687924-2d9ccedf-c31a-4df8-83f0-90a28033f1ce.png) same for : co-production country / producers / distribution countries / sales company's nationality ![image](https://user-images.githubusercontent.com/78768220/166688099-c70f68fc-34ae-4fdf-bd61-794d30da8145.png) ![image](https://user-images.githubusercontent.com/78768220/166688115-5c137462-95e5-41b2-a640-3c54b5a934b0.png) ![image](https://user-images.githubusercontent.com/78768220/166688136-958ed092-8da1-4909-a875-e9763c0d6954.png) ![image](https://user-images.githubusercontent.com/78768220/166688145-6f62f893-f90a-4a01-a36c-42d364fbb41a.png)
1.0
Import Movie/Contract - what is mandatory? - From Nicolas in [Release 4.2 - Import](https://github.com/blockframes/blockframes/issues/8307#issuecomment-1118017895) Should the original title/licensee be mandatory in the import or not? # Movie import - [ ] (C) `Original title` is treated as mandatory, whereas it is not in the movie tunnel ![image](https://user-images.githubusercontent.com/78768220/166839857-502c9120-1720-462a-ba41-7c925a19979e.png) vs ![image](https://user-images.githubusercontent.com/78768220/166839879-9b917d6d-8d9c-43a5-8fac-72e3c69cf6f8.png) # Contract Import - [ ] C) `Licensee` field is treated as required, but it is `complementary information` in the template ![image](https://user-images.githubusercontent.com/78768220/166840344-2dbccfe5-a45c-4243-9a40-12f808ab1460.png) # Movie - [ ] Production country : missing * if it's really required, or delete validator if not ![image](https://user-images.githubusercontent.com/78768220/166687924-2d9ccedf-c31a-4df8-83f0-90a28033f1ce.png) same for : co-production country / producers / distribution countries / sales company's nationality ![image](https://user-images.githubusercontent.com/78768220/166688099-c70f68fc-34ae-4fdf-bd61-794d30da8145.png) ![image](https://user-images.githubusercontent.com/78768220/166688115-5c137462-95e5-41b2-a640-3c54b5a934b0.png) ![image](https://user-images.githubusercontent.com/78768220/166688136-958ed092-8da1-4909-a875-e9763c0d6954.png) ![image](https://user-images.githubusercontent.com/78768220/166688145-6f62f893-f90a-4a01-a36c-42d364fbb41a.png)
non_defect
import movie contract what is mandatory from nicolas in should the original title licensee be mandatory in the import or not movie import c original title is treated as mandatory whereas it is not in the movie tunnel vs contract import c licensee field is treated as required but it is complementary information in the template movie production country missing if it s really required or delete validator if not same for co production country producers distribution countries sales company s nationality
0
794,153
28,024,468,606
IssuesEvent
2023-03-28 08:13:27
AY2223S2-CS2113-T13-1/tp
https://api.github.com/repos/AY2223S2-CS2113-T13-1/tp
closed
Add extra parameter to AddCommand
type.Story priority.High
AddCommand will now be `add CURRENCY AMOUNT [DESCRIPTION]` . User can add description to remark the particular transaction.E.g. `add SGD 10 part-time`
1.0
Add extra parameter to AddCommand - AddCommand will now be `add CURRENCY AMOUNT [DESCRIPTION]` . User can add description to remark the particular transaction.E.g. `add SGD 10 part-time`
non_defect
add extra parameter to addcommand addcommand will now be add currency amount user can add description to remark the particular transaction e g add sgd part time
0
7,714
2,610,434,286
IssuesEvent
2015-02-26 20:22:18
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
opened
Unable to post due to invalid XML character
auto-migrated Priority-Medium Type-Defect
``` What's the problem? I cannot publish to any blogs, I get this error with wordpress: ScribeFire couldn't publish your post. Here's the error message that bubbled up: parse error. not well formed I get this error with blogspot: ScribeFire couldn't publish your post. Here's the error message that bubbled up: An invalid XML character (Unicode: 0xb) was found in the CDATA section. What browser are you using? Firefox What Operating system are you using? MacOS Mavericks What version of ScribeFire are you running? 4.3.1 What Blog Type are you having this problem with? Please include version # if known or applicable See errors above, I publish to mostly wordpress and 1 blogger account. I get the error no matter which version of wordpress I'm publishing to. My blogs range from 2.9 to 4.0. ``` ----- Original issue reported on code.google.com by `mac8m...@gmail.com` on 15 Sep 2014 at 3:15
1.0
Unable to post due to invalid XML character - ``` What's the problem? I cannot publish to any blogs, I get this error with wordpress: ScribeFire couldn't publish your post. Here's the error message that bubbled up: parse error. not well formed I get this error with blogspot: ScribeFire couldn't publish your post. Here's the error message that bubbled up: An invalid XML character (Unicode: 0xb) was found in the CDATA section. What browser are you using? Firefox What Operating system are you using? MacOS Mavericks What version of ScribeFire are you running? 4.3.1 What Blog Type are you having this problem with? Please include version # if known or applicable See errors above, I publish to mostly wordpress and 1 blogger account. I get the error no matter which version of wordpress I'm publishing to. My blogs range from 2.9 to 4.0. ``` ----- Original issue reported on code.google.com by `mac8m...@gmail.com` on 15 Sep 2014 at 3:15
defect
unable to post due to invalid xml character what s the problem i cannot publish to any blogs i get this error with wordpress scribefire couldn t publish your post here s the error message that bubbled up parse error not well formed i get this error with blogspot scribefire couldn t publish your post here s the error message that bubbled up an invalid xml character unicode was found in the cdata section what browser are you using firefox what operating system are you using macos mavericks what version of scribefire are you running what blog type are you having this problem with please include version if known or applicable see errors above i publish to mostly wordpress and blogger account i get the error no matter which version of wordpress i m publishing to my blogs range from to original issue reported on code google com by gmail com on sep at
1
220,268
24,564,795,774
IssuesEvent
2022-10-13 01:13:30
turkdevops/desktop
https://api.github.com/repos/turkdevops/desktop
closed
CVE-2020-7608 (Medium) detected in yargs-parser-10.1.0.tgz - autoclosed
security vulnerability
## CVE-2020-7608 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-10.1.0.tgz</b></p></summary> <p>the mighty option parser used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/ts-jest/node_modules/yargs-parser/package.json</p> <p> Dependency Hierarchy: - ts-jest-24.3.0.tgz (Root Library) - :x: **yargs-parser-10.1.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/desktop/commit/9e0c818b6cb48aa77f07a97653da926d8fb70362">9e0c818b6cb48aa77f07a97653da926d8fb70362</a></p> <p>Found in base branch: <b>development</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload. <p>Publish Date: 2020-03-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-03-16</p> <p>Fix Resolution (yargs-parser): 13.1.2</p> <p>Direct dependency fix Resolution (ts-jest): 25.2.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7608 (Medium) detected in yargs-parser-10.1.0.tgz - autoclosed - ## CVE-2020-7608 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-10.1.0.tgz</b></p></summary> <p>the mighty option parser used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/ts-jest/node_modules/yargs-parser/package.json</p> <p> Dependency Hierarchy: - ts-jest-24.3.0.tgz (Root Library) - :x: **yargs-parser-10.1.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/desktop/commit/9e0c818b6cb48aa77f07a97653da926d8fb70362">9e0c818b6cb48aa77f07a97653da926d8fb70362</a></p> <p>Found in base branch: <b>development</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload. <p>Publish Date: 2020-03-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-03-16</p> <p>Fix Resolution (yargs-parser): 13.1.2</p> <p>Direct dependency fix Resolution (ts-jest): 25.2.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in yargs parser tgz autoclosed cve medium severity vulnerability vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file package json path to vulnerable library node modules ts jest node modules yargs parser package json dependency hierarchy ts jest tgz root library x yargs parser tgz vulnerable library found in head commit a href found in base branch development vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution yargs parser direct dependency fix resolution ts jest step up your open source security game with mend
0
38,500
8,855,043,614
IssuesEvent
2019-01-09 04:20:50
prettydiff/prettydiff
https://api.github.com/repos/prettydiff/prettydiff
closed
Some markup beautification trivialities
Beautification Defect
The code sample is the HTML of https://en.wikipedia.org/wiki/Functional_programming 1. ~~attribute wrap - I need to evaluate the algorithm for when attributes are indented, because it does not appear to be working when a tag is wider than the value of options.width~~ 1. comment indent - Near the bottom of the code sample there are two HTML comments that span multiple lines. The first of these is indented different than the second. 1. error - types[bb] undefined - This error appeared when editing the source sample using the web tool, but it isn't related to the webtool.
1.0
Some markup beautification trivialities - The code sample is the HTML of https://en.wikipedia.org/wiki/Functional_programming 1. ~~attribute wrap - I need to evaluate the algorithm for when attributes are indented, because it does not appear to be working when a tag is wider than the value of options.width~~ 1. comment indent - Near the bottom of the code sample there are two HTML comments that span multiple lines. The first of these is indented different than the second. 1. error - types[bb] undefined - This error appeared when editing the source sample using the web tool, but it isn't related to the webtool.
defect
some markup beautification trivialities the code sample is the html of attribute wrap i need to evaluate the algorithm for when attributes are indented because it does not appear to be working when a tag is wider than the value of options width comment indent near the bottom of the code sample there are two html comments that span multiple lines the first of these is indented different than the second error types undefined this error appeared when editing the source sample using the web tool but it isn t related to the webtool
1
193,860
22,216,392,511
IssuesEvent
2022-06-08 02:25:18
maddyCode23/linux-4.1.15
https://api.github.com/repos/maddyCode23/linux-4.1.15
reopened
CVE-2018-18955 (High) detected in linux-stable-rtv4.1.33
security vulnerability
## CVE-2018-18955 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/user_namespace.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/user_namespace.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel 4.15.x through 4.19.x before 4.19.2, map_write() in kernel/user_namespace.c allows privilege escalation because it mishandles nested user namespaces with more than 5 UID or GID ranges. A user who has CAP_SYS_ADMIN in an affected user namespace can bypass access controls on resources outside the namespace, as demonstrated by reading /etc/shadow. This occurs because an ID transformation takes place properly for the namespaced-to-kernel direction but not for the kernel-to-namespaced direction. 
<p>Publish Date: 2018-11-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-18955>CVE-2018-18955</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-18955">https://nvd.nist.gov/vuln/detail/CVE-2018-18955</a></p> <p>Release Date: 2018-11-16</p> <p>Fix Resolution: linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-18955 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2018-18955 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/user_namespace.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/user_namespace.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel 4.15.x through 4.19.x before 4.19.2, map_write() in kernel/user_namespace.c allows privilege escalation because it mishandles nested user namespaces with more than 5 UID or GID ranges. A user who has CAP_SYS_ADMIN in an affected user namespace can bypass access controls on resources outside the namespace, as demonstrated by reading /etc/shadow. This occurs because an ID transformation takes place properly for the namespaced-to-kernel direction but not for the kernel-to-namespaced direction. 
<p>Publish Date: 2018-11-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-18955>CVE-2018-18955</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-18955">https://nvd.nist.gov/vuln/detail/CVE-2018-18955</a></p> <p>Release Date: 2018-11-16</p> <p>Fix Resolution: linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files kernel user namespace c kernel user namespace c vulnerability details in the linux kernel x through x before map write in kernel user namespace c allows privilege escalation because it mishandles nested user namespaces with more than uid or gid ranges a user who has cap sys admin in an affected user namespace can bypass access controls on resources outside the namespace as demonstrated by reading etc shadow this occurs because an id transformation takes place properly for the namespaced to kernel direction but not for the kernel to namespaced direction publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux yocto gitautoinc gitautoinc step up your open source security game with whitesource
0
34,034
7,780,447,905
IssuesEvent
2018-06-05 20:05:03
Microsoft/vscode-python
https://api.github.com/repos/Microsoft/vscode-python
closed
Use ILogger instead of console.error to reduce noise in unit tests
feature-* needs PR type-code health
We have errors being written out into console window when running tests. This can be confusing when logging at the output.
1.0
Use ILogger instead of console.error to reduce noise in unit tests - We have errors being written out into console window when running tests. This can be confusing when logging at the output.
non_defect
use ilogger instead of console error to reduce noise in unit tests we have errors being written out into console window when running tests this can be confusing when logging at the output
0
89,504
25,819,711,712
IssuesEvent
2022-12-12 08:40:18
gitpod-io/gitpod
https://api.github.com/repos/gitpod-io/gitpod
closed
Prebuild triggered on project creation does not get reported as usage
type: bug feature: prebuilds feature: teams and projects
Unlike prebuilds triggered by a commit webhook, the prebuild triggered after new project creation is not recorded in d_b_usage. **NOTE** This may be moot if we stop auto-triggering prebuilds on new projects. (#15281) Example workspace instance below - looks like the startedTime is missing ```json { "id": "26a24057-cd7b-4e03-bdcb-28190a69fa3c", "workspaceId": "jldec-teamwowproject-l9n8921gofu", "creationTime": "2022-12-10T10:41:17.513Z", "startedTime": "", "stoppedTime": "2022-12-10T10:43:14.344Z", "lastHeartbeat": "", "ideUrl": "https://jldec-teamwowproject-l9n8921gofu.ws-eu78.gitpod.io", "status_old": null, "workspaceImage": "eu.gcr.io/gitpod-dev/workspace-images:3537a65c39f3373206fe0e6bdf38ae4bc221068731c57ddce7a798aa4a91b15a", "region": "eu78", "deployedTime": "", "workspaceBaseImage": "", "_lastModified": "2022-12-10 10:43:14.349798 UTC", "status": "{\"repo\": {\"branch\": \"main\", \"latestCommit\": \"e7e873a35f6264ccbb27f0a929596f48d81b5e9c\", \"totalUntrackedFiles\": 0, \"totalUncommitedFiles\": 0, \"totalUnpushedCommits\": 0}, \"phase\": \"stopped\", \"nodeIp\": \"10.10.0.29\", \"message\": \"\", \"podName\": \"prebuild-26a24057-cd7b-4e03-bdcb-28190a69fa3c\", \"timeout\": \"30m0s\", \"version\": 109488963190786, \"nodeName\": \"headless-ws-eu78-pool-z4xg\", \"conditions\": {\"failed\": \"\", \"timeout\": \"\", \"deployed\": false, \"pullingImages\": false, \"stoppedByRequest\": false, \"headlessTaskFailed\": \"\"}, \"ownerToken\": \"xxx\", \"exposedPorts\": []}", "phase": "stopped", "deleted": "0", "phasePersisted": "stopped", "configuration": "{\"ideImage\":\"eu.gcr.io/gitpod-core-dev/build/ide/code:commit-8335b0de46d748b9d12119bc7cbdf8554a9e121c\",\"ideImageLayers\":[],\"supervisorImage\":\"eu.gcr.io/gitpod-core-dev/build/supervisor:commit-478a75e744a642d9b764de37cfae655bc8b29dd5\",\"ideConfig\":{\"useLatest\":false},\"featureFlags\":[\"workspace_psi\",\"workspace_class_limiting\"]}", "stoppingTime": "2022-12-10T10:43:11.505Z", "imageBuildInfo": null, 
"workspaceClass": "g1-standard", "usageAttributionId": "team:b3ddef51-43cf-4fb3-ad0b-3edbd4e57c96" } ```
1.0
Prebuild triggered on project creation does not get reported as usage - Unlike prebuilds triggered by a commit webhook, the prebuild triggered after new project creation is not recorded in d_b_usage. **NOTE** This may be moot if we stop auto-triggering prebuilds on new projects. (#15281) Example workspace instance below - looks like the startedTime is missing ```json { "id": "26a24057-cd7b-4e03-bdcb-28190a69fa3c", "workspaceId": "jldec-teamwowproject-l9n8921gofu", "creationTime": "2022-12-10T10:41:17.513Z", "startedTime": "", "stoppedTime": "2022-12-10T10:43:14.344Z", "lastHeartbeat": "", "ideUrl": "https://jldec-teamwowproject-l9n8921gofu.ws-eu78.gitpod.io", "status_old": null, "workspaceImage": "eu.gcr.io/gitpod-dev/workspace-images:3537a65c39f3373206fe0e6bdf38ae4bc221068731c57ddce7a798aa4a91b15a", "region": "eu78", "deployedTime": "", "workspaceBaseImage": "", "_lastModified": "2022-12-10 10:43:14.349798 UTC", "status": "{\"repo\": {\"branch\": \"main\", \"latestCommit\": \"e7e873a35f6264ccbb27f0a929596f48d81b5e9c\", \"totalUntrackedFiles\": 0, \"totalUncommitedFiles\": 0, \"totalUnpushedCommits\": 0}, \"phase\": \"stopped\", \"nodeIp\": \"10.10.0.29\", \"message\": \"\", \"podName\": \"prebuild-26a24057-cd7b-4e03-bdcb-28190a69fa3c\", \"timeout\": \"30m0s\", \"version\": 109488963190786, \"nodeName\": \"headless-ws-eu78-pool-z4xg\", \"conditions\": {\"failed\": \"\", \"timeout\": \"\", \"deployed\": false, \"pullingImages\": false, \"stoppedByRequest\": false, \"headlessTaskFailed\": \"\"}, \"ownerToken\": \"xxx\", \"exposedPorts\": []}", "phase": "stopped", "deleted": "0", "phasePersisted": "stopped", "configuration": "{\"ideImage\":\"eu.gcr.io/gitpod-core-dev/build/ide/code:commit-8335b0de46d748b9d12119bc7cbdf8554a9e121c\",\"ideImageLayers\":[],\"supervisorImage\":\"eu.gcr.io/gitpod-core-dev/build/supervisor:commit-478a75e744a642d9b764de37cfae655bc8b29dd5\",\"ideConfig\":{\"useLatest\":false},\"featureFlags\":[\"workspace_psi\",\"workspace_class_limiting\"]}", 
"stoppingTime": "2022-12-10T10:43:11.505Z", "imageBuildInfo": null, "workspaceClass": "g1-standard", "usageAttributionId": "team:b3ddef51-43cf-4fb3-ad0b-3edbd4e57c96" } ```
non_defect
prebuild triggered on project creation does not get reported as usage unlike prebuilds triggered by a commit webhook the prebuild triggered after new project creation is not recorded in d b usage note this may be moot if we stop auto triggering prebuilds on new projects example workspace instance below looks like the startedtime is missing json id bdcb workspaceid jldec teamwowproject creationtime startedtime stoppedtime lastheartbeat ideurl status old null workspaceimage eu gcr io gitpod dev workspace images region deployedtime workspacebaseimage lastmodified utc status repo branch main latestcommit totaluntrackedfiles totaluncommitedfiles totalunpushedcommits phase stopped nodeip message podname prebuild bdcb timeout version nodename headless ws pool conditions failed timeout deployed false pullingimages false stoppedbyrequest false headlesstaskfailed ownertoken xxx exposedports phase stopped deleted phasepersisted stopped configuration ideimage eu gcr io gitpod core dev build ide code commit ideimagelayers supervisorimage eu gcr io gitpod core dev build supervisor commit ideconfig uselatest false featureflags stoppingtime imagebuildinfo null workspaceclass standard usageattributionid team
0
67,433
20,961,612,065
IssuesEvent
2022-03-27 21:49:23
abedmaatalla/sipdroid
https://api.github.com/repos/abedmaatalla/sipdroid
closed
VPN not sensed by Siprdroid in latest ICS
Priority-Medium Type-Defect auto-migrated
``` Hi, we live in a country where VOIP is blocked, so tht only way we can connect sipdroid is suing VPN ( IPsec ). After the new ICS update from Google, the sipdroid does not sense the VPN, even when it is clicked in the settings. So what I do is click both wifi & VPN in the settings and then it is OK. This bug is also in the new version 2.5 released 3 days ago. kindly upgrade the sipdroid software to sense VPN. Thanks ``` Original issue reported on code.google.com by `senp...@gmail.com` on 23 Mar 2012 at 4:27
1.0
VPN not sensed by Siprdroid in latest ICS - ``` Hi, we live in a country where VOIP is blocked, so tht only way we can connect sipdroid is suing VPN ( IPsec ). After the new ICS update from Google, the sipdroid does not sense the VPN, even when it is clicked in the settings. So what I do is click both wifi & VPN in the settings and then it is OK. This bug is also in the new version 2.5 released 3 days ago. kindly upgrade the sipdroid software to sense VPN. Thanks ``` Original issue reported on code.google.com by `senp...@gmail.com` on 23 Mar 2012 at 4:27
defect
vpn not sensed by siprdroid in latest ics hi we live in a country where voip is blocked so tht only way we can connect sipdroid is suing vpn ipsec after the new ics update from google the sipdroid does not sense the vpn even when it is clicked in the settings so what i do is click both wifi vpn in the settings and then it is ok this bug is also in the new version released days ago kindly upgrade the sipdroid software to sense vpn thanks original issue reported on code google com by senp gmail com on mar at
1
203,646
15,378,237,927
IssuesEvent
2021-03-02 18:03:47
nih-cfde/cfde-deriva
https://api.github.com/repos/nih-cfde/cfde-deriva
closed
Indicate horizontal scroll is an option
Testing
One of my testers pointed out that on pages like [File](https://app-staging.nih-cfde.org/chaise/recordset/#1/CFDE:file@sort(RID)) there are more columns off the right of the screen, but no indication of that, so the only way you can find out is accidentally scrolling it, or by resizing your window. I always look at it full screen on my dual monitor and so until 10 minutes ago I thought we were only displaying View, ID Namespace, Local Id, Filename, Project and Size In Bytes. I think if I've been looking at this thing for 9 months and had no idea there were more columns, then we need some kind of indicator :) My normal view ![image](https://user-images.githubusercontent.com/1719360/108387748-e2190880-71db-11eb-8bbc-99f1a7bf3d7f.png) Surprise extra columns: ![image](https://user-images.githubusercontent.com/1719360/108387816-f52bd880-71db-11eb-90c9-f1177e1c8ebd.png)
1.0
Indicate horizontal scroll is an option - One of my testers pointed out that on pages like [File](https://app-staging.nih-cfde.org/chaise/recordset/#1/CFDE:file@sort(RID)) there are more columns off the right of the screen, but no indication of that, so the only way you can find out is accidentally scrolling it, or by resizing your window. I always look at it full screen on my dual monitor and so until 10 minutes ago I thought we were only displaying View, ID Namespace, Local Id, Filename, Project and Size In Bytes. I think if I've been looking at this thing for 9 months and had no idea there were more columns, then we need some kind of indicator :) My normal view ![image](https://user-images.githubusercontent.com/1719360/108387748-e2190880-71db-11eb-8bbc-99f1a7bf3d7f.png) Surprise extra columns: ![image](https://user-images.githubusercontent.com/1719360/108387816-f52bd880-71db-11eb-90c9-f1177e1c8ebd.png)
non_defect
indicate horizontal scroll is an option one of my testers pointed out that on pages like there are more columns off the right of the screen but no indication of that so the only way you can find out is accidentally scrolling it or by resizing your window i always look at it full screen on my dual monitor and so until minutes ago i thought we were only displaying view id namespace local id filename project and size in bytes i think if i ve been looking at this thing for months and had no idea there were more columns then we need some kind of indicator my normal view surprise extra columns
0
53,397
22,779,767,267
IssuesEvent
2022-07-08 18:16:46
BCDevOps/developer-experience
https://api.github.com/repos/BCDevOps/developer-experience
opened
RocketChat Upgrade: Upgrade plan
ops and shared services
**Describe the issue** Based on the results of the upgrade testing in Dev, prepare an upgrade plan for Prod. **Additional context** The upgrade plan will be used as a step by step guide for the actual upgrade in Prod. **Definition of done** - [ ] Upgrade plan is recorded here
1.0
RocketChat Upgrade: Upgrade plan - **Describe the issue** Based on the results of the upgrade testing in Dev, prepare an upgrade plan for Prod. **Additional context** The upgrade plan will be used as a step by step guide for the actual upgrade in Prod. **Definition of done** - [ ] Upgrade plan is recorded here
non_defect
rocketchat upgrade upgrade plan describe the issue based on the results of the upgrade testing in dev prepare an upgrade plan for prod additional context the upgrade plan will be used as a step by step guide for the actual upgrade in prod definition of done upgrade plan is recorded here
0
51,116
13,188,131,545
IssuesEvent
2020-08-13 05:38:20
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
[icetray] Fixed broken test pattern for failure modes (Trac #2014)
Migrated from Trac combo core defect
There's a common pattern in both C++ and python tests that check to see if a module fails as expected. It goes something like this: try: run_code_i_expect_to_throw() # Hmm...I shouldn't have gotten this far. # This should have thrown. # This is a failure. What do we do with failures? # That's right, we throw. assert False, "FAIL!" except: # OK...I was expecting this to throw, so this is good. pass The problem here is assert False throws an AssertionError, which is caught in the except block, so the test doesn't fail as expected. The take away is, no matter how the test code behaves there's ALWAYS a throw within the try block. We need to find all of these and fix them. One solution is to know the exception you're expecting and catch only that. The other is to call exit with a non-zero return code so the test fails. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2014">https://code.icecube.wisc.edu/ticket/2014</a>, reported by olivas and owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:13", "description": "There's a common pattern in both C++ and python tests that check to see if a module fails as expected. It goes something like this:\n\ntry:\n run_code_i_expect_to_throw()\n # Hmm...I shouldn't have gotten this far. \n # This should have thrown.\n # This is a failure. What do we do with failures?\n # That's right, we throw.\n assert False, \"FAIL!\"\nexcept:\n # OK...I was expecting this to throw, so this is good. \n pass\n\nThe problem here is assert False throws an AssertionError, \nwhich is caught in the except block, so the test doesn't fail\nas expected.\n\nThe take away is, no matter how the test code behaves \nthere's ALWAYS a throw within the try block.\n\nWe need to find all of these and fix them. 
One solution is\nto know the exception you're expecting and catch only that.\nThe other is to call exit with a non-zero return code\nso the test fails.\n ", "reporter": "olivas", "cc": "", "resolution": "fixed", "_ts": "1550067313248429", "component": "combo core", "summary": "[icetray] Fixed broken test pattern for failure modes", "priority": "normal", "keywords": "", "time": "2017-05-10T03:50:45", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
[icetray] Fixed broken test pattern for failure modes (Trac #2014) - There's a common pattern in both C++ and python tests that check to see if a module fails as expected. It goes something like this: try: run_code_i_expect_to_throw() # Hmm...I shouldn't have gotten this far. # This should have thrown. # This is a failure. What do we do with failures? # That's right, we throw. assert False, "FAIL!" except: # OK...I was expecting this to throw, so this is good. pass The problem here is assert False throws an AssertionError, which is caught in the except block, so the test doesn't fail as expected. The take away is, no matter how the test code behaves there's ALWAYS a throw within the try block. We need to find all of these and fix them. One solution is to know the exception you're expecting and catch only that. The other is to call exit with a non-zero return code so the test fails. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2014">https://code.icecube.wisc.edu/ticket/2014</a>, reported by olivas and owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:13", "description": "There's a common pattern in both C++ and python tests that check to see if a module fails as expected. It goes something like this:\n\ntry:\n run_code_i_expect_to_throw()\n # Hmm...I shouldn't have gotten this far. \n # This should have thrown.\n # This is a failure. What do we do with failures?\n # That's right, we throw.\n assert False, \"FAIL!\"\nexcept:\n # OK...I was expecting this to throw, so this is good. \n pass\n\nThe problem here is assert False throws an AssertionError, \nwhich is caught in the except block, so the test doesn't fail\nas expected.\n\nThe take away is, no matter how the test code behaves \nthere's ALWAYS a throw within the try block.\n\nWe need to find all of these and fix them. 
One solution is\nto know the exception you're expecting and catch only that.\nThe other is to call exit with a non-zero return code\nso the test fails.\n ", "reporter": "olivas", "cc": "", "resolution": "fixed", "_ts": "1550067313248429", "component": "combo core", "summary": "[icetray] Fixed broken test pattern for failure modes", "priority": "normal", "keywords": "", "time": "2017-05-10T03:50:45", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
defect
fixed broken test pattern for failure modes trac there s a common pattern in both c and python tests that check to see if a module fails as expected it goes something like this try run code i expect to throw hmm i shouldn t have gotten this far this should have thrown this is a failure what do we do with failures that s right we throw assert false fail except ok i was expecting this to throw so this is good pass the problem here is assert false throws an assertionerror which is caught in the except block so the test doesn t fail as expected the take away is no matter how the test code behaves there s always a throw within the try block we need to find all of these and fix them one solution is to know the exception you re expecting and catch only that the other is to call exit with a non zero return code so the test fails migrated from json status closed changetime description there s a common pattern in both c and python tests that check to see if a module fails as expected it goes something like this n ntry n run code i expect to throw n hmm i shouldn t have gotten this far n this should have thrown n this is a failure what do we do with failures n that s right we throw n assert false fail nexcept n ok i was expecting this to throw so this is good n pass n nthe problem here is assert false throws an assertionerror nwhich is caught in the except block so the test doesn t fail nas expected n nthe take away is no matter how the test code behaves nthere s always a throw within the try block n nwe need to find all of these and fix them one solution is nto know the exception you re expecting and catch only that nthe other is to call exit with a non zero return code nso the test fails n reporter olivas cc resolution fixed ts component combo core summary fixed broken test pattern for failure modes priority normal keywords time milestone owner olivas type defect
1
25,762
4,441,172,219
IssuesEvent
2016-08-19 08:12:42
MICommunity/psicquic
https://api.github.com/repos/MICommunity/psicquic
closed
Boolean operators case sensitive for IntAct PSICQUIC Service
auto-migrated Priority-High Type-Defect
``` As an email to the mailing list shows: (taxidA:9606 and -taxidB:9606) or (taxidB:9606 and -taxidA:9606) does not return the same than (taxidA:9606 AND -taxidB:9606) OR (taxidB:9606 AND -taxidA:9606) It should be fixed so that all implementations are case-insensitive. ``` Original issue reported on code.google.com by `brunoaranda` on 7 Apr 2011 at 2:30
1.0
Boolean operators case sensitive for IntAct PSICQUIC Service - ``` As an email to the mailing list shows: (taxidA:9606 and -taxidB:9606) or (taxidB:9606 and -taxidA:9606) does not return the same than (taxidA:9606 AND -taxidB:9606) OR (taxidB:9606 AND -taxidA:9606) It should be fixed so that all implementations are case-insensitive. ``` Original issue reported on code.google.com by `brunoaranda` on 7 Apr 2011 at 2:30
defect
boolean operators case sensitive for intact psicquic service as an email to the mailing list shows taxida and taxidb or taxidb and taxida does not return the same than taxida and taxidb or taxidb and taxida it should be fixed so that all implementations are case insensitive original issue reported on code google com by brunoaranda on apr at
1
39,144
5,219,976,890
IssuesEvent
2017-01-26 20:34:36
zfsonlinux/zfs
https://api.github.com/repos/zfsonlinux/zfs
closed
ztest: ztest_tx_assign
Test Suite
Observed during automated testing ``` 0 0x00007f5b70a9f067 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 #1 0x00007f5b70aa0448 in __GI_abort () at abort.c:89 #2 0x000000000040886c in libspl_assert () #3 0x000000000040bb06 in ztest_tx_assign.isra () #4 0x000000000040e204 in ztest_replay_write () #5 0x0000000000413d8d in ztest_write () #6 0x0000000000413ec4 in ztest_io () #7 0x000000000041488f in ztest_dmu_write_parallel () #8 0x000000000040c9e0 in ztest_thread () #9 0x00007f5b71dd3e4c in zk_thread_helper (arg=0x16bf2b0) at kernel.c:139 #10 0x00007f5b70e1d0a4 in start_thread (arg=0x7f5b663c4700) at pthread_create.c:309 #11 0x00007f5b70b5287d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 ``` Full logs: http://build.zfsonlinux.org/builders/Debian%208%20x86_64%20%28TEST%29/builds/939/steps/shell_5/logs/stdio
1.0
ztest: ztest_tx_assign - Observed during automated testing ``` 0 0x00007f5b70a9f067 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56 #1 0x00007f5b70aa0448 in __GI_abort () at abort.c:89 #2 0x000000000040886c in libspl_assert () #3 0x000000000040bb06 in ztest_tx_assign.isra () #4 0x000000000040e204 in ztest_replay_write () #5 0x0000000000413d8d in ztest_write () #6 0x0000000000413ec4 in ztest_io () #7 0x000000000041488f in ztest_dmu_write_parallel () #8 0x000000000040c9e0 in ztest_thread () #9 0x00007f5b71dd3e4c in zk_thread_helper (arg=0x16bf2b0) at kernel.c:139 #10 0x00007f5b70e1d0a4 in start_thread (arg=0x7f5b663c4700) at pthread_create.c:309 #11 0x00007f5b70b5287d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111 ``` Full logs: http://build.zfsonlinux.org/builders/Debian%208%20x86_64%20%28TEST%29/builds/939/steps/shell_5/logs/stdio
non_defect
ztest ztest tx assign observed during automated testing in gi raise sig sig entry at nptl sysdeps unix sysv linux raise c in gi abort at abort c in libspl assert in ztest tx assign isra in ztest replay write in ztest write in ztest io in ztest dmu write parallel in ztest thread in zk thread helper arg at kernel c in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s full logs
0
747,882
26,101,809,264
IssuesEvent
2022-12-27 08:08:59
bounswe/bounswe2022group1
https://api.github.com/repos/bounswe/bounswe2022group1
closed
Fix linking error, set AndroidManifest configurations
Type: Bug Priority: High Status: Completed Android
**Issue Description:** Fix android resource linking failed error Set Android Manifest configurations **Tasks to Do:** - [x] Fix android resource linking failed error - [x] Set Android Manifest configurations *Task Deadline:* 27.12.2022
1.0
Fix linking error, set AndroidManifest configurations - **Issue Description:** Fix android resource linking failed error Set Android Manifest configurations **Tasks to Do:** - [x] Fix android resource linking failed error - [x] Set Android Manifest configurations *Task Deadline:* 27.12.2022
non_defect
fix linking error set androidmanifest configurations issue description fix android resource linking failed error set android manifest configurations tasks to do fix android resource linking failed error set android manifest configurations task deadline
0
144,695
19,296,148,475
IssuesEvent
2021-12-12 16:16:53
AlexRogalskiy/typescript-tools
https://api.github.com/repos/AlexRogalskiy/typescript-tools
closed
CVE-2021-3807 (High) detected in multiple libraries
security vulnerability Status: Invalid
## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-3.0.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b></p></summary> <p> <details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p> <p>Path to dependency file: typescript-tools/package.json</p> <p>Path to vulnerable library: typescript-tools/node_modules/jest-watch-typeahead/node_modules/ansi-regex/package.json,typescript-tools/node_modules/tsdx/node_modules/strip-ansi/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - tsdx-0.14.1.tgz (Root Library) - eslint-6.8.0.tgz - strip-ansi-5.2.0.tgz - :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p> <p> Dependency Hierarchy: - tsdx-0.14.1.tgz (Root Library) - progress-estimator-0.2.2.tgz - log-update-2.3.0.tgz - wrap-ansi-3.0.1.tgz - strip-ansi-4.0.0.tgz - :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p> <p>Path to dependency file: typescript-tools/package.json</p> <p>Path to vulnerable library: typescript-tools/node_modules/npm/node_modules/cli-table3/node_modules/ansi-regex/package.json</p> 
<p> Dependency Hierarchy: - npm-7.1.3.tgz (Root Library) - npm-7.24.2.tgz - cli-table3-0.6.0.tgz - string-width-4.2.2.tgz - strip-ansi-6.0.0.tgz - :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/typescript-tools/commit/a18e5af080b78d64b4a8d452840600495eaaf3fa">a18e5af080b78d64b4a8d452840600495eaaf3fa</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3807 (High) detected in multiple libraries - ## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-3.0.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b></p></summary> <p> <details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p> <p>Path to dependency file: typescript-tools/package.json</p> <p>Path to vulnerable library: typescript-tools/node_modules/jest-watch-typeahead/node_modules/ansi-regex/package.json,typescript-tools/node_modules/tsdx/node_modules/strip-ansi/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - tsdx-0.14.1.tgz (Root Library) - eslint-6.8.0.tgz - strip-ansi-5.2.0.tgz - :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p> <p> Dependency Hierarchy: - tsdx-0.14.1.tgz (Root Library) - progress-estimator-0.2.2.tgz - log-update-2.3.0.tgz - wrap-ansi-3.0.1.tgz - strip-ansi-4.0.0.tgz - :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p> <p>Path to dependency file: typescript-tools/package.json</p> <p>Path to vulnerable library: 
typescript-tools/node_modules/npm/node_modules/cli-table3/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - npm-7.1.3.tgz (Root Library) - npm-7.24.2.tgz - cli-table3-0.6.0.tgz - string-width-4.2.2.tgz - strip-ansi-6.0.0.tgz - :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/typescript-tools/commit/a18e5af080b78d64b4a8d452840600495eaaf3fa">a18e5af080b78d64b4a8d452840600495eaaf3fa</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries ansi regex tgz ansi regex tgz ansi regex tgz ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file typescript tools package json path to vulnerable library typescript tools node modules jest watch typeahead node modules ansi regex package json typescript tools node modules tsdx node modules strip ansi node modules ansi regex package json dependency hierarchy tsdx tgz root library eslint tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href dependency hierarchy tsdx tgz root library progress estimator tgz log update tgz wrap ansi tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file typescript tools package json path to vulnerable library typescript tools node modules npm node modules cli node modules ansi regex package json dependency hierarchy npm tgz root library npm tgz cli tgz string width tgz strip ansi tgz x ansi regex tgz vulnerable library found in head commit a href vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex step up your open source security game with whitesource
0
46,533
13,055,928,202
IssuesEvent
2020-07-30 03:08:48
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
segfault when starting steamshovel (Trac #1378)
Incomplete Migration Migrated from Trac combo core defect
Migrated from https://code.icecube.wisc.edu/ticket/1378 ```json { "status": "closed", "changetime": "2015-10-05T15:28:27", "description": "A strange problem:\nIf I start the environment with ./env-shell.sh, then run steamshovel, it segfaults right at the beginning. With \"./env-shell.sh steamshovel file.i3\" it starts up fine.\nSystem: Ubuntu 15.04\nicerec version: trunk, Revision: 138186\n\nbacktrace:\n\n(gdb) run selfveto-results.i3\nStarting program: /home/berghaus/events/steamshovel selfveto-results.i3\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nERROR (steamshovel): Problem loading icecube python bindings:\nImportError: No module named icecube.dataio\nAre there unbuilt pybindings? (embed.cpp:64 in scripting::PyInterpreter::PyInterpreter(char*))\nterminate called after throwing an instance of 'boost::python::error_already_set'\n\nProgram received signal SIGABRT, Aborted.\n0x00007fffeaefd267 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55\n55\t../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 0x00007fffeaefd267 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55\n#1 0x00007fffeaefeeca in __GI_abort () at abort.c:89\n#2 0x00007ffff792706d in __gnu_cxx::__verbose_terminate_handler() ()\n from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#3 0x00007ffff7924ee6 in ?? 
() from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#4 0x00007ffff7924f31 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#5 0x00007ffff7925199 in __cxa_rethrow () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#6 0x00000000004e23a2 in scripting::PyInterpreter::PyInterpreter(char*) ()\n#7 0x0000000000473fbd in main ()\n", "reporter": "berghaus", "cc": "", "resolution": "invalid", "_ts": "1444058907340211", "component": "combo core", "summary": "segfault when starting steamshovel", "priority": "normal", "keywords": "steamshovel", "time": "2015-10-05T07:29:14", "milestone": "", "owner": "hdembinski", "type": "defect" } ```
1.0
segfault when starting steamshovel (Trac #1378) - Migrated from https://code.icecube.wisc.edu/ticket/1378 ```json { "status": "closed", "changetime": "2015-10-05T15:28:27", "description": "A strange problem:\nIf I start the environment with ./env-shell.sh, then run steamshovel, it segfaults right at the beginning. With \"./env-shell.sh steamshovel file.i3\" it starts up fine.\nSystem: Ubuntu 15.04\nicerec version: trunk, Revision: 138186\n\nbacktrace:\n\n(gdb) run selfveto-results.i3\nStarting program: /home/berghaus/events/steamshovel selfveto-results.i3\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nERROR (steamshovel): Problem loading icecube python bindings:\nImportError: No module named icecube.dataio\nAre there unbuilt pybindings? (embed.cpp:64 in scripting::PyInterpreter::PyInterpreter(char*))\nterminate called after throwing an instance of 'boost::python::error_already_set'\n\nProgram received signal SIGABRT, Aborted.\n0x00007fffeaefd267 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55\n55\t../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 0x00007fffeaefd267 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55\n#1 0x00007fffeaefeeca in __GI_abort () at abort.c:89\n#2 0x00007ffff792706d in __gnu_cxx::__verbose_terminate_handler() ()\n from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#3 0x00007ffff7924ee6 in ?? 
() from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#4 0x00007ffff7924f31 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#5 0x00007ffff7925199 in __cxa_rethrow () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6\n#6 0x00000000004e23a2 in scripting::PyInterpreter::PyInterpreter(char*) ()\n#7 0x0000000000473fbd in main ()\n", "reporter": "berghaus", "cc": "", "resolution": "invalid", "_ts": "1444058907340211", "component": "combo core", "summary": "segfault when starting steamshovel", "priority": "normal", "keywords": "steamshovel", "time": "2015-10-05T07:29:14", "milestone": "", "owner": "hdembinski", "type": "defect" } ```
defect
segfault when starting steamshovel trac migrated from json status closed changetime description a strange problem nif i start the environment with env shell sh then run steamshovel it segfaults right at the beginning with env shell sh steamshovel file it starts up fine nsystem ubuntu nicerec version trunk revision n nbacktrace n n gdb run selfveto results nstarting program home berghaus events steamshovel selfveto results n nusing host libthread db library lib linux gnu libthread db so nerror steamshovel problem loading icecube python bindings nimporterror no module named icecube dataio nare there unbuilt pybindings embed cpp in scripting pyinterpreter pyinterpreter char nterminate called after throwing an instance of boost python error already set n nprogram received signal sigabrt aborted in gi raise sig sig entry at sysdeps unix sysv linux raise c t sysdeps unix sysv linux raise c no such file or directory n gdb bt n in gi raise sig sig entry at sysdeps unix sysv linux raise c n in gi abort at abort c n in gnu cxx verbose terminate handler n from usr lib linux gnu libstdc so n in from usr lib linux gnu libstdc so n in std terminate from usr lib linux gnu libstdc so n in cxa rethrow from usr lib linux gnu libstdc so n in scripting pyinterpreter pyinterpreter char n in main n reporter berghaus cc resolution invalid ts component combo core summary segfault when starting steamshovel priority normal keywords steamshovel time milestone owner hdembinski type defect
1
236,390
26,009,785,774
IssuesEvent
2022-12-20 23:47:26
ManageIQ/miq_bot
https://api.github.com/repos/ManageIQ/miq_bot
closed
CVE-2022-23519 (Medium) detected in rails-html-sanitizer-1.4.3.gem - autoclosed
security vulnerability
## CVE-2022-23519 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rails-html-sanitizer-1.4.3.gem</b></p></summary> <p>HTML sanitization for Rails applications</p> <p>Library home page: <a href="https://rubygems.org/gems/rails-html-sanitizer-1.4.3.gem">https://rubygems.org/gems/rails-html-sanitizer-1.4.3.gem</a></p> <p>Path to dependency file: /Gemfile.lock</p> <p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rails-html-sanitizer-1.4.3.gem</p> <p> Dependency Hierarchy: - rails-5.2.8.1.gem (Root Library) - railties-5.2.8.1.gem - actionpack-5.2.8.1.gem - :x: **rails-html-sanitizer-1.4.3.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ManageIQ/miq_bot/commit/37c2faddad2f3de376140b931bef0dd3ca39e68e">37c2faddad2f3de376140b931bef0dd3ca39e68e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> rails-html-sanitizer is responsible for sanitizing HTML fragments in Rails applications. Prior to version 1.4.4, a possible XSS vulnerability with certain configurations of Rails::Html::Sanitizer may allow an attacker to inject content if the application developer has overridden the sanitizer's allowed tags in either of the following ways: allow both "math" and "style" elements, or allow both "svg" and "style" elements. Code is only impacted if allowed tags are being overridden. . This issue is fixed in version 1.4.4. All users overriding the allowed tags to include "math" or "svg" and "style" should either upgrade or use the following workaround immediately: Remove "style" from the overridden allowed tags, or remove "math" and "svg" from the overridden allowed tags. 
<p>Publish Date: 2022-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23519>CVE-2022-23519</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-9h9g-93gc-623h">https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-9h9g-93gc-623h</a></p> <p>Release Date: 2022-12-14</p> <p>Fix Resolution: rails-html-sanitizer - 1.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-23519 (Medium) detected in rails-html-sanitizer-1.4.3.gem - autoclosed - ## CVE-2022-23519 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rails-html-sanitizer-1.4.3.gem</b></p></summary> <p>HTML sanitization for Rails applications</p> <p>Library home page: <a href="https://rubygems.org/gems/rails-html-sanitizer-1.4.3.gem">https://rubygems.org/gems/rails-html-sanitizer-1.4.3.gem</a></p> <p>Path to dependency file: /Gemfile.lock</p> <p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rails-html-sanitizer-1.4.3.gem</p> <p> Dependency Hierarchy: - rails-5.2.8.1.gem (Root Library) - railties-5.2.8.1.gem - actionpack-5.2.8.1.gem - :x: **rails-html-sanitizer-1.4.3.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ManageIQ/miq_bot/commit/37c2faddad2f3de376140b931bef0dd3ca39e68e">37c2faddad2f3de376140b931bef0dd3ca39e68e</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> rails-html-sanitizer is responsible for sanitizing HTML fragments in Rails applications. Prior to version 1.4.4, a possible XSS vulnerability with certain configurations of Rails::Html::Sanitizer may allow an attacker to inject content if the application developer has overridden the sanitizer's allowed tags in either of the following ways: allow both "math" and "style" elements, or allow both "svg" and "style" elements. Code is only impacted if allowed tags are being overridden. . This issue is fixed in version 1.4.4. 
All users overriding the allowed tags to include "math" or "svg" and "style" should either upgrade or use the following workaround immediately: Remove "style" from the overridden allowed tags, or remove "math" and "svg" from the overridden allowed tags. <p>Publish Date: 2022-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23519>CVE-2022-23519</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-9h9g-93gc-623h">https://github.com/rails/rails-html-sanitizer/security/advisories/GHSA-9h9g-93gc-623h</a></p> <p>Release Date: 2022-12-14</p> <p>Fix Resolution: rails-html-sanitizer - 1.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in rails html sanitizer gem autoclosed cve medium severity vulnerability vulnerable library rails html sanitizer gem html sanitization for rails applications library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache rails html sanitizer gem dependency hierarchy rails gem root library railties gem actionpack gem x rails html sanitizer gem vulnerable library found in head commit a href found in base branch master vulnerability details rails html sanitizer is responsible for sanitizing html fragments in rails applications prior to version a possible xss vulnerability with certain configurations of rails html sanitizer may allow an attacker to inject content if the application developer has overridden the sanitizer s allowed tags in either of the following ways allow both math and style elements or allow both svg and style elements code is only impacted if allowed tags are being overridden this issue is fixed in version all users overriding the allowed tags to include math or svg and style should either upgrade or use the following workaround immediately remove style from the overridden allowed tags or remove math and svg from the overridden allowed tags publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rails html sanitizer step up your open source security game with mend
0
47,409
13,056,173,188
IssuesEvent
2020-07-30 03:53:02
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
i3tvtpd clients loose connection and stop getting new events (Trac #515)
Migrated from Trac defect glshovel
I3Tv client/server connections seems to run into a problem. Tested with offline-software/trunk Revision: 51842 Started an i3tvtpd server, using a well-used i3 file (single file, runs in a loop over it) Started 2 i3tvc clients connecting to this server as: ./env-shell.sh i3tvc localhost 26227 2 ./env-shell.sh i3tvc localhost 26227 3 After some time (~hr), both clients will stop getting new events with: Logging configured from file log4cplus.conf /disk02/home/blaufuss/icework/offline-software/trunk/src/icetray/private/icetray/I3Frame.cxx:590: FATAL: checksums don't match Server continues to run w/o issue. Stopping and restarting the clients gets them back on track. Migrated from https://code.icecube.wisc.edu/ticket/515 ```json { "status": "closed", "changetime": "2009-01-16T19:47:04", "description": "I3Tv client/server connections seems to run into a problem.\nTested with offline-software/trunk Revision: 51842\n\nStarted an i3tvtpd server, using a well-used i3 file (single file, runs in a loop over it)\n\nStarted 2 i3tvc clients connecting to this server as:\n./env-shell.sh i3tvc localhost 26227 2\n./env-shell.sh i3tvc localhost 26227 3 \n\nAfter some time (~hr), both clients will stop getting new events with:\nLogging configured from file log4cplus.conf\n/disk02/home/blaufuss/icework/offline-software/trunk/src/icetray/private/icetray/I3Frame.cxx:590: FATAL: checksums don't match\n\nServer continues to run w/o issue. Stopping and restarting the clients gets them back on track.\n\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1232135224000000", "component": "glshovel", "summary": "i3tvtpd clients loose connection and stop getting new events", "priority": "normal", "keywords": "", "time": "2009-01-15T17:34:29", "milestone": "", "owner": "troy", "type": "defect" } ```
1.0
i3tvtpd clients loose connection and stop getting new events (Trac #515) - I3Tv client/server connections seems to run into a problem. Tested with offline-software/trunk Revision: 51842 Started an i3tvtpd server, using a well-used i3 file (single file, runs in a loop over it) Started 2 i3tvc clients connecting to this server as: ./env-shell.sh i3tvc localhost 26227 2 ./env-shell.sh i3tvc localhost 26227 3 After some time (~hr), both clients will stop getting new events with: Logging configured from file log4cplus.conf /disk02/home/blaufuss/icework/offline-software/trunk/src/icetray/private/icetray/I3Frame.cxx:590: FATAL: checksums don't match Server continues to run w/o issue. Stopping and restarting the clients gets them back on track. Migrated from https://code.icecube.wisc.edu/ticket/515 ```json { "status": "closed", "changetime": "2009-01-16T19:47:04", "description": "I3Tv client/server connections seems to run into a problem.\nTested with offline-software/trunk Revision: 51842\n\nStarted an i3tvtpd server, using a well-used i3 file (single file, runs in a loop over it)\n\nStarted 2 i3tvc clients connecting to this server as:\n./env-shell.sh i3tvc localhost 26227 2\n./env-shell.sh i3tvc localhost 26227 3 \n\nAfter some time (~hr), both clients will stop getting new events with:\nLogging configured from file log4cplus.conf\n/disk02/home/blaufuss/icework/offline-software/trunk/src/icetray/private/icetray/I3Frame.cxx:590: FATAL: checksums don't match\n\nServer continues to run w/o issue. Stopping and restarting the clients gets them back on track.\n\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1232135224000000", "component": "glshovel", "summary": "i3tvtpd clients loose connection and stop getting new events", "priority": "normal", "keywords": "", "time": "2009-01-15T17:34:29", "milestone": "", "owner": "troy", "type": "defect" } ```
defect
clients loose connection and stop getting new events trac client server connections seems to run into a problem tested with offline software trunk revision started an server using a well used file single file runs in a loop over it started clients connecting to this server as env shell sh localhost env shell sh localhost after some time hr both clients will stop getting new events with logging configured from file conf home blaufuss icework offline software trunk src icetray private icetray cxx fatal checksums don t match server continues to run w o issue stopping and restarting the clients gets them back on track migrated from json status closed changetime description client server connections seems to run into a problem ntested with offline software trunk revision n nstarted an server using a well used file single file runs in a loop over it n nstarted clients connecting to this server as n env shell sh localhost n env shell sh localhost n nafter some time hr both clients will stop getting new events with nlogging configured from file conf n home blaufuss icework offline software trunk src icetray private icetray cxx fatal checksums don t match n nserver continues to run w o issue stopping and restarting the clients gets them back on track n n reporter blaufuss cc resolution fixed ts component glshovel summary clients loose connection and stop getting new events priority normal keywords time milestone owner troy type defect
1
59,998
17,023,307,586
IssuesEvent
2021-07-03 01:20:49
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Recent Open menu list not working correctly
Component: merkaartor Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 8.50pm, Wednesday, 8th October 2008]** The most-recently-used list of opened files is always empty (File->Recent Open). The file appears in the MRU list of *imported* files instead (File->Recent Import). Procedure: - start Merkaartor, don't download anything from web - select File->Open, open any GPX track - view and compare Recent Open vs. Recent Import Obviously opening GPX tracks that way is always treated as import. Seen in Merkaartor 0.12-RC1 for Windows
1.0
Recent Open menu list not working correctly - **[Submitted to the original trac issue database at 8.50pm, Wednesday, 8th October 2008]** The most-recently-used list of opened files is always empty (File->Recent Open). The file appears in the MRU list of *imported* files instead (File->Recent Import). Procedure: - start Merkaartor, don't download anything from web - select File->Open, open any GPX track - view and compare Recent Open vs. Recent Import Obviously opening GPX tracks that way is always treated as import. Seen in Merkaartor 0.12-RC1 for Windows
defect
recent open menu list not working correctly the most recently used list of opened files is always empty file recent open the file appears in the mru list of imported files instead file recent import procedure start merkaartor don t download anything from web select file open open any gpx track view and compare recent open vs recent import obviously opening gpx tracks that way is always treated as import seen in merkaartor for windows
1
64,852
8,766,965,397
IssuesEvent
2018-12-17 18:19:05
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Use a variable for latest Bazel version number in docs
type: documentation type: feature request
Our install docs are often displaying the latest version number to download: http://docs.bazel.build/versions/master/install-ubuntu.html Today, this is hardcoded in the text. We should introduce a variable that is defined in one place and that we update when performing the release. https://github.com/bazelbuild/bazel/blob/master/site/_config.yml is probably the place to put this variable.
1.0
Use a variable for latest Bazel version number in docs - Our install docs are often displaying the latest version number to download: http://docs.bazel.build/versions/master/install-ubuntu.html Today, this is hardcoded in the text. We should introduce a variable that is defined in one place and that we update when performing the release. https://github.com/bazelbuild/bazel/blob/master/site/_config.yml is probably the place to put this variable.
non_defect
use a variable for latest bazel version number in docs our install docs are often displaying the latest version number to download today this is hardcoded in the text we should introduce a variable that is defined in one place and that we update when performing the release is probably the place to put this variable
0
30,238
6,049,713,293
IssuesEvent
2017-06-12 19:23:40
contao/core-bundle
https://api.github.com/repos/contao/core-bundle
closed
Speichern und zurück funktioniert nicht mehr
defect
<a href="https://github.com/theDyingMountain"><img src="https://avatars1.githubusercontent.com/u/4446224?v=3" align="left" width="42" height="42"></img></a> [Issue](https://github.com/contao/standard-edition/issues/59) by @theDyingMountain March 28th, 2017, 15:35 GMT Die Funktion "Speichern und zurück" im Backend funktioniert nicht wie erwartet, statt dessen wird "Speichern und schließen" ausgeführt. Fehler ist aufgetreten in der Stylesheetbearbeitung. Erwartete Funktionalität: Speichern und Rückkehr zur Übergeordneten Seite Resultat: Änderungen werden gespeichert, Rückkehr zur vorherigen Seite
1.0
Speichern und zurück funktioniert nicht mehr - <a href="https://github.com/theDyingMountain"><img src="https://avatars1.githubusercontent.com/u/4446224?v=3" align="left" width="42" height="42"></img></a> [Issue](https://github.com/contao/standard-edition/issues/59) by @theDyingMountain March 28th, 2017, 15:35 GMT Die Funktion "Speichern und zurück" im Backend funktioniert nicht wie erwartet, statt dessen wird "Speichern und schließen" ausgeführt. Fehler ist aufgetreten in der Stylesheetbearbeitung. Erwartete Funktionalität: Speichern und Rückkehr zur Übergeordneten Seite Resultat: Änderungen werden gespeichert, Rückkehr zur vorherigen Seite
defect
speichern und zurück funktioniert nicht mehr by thedyingmountain march gmt die funktion speichern und zurück im backend funktioniert nicht wie erwartet statt dessen wird speichern und schließen ausgeführt fehler ist aufgetreten in der stylesheetbearbeitung erwartete funktionalität speichern und rückkehr zur übergeordneten seite resultat änderungen werden gespeichert rückkehr zur vorherigen seite
1
41,092
10,298,728,148
IssuesEvent
2019-08-28 18:00:39
svalinn/DAGMC
https://api.github.com/repos/svalinn/DAGMC
closed
DAGMC-MCNP6 MCNP6.2 compiler
Type: Defect
Hi, I am a novice user, and trying to compile DAGMC-MCNP6 with MCNP6.2. I have just followed the instructions you provided, and failed to compile. One think I have changed was the line below. Instead of cp -r <path_to_dvd>/MCNP6/Source . I used cp -r <path_to_dvd>/MCNP_CODE/MCNP620/Source . since I could not find Source directory in the suggested location. However, while 'make' the error message (see below) was printed and just stopped. Is there any possible reason for this? /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41114.34: use mcnp_random, only : rang 1 /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41113.50: implicit real(dknd) (a-h,o-z), integer (i-n) 2 Error: USE statement at (1) cannot follow IMPLICIT statement at (2) /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41155.34: use mcnp_random, only : rang 1 /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41154.50: implicit real(dknd) (a-h,o-z), integer (i-n) 2 Error: USE statement at (1) cannot follow IMPLICIT statement at (2) /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:42270.34: use mcnp_random, only : rang 1 /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:42269.50: implicit real(dknd) (a-h,o-z), integer (i-n)
1.0
DAGMC-MCNP6 MCNP6.2 compiler - Hi, I am a novice user, and trying to compile DAGMC-MCNP6 with MCNP6.2. I have just followed the instructions you provided, and failed to compile. One think I have changed was the line below. Instead of cp -r <path_to_dvd>/MCNP6/Source . I used cp -r <path_to_dvd>/MCNP_CODE/MCNP620/Source . since I could not find Source directory in the suggested location. However, while 'make' the error message (see below) was printed and just stopped. Is there any possible reason for this? /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41114.34: use mcnp_random, only : rang 1 /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41113.50: implicit real(dknd) (a-h,o-z), integer (i-n) 2 Error: USE statement at (1) cannot follow IMPLICIT statement at (2) /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41155.34: use mcnp_random, only : rang 1 /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:41154.50: implicit real(dknd) (a-h,o-z), integer (i-n) 2 Error: USE statement at (1) cannot follow IMPLICIT statement at (2) /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:42270.34: use mcnp_random, only : rang 1 /data/home/hhg/dagmc_bld/DAGMC/src/mcnp/mcnp6/Source/import/laqmod31.F90:42269.50: implicit real(dknd) (a-h,o-z), integer (i-n)
defect
dagmc compiler hi i am a novice user and trying to compile dagmc with i have just followed the instructions you provided and failed to compile one think i have changed was the line below instead of cp r source i used cp r mcnp code source since i could not find source directory in the suggested location however while make the error message see below was printed and just stopped is there any possible reason for this data home hhg dagmc bld dagmc src mcnp source import use mcnp random only rang data home hhg dagmc bld dagmc src mcnp source import implicit real dknd a h o z integer i n error use statement at cannot follow implicit statement at data home hhg dagmc bld dagmc src mcnp source import use mcnp random only rang data home hhg dagmc bld dagmc src mcnp source import implicit real dknd a h o z integer i n error use statement at cannot follow implicit statement at data home hhg dagmc bld dagmc src mcnp source import use mcnp random only rang data home hhg dagmc bld dagmc src mcnp source import implicit real dknd a h o z integer i n
1
481,528
13,888,257,239
IssuesEvent
2020-10-19 05:56:34
pravega/zookeeper-operator
https://api.github.com/repos/pravega/zookeeper-operator
closed
annotations does not take effect
Priority-P2 improvement
``` apiVersion: zookeeper.pravega.io/v1beta1 kind: ZookeeperCluster metadata: name: zookeepercluster spec: # Add fields here replicas: 3 pod: # 发现注解字段不生效 annotations: sidecar.istio.io/inject: "false" when i check pod: kubectl get pod zookeepercluster-0 -o yaml apiVersion: v1 kind: Pod metadata: annotations: sidecar.istio.io/status: '{"version":"761ebc5a63976754715f22fcf548f05270fb4b8db07324894aebdb31fa81d960","initContainers":null,"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}' tke.cloud.tencent.com/networks-status: |- [{ "name": "tke-bridge", "ips": [ "172.31.11.142" ], "default": true, "dns": {} }] when i check statefulsets: kubectl get statefulsets.apps zookeepercluster -o yaml apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: "2020-02-12T11:40:33Z" generation: 1 labels: app: zookeepercluster release: zookeepercluster name: zookeepercluster namespace: default ownerReferences: - apiVersion: zookeeper.pravega.io/v1beta1 blockOwnerDeletion: true controller: true kind: ZookeeperCluster name: zookeepercluster uid: 7acda871-4d8c-11ea-a8df-525400fe84b0 resourceVersion: "49757594" selfLink: /apis/apps/v1/namespaces/default/statefulsets/zookeepercluster uid: 7ad28a02-4d8c-11ea-b662-525400122c98 ``` I can not find annotations with sidecar.istio.io/inject: false
1.0
annotations does not take effect - ``` apiVersion: zookeeper.pravega.io/v1beta1 kind: ZookeeperCluster metadata: name: zookeepercluster spec: # Add fields here replicas: 3 pod: # 发现注解字段不生效 annotations: sidecar.istio.io/inject: "false" when i check pod: kubectl get pod zookeepercluster-0 -o yaml apiVersion: v1 kind: Pod metadata: annotations: sidecar.istio.io/status: '{"version":"761ebc5a63976754715f22fcf548f05270fb4b8db07324894aebdb31fa81d960","initContainers":null,"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}' tke.cloud.tencent.com/networks-status: |- [{ "name": "tke-bridge", "ips": [ "172.31.11.142" ], "default": true, "dns": {} }] when i check statefulsets: kubectl get statefulsets.apps zookeepercluster -o yaml apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: "2020-02-12T11:40:33Z" generation: 1 labels: app: zookeepercluster release: zookeepercluster name: zookeepercluster namespace: default ownerReferences: - apiVersion: zookeeper.pravega.io/v1beta1 blockOwnerDeletion: true controller: true kind: ZookeeperCluster name: zookeepercluster uid: 7acda871-4d8c-11ea-a8df-525400fe84b0 resourceVersion: "49757594" selfLink: /apis/apps/v1/namespaces/default/statefulsets/zookeepercluster uid: 7ad28a02-4d8c-11ea-b662-525400122c98 ``` I can not find annotations with sidecar.istio.io/inject: false
non_defect
annotations does not take effect apiversion zookeeper pravega io kind zookeepercluster metadata name zookeepercluster spec add fields here replicas pod 发现注解字段不生效 annotations sidecar istio io inject false when i check pod kubectl get pod zookeepercluster o yaml apiversion kind pod metadata annotations sidecar istio io status version initcontainers null containers volumes imagepullsecrets null tke cloud tencent com networks status name tke bridge ips default true dns when i check statefulsets kubectl get statefulsets apps zookeepercluster o yaml apiversion apps kind statefulset metadata creationtimestamp generation labels app zookeepercluster release zookeepercluster name zookeepercluster namespace default ownerreferences apiversion zookeeper pravega io blockownerdeletion true controller true kind zookeepercluster name zookeepercluster uid resourceversion selflink apis apps namespaces default statefulsets zookeepercluster uid i can not find annotations with sidecar istio io inject false
0
43,053
9,368,784,115
IssuesEvent
2019-04-03 09:29:29
mantidproject/mantid
https://api.github.com/repos/mantidproject/mantid
opened
Remove owning raw pointers
Misc: Maintenance Quality: Code Quality
There are still a large number of Raw Pointers in the code-base that should be replaced with smart pointers.
1.0
Remove owning raw pointers - There are still a large number of Raw Pointers in the code-base that should be replaced with smart pointers.
non_defect
remove owning raw pointers there are still a large number of raw pointers in the code base that should be replaced with smart pointers
0
7,368
2,610,365,483
IssuesEvent
2015-02-26 19:58:04
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
closed
Wordpress blogs not showing up
auto-migrated Priority-Medium Type-Defect
``` What's the problem? It tells me that my blog is added successfully, but when it's the Wordpress blogs, they aren't actually added. What browser are you using? Chrome What version of ScribeFire are you running? 4.1 ``` ----- Original issue reported on code.google.com by `thestumb...@gmail.com` on 19 Jan 2013 at 3:41
1.0
Wordpress blogs not showing up - ``` What's the problem? It tells me that my blog is added successfully, but when it's the Wordpress blogs, they aren't actually added. What browser are you using? Chrome What version of ScribeFire are you running? 4.1 ``` ----- Original issue reported on code.google.com by `thestumb...@gmail.com` on 19 Jan 2013 at 3:41
defect
wordpress blogs not showing up what s the problem it tells me that my blog is added successfully but when it s the wordpress blogs they aren t actually added what browser are you using chrome what version of scribefire are you running original issue reported on code google com by thestumb gmail com on jan at
1
45,618
12,947,526,054
IssuesEvent
2020-07-18 23:51:34
fairbad/Super-Dino-Boys
https://api.github.com/repos/fairbad/Super-Dino-Boys
closed
Fix Friction
defect
Opted to not use friction in the game. Frictionless gameplay feels better for the user experience in this type of game.
1.0
Fix Friction - Opted to not use friction in the game. Frictionless gameplay feels better for the user experience in this type of game.
defect
fix friction opted to not use friction in the game frictionless gameplay feels better for the user experience in this type of game
1
30,207
24,646,039,957
IssuesEvent
2022-10-17 14:55:23
arduino/arduino-fwuploader
https://api.github.com/repos/arduino/arduino-fwuploader
closed
`arduino-fwuploader` panics
topic: infrastructure type: imperfection
### Describe the problem ``` ./arduino-fwuploader panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x824766] goroutine 1 [running]: debug/elf.(*Section).ReadAt(0xc000200000?, {0xc000210000?, 0x270?, 0x24?}, 0x40?) <autogenerated>:1 +0x26 archive/zip.readDirectoryEnd({0xa23b20, 0xc000138e80}, 0x210) /opt/hostedtoolcache/go/1.19.1/x64/src/archive/zip/reader.go:526 +0xf5 archive/zip.(*Reader).init(0xc000179a40, {0xa23b20?, 0xc000138e80}, 0x210) /opt/hostedtoolcache/go/1.19.1/x64/src/archive/zip/reader.go:97 +0x5c archive/zip.NewReader({0xa23b20, 0xc000138e80}, 0x210) /opt/hostedtoolcache/go/1.19.1/x64/src/archive/zip/reader.go:90 +0x5e github.com/daaku/go%2ezipexe.zipExeReaderElf({0xa246c0?, 0xc0000ae0c0}, 0xd758be) /home/runner/go/pkg/mod/github.com/daaku/go.zipexe@v1.0.0/zipexe.go:128 +0x8b github.com/daaku/go%2ezipexe.NewReader({0xa246c0, 0xc0000ae0c0}, 0x0?) /home/runner/go/pkg/mod/github.com/daaku/go.zipexe@v1.0.0/zipexe.go:48 +0x98 github.com/daaku/go%2ezipexe.OpenCloser({0xc0000d62d0?, 0xc000115720?}) /home/runner/go/pkg/mod/github.com/daaku/go.zipexe@v1.0.0/zipexe.go:30 +0x57 github.com/cmaglie/go%2erice.init.0() /home/runner/go/pkg/mod/github.com/cmaglie/go.rice@v1.0.3/appended.go:42 +0x65 ``` ### To reproduce This seems to only affect linux [2.2.1](https://github.com/arduino/arduino-fwuploader/releases/tag/2.2.1) released binary ### Expected behavior The binary should not crash ### Arduino Firmware Uploader version `arduino-fwuploader Version: 2.2.1 Commit: 75bcf76` ### Operating system Linux ### Operating system version Ubuntu 18.04 (same also on archlinux) ### Additional context _No response_ ### Issue checklist - [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-fwuploader/issues?q=) - [X] I verified the problem still occurs when using the latest version - [X] My report contains all necessary details
1.0
`arduino-fwuploader` panics - ### Describe the problem ``` ./arduino-fwuploader panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x824766] goroutine 1 [running]: debug/elf.(*Section).ReadAt(0xc000200000?, {0xc000210000?, 0x270?, 0x24?}, 0x40?) <autogenerated>:1 +0x26 archive/zip.readDirectoryEnd({0xa23b20, 0xc000138e80}, 0x210) /opt/hostedtoolcache/go/1.19.1/x64/src/archive/zip/reader.go:526 +0xf5 archive/zip.(*Reader).init(0xc000179a40, {0xa23b20?, 0xc000138e80}, 0x210) /opt/hostedtoolcache/go/1.19.1/x64/src/archive/zip/reader.go:97 +0x5c archive/zip.NewReader({0xa23b20, 0xc000138e80}, 0x210) /opt/hostedtoolcache/go/1.19.1/x64/src/archive/zip/reader.go:90 +0x5e github.com/daaku/go%2ezipexe.zipExeReaderElf({0xa246c0?, 0xc0000ae0c0}, 0xd758be) /home/runner/go/pkg/mod/github.com/daaku/go.zipexe@v1.0.0/zipexe.go:128 +0x8b github.com/daaku/go%2ezipexe.NewReader({0xa246c0, 0xc0000ae0c0}, 0x0?) /home/runner/go/pkg/mod/github.com/daaku/go.zipexe@v1.0.0/zipexe.go:48 +0x98 github.com/daaku/go%2ezipexe.OpenCloser({0xc0000d62d0?, 0xc000115720?}) /home/runner/go/pkg/mod/github.com/daaku/go.zipexe@v1.0.0/zipexe.go:30 +0x57 github.com/cmaglie/go%2erice.init.0() /home/runner/go/pkg/mod/github.com/cmaglie/go.rice@v1.0.3/appended.go:42 +0x65 ``` ### To reproduce This seems to only affect linux [2.2.1](https://github.com/arduino/arduino-fwuploader/releases/tag/2.2.1) released binary ### Expected behavior The binary should not crash ### Arduino Firmware Uploader version `arduino-fwuploader Version: 2.2.1 Commit: 75bcf76` ### Operating system Linux ### Operating system version Ubuntu 18.04 (same also on archlinux) ### Additional context _No response_ ### Issue checklist - [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-fwuploader/issues?q=) - [X] I verified the problem still occurs when using the latest version - [X] My report contains all necessary details
non_defect
arduino fwuploader panics describe the problem arduino fwuploader panic runtime error invalid memory address or nil pointer dereference goroutine debug elf section readat archive zip readdirectoryend opt hostedtoolcache go src archive zip reader go archive zip reader init opt hostedtoolcache go src archive zip reader go archive zip newreader opt hostedtoolcache go src archive zip reader go github com daaku go zipexereaderelf home runner go pkg mod github com daaku go zipexe zipexe go github com daaku go newreader home runner go pkg mod github com daaku go zipexe zipexe go github com daaku go opencloser home runner go pkg mod github com daaku go zipexe zipexe go github com cmaglie go init home runner go pkg mod github com cmaglie go rice appended go to reproduce this seems to only affect linux released binary expected behavior the binary should not crash arduino firmware uploader version arduino fwuploader version commit operating system linux operating system version ubuntu same also on archlinux additional context no response issue checklist i searched for previous reports in i verified the problem still occurs when using the latest version my report contains all necessary details
0
26,477
4,726,883,624
IssuesEvent
2016-10-18 11:46:56
zotonic/zotonic
https://api.github.com/repos/zotonic/zotonic
opened
Check if admin settings are saved on master
admin-ui defect
On master the admin settings of modules, like `mod_seo`, are not always saved due to binarification. Saving settings is usually implemented in a function named `save_settings` but uses lists to check for form values.
1.0
Check if admin settings are saved on master - On master the admin settings of modules, like `mod_seo`, are not always saved due to binarification. Saving settings is usually implemented in a function named `save_settings` but uses lists to check for form values.
defect
check if admin settings are saved on master on master the admin settings of modules like mod seo are not always saved due to binarification saving settings is usually implemented in a function named save settings but uses lists to check for form values
1
590,882
17,790,251,528
IssuesEvent
2021-08-31 15:24:49
OpenNebula/one
https://api.github.com/repos/OpenNebula/one
opened
Dynamically load providers in OneProvision
Type: Feature Status: Accepted Priority: Normal Category: Provision
**Description** Currently there is a dependency between OneProvision code and the providers that it supports. The idea is to limit this dependency as much as possible, so it is easier to add a new provider. **Use case** Make the process of adding a new provider easier. **Interface Changes** OneProvision code. <!--////////////////////////////////////////////--> <!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM --> <!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS --> <!-- PROGRESS WILL BE REFLECTED HERE --> <!--////////////////////////////////////////////--> ## Progress Status - [ ] Branch created - [ ] Code committed to development branch - [ ] Testing - QA - [ ] Documentation - [ ] Release notes - resolved issues, compatibility, known issues - [ ] Code committed to upstream release/hotfix branches - [ ] Documentation committed to upstream release/hotfix branches
1.0
Dynamically load providers in OneProvision - **Description** Currently there is a dependency between OneProvision code and the providers that it supports. The idea is to limit this dependency as much as possible, so it is easier to add a new provider. **Use case** Make the process of adding a new provider easier. **Interface Changes** OneProvision code. <!--////////////////////////////////////////////--> <!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM --> <!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS --> <!-- PROGRESS WILL BE REFLECTED HERE --> <!--////////////////////////////////////////////--> ## Progress Status - [ ] Branch created - [ ] Code committed to development branch - [ ] Testing - QA - [ ] Documentation - [ ] Release notes - resolved issues, compatibility, known issues - [ ] Code committed to upstream release/hotfix branches - [ ] Documentation committed to upstream release/hotfix branches
non_defect
dynamically load providers in oneprovision description currently there is a dependency between oneprovision code and the providers that it supports the idea is to limit this dependency as much as possible so it is easier to add a new provider use case make the process of adding a new provider easier interface changes oneprovision code progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches
0
614,981
19,210,893,594
IssuesEvent
2021-12-07 01:42:06
CERT-Polska/ursadb
https://api.github.com/repos/CERT-Polska/ursadb
closed
Create a benchmark suite
level:medium priority:medium status:up for grabs zone:performance
We need a way to test if DB is getting faster/slower with time. - [x] decide on the framework - [x] add benchmarks to test indexing performance (including for large and for small files - [x] add benchmarks to test query performance (including very large queries) - [ ] integrate with CI
1.0
Create a benchmark suite - We need a way to test if DB is getting faster/slower with time. - [x] decide on the framework - [x] add benchmarks to test indexing performance (including for large and for small files - [x] add benchmarks to test query performance (including very large queries) - [ ] integrate with CI
non_defect
create a benchmark suite we need a way to test if db is getting faster slower with time decide on the framework add benchmarks to test indexing performance including for large and for small files add benchmarks to test query performance including very large queries integrate with ci
0
105,245
4,233,107,557
IssuesEvent
2016-07-05 05:58:21
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
rmkit -f essl failed to remove the essl kit.
component:kit priority:normal status:pending type:bug xCAT 2.12.1 Sprint 2
My osimage is : lsdef -t osimage rh72diskfull1621 Object name: rh72diskfull1621 exlist=/install/osimages/rh72diskfull1621/kits/KIT_COMPONENTS.exlist imagetype=linux kitcomponents=essl-license-5.5.0-0-rhels-7.2-ppc64le,xlf.license-compute-15.1.3-0-rhels-7.2-ppc64le,essl-computenode-5.5.0-0-rhels-7.2-ppc64le,xlc.compiler-compute-13.1.3-0-rhels-7.2-ppc64le,xlf.rte-compute-15.1.3-0-rhels-7.2-ppc64le,xlc.license-compute-13.1.3-0-rhels-7.2-ppc64le,xlf.compiler-compute-15.1.3-0-rhels-7.2-ppc64le,xlc.rte-compute-13.1.3-0-rhels-7.2-ppc64le,essl-loginnode-5.5.0-0-rhels-7.2-ppc64le,pessl-loginnode-5.2.0-0-rhels-7.2-ppc64le,pessl-license-5.2.0-0-rhels-7.2-ppc64le,pessl-computenode-5.2.0-0-rhels-7.2-ppc64le osarch=ppc64le ... ------------------------------------------------------------------------------------------------------------------------ [root@c712ems4 test_nogpu]# rmkit -f essl Removing kit essl-5.5.0-0-ppc64le Error: Command failed: rmkitcomp. Error message: kitcomponent pperte-compute basename does not existRemoving kitcomponent essl-computenode-5.5.0-0-rhels-7.2-ppc64le from osimage rh72diskfull1621kitcomponents essl-computenode-5.5.0-0-rhels-7.2-ppc64le were removed from osimage rh72diskfull1621 successfully. Error: Failed to remove kit component essl-computenode-5.5.0-0-rhels-7.2-ppc64le from rh72diskfull1621
1.0
rmkit -f essl failed to remove the essl kit. - My osimage is : lsdef -t osimage rh72diskfull1621 Object name: rh72diskfull1621 exlist=/install/osimages/rh72diskfull1621/kits/KIT_COMPONENTS.exlist imagetype=linux kitcomponents=essl-license-5.5.0-0-rhels-7.2-ppc64le,xlf.license-compute-15.1.3-0-rhels-7.2-ppc64le,essl-computenode-5.5.0-0-rhels-7.2-ppc64le,xlc.compiler-compute-13.1.3-0-rhels-7.2-ppc64le,xlf.rte-compute-15.1.3-0-rhels-7.2-ppc64le,xlc.license-compute-13.1.3-0-rhels-7.2-ppc64le,xlf.compiler-compute-15.1.3-0-rhels-7.2-ppc64le,xlc.rte-compute-13.1.3-0-rhels-7.2-ppc64le,essl-loginnode-5.5.0-0-rhels-7.2-ppc64le,pessl-loginnode-5.2.0-0-rhels-7.2-ppc64le,pessl-license-5.2.0-0-rhels-7.2-ppc64le,pessl-computenode-5.2.0-0-rhels-7.2-ppc64le osarch=ppc64le ... ------------------------------------------------------------------------------------------------------------------------ [root@c712ems4 test_nogpu]# rmkit -f essl Removing kit essl-5.5.0-0-ppc64le Error: Command failed: rmkitcomp. Error message: kitcomponent pperte-compute basename does not existRemoving kitcomponent essl-computenode-5.5.0-0-rhels-7.2-ppc64le from osimage rh72diskfull1621kitcomponents essl-computenode-5.5.0-0-rhels-7.2-ppc64le were removed from osimage rh72diskfull1621 successfully. Error: Failed to remove kit component essl-computenode-5.5.0-0-rhels-7.2-ppc64le from rh72diskfull1621
non_defect
rmkit f essl failed to remove the essl kit my osimage is lsdef t osimage object name exlist install osimages kits kit components exlist imagetype linux kitcomponents essl license rhels xlf license compute rhels essl computenode rhels xlc compiler compute rhels xlf rte compute rhels xlc license compute rhels xlf compiler compute rhels xlc rte compute rhels essl loginnode rhels pessl loginnode rhels pessl license rhels pessl computenode rhels osarch rmkit f essl removing kit essl error command failed rmkitcomp error message kitcomponent pperte compute basename does not existremoving kitcomponent essl computenode rhels from osimage essl computenode rhels were removed from osimage successfully error failed to remove kit component essl computenode rhels from
0
224,124
7,466,877,193
IssuesEvent
2018-04-02 13:03:52
California-Planet-Search/radvel
https://api.github.com/repos/California-Planet-Search/radvel
closed
Initial positions of MCMC walkers have incorrect scales
bug priority:high
In version 1.1.7, when the radvel.mcmc function determines initial positions of walkers it first assesses "pscales" on lines 145-162 of the mcmc.py file. Then a deep copy of the posterior object is created in line 167. However, this deep copy does not necessarily preserve the order of parameters (nor is it meant to, since dictionaries are not ordered). With a new ordering of parameters the pscales no longer match up with the correct parameters and the distribution of initial walker positions in each dimension can be much larger or smaller than intended.
1.0
Initial positions of MCMC walkers have incorrect scales - In version 1.1.7, when the radvel.mcmc function determines initial positions of walkers it first assesses "pscales" on lines 145-162 of the mcmc.py file. Then a deep copy of the posterior object is created in line 167. However, this deep copy does not necessarily preserve the order of parameters (nor is it meant to, since dictionaries are not ordered). With a new ordering of parameters the pscales no longer match up with the correct parameters and the distribution of initial walker positions in each dimension can be much larger or smaller than intended.
non_defect
initial positions of mcmc walkers have incorrect scales in version when the radvel mcmc function determines initial positions of walkers it first assesses pscales on lines of the mcmc py file then a deep copy of the posterior object is created in line however this deep copy does not necessarily preserve the order of parameters nor is it meant to since dictionaries are not ordered with a new ordering of parameters the pscales no longer match up with the correct parameters and the distribution of initial walker positions in each dimension can be much larger or smaller than intended
0
53,071
7,805,948,149
IssuesEvent
2018-06-11 12:39:43
kubernetes/kubeadm
https://api.github.com/repos/kubernetes/kubeadm
closed
document the latest config under the kubeadm-init website page
documentation/out-of-date kind/documentation priority/important-soon
document the latest config we have here: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/ as discussed on the meeting: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit# > kubeadm config: including godocs links vs inlining static configs vs generating configs in the future: https://github.com/kubernetes/kubeadm/issues/842 [Lubomir] What should we do with configs? link out? generate them? Current idea is to encode current version (v1alpha2) and link to the godoc. [Tim] We should have a separate page that links out to all the detail of the config object. Mirroring content is bad. If docs link to godoc, then it forces devs to make sure the docs are up to date. [Jennifer] We should branch for now. But relying on godocs is completely different than how the rest of the docs website works. Could bring it up to sig-docs. Make a proposal and present to sig-docs if we want to change how we reference code. [Action Item] Generate data into docs and figure out a long term solution later. [Tim] There has to be a deprecation policy on the old config summary: we can't do more for 1.11 other than including the latest output from `kubeadm config print-default`, mentioning the command and linking to godocs where we have comments for the fields. @timothysc @Bradamant3
2.0
document the latest config under the kubeadm-init website page - document the latest config we have here: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/ as discussed on the meeting: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit# > kubeadm config: including godocs links vs inlining static configs vs generating configs in the future: https://github.com/kubernetes/kubeadm/issues/842 [Lubomir] What should we do with configs? link out? generate them? Current idea is to encode current version (v1alpha2) and link to the godoc. [Tim] We should have a separate page that links out to all the detail of the config object. Mirroring content is bad. If docs link to godoc, then it forces devs to make sure the docs are up to date. [Jennifer] We should branch for now. But relying on godocs is completely different than how the rest of the docs website works. Could bring it up to sig-docs. Make a proposal and present to sig-docs if we want to change how we reference code. [Action Item] Generate data into docs and figure out a long term solution later. [Tim] There has to be a deprecation policy on the old config summary: we can't do more for 1.11 other than including the latest output from `kubeadm config print-default`, mentioning the command and linking to godocs where we have comments for the fields. @timothysc @Bradamant3
non_defect
document the latest config under the kubeadm init website page document the latest config we have here as discussed on the meeting kubeadm config including godocs links vs inlining static configs vs generating configs in the future what should we do with configs link out generate them current idea is to encode current version and link to the godoc we should have a separate page that links out to all the detail of the config object mirroring content is bad if docs link to godoc then it forces devs to make sure the docs are up to date we should branch for now but relying on godocs is completely different than how the rest of the docs website works could bring it up to sig docs make a proposal and present to sig docs if we want to change how we reference code generate data into docs and figure out a long term solution later there has to be a deprecation policy on the old config summary we can t do more for other than including the latest output from kubeadm config print default mentioning the command and linking to godocs where we have comments for the fields timothysc
0
98,596
4,028,745,608
IssuesEvent
2016-05-18 07:56:30
djoproject/pyshell
https://api.github.com/repos/djoproject/pyshell
opened
LOADER/What about the addon hard reload
bug Middle priority NOT SOLVED
### Description The hard reload consist to unload an addon and reload it from file. If the addon file has been updated, the modifications have to be loaded. Pretty sure it does not work, check it, and fix it if needed
1.0
LOADER/What about the addon hard reload - ### Description The hard reload consist to unload an addon and reload it from file. If the addon file has been updated, the modifications have to be loaded. Pretty sure it does not work, check it, and fix it if needed
non_defect
loader what about the addon hard reload description the hard reload consist to unload an addon and reload it from file if the addon file has been updated the modifications have to be loaded pretty sure it does not work check it and fix it if needed
0