| Column | Dtype | Range / Values |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 - 832k |
| id | float64 | 2.49B - 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 - 19 |
| repo | stringlengths | 5 - 112 |
| repo_url | stringlengths | 34 - 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 - 757 |
| labels | stringlengths | 4 - 664 |
| body | stringlengths | 3 - 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 - 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 - 232k |
| binary_label | int64 | 0 - 1 |
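A schema summary of this shape (dtype plus value/length ranges per column) can be reproduced with pandas. The sketch below is not the dataset's own tooling; it builds a tiny stand-in DataFrame reusing a few of the column names above, so the printed ranges are illustrative only (the real ranges, e.g. `body` up to 261k characters, come from the full 832k-row dataset).

```python
import pandas as pd

# Toy stand-in rows reusing column names from the schema above.
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "action": ["closed", "opened"],
    "label": ["non_defect", "defect"],
    "binary_label": [0, 1],
    "body": ["no tests == no coverage", "configure fails"],
})

# One row per column: pandas dtype, distinct-value count, and
# (for string columns) the min/max string length.
summary = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "n_unique": df.nunique(),
})
str_cols = df.select_dtypes(include="object").columns
summary.loc[str_cols, "min_len"] = df[str_cols].apply(lambda s: s.str.len().min())
summary.loc[str_cols, "max_len"] = df[str_cols].apply(lambda s: s.str.len().max())
print(summary)
```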
Unnamed: 0: 37,250
id: 15,218,514,108
type: IssuesEvent
created_at: 2021-02-17 17:55:52
repo: angular/angular
repo_url: https://api.github.com/repos/angular/angular
action: closed
title: Generics do not correctly infer in template pipe usage
labels: P3 comp: language-service fixed by Ivy freq3: high type: bug/fix
body: ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> </code></pre> ## Current behavior Please take a look at the minimal example below: ```typescript @Pipe({name: 'map'}) export class MapPipe implements PipeTransform { transform<T, R>(value: T[], func: (value: T) => R): R[] { return value ? value.map(func) : []; } } // the component interface InData { foo: string; } interface OutData { bar: string; } @Component({ selector: 'app-test', template: ` <span *ngFor="let item of items | map: _mapper"> {{item.bar}} </span> `, }) export class TestComponent { @Input() items: InData[]; _mapper(item: InData): OutData { return { bar: item.foo, }; } } ``` Angular language service (v5.0.2) gives me an error: ```Error:(7, 9) Angular: Identifier 'bar' is not defined. 'R' does not contain such a member``` ## Expected behavior Should infer the type as TypeScript would. From https://github.com/angular/vscode-ng-language-service/issues/196
index: 1.0
text_combine: Generics do not correctly infer in template pipe usage - ## I'm submitting a... <!-- Check one of the following options with "x" --> <pre><code> [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> </code></pre> ## Current behavior Please take a look at the minimal example below: ```typescript @Pipe({name: 'map'}) export class MapPipe implements PipeTransform { transform<T, R>(value: T[], func: (value: T) => R): R[] { return value ? value.map(func) : []; } } // the component interface InData { foo: string; } interface OutData { bar: string; } @Component({ selector: 'app-test', template: ` <span *ngFor="let item of items | map: _mapper"> {{item.bar}} </span> `, }) export class TestComponent { @Input() items: InData[]; _mapper(item: InData): OutData { return { bar: item.foo, }; } } ``` Angular language service (v5.0.2) gives me an error: ```Error:(7, 9) Angular: Identifier 'bar' is not defined. 'R' does not contain such a member``` ## Expected behavior Should infer the type as TypeScript would. From https://github.com/angular/vscode-ng-language-service/issues/196
label: non_defect
text: generics do not correctly infer in template pipe usage i m submitting a bug report current behavior please take a look at the minimal example below typescript pipe name map export class mappipe implements pipetransform transform value t func value t r r return value value map func the component interface indata foo string interface outdata bar string component selector app test template item bar export class testcomponent input items indata mapper item indata outdata return bar item foo angular language service gives me an error error angular identifier bar is not defined r does not contain such a member expected behavior should infer the type as typescript would from
binary_label: 0
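Comparing `text_combine` with `text` in the record above suggests the `text` column is a cleaned version of `text_combine`: lowercased, with URLs, markup, punctuation, and digits stripped, and whitespace collapsed. The exact pipeline is not shown in this dump; the function below is a plausible sketch that reproduces the visible behavior (e.g. "I've" becoming "i ve", URLs disappearing), not the dataset's actual preprocessing code.

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the text_combine -> text cleaning seen in the records."""
    t = text_combine.lower()
    t = re.sub(r"https?://\S+", " ", t)   # drop URLs
    t = re.sub(r"<[^>]+>", " ", t)        # drop HTML/comment tags
    t = re.sub(r"[^a-z\s]", " ", t)       # keep letters only
    return re.sub(r"\s+", " ", t).strip()

print(normalize("I've seen https://x.y/z fail"))  # -> i ve seen fail
```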
Unnamed: 0: 47,867
id: 13,066,300,345
type: IssuesEvent
created_at: 2020-07-30 21:24:29
repo: icecube-trac/tix2
repo_url: https://api.github.com/repos/icecube-trac/tix2
action: closed
title: CascadeVariables - missing coverage (Trac #1308)
labels: Migrated from Trac combo reconstruction defect
body: no tests == no coverage Migrated from https://code.icecube.wisc.edu/ticket/1308 ```json { "status": "closed", "changetime": "2019-02-13T14:14:55", "description": "no tests == no coverage", "reporter": "nega", "cc": "", "resolution": "wontfix", "_ts": "1550067295757382", "component": "combo reconstruction", "summary": "CascadeVariables - missing coverage", "priority": "normal", "keywords": "coverage", "time": "2015-08-28T23:25:37", "milestone": "", "owner": "markw04", "type": "defect" } ```
index: 1.0
text_combine: CascadeVariables - missing coverage (Trac #1308) - no tests == no coverage Migrated from https://code.icecube.wisc.edu/ticket/1308 ```json { "status": "closed", "changetime": "2019-02-13T14:14:55", "description": "no tests == no coverage", "reporter": "nega", "cc": "", "resolution": "wontfix", "_ts": "1550067295757382", "component": "combo reconstruction", "summary": "CascadeVariables - missing coverage", "priority": "normal", "keywords": "coverage", "time": "2015-08-28T23:25:37", "milestone": "", "owner": "markw04", "type": "defect" } ```
label: defect
text: cascadevariables missing coverage trac no tests no coverage migrated from json status closed changetime description no tests no coverage reporter nega cc resolution wontfix ts component combo reconstruction summary cascadevariables missing coverage priority normal keywords coverage time milestone owner type defect
binary_label: 1
Unnamed: 0: 62,011
id: 17,023,831,638
type: IssuesEvent
created_at: 2021-07-03 04:04:39
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: Toolbox "sticks" to background, moves out of window when panning
labels: Component: potlatch2 Priority: minor Resolution: fixed Type: defect
body: **[Submitted to the original trac issue database at 6.50am, Tuesday, 16th October 2012]** I've only had this problem over the last few hours, but the toolbox seems to "stick" to the background. When entering the "Edit" function and using Bing as a background, dragging the map down (or South) moves everything but the toolbox. It seems to "ride" the map straight out of the window. I believe this is related to: https://trac.openstreetmap.org/ticket/4626.
index: 1.0
text_combine: Toolbox "sticks" to background, moves out of window when panning - **[Submitted to the original trac issue database at 6.50am, Tuesday, 16th October 2012]** I've only had this problem over the last few hours, but the toolbox seems to "stick" to the background. When entering the "Edit" function and using Bing as a background, dragging the map down (or South) moves everything but the toolbox. It seems to "ride" the map straight out of the window. I believe this is related to: https://trac.openstreetmap.org/ticket/4626.
label: defect
text: toolbox sticks to background moves out of window when panning i ve only had this problem over the last few hours but the toolbox seems to stick to the background when entering the edit function and using bing as a background dragging the map down or south moves everything but the toolbox it seems to ride the map straight out of the window i believe this is related to
binary_label: 1
Unnamed: 0: 52,794
id: 13,225,071,300
type: IssuesEvent
created_at: 2020-08-17 20:25:51
repo: icecube-trac/tix4
repo_url: https://api.github.com/repos/icecube-trac/tix4
action: closed
title: bad description of 6599 (Trac #351)
labels: Migrated from Trac combo simulation defect
body: The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/351">https://code.icecube.wisc.edu/projects/icecube/ticket/351</a>, reported by icecubeand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2012-02-15T23:09:40", "_ts": "1329347380000000", "description": "The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. ", "reporter": "icecube", "cc": "", "resolution": "invalid", "time": "2012-02-03T20:28:34", "component": "combo simulation", "summary": "bad description of 6599", "priority": "minor", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
index: 1.0
text_combine: bad description of 6599 (Trac #351) - The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/351">https://code.icecube.wisc.edu/projects/icecube/ticket/351</a>, reported by icecubeand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2012-02-15T23:09:40", "_ts": "1329347380000000", "description": "The energy range on the SimpProd webpage, http://internal.icecube.wisc.edu/simulation/dataset/6599. The energy range is listed as 10^3.0^ - 10^9^, but is actually 10^1.0^ - 10^9^. ", "reporter": "icecube", "cc": "", "resolution": "invalid", "time": "2012-02-03T20:28:34", "component": "combo simulation", "summary": "bad description of 6599", "priority": "minor", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
label: defect
text: bad description of trac the energy range on the simpprod webpage the energy range is listed as but is actually migrated from json status closed changetime ts description the energy range on the simpprod webpage the energy range is listed as but is actually reporter icecube cc resolution invalid time component combo simulation summary bad description of priority minor keywords milestone owner olivas type defect
binary_label: 1
Unnamed: 0: 6,524
id: 9,612,280,912
type: IssuesEvent
created_at: 2019-05-13 08:32:51
repo: DevExpress/testcafe-hammerhead
repo_url: https://api.github.com/repos/DevExpress/testcafe-hammerhead
action: closed
title: in IE11, my content="IE=edge" not working.
labels: AREA: server SYSTEM: resource processing TYPE: bug
body: <!-- If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below. Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed. Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours). --> ### What is your Test Scenario? the testcafe add their tags before my '<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />', so IE 11 can not run as IE 11 mode. ### What is the Current behavior? the page rendered as IE 7 in IE 11. ### What is the Expected behavior? the page rendered as IE 11 in IE 11. ### What is your web application and your TestCafe test code? my html: <!DOCTYPE html> <html> <head> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" /> <meta charset="utf-8" /> <title></title> <meta name="viewport" content="width=device-width, initial-scale=1" /> ... ... testcafe code: import { Selector } from 'testcafe'; fixture `New Fixture` .page `http://127.0.0.1:8080/framework`; test('New Test', async t => { await t .typeText(Selector('#username'), 'admin') .pressKey('tab') .typeText(Selector('#password'), 'admin') .pressKey('enter'); }); Your website URL (or attach your complete example): <details> <summary>Your complete test code (or attach your test files):</summary> <!-- Paste your test code here: --> ```js ``` </details> <details> <summary>Your complete test report:</summary> <!-- Paste your complete result test report here (even if it is huge): --> ``` ``` </details> <details> <summary>Screenshots:</summary> <!-- If applicable, add screenshots to help explain the issue. --> ``` ``` </details> ### Steps to Reproduce: <!-- Describe what we should do to reproduce the behavior you encountered. --> 1. Go to my website ... 3. Execute this command... 4. See the error... ### Your Environment details: * testcafe version: <!-- run `testcafe -v` --> * node.js version: <!-- run `node -v` --> * command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" --> * browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. --> * platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 --> * other: <!-- any notes you consider important -->
index: 1.0
text_combine: in IE11, my content="IE=edge" not working. - <!-- If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below. Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed. Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours). --> ### What is your Test Scenario? the testcafe add their tags before my '<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />', so IE 11 can not run as IE 11 mode. ### What is the Current behavior? the page rendered as IE 7 in IE 11. ### What is the Expected behavior? the page rendered as IE 11 in IE 11. ### What is your web application and your TestCafe test code? my html: <!DOCTYPE html> <html> <head> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" /> <meta charset="utf-8" /> <title></title> <meta name="viewport" content="width=device-width, initial-scale=1" /> ... ... testcafe code: import { Selector } from 'testcafe'; fixture `New Fixture` .page `http://127.0.0.1:8080/framework`; test('New Test', async t => { await t .typeText(Selector('#username'), 'admin') .pressKey('tab') .typeText(Selector('#password'), 'admin') .pressKey('enter'); }); Your website URL (or attach your complete example): <details> <summary>Your complete test code (or attach your test files):</summary> <!-- Paste your test code here: --> ```js ``` </details> <details> <summary>Your complete test report:</summary> <!-- Paste your complete result test report here (even if it is huge): --> ``` ``` </details> <details> <summary>Screenshots:</summary> <!-- If applicable, add screenshots to help explain the issue. --> ``` ``` </details> ### Steps to Reproduce: <!-- Describe what we should do to reproduce the behavior you encountered. --> 1. Go to my website ... 3. Execute this command... 4. See the error... ### Your Environment details: * testcafe version: <!-- run `testcafe -v` --> * node.js version: <!-- run `node -v` --> * command-line arguments: <!-- example: "testcafe ie,chrome -e test.js" --> * browser name and version: <!-- example: IE 11, Chrome 69, Firefox 100, etc. --> * platform and version: <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 --> * other: <!-- any notes you consider important -->
label: non_defect
text: in my content ie edge not working if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository  in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario the testcafe add their tags before my so ie can not run as ie mode what is the current behavior the page rendered as ie in ie what is the expected behavior the page rendered as ie in ie what is your web application and your testcafe test code my html testcafe code import selector from testcafe fixture new fixture page test new test async t await t typetext selector username admin presskey tab typetext selector password admin presskey enter your website url or attach your complete example your complete test code or attach your test files js your complete test report screenshots steps to reproduce go to my website execute this command see the error your environment details testcafe version node js version command line arguments browser name and version platform and version other
binary_label: 0
Unnamed: 0: 68,720
id: 21,795,506,122
type: IssuesEvent
created_at: 2022-05-15 15:13:03
repo: openzfs/zfs
repo_url: https://api.github.com/repos/openzfs/zfs
action: opened
title: [5.18 comnp5.18-rc6 won't configure build
labels: Type: Defect
body: ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Arch Distribution Version | Rolling Kernel Version | linux-mainline-rc6-1 Architecture | x86-64 OpenZFS Version | zfs-2.1.99-1202_gde82164518 ### Describe the problem you're observing `configure` fails for kernel modules with the following error message: ``` *** None of the expected "blk_queue_discard" interfaces were detected. *** This may be because your kernel version is newer than what is *** supported, or you are using a patched custom kernel with *** incompatible modifications. *** *** ZFS Version: zfs-2.1.99-1202_gde82164518 *** Compatible Kernels: 3.10 - 5.17 ``` ### Describe how to reproduce the problem Using the Arch Linux AUR for zfs-dkms-git and linux-mainline, which presently builds a 5.18-rc6 kernel, the current ZFS won't configure ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
index: 1.0
text_combine: [5.18 comnp5.18-rc6 won't configure build - ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Arch Distribution Version | Rolling Kernel Version | linux-mainline-rc6-1 Architecture | x86-64 OpenZFS Version | zfs-2.1.99-1202_gde82164518 ### Describe the problem you're observing `configure` fails for kernel modules with the following error message: ``` *** None of the expected "blk_queue_discard" interfaces were detected. *** This may be because your kernel version is newer than what is *** supported, or you are using a patched custom kernel with *** incompatible modifications. *** *** ZFS Version: zfs-2.1.99-1202_gde82164518 *** Compatible Kernels: 3.10 - 5.17 ``` ### Describe how to reproduce the problem Using the Arch Linux AUR for zfs-dkms-git and linux-mainline, which presently builds a 5.18-rc6 kernel, the current ZFS won't configure ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
label: defect
text: won t configure build system information type version name distribution name arch distribution version rolling kernel version linux mainline architecture openzfs version zfs describe the problem you re observing configure fails for kernel modules with the following error message none of the expected blk queue discard interfaces were detected this may be because your kernel version is newer than what is supported or you are using a patched custom kernel with incompatible modifications zfs version zfs compatible kernels describe how to reproduce the problem using the arch linux aur for zfs dkms git and linux mainline which presently builds a kernel the current zfs won t configure include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
binary_label: 1
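Across all the records shown, `binary_label` is consistently the numeric encoding of the `label` class: `non_defect` maps to 0 and `defect` maps to 1. The snippet below is a minimal sketch of that mapping (the dataset's own derivation code is not part of this dump):

```python
# Observed in every record above: non_defect -> 0, defect -> 1.
LABEL_TO_BINARY = {"non_defect": 0, "defect": 1}

def to_binary_label(label: str) -> int:
    """Map the string class onto the 0/1 target column."""
    return LABEL_TO_BINARY[label]

print(to_binary_label("defect"))      # -> 1
print(to_binary_label("non_defect"))  # -> 0
```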
Unnamed: 0: 69,990
id: 22,778,659,910
type: IssuesEvent
created_at: 2022-07-08 17:01:09
repo: SeleniumHQ/selenium
repo_url: https://api.github.com/repos/SeleniumHQ/selenium
action: opened
title: [🐛 Bug]: Selenium 4 integration with Sauce Labs
labels: I-defect needs-triaging
body: ### What happened? Hi, I am trying to run the test scripts developed using Selenium 4 in sauce labs but getting "Exception in thread "main" org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure." I took the sample scripts from official documentation from sauce labs and passed userid, accesskey and proxy but still getting error Pls help me to get resolve this issue. Thanks ### How can we reproduce the issue? ```shell public class RunMethod { public static void main(String[] args) throws MalformedURLException { String sauce_userid ="TestUser"; String sauce_accesskey="TestPassword"; // String URL = "http://" + sauce_userid + ":" + sauce_accesskey+"@ondemand.saucelabs.com:80/wd/hub"; Proxy proxy = new Proxy(); ChromeOptions options = new ChromeOptions(); options.setPlatformName("Windows 10"); options.setBrowserVersion("latest"); proxy.setHttpProxy("TestProxy:8080"); options.setCapability("proxy", proxy); Map<String, Object> sauceOptions = new HashMap<String, Object>(); sauceOptions.put("username", sauce_userid); sauceOptions.put("accessKey", sauce_accesskey); sauceOptions.put("Test name", "Selenium 4 Integration with sauce labs"); options.setCapability("sauce:options", sauceOptions); URL url = new URL("http://ondemand.saucelabs.com:80/wd/hub"); WebDriver driver = new RemoteWebDriver(url, options); driver.get("https://www.google.com"); System.out.println(driver.getTitle()); } } ``` ### Relevant log output ```shell Jul 08, 2022 12:51:18 PM org.openqa.selenium.remote.tracing.opentelemetry.OpenTelemetryTracer createTracer INFO: Using OpenTelemetry for tracing SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Exception in thread "main" org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure. Build info: version: '4.1.4', revision: '535d840ee2' System info: host: '20LJ-MP1GUJAX', ip: '10.203.214.221', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_172' Driver info: org.openqa.selenium.remote.RemoteWebDriver Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, browserVersion: latest, goog:chromeOptions: {args: [], extensions: []}, platformName: windows 10, proxy: {httpProxy: ch3proxy.accounts.root.corp..., proxyType: manual}, sauce:options: {Test name: Selenium 4 Integration with..., accessKey: e13d6fe4-bd73-4eaf-99d3-6ec..., username: sso-accounts-RAGHAVA.BHOJEG...}}], desiredCapabilities=Capabilities {browserName: chrome, browserVersion: latest, goog:chromeOptions: {args: [], extensions: []}, platformName: Windows 10, proxy: Proxy(manual, http=ch3proxy..., sauce:options: {Test name: Selenium 4 Integration with..., accessKey: e13d6fe4-bd73-4eaf-99d3-6ec..., username: sso-accounts-RAGHAVA.BHOJEG...}}}] Capabilities {} at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:585) at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:248) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:164) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:146) at SampleSauceLabsCode.SampleSauceLabsCode.RunMethod.main(RunMethod.java:43) Caused by: java.io.UncheckedIOException: java.io.IOException: Stream closed at org.openqa.selenium.remote.http.netty.NettyHttpHandler.makeCall(NettyHttpHandler.java:73) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyHttpHandler.execute(NettyHttpHandler.java:49) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyClient.execute(NettyClient.java:97) at org.openqa.selenium.remote.tracing.TracedHttpClient.execute(TracedHttpClient.java:55) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:102) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:84) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:62) at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:156) at org.openqa.selenium.remote.TracedCommandExecutor.execute(TracedCommandExecutor.java:51) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:567) ... 4 more Caused by: java.io.IOException: Stream closed at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170) at java.io.BufferedInputStream.reset(BufferedInputStream.java:446) at org.asynchttpclient.netty.request.body.NettyInputStreamBody.write(NettyInputStreamBody.java:61) at org.asynchttpclient.netty.request.NettyRequestSender.writeRequest(NettyRequestSender.java:433) at org.asynchttpclient.netty.channel.NettyConnectListener.writeRequest(NettyConnectListener.java:80) at org.asynchttpclient.netty.channel.NettyConnectListener.onSuccess(NettyConnectListener.java:156) at org.asynchttpclient.netty.channel.NettyChannelConnector$1.onSuccess(NettyChannelConnector.java:92) at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:26) at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:300) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Picked up JAVA_TOOL_OPTIONS: -Djava.vendor="Sun Microsystems Inc" ``` ### Operating System Windows 10 ### Selenium version latest ### What are the browser(s) and version(s) where you see this issue? latest ### What are the browser driver(s) and version(s) where you see this issue? RemoteWebDriver ### Are you using Selenium Grid? _No response_
index: 1.0
text_combine: [🐛 Bug]: Selenium 4 integration with Sauce Labs - ### What happened? Hi, I am trying to run the test scripts developed using Selenium 4 in sauce labs but getting "Exception in thread "main" org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure." I took the sample scripts from official documentation from sauce labs and passed userid, accesskey and proxy but still getting error Pls help me to get resolve this issue. Thanks ### How can we reproduce the issue? ```shell public class RunMethod { public static void main(String[] args) throws MalformedURLException { String sauce_userid ="TestUser"; String sauce_accesskey="TestPassword"; // String URL = "http://" + sauce_userid + ":" + sauce_accesskey+"@ondemand.saucelabs.com:80/wd/hub"; Proxy proxy = new Proxy(); ChromeOptions options = new ChromeOptions(); options.setPlatformName("Windows 10"); options.setBrowserVersion("latest"); proxy.setHttpProxy("TestProxy:8080"); options.setCapability("proxy", proxy); Map<String, Object> sauceOptions = new HashMap<String, Object>(); sauceOptions.put("username", sauce_userid); sauceOptions.put("accessKey", sauce_accesskey); sauceOptions.put("Test name", "Selenium 4 Integration with sauce labs"); options.setCapability("sauce:options", sauceOptions); URL url = new URL("http://ondemand.saucelabs.com:80/wd/hub"); WebDriver driver = new RemoteWebDriver(url, options); driver.get("https://www.google.com"); System.out.println(driver.getTitle()); } } ``` ### Relevant log output ```shell Jul 08, 2022 12:51:18 PM org.openqa.selenium.remote.tracing.opentelemetry.OpenTelemetryTracer createTracer INFO: Using OpenTelemetry for tracing SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Exception in thread "main" org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure. Build info: version: '4.1.4', revision: '535d840ee2' System info: host: '20LJ-MP1GUJAX', ip: '10.203.214.221', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_172' Driver info: org.openqa.selenium.remote.RemoteWebDriver Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, browserVersion: latest, goog:chromeOptions: {args: [], extensions: []}, platformName: windows 10, proxy: {httpProxy: ch3proxy.accounts.root.corp..., proxyType: manual}, sauce:options: {Test name: Selenium 4 Integration with..., accessKey: e13d6fe4-bd73-4eaf-99d3-6ec..., username: sso-accounts-RAGHAVA.BHOJEG...}}], desiredCapabilities=Capabilities {browserName: chrome, browserVersion: latest, goog:chromeOptions: {args: [], extensions: []}, platformName: Windows 10, proxy: Proxy(manual, http=ch3proxy..., sauce:options: {Test name: Selenium 4 Integration with..., accessKey: e13d6fe4-bd73-4eaf-99d3-6ec..., username: sso-accounts-RAGHAVA.BHOJEG...}}}] Capabilities {} at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:585) at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:248) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:164) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:146) at SampleSauceLabsCode.SampleSauceLabsCode.RunMethod.main(RunMethod.java:43) Caused by: java.io.UncheckedIOException: java.io.IOException: Stream closed at org.openqa.selenium.remote.http.netty.NettyHttpHandler.makeCall(NettyHttpHandler.java:73) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyHttpHandler.execute(NettyHttpHandler.java:49) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyClient.execute(NettyClient.java:97) at org.openqa.selenium.remote.tracing.TracedHttpClient.execute(TracedHttpClient.java:55) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:102) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:84) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:62) at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:156) at org.openqa.selenium.remote.TracedCommandExecutor.execute(TracedCommandExecutor.java:51) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:567) ... 4 more Caused by: java.io.IOException: Stream closed at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170) at java.io.BufferedInputStream.reset(BufferedInputStream.java:446) at org.asynchttpclient.netty.request.body.NettyInputStreamBody.write(NettyInputStreamBody.java:61) at org.asynchttpclient.netty.request.NettyRequestSender.writeRequest(NettyRequestSender.java:433) at org.asynchttpclient.netty.channel.NettyConnectListener.writeRequest(NettyConnectListener.java:80) at org.asynchttpclient.netty.channel.NettyConnectListener.onSuccess(NettyConnectListener.java:156) at org.asynchttpclient.netty.channel.NettyChannelConnector$1.onSuccess(NettyChannelConnector.java:92) at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:26) at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20) at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:300) at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:748) Picked up JAVA_TOOL_OPTIONS: -Djava.vendor="Sun Microsystems Inc" ``` ### Operating System Windows 10 ### Selenium version latest ### What are the browser(s) and version(s) where you see this issue? latest ### What are the browser driver(s) and version(s) where you see this issue? RemoteWebDriver ### Are you using Selenium Grid? _No response_
label: defect
text: selenium integration with sauce labs what happened hi i am trying to run the test scripts developed using selenium in sauce labs but getting exception in thread main org openqa selenium sessionnotcreatedexception could not start a new session possible causes are invalid address of the remote server or browser start up failure i took the sample scripts from official documentation from sauce labs and passed userid accesskey and proxy but still getting error pls help me to get resolve this issue thanks how can we reproduce the issue shell public class runmethod public static void main string args throws malformedurlexception string sauce userid testuser string sauce accesskey testpassword string url sauce userid sauce accesskey ondemand saucelabs com wd hub proxy proxy new proxy chromeoptions options new chromeoptions options setplatformname windows options setbrowserversion latest proxy sethttpproxy testproxy options setcapability proxy proxy map sauceoptions new hashmap sauceoptions put username sauce userid sauceoptions put accesskey sauce accesskey sauceoptions put test name selenium integration with sauce labs options setcapability sauce options sauceoptions url url new url webdriver driver new remotewebdriver url options driver get system out println driver gettitle relevant log output shell jul pm org openqa selenium remote tracing opentelemetry opentelemetrytracer createtracer info using opentelemetry for tracing failed to load class org impl staticloggerbinder defaulting to no operation nop logger implementation see for further details exception in thread main org openqa selenium sessionnotcreatedexception could not start a new session possible causes are invalid address of the remote server or browser start up failure build info version revision system info host ip os name windows os arch os version java version driver info org openqa selenium remote remotewebdriver command extensions platformname windows proxy httpproxy accounts root corp proxytype manual
sauce options test name selenium integration with accesskey username sso accounts raghava bhojeg desiredcapabilities capabilities browsername chrome browserversion latest goog chromeoptions args extensions platformname windows proxy proxy manual http sauce options test name selenium integration with accesskey username sso accounts raghava bhojeg capabilities at org openqa selenium remote remotewebdriver execute remotewebdriver java at org openqa selenium remote remotewebdriver startsession remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at samplesaucelabscode samplesaucelabscode runmethod main runmethod java caused by java io uncheckedioexception java io ioexception stream closed at org openqa selenium remote http netty nettyhttphandler makecall nettyhttphandler java at org openqa selenium remote http addseleniumuseragent lambda apply addseleniumuseragent java at org openqa selenium remote http filter lambda andfinally filter java at org openqa selenium remote http netty nettyhttphandler execute nettyhttphandler java at org openqa selenium remote http addseleniumuseragent lambda apply addseleniumuseragent java at org openqa selenium remote http filter lambda andfinally filter java at org openqa selenium remote http netty nettyclient execute nettyclient java at org openqa selenium remote tracing tracedhttpclient execute tracedhttpclient java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote httpcommandexecutor execute httpcommandexecutor java at org openqa selenium remote tracedcommandexecutor execute tracedcommandexecutor java at org openqa selenium remote remotewebdriver execute remotewebdriver java more caused by java io 
ioexception stream closed at java io bufferedinputstream getbufifopen bufferedinputstream java at java io bufferedinputstream reset bufferedinputstream java at org asynchttpclient netty request body nettyinputstreambody write nettyinputstreambody java at org asynchttpclient netty request nettyrequestsender writerequest nettyrequestsender java at org asynchttpclient netty channel nettyconnectlistener writerequest nettyconnectlistener java at org asynchttpclient netty channel nettyconnectlistener onsuccess nettyconnectlistener java at org asynchttpclient netty channel nettychannelconnector onsuccess nettychannelconnector java at org asynchttpclient netty simplechannelfuturelistener operationcomplete simplechannelfuturelistener java at org asynchttpclient netty simplechannelfuturelistener operationcomplete simplechannelfuturelistener java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise notifylistenersnow defaultpromise java at io netty util concurrent defaultpromise notifylisteners defaultpromise java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise trysuccess defaultpromise java at io netty channel defaultchannelpromise trysuccess defaultchannelpromise java at io netty channel nio abstractniochannel abstractniounsafe fulfillconnectpromise abstractniochannel java at io netty channel nio abstractniochannel abstractniounsafe finishconnect abstractniochannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor 
java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java picked up java tool options djava vendor sun microsystems inc operating system windows selenium version latest what are the browser s and version s where you see this issue latest what are the browser driver s and version s where you see this issue remotewebdriver are you using selenium grid no response
1
9,998
2,616,018,861
IssuesEvent
2015-03-02 01:00:24
jasonhall/bwapi
https://api.github.com/repos/jasonhall/bwapi
closed
getUnitsInWeaponRange not working properly
auto-migrated Component-Logic Milestone-MajorRelease Priority-Medium Type-Defect Usability
``` Using BWAPI 3.7.4 (r4160). When I try to get the units in weapon (Arclite Shock Cannon) range the output is always 0. unit->getUnitsInWeaponRange(WeaponTypes::Arclite_Shock_Cannon); But getUnitsInRadius() is working properly. unit->getUnitsInRadius(384); // 384 is the max range of Arclite_Shock_Cannon ``` Original issue reported on code.google.com by `warwol...@gmail.com` on 12 Aug 2012 at 8:14
1.0
getUnitsInWeaponRange not working properly - ``` Using BWAPI 3.7.4 (r4160). When I try to get the units in weapon (Arclite Shock Cannon) range the output is always 0. unit->getUnitsInWeaponRange(WeaponTypes::Arclite_Shock_Cannon); But getUnitsInRadius() is working properly. unit->getUnitsInRadius(384); // 384 is the max range of Arclite_Shock_Cannon ``` Original issue reported on code.google.com by `warwol...@gmail.com` on 12 Aug 2012 at 8:14
defect
getunitsinweaponrange not working properly using bwapi when i try to get the units in weapon arclite shock cannon range the output is always unit getunitsinweaponrange weapontypes arclite shock cannon but getunitsinradius is working properly unit getunitsinradius is the max range of arclite shock cannon original issue reported on code google com by warwol gmail com on aug at
1
54,660
13,807,895,365
IssuesEvent
2020-10-12 00:22:53
STEllAR-GROUP/phylanx
https://api.github.com/repos/STEllAR-GROUP/phylanx
closed
Problem creating an hstack
category: primitives submodule: frontend type: defect
For this code: ``` from phylanx import Phylanx import numpy as np @Phylanx def zip(): a = hstack([[1,2],[3,4]]) print(diag(a)) # works print(diag(hstack([[1,2],[3,4]]))) # does not work zip() ```` Output ``` Singularity> python3 y.py [1, 4] Traceback (most recent call last): File "y.py", line 10, in <module> zip() File "/usr/local/userbase/lib/python3.6/site-packages/phylanx-0.0.1-py3.6-linux-x86_64.egg/phylanx/ast/transducer.py", line 191, in __call__ result = self.backend.call(*mapped_args, **mapped_kwargs) File "/usr/local/userbase/lib/python3.6/site-packages/phylanx-0.0.1-py3.6-linux-x86_64.egg/phylanx/ast/physl.py", line 588, in call self.wrapped_function.__name__, *args, **kwargs) RuntimeError: y.py(8, 10): diag:: primitive_argument_type does not hold a numeric value type (type held: 'phylanx::ir::range'): HPX(bad_parameter) ```
1.0
Problem creating an hstack - For this code: ``` from phylanx import Phylanx import numpy as np @Phylanx def zip(): a = hstack([[1,2],[3,4]]) print(diag(a)) # works print(diag(hstack([[1,2],[3,4]]))) # does not work zip() ```` Output ``` Singularity> python3 y.py [1, 4] Traceback (most recent call last): File "y.py", line 10, in <module> zip() File "/usr/local/userbase/lib/python3.6/site-packages/phylanx-0.0.1-py3.6-linux-x86_64.egg/phylanx/ast/transducer.py", line 191, in __call__ result = self.backend.call(*mapped_args, **mapped_kwargs) File "/usr/local/userbase/lib/python3.6/site-packages/phylanx-0.0.1-py3.6-linux-x86_64.egg/phylanx/ast/physl.py", line 588, in call self.wrapped_function.__name__, *args, **kwargs) RuntimeError: y.py(8, 10): diag:: primitive_argument_type does not hold a numeric value type (type held: 'phylanx::ir::range'): HPX(bad_parameter) ```
defect
problem creating an hstack for this code from phylanx import phylanx import numpy as np phylanx def zip a hstack print diag a works print diag hstack does not work zip output singularity y py traceback most recent call last file y py line in zip file usr local userbase lib site packages phylanx linux egg phylanx ast transducer py line in call result self backend call mapped args mapped kwargs file usr local userbase lib site packages phylanx linux egg phylanx ast physl py line in call self wrapped function name args kwargs runtimeerror y py diag primitive argument type does not hold a numeric value type type held phylanx ir range hpx bad parameter
1
137,450
18,752,715,662
IssuesEvent
2021-11-05 05:53:27
madhans23/linux-4.15
https://api.github.com/repos/madhans23/linux-4.15
opened
CVE-2021-0447 (Medium) detected in linuxv4.15
security vulnerability
## CVE-2021-0447 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.15</b></p></summary> <p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.15/commit/d96ee498864d1a0b6222cfb17d64ca8196014940">d96ee498864d1a0b6222cfb17d64ca8196014940</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/l2tp/l2tp_ppp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/l2tp/l2tp_ppp.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A security vulnerability was found in Linux Kernel before 3.2.99, 3.16.54, 4.4.225, 4.9.225 and 4.14.182. Pppol2tp_session_create() registers sessions that can't have their corresponding socket initialised. 
<p>Publish Date: 2020-11-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-0447>CVE-2021-0447</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-0447">https://www.linuxkernelcves.com/cves/CVE-2021-0447</a></p> <p>Release Date: 2020-11-07</p> <p>Fix Resolution: 3.2.99,v3.16.54,v4.4.225,v4.9.225,v4.14.182</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-0447 (Medium) detected in linuxv4.15 - ## CVE-2021-0447 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.15</b></p></summary> <p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.15/commit/d96ee498864d1a0b6222cfb17d64ca8196014940">d96ee498864d1a0b6222cfb17d64ca8196014940</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/l2tp/l2tp_ppp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/l2tp/l2tp_ppp.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A security vulnerability was found in Linux Kernel before 3.2.99, 3.16.54, 4.4.225, 4.9.225 and 4.14.182. Pppol2tp_session_create() registers sessions that can't have their corresponding socket initialised. 
<p>Publish Date: 2020-11-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-0447>CVE-2021-0447</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-0447">https://www.linuxkernelcves.com/cves/CVE-2021-0447</a></p> <p>Release Date: 2020-11-07</p> <p>Fix Resolution: 3.2.99,v3.16.54,v4.4.225,v4.9.225,v4.14.182</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in cve medium severity vulnerability vulnerable library library home page a href found in head commit a href found in base branch master vulnerable source files net ppp c net ppp c vulnerability details a security vulnerability was found in linux kernel before and session create registers sessions that can t have their corresponding socket initialised publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
78,774
27,752,545,198
IssuesEvent
2023-03-15 22:08:16
idaholab/moose
https://api.github.com/repos/idaholab/moose
opened
Perf graph live prints dot dot dots outside of a section
P: minor T: defect
## Bug Description ``` Postprocessor Values: +----------------+----------------+----------------+----------------+----------------+---------------------------+----------------------+--------------------+-------------------------+--------------------+----------------+ | time | fuel_temp_avg | fuel_temp_max | fuel_temp_min | heat_pipe_area | heatpipe_surface_temp_avg | outside_bc_heat_flux | reflector_bdy_area | reflector_temp_side_avg | total_linear_power | volume_fuel | +----------------+----------------+----------------+----------------+----------------+---------------------------+----------------------+--------------------+-------------------------+--------------------+----------------+ | -1.000000e+04 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 1.338101e+02 | 0.000000e+00 | -9.489557e+04 | 3.028089e+01 | 1.273000e+03 | 0.000000e+00 | 1.598493e+01 | +----------------+----------------+----------------+----------------+----------------+---------------------------+----------------------+--------------------+-------------------------+--------------------+----------------+ ........... ``` what does this mean ## Steps to Reproduce Run client's input (ask me who the client is) ## Impact Looks bad, makes us look bad
1.0
Perf graph live prints dot dot dots outside of a section - ## Bug Description ``` Postprocessor Values: +----------------+----------------+----------------+----------------+----------------+---------------------------+----------------------+--------------------+-------------------------+--------------------+----------------+ | time | fuel_temp_avg | fuel_temp_max | fuel_temp_min | heat_pipe_area | heatpipe_surface_temp_avg | outside_bc_heat_flux | reflector_bdy_area | reflector_temp_side_avg | total_linear_power | volume_fuel | +----------------+----------------+----------------+----------------+----------------+---------------------------+----------------------+--------------------+-------------------------+--------------------+----------------+ | -1.000000e+04 | 0.000000e+00 | 0.000000e+00 | 0.000000e+00 | 1.338101e+02 | 0.000000e+00 | -9.489557e+04 | 3.028089e+01 | 1.273000e+03 | 0.000000e+00 | 1.598493e+01 | +----------------+----------------+----------------+----------------+----------------+---------------------------+----------------------+--------------------+-------------------------+--------------------+----------------+ ........... ``` what does this mean ## Steps to Reproduce Run client's input (ask me who the client is) ## Impact Looks bad, makes us look bad
defect
perf graph live prints dot dot dots outside of a section bug description postprocessor values time fuel temp avg fuel temp max fuel temp min heat pipe area heatpipe surface temp avg outside bc heat flux reflector bdy area reflector temp side avg total linear power volume fuel what does this mean steps to reproduce run client s input ask me who the client is impact looks bad makes us look bad
1
37,306
8,352,849,614
IssuesEvent
2018-10-02 08:10:36
hazelcast/hazelcast-csharp-client
https://api.github.com/repos/hazelcast/hazelcast-csharp-client
closed
Custom Authentication is Broken
Estimation: M Priority: High Type: Defect
When we try to use custom authentication with `UsernamePasswordCredentials` class, we can only configure it by using XML configuration as below: ``` <security> <credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials> <login-credentials> <username>User</username> <password>usrPassword</password> </login-credentials> </security> ``` It doesn't allow to configure programmatically since the `UsernamePasswordCredentials` is internal. It is a public class in Java client.
1.0
Custom Authentication is Broken - When we try to use custom authentication with `UsernamePasswordCredentials` class, we can only configure it by using XML configuration as below: ``` <security> <credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials> <login-credentials> <username>User</username> <password>usrPassword</password> </login-credentials> </security> ``` It doesn't allow to configure programmatically since the `UsernamePasswordCredentials` is internal. It is a public class in Java client.
defect
custom authentication is broken when we try to use custom authentication with usernamepasswordcredentials class we can only configure it by using xml configuration as below com hazelcast security usernamepasswordcredentials user usrpassword it doesn t allow to configure programmatically since the usernamepasswordcredentials is internal it is a public class in java client
1
24,211
3,924,569,368
IssuesEvent
2016-04-22 15:38:07
googlei18n/libphonenumber
https://api.github.com/repos/googlei18n/libphonenumber
reopened
Brazilian emergency and public utilities numbers are invalid
priority-medium type-defect
Imported from [Google Code issue #377](https://code.google.com/p/libphonenumber/issues/detail?id=377) created by [fabiojmendes](https://code.google.com/u/fabiojmendes@gmail.com/) on 2013-11-13T18:09:39.000Z: ---- <b>What steps will reproduce the problem?</b> Here are the test cases: @Test public void testEmergency() throws Exception { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); //190 in brazil is equivalent to EUA's 911 PhoneNumber phone = phoneUtil.parse(&quot;190&quot;, &quot;BR&quot;); assertTrue(phoneUtil.isValidNumber(phone)); //fails } @Test public void testPublicServices() throws Exception { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); //Directory assistance‎ PhoneNumber phone = phoneUtil.parse(&quot;102&quot;, &quot;BR&quot;); assertTrue(phoneUtil.isValidNumber(phone)); //fails } @Test public void testServiceProviderNumber() throws Exception { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); //Cable TV service provider PhoneNumber phone = phoneUtil.parse(&quot;10621&quot;, &quot;BR&quot;); assertTrue(phoneUtil.isValidNumber(phone)); //fails } <b>What is the expected output? What do you see instead?</b> A valid number <b>What version of the product are you using? On what operating system?</b> libphonenumber 5.8 Java JDK 1.7_45 <b>Please provide any additional information below.</b> Here is the document that list all the emergency and public utilities numbers: http://www.anatel.gov.br/Portal/exibirPortalPaginaEspecial.do?codItemCanal=746&amp;codCanal=277 I think its something like 1\d{2,3} for the basic ones and 10[356]\d{1,2} for the ones with a possible extension. I dont fell confident to contribute to the metadata just yet but I'll try. Nevertheless, I'll be happy to help testing and translating the document I just linked.
1.0
Brazilian emergency and public utilities numbers are invalid - Imported from [Google Code issue #377](https://code.google.com/p/libphonenumber/issues/detail?id=377) created by [fabiojmendes](https://code.google.com/u/fabiojmendes@gmail.com/) on 2013-11-13T18:09:39.000Z: ---- <b>What steps will reproduce the problem?</b> Here are the test cases: @Test public void testEmergency() throws Exception { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); //190 in brazil is equivalent to EUA's 911 PhoneNumber phone = phoneUtil.parse(&quot;190&quot;, &quot;BR&quot;); assertTrue(phoneUtil.isValidNumber(phone)); //fails } @Test public void testPublicServices() throws Exception { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); //Directory assistance‎ PhoneNumber phone = phoneUtil.parse(&quot;102&quot;, &quot;BR&quot;); assertTrue(phoneUtil.isValidNumber(phone)); //fails } @Test public void testServiceProviderNumber() throws Exception { PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); //Cable TV service provider PhoneNumber phone = phoneUtil.parse(&quot;10621&quot;, &quot;BR&quot;); assertTrue(phoneUtil.isValidNumber(phone)); //fails } <b>What is the expected output? What do you see instead?</b> A valid number <b>What version of the product are you using? On what operating system?</b> libphonenumber 5.8 Java JDK 1.7_45 <b>Please provide any additional information below.</b> Here is the document that list all the emergency and public utilities numbers: http://www.anatel.gov.br/Portal/exibirPortalPaginaEspecial.do?codItemCanal=746&amp;codCanal=277 I think its something like 1\d{2,3} for the basic ones and 10[356]\d{1,2} for the ones with a possible extension. I dont fell confident to contribute to the metadata just yet but I'll try. Nevertheless, I'll be happy to help testing and translating the document I just linked.
defect
brazilian emergency and public utilities numbers are invalid imported from created by on what steps will reproduce the problem here are the test cases test public void testemergency throws exception phonenumberutil phoneutil phonenumberutil getinstance in brazil is equivalent to eua s phonenumber phone phoneutil parse quot quot quot br quot asserttrue phoneutil isvalidnumber phone fails test public void testpublicservices throws exception phonenumberutil phoneutil phonenumberutil getinstance directory assistance‎ phonenumber phone phoneutil parse quot quot quot br quot asserttrue phoneutil isvalidnumber phone fails test public void testserviceprovidernumber throws exception phonenumberutil phoneutil phonenumberutil getinstance cable tv service provider phonenumber phone phoneutil parse quot quot quot br quot asserttrue phoneutil isvalidnumber phone fails what is the expected output what do you see instead a valid number what version of the product are you using on what operating system libphonenumber java jdk please provide any additional information below here is the document that list all the emergency and public utilities numbers i think its something like d for the basic ones and d for the ones with a possible extension i dont fell confident to contribute to the metadata just yet but i ll try nevertheless i ll be happy to help testing and translating the document i just linked
1
74,739
25,289,295,135
IssuesEvent
2022-11-16 22:16:48
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
Complement `Adding a push rule wakes up an incremental /sync` is consistently failing
S-Major T-Defect O-Frequent
and it's irritating me. https://github.com/matrix-org/synapse/actions/runs/3472783471/jobs/5804115166 e.g. From early debugging it looks - adding the push rule correctly wakes up a user_stream - the user_stream calls its callback to generate a SyncResult - we fetch global_account_data and get `{}` for our troubles - presumably it is supposed to be a nonempty dict?
1.0
Complement `Adding a push rule wakes up an incremental /sync` is consistently failing - and it's irritating me. https://github.com/matrix-org/synapse/actions/runs/3472783471/jobs/5804115166 e.g. From early debugging it looks - adding the push rule correctly wakes up a user_stream - the user_stream calls its callback to generate a SyncResult - we fetch global_account_data and get `{}` for our troubles - presumably it is supposed to be a nonempty dict?
defect
complement adding a push rule wakes up an incremental sync is consistently failing and it s irritating me e g from early debugging it looks adding the push rule correctly wakes up a user stream the user stream calls its callback to generate a syncresult we fetch global account data and get for our troubles presumably it is supposed to be a nonempty dict
1
334,346
29,831,756,193
IssuesEvent
2023-06-18 11:09:02
NoroffFEU/live-social
https://api.github.com/repos/NoroffFEU/live-social
closed
Add unit test for logout function
enhancement good first issue test
`src/js/api/auth/logout.test.js` This function should: - Call localStorage.removeItem once - Remove any token value from localStorage
1.0
Add unit test for logout function - `src/js/api/auth/logout.test.js` This function should: - Call localStorage.removeItem once - Remove any token value from localStorage
non_defect
add unit test for logout function src js api auth logout test js this function should call localstorage removeitem once remove any token value from localstorage
0
66,661
20,423,957,303
IssuesEvent
2022-02-24 00:26:17
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Signing into Element afresh, the roomlist loaded but clicking on a room in the list didn't replace the home screen with the timeline
T-Defect
### Steps to reproduce 1. Sign in to Element 2. Click on a room 3. Don't get the timeline in the middle of the screen ### Outcome #### What did you expect? Timeline #### What happened instead? Just the "Welcome to Element Nightly" screen, stubbornly not moving. ### Operating system macOS ### Application version Element Nightly version: 2022022001 Olm version: 3.2.8 ### How did you install the app? Website ### Homeserver lant.uk ### Will you send logs? Yes
1.0
Signing into Element afresh, the roomlist loaded but clicking on a room in the list didn't replace the home screen with the timeline - ### Steps to reproduce 1. Sign in to Element 2. Click on a room 3. Don't get the timeline in the middle of the screen ### Outcome #### What did you expect? Timeline #### What happened instead? Just the "Welcome to Element Nightly" screen, stubbornly not moving. ### Operating system macOS ### Application version Element Nightly version: 2022022001 Olm version: 3.2.8 ### How did you install the app? Website ### Homeserver lant.uk ### Will you send logs? Yes
defect
signing into element afresh the roomlist loaded but clicking on a room in the list didn t replace the home screen with the timeline steps to reproduce sign in to element click on a room don t get the timeline in the middle of the screen outcome what did you expect timeline what happened instead just the welcome to element nightly screen stubbornly not moving operating system macos application version element nightly version olm version how did you install the app website homeserver lant uk will you send logs yes
1
107,901
9,247,762,975
IssuesEvent
2019-03-15 02:24:14
strongbox/strongbox
https://api.github.com/repos/strongbox/strongbox
opened
Test the NuGet support with Chocolatey
good first issue help wanted testing
# Task Description We need feedback on how well our NuGet layout provider works with Chocolatey. # Help * [Gitter](gitter.im/strongbox/strongbox) * Points of contact: * @carlspring * @sbespalov * @fuss86
1.0
Test the NuGet support with Chocolatey - # Task Description We need feedback on how well our NuGet layout provider works with Chocolatey. # Help * [Gitter](gitter.im/strongbox/strongbox) * Points of contact: * @carlspring * @sbespalov * @fuss86
non_defect
test the nuget support with chocolatey task description we need feedback on how well our nuget layout provider works with chocolatey help gitter im strongbox strongbox points of contact carlspring sbespalov
0
58,067
16,342,381,551
IssuesEvent
2021-05-13 00:10:22
darshan-hpc/darshan
https://api.github.com/repos/darshan-hpc/darshan
closed
zlib configure check bug
defect other
In GitLab by @shanedsnyder on Sep 24, 2015, 16:26 Initially reported by Kalyana Chadalavada: ```text checking if zlib is wanted... ./configure: line 3632: ,: command not found ``` On other platforms the zlib check doesn't produce this error exactly, but something is wrong with the check because it doesn't produce the normal result string and newline.
1.0
zlib configure check bug - In GitLab by @shanedsnyder on Sep 24, 2015, 16:26 Initially reported by Kalyana Chadalavada: ```text checking if zlib is wanted... ./configure: line 3632: ,: command not found ``` On other platforms the zlib check doesn't produce this error exactly, but something is wrong with the check because it doesn't produce the normal result string and newline.
defect
zlib configure check bug in gitlab by shanedsnyder on sep initially reported by kalyana chadalavada text checking if zlib is wanted configure line command not found on other platforms the zlib check doesn t produce this error exactly but something is wrong with the check because it doesn t produce the normal result string and newline
1
104,364
16,613,643,679
IssuesEvent
2021-06-02 14:20:02
Thanraj/linux-4.1.15
https://api.github.com/repos/Thanraj/linux-4.1.15
opened
CVE-2019-11190 (Medium) detected in linux-stable-rtv4.1.33
security vulnerability
## CVE-2019-11190 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/binfmt_elf.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/binfmt_elf.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Linux kernel before 4.8 allows local users to bypass ASLR on setuid programs (such as /bin/su) because install_exec_creds() is called too late in load_elf_binary() in fs/binfmt_elf.c, and thus the ptrace_may_access() check has a race condition when reading /proc/pid/stat. 
<p>Publish Date: 2019-04-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11190>CVE-2019-11190</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11190">http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11190</a></p> <p>Release Date: 2019-04-12</p> <p>Fix Resolution: v4.8-rc5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-11190 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2019-11190 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/binfmt_elf.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/fs/binfmt_elf.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The Linux kernel before 4.8 allows local users to bypass ASLR on setuid programs (such as /bin/su) because install_exec_creds() is called too late in load_elf_binary() in fs/binfmt_elf.c, and thus the ptrace_may_access() check has a race condition when reading /proc/pid/stat. 
<p>Publish Date: 2019-04-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11190>CVE-2019-11190</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11190">http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11190</a></p> <p>Release Date: 2019-04-12</p> <p>Fix Resolution: v4.8-rc5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files linux fs binfmt elf c linux fs binfmt elf c vulnerability details the linux kernel before allows local users to bypass aslr on setuid programs such as bin su because install exec creds is called too late in load elf binary in fs binfmt elf c and thus the ptrace may access check has a race condition when reading proc pid stat publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
38,673
2,849,991,260
IssuesEvent
2015-05-31 06:01:14
GLolol/PyLink
https://api.github.com/repos/GLolol/PyLink
closed
Add shared functions for sending messages to users (ircmsgs.py)
feature priority:medium
This is a lot simpler than writing `_sendFromUser('PRIVMSG person :hello!')` every time you want to send a message from a pseudoclient.
1.0
Add shared functions for sending messages to users (ircmsgs.py) - This is a lot simpler than writing `_sendFromUser('PRIVMSG person :hello!')` every time you want to send a message from a pseudoclient.
non_defect
add shared functions for sending messages to users ircmsgs py this is a lot simpler than writing sendfromuser privmsg person hello every time you want to send a message from a pseudoclient
0
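The shared helper the record above asks for can be sketched as below. Names (`msg`, `FakeIRC`) and the `_sendFromUser` signature are illustrative assumptions, not PyLink's actual API; `FakeIRC` stands in for a real Irc object so the sketch is self-contained.

```python
# Hypothetical ircmsgs.py-style helper: wrap the raw PRIVMSG line so callers
# never hand-write '_sendFromUser("PRIVMSG person :hello!")' themselves.
class FakeIRC:
    def __init__(self):
        self.sent = []

    def _sendFromUser(self, line):
        # a real Irc object would send this over the wire; we just record it
        self.sent.append(line)

def msg(irc, target, text):
    """Send a PRIVMSG to `target` from a pseudoclient."""
    irc._sendFromUser('PRIVMSG %s :%s' % (target, text))

irc = FakeIRC()
msg(irc, 'person', 'hello!')
print(irc.sent[0])  # → PRIVMSG person :hello!
```

Callers get `msg(irc, 'person', 'hello!')` instead of building the protocol line by hand, which is exactly the simplification the issue describes.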
307,329
26,524,837,822
IssuesEvent
2023-01-19 07:47:12
dusk-network/rusk
https://api.github.com/repos/dusk-network/rusk
opened
Integrate RocksDB as backend engine for blockchain db
mark:testnet area:rusk-node
#### Summary This issue is an effort to investigate if the `rocksdb` wrapper crate could serve as a back-end storage for storing blocks and blockchain metadata, as we already do with `goleveldb` in dusk-blockchain. Main use-cases to be supported - Implement atomic read-write transactions (`WriteBatch` should facilitate this) - Execute read-only transactions in isolation to avoid conflicts. Additional use-cases to be considered - Use `Snapshots` to speed up the sync-up procedure - Column families to support mempool data persistence - Compression to reduce the amount of disk space
1.0
Integrate RocksDB as backend engine for blockchain db - #### Summary This issue is an effort to investigate if the `rocksdb` wrapper crate could serve as a back-end storage for storing blocks and blockchain metadata, as we already do with `goleveldb` in dusk-blockchain. Main use-cases to be supported - Implement atomic read-write transactions (`WriteBatch` should facilitate this) - Execute read-only transactions in isolation to avoid conflicts. Additional use-cases to be considered - Use `Snapshots` to speed up the sync-up procedure - Column families to support mempool data persistence - Compression to reduce the amount of disk space
non_defect
integrate rocksdb as backend engine for blockchain db summary this issue is an effort to investigate if rocksdb wrapper crate could serve as a back end storage for storing blocks and blockchain metadata as we already do with goleveldb in dusk blockchain main use cases to be supported implement an atomic read write transactions writebatch should be facilitate this execute read only transactions in isolation to avoid conflicts additional use cases to be considered use snapshots to speed up sync up procedure column families to support mempool data persistence compression to reduce the amount of disk space
0
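The first use-case in the record above — atomic read-write transactions via `WriteBatch` — can be illustrated with a minimal in-memory sketch. This is not the real `rocksdb` crate API; the classes here are stand-ins showing the semantics: all operations in a batch land together, so readers never observe a half-applied write.

```python
# Hypothetical sketch of WriteBatch-style atomicity over an in-memory map.
# A real backend would hand the batch to RocksDB's WriteBatch instead.
class WriteBatch:
    def __init__(self):
        self.ops = []

    def put(self, key, value):
        self.ops.append((key, value))

    def delete(self, key):
        self.ops.append((key, None))

class Store:
    def __init__(self):
        self.data = {}

    def write(self, batch):
        # apply every queued op in one step: a block and the chain tip that
        # points at it are committed together or not at all
        for key, value in batch.ops:
            if value is None:
                self.data.pop(key, None)
            else:
                self.data[key] = value

store = Store()
batch = WriteBatch()
batch.put("block:100", "hash-abc")
batch.put("tip", "100")
store.write(batch)
print(store.data["tip"])  # → 100
```

Grouping the block record and the tip update into one batch is what keeps the blockchain metadata consistent if the process dies mid-write.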
15,063
2,845,828,729
IssuesEvent
2015-05-29 07:20:00
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
[TEST-FAILURE] cluster.TcpIpJoinTest failures
Team: Core Type: Defect
``` 03:02:18 test_whenExplicitPortConfigured(com.hazelcast.cluster.TcpIpJoinTest) Time elapsed: 11.025 sec <<< FAILURE! 03:02:18 java.lang.AssertionError: expected:<2> but was:<1> 03:02:18 at org.junit.Assert.fail(Assert.java:88) 03:02:18 at org.junit.Assert.failNotEquals(Assert.java:743) ``` com.hazelcast.cluster.TcpIpJoinTest.test_whenExplicitPortConfigured com.hazelcast.cluster.TcpIpJoinTest.test_whenPortAndInterfacesConfigured com.hazelcast.cluster.TcpIpJoinTest.test_whenNoExplicitPortConfigured After these tests have failed, another 23 tests seem to have failed due to some port errors. https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-OracleJDK1.7/130/#showFailuresLink
1.0
[TEST-FAILURE] cluster.TcpIpJoinTest failures - ``` 03:02:18 test_whenExplicitPortConfigured(com.hazelcast.cluster.TcpIpJoinTest) Time elapsed: 11.025 sec <<< FAILURE! 03:02:18 java.lang.AssertionError: expected:<2> but was:<1> 03:02:18 at org.junit.Assert.fail(Assert.java:88) 03:02:18 at org.junit.Assert.failNotEquals(Assert.java:743) ``` com.hazelcast.cluster.TcpIpJoinTest.test_whenExplicitPortConfigured com.hazelcast.cluster.TcpIpJoinTest.test_whenPortAndInterfacesConfigured com.hazelcast.cluster.TcpIpJoinTest.test_whenNoExplicitPortConfigured After these tests have failed, another 23 tests seem to have failed due to some port errors. https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-OracleJDK1.7/130/#showFailuresLink
defect
cluster tcpipjointest failures test whenexplicitportconfigured com hazelcast cluster tcpipjointest time elapsed sec failure java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java com hazelcast cluster tcpipjointest test whenexplicitportconfigured com hazelcast cluster tcpipjointest test whenportandinterfacesconfigured com hazelcast cluster tcpipjointest test whennoexplicitportconfigured after these tests have failed another tests seems failed due to some port errors
1
176,146
28,036,936,022
IssuesEvent
2023-03-28 15:39:13
gitcoinco/grants-stack
https://api.github.com/repos/gitcoinco/grants-stack
opened
Enhance preview feature for Round Applications on Builder
design
**User Stories** As a project owner I want to be able to preview what my project and application will look like So that I can make any edits to my project or application before I submit my **Acceptance Criteria** GIVEN that I click Preview from the Grant Application page WHEN I look at the page THEN I see a preview of my project page on Grant Explorer GIVEN that I am on the Preview Application page WHEN I look at the page THEN I am educated that this is a preview of what my project page on Grant Explorer would look like if accepted AND if I want to make any changes, I should do so before submitting my application
1.0
Enhance preview feature for Round Applications on Builder - **User Stories** As a project owner I want to be able to preview what my project and application will look like So that I can make any edits to my project or application before I submit my **Acceptance Criteria** GIVEN that I click Preview from the Grant Application page WHEN I look at the page THEN I see a preview of my project page on Grant Explorer GIVEN that I am on the Preview Application page WHEN I look at the page THEN I am educated that this is a preview of what my project page on Grant Explorer would look like if accepted AND if I want to make any changes, I should do so before submitting my application
non_defect
enhance preview feature for round applications on builder user stories as a project owner i want to be able to preview what my project and application will look like so that i can make any edits to my project or application before i submit my acceptance criteria given that i click preview from the grant application page when i look at the page then i see a preview of my project page on grant explorer given that i am on the preview application page when i look at the page then i am educated that this is a preview of what my project page on grant explorer would look like if accepted and if i want to make any changes i should do so before submitting my application
0
8,351
2,611,493,884
IssuesEvent
2015-02-27 05:34:03
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Vampirism combined with piano makes hog disappear
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. 2 Player mode with AI 2. Weapon mode: crazy 3. enable vampirism and dispatch piano The hog disappears and is considered dead. Hedgewars 0.9.17 ``` Original issue reported on code.google.com by `hei...@gmail.com` on 7 Jan 2012 at 1:24
1.0
Vampirism combined with piano makes hog disappear - ``` What steps will reproduce the problem? 1. 2 Player mode with AI 2. Weapon mode: crazy 3. enable vampirism and dispatch piano The hog disappears and is considered dead. Hedgewars 0.9.17 ``` Original issue reported on code.google.com by `hei...@gmail.com` on 7 Jan 2012 at 1:24
defect
vampirism combined with piano makes hog disappear what steps will reproduce the problem player mode with ai weapon mode crazy enable vampirism and dispatch piano the hog disappears and is considered dead hedgewars original issue reported on code google com by hei gmail com on jan at
1
16,735
9,514,696,078
IssuesEvent
2019-04-26 01:46:41
cosinekitty/astronomy
https://api.github.com/repos/cosinekitty/astronomy
closed
Improve SearchRelativeLongitude convergence for Mercury and Mars
performance
Mercury and Mars have high orbital eccentricities. This causes SearchRelativeLongitude to take a long time to converge, especially for Mercury. Improve the algorithm so that it converges faster. The other planets converge within 4 or 5 iterations, but Mercury takes somewhere between 20 and 100.
True
Improve SearchRelativeLongitude convergence for Mercury and Mars - Mercury and Mars have high orbital eccentricities. This causes SearchRelativeLongitude to take a long time to converge, especially for Mercury. Improve the algorithm so that it converges faster. The other planets converge within 4 or 5 iterations, but Mercury takes somewhere between 20 and 100.
non_defect
improve searchrelativelongitude convergence for mercury and mars mercury and mars have high orbital eccentricities this causes searchrelativelongitude to take a long time to converge especially for mercury improve the algorithm so that it converges faster the other planets converge within or iterations but mercury takes somewhere between and
0
305,182
23,100,845,854
IssuesEvent
2022-07-27 02:28:34
exercism/python
https://api.github.com/repos/exercism/python
closed
[New Python Track Docs]: Test Driven Development (TDD)
x:action/create x:status/claimed abandoned 🏚 claimed 🐾 new documentation ✨ x:rep/large
This issue describes how to implement the `Test Driven Development (TDD)` Python Track docs. ## Getting started **Please please please read the docs before starting.** Posting PRs without reading these docs will be a lot more frustrating for you during the review cycle, and exhaust Exercism's maintainers' time. So, before diving into the implementation, please read up on the following documents: - [Contributing to Exercism](https://exercism.org/docs/building) | [Exercism and GitHub](https://exercism.org/docs/building/github) | - [Contributor Pull Request Guide](https://exercism.org/docs/building/github/contributors-pull-request-guide) - [What are those Weird Task Tags about?](https://exercism.org/docs/building/product/tasks) - [Building Language Tracks: An Overview](https://exercism.org/docs/building/tracks) - [Existing Python Track Documents](https://exercism.org/docs/tracks/python) - [Python Track Documents on GitHub](https://github.com/exercism/python/tree/main/docs) - [Exercism Formatting and Style Guide](https://exercism.org/docs/building/markdown/style-guide) - [Exercism Markdown Specification](https://exercism.org/docs/building/markdown/markdown) - [Reputation](https://exercism.org/docs/using/product/reputation) ## Goal This document is intended to explain basic `Test Driven Development (TDD)` practices/philosophies and how exercism adapts those practices. It should orient students to a "TDD Mindset" for iterating on and solving exercism problems on the Python track. <br> ## Learning objectives - Understand what `Test Driven Development (TDD)` is, and how it can inform and improve the process of coding. - Understand how exercism uses `TDD` principals in the design of its concept and practice exercises. - Understand the role that tests and test failure play in working through the track challenges. - Understand the role mentorship and iteration play in refining and improving problem solutions. 
- Provide some "challenges" or "next steps" students can take in their coding practice (_e.g., writing further tests to extend or adapt a problem or problem solution, PRing improvements to existing test cases, expanding a challenge with different approaches or techniques._) <br> ## Out of scope This is a fairly high-level and broad document, emphasizing a programming _approach_ or philosophy. The intent here is to explain why exercises are structured the way they are on the Python track (_minimal stubs, viewable tests, access to mentoring and community solutions, encouragement for iterating on solutions_) and provide resources for students to explore and adapt the TDD technique more fully, rather than set up a programming problem or tutorial. No topic is really "out of scope", as long as it applies to TDD, refactoring, iterating on solutions, reading test files, and writing tests. <br> ## Concepts <details> <summary><b>Proposed Concepts to Cover</b></summary> - `Test Driven Development (TDD)` - `TDD as design` - `unit tests` & `testing` - `test failure` as a coding guide - `refactoring` as a coding discipline - `test writing` </details> <br> ## Prerequisites Since this is a broad track-focused document, there really are no prerequisites. 
<br> ## Resources to refer to <details> <summary><b>Articles on Test Driven Development</b></summary> - [Agile Alliance: TDD](https://www.agilealliance.org/glossary/tdd) - [The TDD Manifesto](https://tddmanifesto.com/) - [Martin Fowler: Test Driven Development](https://martinfowler.com/bliki/TestDrivenDevelopment.html) - [Test Driven: Test Driven Development](https://testdriven.io/test-driven-development/) - [Xeno Stack: Test Driven Development](https://www.xenonstack.com/blog/test-driven-development) - [Semaphore Blog: Test Driven Development](https://semaphoreci.com/blog/test-driven-development) - [Guru99: Test Driven Development](https://www.guru99.com/test-driven-development.html) - [Inspired Testing: What Why and How of TDD](https://www.inspiredtesting.com/news-insights/insights/466-what-why-how-test-driven-development) - [Browser Stack: What is TDD](https://www.browserstack.com/guide/what-is-test-driven-development) - [Brian Okken: Lean TDD (_or TDD without the insanity_)](https://pythontest.com/lean-tdd/) </details> <details> <summary><b>Replies on TDD Made to Students in the Python Repo</b></summary> _Below are some replies made to students in the Python repo when they've had issues with having to read the test files. Feel free to copy, paraphrase or otherwise use them to explain the Python Track's TDD approach:_ > [Twelve Days Issue](https://github.com/exercism/python/issues/3031) However, the directions were never intended to specify the detail that is in the attached test file. > We we practice a form of Test Driven Development (*TDD for short*) on exercism, where we provide the main tests for you (_rather than have you write the tests yourself_). > [TDD](https://www.agilealliance.org/glossary/tdd/) practice has you write tests *before* code, to allow test failure to guide the implementation. 
So we expect students to look at the [test files](https://github.com/exercism/python/blob/main/exercises/practice/twelve-days/twelve_days_test.py) to figure out what the expected inputs and outputs of their functions should be. >[Second Twelve Days Issue](https://github.com/exercism/python/issues/2983) For [Practice exercises](https://exercism.org/docs/building/product/practice-exercises), we give very minimal stubs. The 'ethos' behind this is that practice exercises are implemented as a form of [test driven development](https://www.agilealliance.org/glossary/tdd/#q), where we've written out the core tests for you ahead of time. You **_should_** be looking at the tests to get a feel for what is expected, and you should let test failure guide what you write in code. > Practice exercises (in contrast to the "learning" or [Concept exercises](https://exercism.org/docs/building/product/concept-exercises)) are meant to encourage practice with multiple techniques in the language and/or algorithms or design decisions. Implementation is much less directed, problem descriptions are less detailed (_and are mostly shared across tracks_) and the test files tend to look for outcomes, and shy away from dictating data structures (_where that's possible_). Often students will discuss with mentors different approaches, and may even add their own additional test cases locally as they work through the problem. > [Gigasecond Issue](https://github.com/exercism/python/issues/3018) We practice a form of Test Driven Development (_TDD for short_) on exercism, where we provide the main tests for you. [TDD](https://www.agilealliance.org/glossary/tdd/) has you write tests _before_ code, to allow test failure to guide the implementation. So we expect students to look at the [test files](https://github.com/exercism/python/blob/main/exercises/practice/gigasecond/gigasecond_test.py) to figure out what the expected inputs and outputs of their functions should be. 
> [Kindergarten Garden Issue](https://github.com/exercism/python/issues/2856) > 1. By design, the practice exercises (_the exercises that are not in the syllabus tree as the "main" exercises in the boxes_) are implemented as a form of TDD - [Test-Driven development](https://www.agilealliance.org/glossary/tdd/#q=~(infinite~false~filters~(postType~(~'page~'post~'aa_book~'aa_event_session~'aa_experience_report~'aa_glossary~'aa_research_paper~'aa_video)~tags~(~'tdd))~searchTerm~'~sort~false~sortDirection~'asc~page~1)). But in exercism's case, we've written out the basic tests for you ahead of time. **`TL;DR`** - the expectation is that you engage with and look at the tests in addition to the instructions/specification. That doesn't mean we can't do better on the instructions. But it **does** mean that we will probably never get to the point of specifying everything in detail -- because we expect you to explore the test file. > 2. Practice exercises are intended to be as open-ended as possible, and to encourage discussion with a mentor. That means we take pains to **not** dictate implementation. Now for OOP exercises in OOP-supporting languages, some exercises do indeed "dictate" a `class` and/or `method` names. But apart from importing expected `class` and or `method` names, we try not to overly constrict a student to a particular interface. We strive to have the tests look for _results_ -- I don't care how you implemented `Garden` -- I care that when I call `garden.plants("Alice")`, I get back a list that has `["Violets", "Clover", "Radishes", "Clover"]`. Even the parameters listed in the stub are optional -- you could call them anything you like. And for `students`, many [community solutions](https://exercism.org/tracks/python/exercises/kindergarten-garden/solutions?passed_head_tests=true) use a default argument of `None`, rather than use it as positional-only. 
Again, this is meant to encourage a conversation with a mentor about different possible approaches or techniques. > 3. Stub code for practice exercises is kept to a minimum. While we have been discussing if more detail is warranted, we have been wary of over-specifying or constricting implementation. For now, we are going with bare minimum sans docstrings or typehinting. We might revisit that later this year. > [Lack of Clarity in Python Testing issue](https://github.com/exercism/python/issues/1827)Broadly speaking we’re using Test Driven Development principles — although we’ve written the tests for you — and so part of the Fun is reading the errors that crop up when the tests fail. > The pass statement is there to keep the empty “slug” file from throwing a SyntaxError before the tests can even run. Though there are many exercises, IIRC all have been implemented so you’ve got enough of a “slug” that the tests will run to completion but all tests will fail. This is actually a big improvement over the situation with many other language tracks, where no slug is provided at all. > From the Pangram slug you need only run the tests to encounter the first failure, which will be that False is not returned when “five boxing wizards [...]” is passed into your function. You could respond by putting return False at the bottom of your function, and you’ll pass that test but fail on “Five quacking Zephyrs[...]", and so on. > Via this iterative method you quickly learn the outline of what your implementation needs to do at a minimum. This is a helpful set of skills to acquire and I’m not sure that making the documentation more clear about the implementation (as opposed to about the problem) is really helping you with that. > It’s my hope that everyone starts to read the test suite and understand the constraints they’re trying to meet before they start implementing a solution, as that’s exactly what you’d do when using TDD principles in a work environment. 
</details> <br> ## Files to Be Created ### Track Document Since this is a track document, it can be formatted in markdown any way the author desires. However, it should still conform to the [Exercism Markdown Specification](https://exercism.org/docs/building/markdown/markdown), and follow all the rules in the [Exercism Formatting and Style Guide](https://exercism.org/docs/building/markdown/style-guide). Any links should be reference links, and any included images should be placed in the `docs/images` folder. ### Document Entry in `docs/config.json` See other entries in the [`config`](https://github.com/exercism/python/blob/main/docs/config.json) doc. Note that each document displayed on the website needs: - [ ] A valid UUID v4 number. You can use `configlet`, or [this generator](https://www.uuidgenerator.net/version4). - [ ] A slug for the document name - [ ] A path (_typically `docs/<DOCUMENTNAME>.md`_) - [ ] A title - [ ] A blurb of less than 350 characters, describing what the document is/explains. <br> ## Implementation Notes - Tone should be friendly and semi formal. See other track docs for a general guide - Our markdown and JSON files are checked against [prettier](https://prettier.io/) . We recommend [setting prettier up locally](https://prettier.io/docs/en/install.html) and running it prior to submitting your PR to avoid any CI errors. <br> ## Help If you have any questions while implementing this issue, please post the questions as comments in this issue, or contact one of the maintainers on our Slack channel.
1.0
[New Python Track Docs]: Test Driven Development (TDD) - This issue describes how to implement the `Test Driven Development (TDD)` Python Track docs. ## Getting started **Please please please read the docs before starting.** Posting PRs without reading these docs will be a lot more frustrating for you during the review cycle, and exhaust Exercism's maintainers' time. So, before diving into the implementation, please read up on the following documents: - [Contributing to Exercism](https://exercism.org/docs/building) | [Exercism and GitHub](https://exercism.org/docs/building/github) - [Contributor Pull Request Guide](https://exercism.org/docs/building/github/contributors-pull-request-guide) - [What are those Weird Task Tags about?](https://exercism.org/docs/building/product/tasks) - [Building Language Tracks: An Overview](https://exercism.org/docs/building/tracks) - [Existing Python Track Documents](https://exercism.org/docs/tracks/python) - [Python Track Documents on GitHub](https://github.com/exercism/python/tree/main/docs) - [Exercism Formatting and Style Guide](https://exercism.org/docs/building/markdown/style-guide) - [Exercism Markdown Specification](https://exercism.org/docs/building/markdown/markdown) - [Reputation](https://exercism.org/docs/using/product/reputation) ## Goal This document is intended to explain basic `Test Driven Development (TDD)` practices/philosophies and how exercism adapts those practices. It should orient students to a "TDD Mindset" for iterating on and solving exercism problems on the Python track. <br> ## Learning objectives - Understand what `Test Driven Development (TDD)` is, and how it can inform and improve the process of coding. - Understand how exercism uses TDD principles in the design of its concept and practice exercises. - Understand the role that tests and test failure play in working through the track challenges. - Understand the role mentorship and iteration play in refining and improving problem solutions. 
- Provide some "challenges" or "next steps" students can take in their coding practice (_e.g., writing further tests to extend or adapt a problem or problem solution, PRing improvements to existing test cases, expanding a challenge with different approaches or techniques._) <br> ## Out of scope This is a fairly high-level and broad document, emphasizing a programming _approach_ or philosophy. The intent here is to explain why exercises are structured the way they are on the Python track (_minimal stubs, viewable tests, access to mentoring and community solutions, encouragement for iterating on solutions_) and provide resources for students to explore and adapt the TDD technique more fully, rather than set up a programming problem or tutorial. No topic is really "out of scope", as long as it applies to TDD, refactoring, iterating on solutions, reading test files, and writing tests. <br> ## Concepts <details> <summary><b>Proposed Concepts to Cover</b></summary> - `Test Driven Development (TDD)` - `TDD as design` - `unit tests` & `testing` - `test failure` as a coding guide - `refactoring` as a coding discipline - `test writing` </details> <br> ## Prerequisites Since this is a broad track-focused document, there really are no prerequisites. 
<br> ## Resources to refer to <details> <summary><b>Articles on Test Driven Development</b></summary> - [Agile Alliance: TDD](https://www.agilealliance.org/glossary/tdd) - [The TDD Manifesto](https://tddmanifesto.com/) - [Martin Fowler: Test Driven Development](https://martinfowler.com/bliki/TestDrivenDevelopment.html) - [Test Driven: Test Driven Development](https://testdriven.io/test-driven-development/) - [XenonStack: Test Driven Development](https://www.xenonstack.com/blog/test-driven-development) - [Semaphore Blog: Test Driven Development](https://semaphoreci.com/blog/test-driven-development) - [Guru99: Test Driven Development](https://www.guru99.com/test-driven-development.html) - [Inspired Testing: What Why and How of TDD](https://www.inspiredtesting.com/news-insights/insights/466-what-why-how-test-driven-development) - [Browser Stack: What is TDD](https://www.browserstack.com/guide/what-is-test-driven-development) - [Brian Okken: Lean TDD (_or TDD without the insanity_)](https://pythontest.com/lean-tdd/) </details> <details> <summary><b>Replies on TDD Made to Students in the Python Repo</b></summary> _Below are some replies made to students in the Python repo when they've had issues with having to read the test files. Feel free to copy, paraphrase or otherwise use them to explain the Python Track's TDD approach:_ > [Twelve Days Issue](https://github.com/exercism/python/issues/3031) However, the directions were never intended to specify the detail that is in the attached test file. > We practice a form of Test Driven Development (*TDD for short*) on exercism, where we provide the main tests for you (_rather than have you write the tests yourself_). > [TDD](https://www.agilealliance.org/glossary/tdd/) practice has you write tests *before* code, to allow test failure to guide the implementation. 
So we expect students to look at the [test files](https://github.com/exercism/python/blob/main/exercises/practice/twelve-days/twelve_days_test.py) to figure out what the expected inputs and outputs of their functions should be. > [Second Twelve Days Issue](https://github.com/exercism/python/issues/2983) For [Practice exercises](https://exercism.org/docs/building/product/practice-exercises), we give very minimal stubs. The 'ethos' behind this is that practice exercises are implemented as a form of [test driven development](https://www.agilealliance.org/glossary/tdd/#q), where we've written out the core tests for you ahead of time. You **_should_** be looking at the tests to get a feel for what is expected, and you should let test failure guide what you write in code. > Practice exercises (in contrast to the "learning" or [Concept exercises](https://exercism.org/docs/building/product/concept-exercises)) are meant to encourage practice with multiple techniques in the language and/or algorithms or design decisions. Implementation is much less directed, problem descriptions are less detailed (_and are mostly shared across tracks_) and the test files tend to look for outcomes, and shy away from dictating data structures (_where that's possible_). Often students will discuss with mentors different approaches, and may even add their own additional test cases locally as they work through the problem. > [Gigasecond Issue](https://github.com/exercism/python/issues/3018) We practice a form of Test Driven Development (_TDD for short_) on exercism, where we provide the main tests for you. [TDD](https://www.agilealliance.org/glossary/tdd/) has you write tests _before_ code, to allow test failure to guide the implementation. So we expect students to look at the [test files](https://github.com/exercism/python/blob/main/exercises/practice/gigasecond/gigasecond_test.py) to figure out what the expected inputs and outputs of their functions should be. 
> [Kindergarten Garden Issue](https://github.com/exercism/python/issues/2856) > 1. By design, the practice exercises (_the exercises that are not in the syllabus tree as the "main" exercises in the boxes_) are implemented as a form of TDD - [Test-Driven development](https://www.agilealliance.org/glossary/tdd/#q=~(infinite~false~filters~(postType~(~'page~'post~'aa_book~'aa_event_session~'aa_experience_report~'aa_glossary~'aa_research_paper~'aa_video)~tags~(~'tdd))~searchTerm~'~sort~false~sortDirection~'asc~page~1)). But in exercism's case, we've written out the basic tests for you ahead of time. **`TL;DR`** - the expectation is that you engage with and look at the tests in addition to the instructions/specification. That doesn't mean we can't do better on the instructions. But it **does** mean that we will probably never get to the point of specifying everything in detail -- because we expect you to explore the test file. > 2. Practice exercises are intended to be as open-ended as possible, and to encourage discussion with a mentor. That means we take pains to **not** dictate implementation. Now for OOP exercises in OOP-supporting languages, some exercises do indeed "dictate" a `class` and/or `method` names. But apart from importing expected `class` and or `method` names, we try not to overly constrict a student to a particular interface. We strive to have the tests look for _results_ -- I don't care how you implemented `Garden` -- I care that when I call `garden.plants("Alice")`, I get back a list that has `["Violets", "Clover", "Radishes", "Clover"]`. Even the parameters listed in the stub are optional -- you could call them anything you like. And for `students`, many [community solutions](https://exercism.org/tracks/python/exercises/kindergarten-garden/solutions?passed_head_tests=true) use a default argument of `None`, rather than use it as positional-only. 
Again, this is meant to encourage a conversation with a mentor about different possible approaches or techniques. > 3. Stub code for practice exercises is kept to a minimum. While we have been discussing if more detail is warranted, we have been wary of over-specifying or constricting implementation. For now, we are going with bare minimum sans docstrings or typehinting. We might revisit that later this year. > [Lack of Clarity in Python Testing issue](https://github.com/exercism/python/issues/1827) Broadly speaking we’re using Test Driven Development principles — although we’ve written the tests for you — and so part of the Fun is reading the errors that crop up when the tests fail. > The pass statement is there to keep the empty “slug” file from throwing a SyntaxError before the tests can even run. Though there are many exercises, IIRC all have been implemented so you’ve got enough of a “slug” that the tests will run to completion but all tests will fail. This is actually a big improvement over the situation with many other language tracks, where no slug is provided at all. > From the Pangram slug you need only run the tests to encounter the first failure, which will be that False is not returned when “five boxing wizards [...]” is passed into your function. You could respond by putting return False at the bottom of your function, and you’ll pass that test but fail on “Five quacking Zephyrs[...]”, and so on. > Via this iterative method you quickly learn the outline of what your implementation needs to do at a minimum. This is a helpful set of skills to acquire and I’m not sure that making the documentation more clear about the implementation (as opposed to about the problem) is really helping you with that. > It’s my hope that everyone starts to read the test suite and understand the constraints they’re trying to meet before they start implementing a solution, as that’s exactly what you’d do when using TDD principles in a work environment. 
</details> <br> ## Files to Be Created ### Track Document Since this is a track document, it can be formatted in markdown any way the author desires. However, it should still conform to the [Exercism Markdown Specification](https://exercism.org/docs/building/markdown/markdown), and follow all the rules in the [Exercism Formatting and Style Guide](https://exercism.org/docs/building/markdown/style-guide). Any links should be reference links, and any included images should be placed in the `docs/images` folder. ### Document Entry in `docs/config.json` See other entries in the [`config`](https://github.com/exercism/python/blob/main/docs/config.json) doc. Note that each document displayed on the website needs: - [ ] A valid UUID v4 number. You can use `configlet`, or [this generator](https://www.uuidgenerator.net/version4). - [ ] A slug for the document name - [ ] A path (_typically `docs/<DOCUMENTNAME>.md`_) - [ ] A title - [ ] A blurb of less than 350 characters, describing what the document is/explains. <br> ## Implementation Notes - Tone should be friendly and semi-formal. See other track docs for a general guide. - Our markdown and JSON files are checked against [prettier](https://prettier.io/). We recommend [setting prettier up locally](https://prettier.io/docs/en/install.html) and running it prior to submitting your PR to avoid any CI errors. <br> ## Help If you have any questions while implementing this issue, please post the questions as comments in this issue, or contact one of the maintainers on our Slack channel.
non_defect
test driven development tdd this issue describes how to implement the test driven development tdd python track docs getting started please please please read the docs before starting posting prs without reading these docs will be a lot more frustrating for you during the review cycle and exhaust exercism s maintainers time so before diving into the implementation please read up on the following documents goal this document is intended to explain basic test driven development tdd practices philosophies and how exercism adapts those practices it should orient students to a tdd mindset for iterating on and solving exercism problems on the python track learning objectives understand what test driven development tdd is and how it can inform and improve the process of coding understand how exercism uses tdd principals in the design of its concept and practice exercises understand the role that tests and test failure play in working through the track challenges understand the role mentorship and iteration play in refining and improving problem solutions provide some challenges or next steps students can take in their coding practice e g writing further tests to extend or adapt a problem or problem solution pring improvements to existing test cases expanding a challenge with different approaches or techniques out of scope this is a fairly high level and broad document emphasizing a programming approach or philosophy the intent here is to explain why exercises are structured the way they are on the python track minimal stubs viewable tests access to mentoring and community solutions encouragement for iterating on solutions and provide resources for students to explore and adapt the tdd technique more fully rather than set up a programming problem or tutorial no topic is really out of scope as long as it applies to tdd refactoring iterating on solutions reading test files and writing tests concepts proposed concepts to cover test driven development tdd tdd as design unit 
tests testing test failure as a coding guide refactoring as a coding discipline test writing prerequisites since this is a broad track focused document there really are no prerequisites resources to refer to articles on test driven development replies on tdd made to students in the python repo below are some replies made to students in the python repo when they ve had issues with having to read the test files feel free to copy paraphrase or otherwise use them to explain the python track s tdd approach however the directions were never intended to specify the detail that is in the attached test file we we practice a form of test driven development tdd for short on exercism where we provide the main tests for you rather than have you write the tests yourself practice has you write tests before code to allow test failure to guide the implementation so we expect students to look at the to figure out what the expected inputs and outputs of their functions should be for we give very minimal stubs the ethos behind this is that practice exercises are implemented as a form of where we ve written out the core tests for you ahead of time you should be looking at the tests to get a feel for what is expected and you should let test failure guide what you write in code practice exercises in contrast to the learning or are meant to encourage practice with multiple techniques in the language and or algorithms or design decisions implementation is much less directed problem descriptions are less detailed and are mostly shared across tracks and the test files tend to look for outcomes and shy away from dictating data structures where that s possible often students will discuss with mentors different approaches and may even add their own additional test cases locally as they work through the problem we practice a form of test driven development tdd for short on exercism where we provide the main tests for you has you write tests before code to allow test failure to guide the 
implementation so we expect students to look at the to figure out what the expected inputs and outputs of their functions should be by design the practice exercises the exercises that are not in the syllabus tree as the main exercises in the boxes are implemented as a form of tdd but in exercism s case we ve written out the basic tests for you ahead of time tl dr the expectation is that you engage with and look at the tests in addition to the instructions specification that doesn t mean we can t do better on the instructions but it does mean that we will probably never get to the point of specifying everything in detail because we expect you to explore the test file practice exercises are intended to be as open ended as possible and to encourage discussion with a mentor that means we take pains to not dictate implementation now for oop exercises in oop supporting languages some exercises do indeed dictate a class and or method names but apart from importing expected class and or method names we try not to overly constrict a student to a particular interface we strive to have the tests look for results i don t care how you implemented garden i care that when i call garden plants alice i get back a list that has even the parameters listed in the stub are optional you could call them anything you like and for students many use a default argument of none rather than use it as positional only again this is meant to encourage a conversation with a mentor about different possible approaches or techniques stub code for practice exercises is kept to a minimum while we have been discussing if more detail is warranted we have been wary of over specifying or constricting implementation for now we are going with bare minimum sans docstrings or typehinting we might revisit that later this year speaking we’re using test driven development principles — although we’ve written the tests for you — and so part of the fun is reading the errors that crop up when the tests fail the pass 
statement is there to keep the empty “slug” file from throwing a syntaxerror before the tests can even run though there are many exercises iirc all have been implemented so you’ve got enough of a “slug” that the tests will run to completion but all tests will fail this is actually a big improvement over the situation with many other language tracks where no slug is provided at all from the pangram slug you need only run the tests to encounter the first failure which will be that false is not returned when “five boxing wizards ” is passed into your function you could respond by putting return false at the bottom of your function and you’ll pass that test but fail on “five quacking zephyrs and so on via this iterative method you quickly learn the outline of what your implementation needs to do at a minimum this is a helpful set of skills to acquire and i’m not sure that making the documentation more clear about the implementation as opposed to about the problem is really helping you with that it’s my hope that everyone starts to read the test suite and understand the constraints they’re trying to meet before they start implementing a solution as that’s exactly what you’d do when using tdd principles in a work environment files to be created track document since this is a track document it can be formatted in markdown any way the author desires however it should still conform to the and follow all the rules in the any links should be reference links and any included images should be placed in the docs images folder document entry in docs config json see other entries in the doc note that each document displayed on the website needs a valid uuid number you can use configlet or a slug for the document name a path typically docs md a title a blurb of less than characters describing what the document is explains implementation notes tone should be friendly and semi formal see other track docs for a general guide our markdown and json files are checked against we recommend 
and running it prior to submitting your pr to avoid any ci errors help if you have any questions while implementing this issue please post the questions as comments in this issue or contact one of the maintainers on our slack channel
0
94,200
8,475,728,907
IssuesEvent
2018-10-24 19:46:35
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed test: TestReportUsage
C-test-failure O-robot
The following tests appear to have failed on master (testrace): TestReportUsage You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestReportUsage). [#983796](https://teamcity.cockroachdb.com/viewLog.html?buildId=983796): ``` TestReportUsage ...vent_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:constraints: {"+zone=somestring,+somestring": 2, +somestring: 1} Options: User:root} I181024 15:39:44.366585 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:experimental_lease_preferences: [[+zone=somestring,+somestring], [+somestring]] Options: User:root} I181024 15:39:44.454695 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "create_table", target: 53, info: {TableName:somestring.public.somestring Statement:CREATE TABLE somestring.public.somestring (somestring INT, CONSTRAINT somestring CHECK (somestring > 1)) User:root} I181024 15:39:44.461760 56693 storage/replica_command.go:300 [n1,split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/53 [r21] I181024 15:39:44.553741 56230 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:39:44.557968 56230 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:39:44.816868 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:45.043238 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:45.428307 56607 sql/event_log.go:126 
[n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:45.738928 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.024382 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.261437 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.594008 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.859112 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:47.114751 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:47.384468 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:48.274928 55973 util/stop/stopper.go:537 quiescing; tasks left: 2 [async] closedts-rangefeed-subscriber 1 [async] transport racer I181024 15:39:48.275179 55902 kv/transport_race.go:113 transport race promotion: ran 32 iterations on up 
to 1300 requests I181024 15:39:48.275413 55973 util/stop/stopper.go:537 quiescing; tasks left: 2 [async] closedts-rangefeed-subscriber I181024 15:39:48.287432 55973 util/stop/stopper.go:537 quiescing; tasks left: 1 [async] closedts-rangefeed-subscriber TestReportUsage ...47 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 13, info: {Target:system.rangelog Config:{gc: {ttlseconds: 1}} Options: User:root} I181024 15:26:44.141643 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:num_replicas: 5 Options: User:root} I181024 15:26:44.158092 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:constraints: {"+zone=somestring,+somestring": 2, +somestring: 1} Options: User:root} I181024 15:26:44.172283 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:experimental_lease_preferences: [[+zone=somestring,+somestring], [+somestring]] Options: User:root} I181024 15:26:44.179979 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "create_table", target: 53, info: {TableName:somestring.public.somestring Statement:CREATE TABLE somestring.public.somestring (somestring INT, CONSTRAINT somestring CHECK (somestring > 1)) User:root} I181024 15:26:44.180729 54871 storage/replica_command.go:300 [n1,split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/53 [r21] I181024 15:26:44.186588 54432 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:26:44.186882 54432 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:26:44.226441 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: 
{SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.301060 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.346271 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.393495 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.422244 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.446609 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.474072 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.515750 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.564276 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.590240 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: 
"set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.654045 54085 util/stop/stopper.go:537 quiescing; tasks left: 2 [async] closedts-rangefeed-subscriber ``` Please assign, take a look and update the issue accordingly.
1.0
teamcity: failed test: TestReportUsage - The following tests appear to have failed on master (testrace): TestReportUsage You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestReportUsage). [#983796](https://teamcity.cockroachdb.com/viewLog.html?buildId=983796): ``` TestReportUsage ...vent_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:constraints: {"+zone=somestring,+somestring": 2, +somestring: 1} Options: User:root} I181024 15:39:44.366585 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:experimental_lease_preferences: [[+zone=somestring,+somestring], [+somestring]] Options: User:root} I181024 15:39:44.454695 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "create_table", target: 53, info: {TableName:somestring.public.somestring Statement:CREATE TABLE somestring.public.somestring (somestring INT, CONSTRAINT somestring CHECK (somestring > 1)) User:root} I181024 15:39:44.461760 56693 storage/replica_command.go:300 [n1,split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/53 [r21] I181024 15:39:44.553741 56230 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:39:44.557968 56230 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:39:44.816868 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:45.043238 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 
15:39:45.428307 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:45.738928 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.024382 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.261437 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.594008 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:46.859112 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:47.114751 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:47.384468 56607 sql/event_log.go:126 [n1,client=127.0.0.1:51144,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:39:48.274928 55973 util/stop/stopper.go:537 quiescing; tasks left: 2 [async] closedts-rangefeed-subscriber 1 [async] transport racer I181024 15:39:48.275179 55902 kv/transport_race.go:113 
transport race promotion: ran 32 iterations on up to 1300 requests I181024 15:39:48.275413 55973 util/stop/stopper.go:537 quiescing; tasks left: 2 [async] closedts-rangefeed-subscriber I181024 15:39:48.287432 55973 util/stop/stopper.go:537 quiescing; tasks left: 1 [async] closedts-rangefeed-subscriber TestReportUsage ...47 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 13, info: {Target:system.rangelog Config:{gc: {ttlseconds: 1}} Options: User:root} I181024 15:26:44.141643 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:num_replicas: 5 Options: User:root} I181024 15:26:44.158092 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:constraints: {"+zone=somestring,+somestring": 2, +somestring: 1} Options: User:root} I181024 15:26:44.172283 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_zone_config", target: 1, info: {Target:system Config:experimental_lease_preferences: [[+zone=somestring,+somestring], [+somestring]] Options: User:root} I181024 15:26:44.179979 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "create_table", target: 53, info: {TableName:somestring.public.somestring Statement:CREATE TABLE somestring.public.somestring (somestring INT, CONSTRAINT somestring CHECK (somestring > 1)) User:root} I181024 15:26:44.180729 54871 storage/replica_command.go:300 [n1,split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/53 [r21] I181024 15:26:44.186588 54432 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:26:44.186882 54432 storage/allocator_scorer.go:597 [n1,replicate,s1,r21/1:/{Table/53-Max}] nodeHasReplica(n1, [(n1,s1):1])=true I181024 15:26:44.226441 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] 
Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.301060 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.346271 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.393495 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.422244 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.446609 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.474072 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.515750 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.564276 54847 sql/event_log.go:126 [n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.590240 54847 sql/event_log.go:126 
[n1,client=127.0.0.1:47576,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.organization Value:'Cockroach Labs - Production Testing' User:root} I181024 15:26:44.654045 54085 util/stop/stopper.go:537 quiescing; tasks left: 2 [async] closedts-rangefeed-subscriber ``` Please assign, take a look and update the issue accordingly.
non_defect
teamcity failed test testreportusage the following tests appear to have failed on master testrace testreportusage you may want to check testreportusage vent log go event set zone config target info target system config constraints zone somestring somestring somestring options user root sql event log go event set zone config target info target system config experimental lease preferences options user root sql event log go event create table target info tablename somestring public somestring statement create table somestring public somestring somestring int constraint somestring check somestring user root storage replica command go initiating a split of this range at key table storage allocator scorer go nodehasreplica true storage allocator scorer go nodehasreplica true sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs 
production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root util stop stopper go quiescing tasks left closedts rangefeed subscriber transport racer kv transport race go transport race promotion ran iterations on up to requests util stop stopper go quiescing tasks left closedts rangefeed subscriber util stop stopper go quiescing tasks left closedts rangefeed subscriber testreportusage sql event log go event set zone config target info target system rangelog config gc ttlseconds options user root sql event log go event set zone config target info target system config num replicas options user root sql event log go event set zone config target info target system config constraints zone somestring somestring somestring options user root sql event log go event set zone config target info target system config experimental lease preferences options user root sql event log go event create table target info tablename somestring public somestring statement create table somestring public somestring somestring int constraint somestring check somestring user root storage replica command go initiating a split of this range at key table storage allocator scorer go nodehasreplica true storage allocator scorer go nodehasreplica true sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach 
labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root sql event log go event set cluster setting target info settingname cluster organization value cockroach labs production testing user root util stop stopper go quiescing tasks left closedts rangefeed subscriber please assign take a look and update the issue accordingly
0
81,978
15,646,479,237
IssuesEvent
2021-03-23 01:01:08
jgeraigery/activemq
https://api.github.com/repos/jgeraigery/activemq
opened
CVE-2020-13920 (Medium) detected in activemqactivemq-5.9.1
security vulnerability
## CVE-2020-13920 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activemqactivemq-5.9.1</b></p></summary> <p> <p>Mirror of Apache ActiveMQ</p> <p>Library home page: <a href=https://github.com/apache/activemq.git>https://github.com/apache/activemq.git</a></p> <p>Found in base branch: <b>ptc-5.9.1</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>activemq/activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagementContext.java</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache ActiveMQ uses LocateRegistry.createRegistry() to create the JMX RMI registry and binds the server to the "jmxrmi" entry. It is possible to connect to the registry without authentication and call the rebind method to rebind jmxrmi to something else. If an attacker creates another server to proxy the original, and bound that, he effectively becomes a man in the middle and is able to intercept the credentials when an user connects. Upgrade to Apache ActiveMQ 5.15.12. 
<p>Publish Date: 2020-09-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13920>CVE-2020-13920</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13920">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13920</a></p> <p>Release Date: 2020-10-20</p> <p>Fix Resolution: org.apache.activemq:activemq-broker:5.15.12;org.apache.activemq:activemq-all:5.15.12</p> </p> </details> <p></p>
True
CVE-2020-13920 (Medium) detected in activemqactivemq-5.9.1 - ## CVE-2020-13920 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activemqactivemq-5.9.1</b></p></summary> <p> <p>Mirror of Apache ActiveMQ</p> <p>Library home page: <a href=https://github.com/apache/activemq.git>https://github.com/apache/activemq.git</a></p> <p>Found in base branch: <b>ptc-5.9.1</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>activemq/activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagementContext.java</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache ActiveMQ uses LocateRegistry.createRegistry() to create the JMX RMI registry and binds the server to the "jmxrmi" entry. It is possible to connect to the registry without authentication and call the rebind method to rebind jmxrmi to something else. If an attacker creates another server to proxy the original, and bound that, he effectively becomes a man in the middle and is able to intercept the credentials when an user connects. Upgrade to Apache ActiveMQ 5.15.12. 
<p>Publish Date: 2020-09-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13920>CVE-2020-13920</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13920">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13920</a></p> <p>Release Date: 2020-10-20</p> <p>Fix Resolution: org.apache.activemq:activemq-broker:5.15.12;org.apache.activemq:activemq-all:5.15.12</p> </p> </details> <p></p>
non_defect
cve medium detected in activemqactivemq cve medium severity vulnerability vulnerable library activemqactivemq mirror of apache activemq library home page a href found in base branch ptc vulnerable source files activemq activemq broker src main java org apache activemq broker jmx managementcontext java vulnerability details apache activemq uses locateregistry createregistry to create the jmx rmi registry and binds the server to the jmxrmi entry it is possible to connect to the registry without authentication and call the rebind method to rebind jmxrmi to something else if an attacker creates another server to proxy the original and bound that he effectively becomes a man in the middle and is able to intercept the credentials when an user connects upgrade to apache activemq publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache activemq activemq broker org apache activemq activemq all
0
190,127
6,810,165,858
IssuesEvent
2017-11-05 01:55:41
Wuzzy2/MineClone2-Bugs
https://api.github.com/repos/Wuzzy2/MineClone2-Bugs
closed
Compass crash?
bug CRITICAL HIGH PRIORITY items
``` 2017-10-15 13:50:02: ERROR[Main]: ServerError: AsyncErr: environment_Step: Runtime error from mod 'mcl_compass' in callback environment_Step(): ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: attempt to perform arithmetic on field 'x' (a nil value) 2017-10-15 13:50:02: ERROR[Main]: stack traceback: 2017-10-15 13:50:02: ERROR[Main]: ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: in function <...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:13> 2017-10-15 13:50:02: ERROR[Main]: /usr/share/games/minetest/builtin/game/register.lua:412: in function </usr/share/games/minetest/builtin/game/register.lua:392> 2017-10-15 13:50:02: ERROR[Main]: stack traceback: ```
1.0
Compass crash? - ``` 2017-10-15 13:50:02: ERROR[Main]: ServerError: AsyncErr: environment_Step: Runtime error from mod 'mcl_compass' in callback environment_Step(): ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: attempt to perform arithmetic on field 'x' (a nil value) 2017-10-15 13:50:02: ERROR[Main]: stack traceback: 2017-10-15 13:50:02: ERROR[Main]: ...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:50: in function <...inetest/games/MineClone2/mods/ITEMS/mcl_compass/init.lua:13> 2017-10-15 13:50:02: ERROR[Main]: /usr/share/games/minetest/builtin/game/register.lua:412: in function </usr/share/games/minetest/builtin/game/register.lua:392> 2017-10-15 13:50:02: ERROR[Main]: stack traceback: ```
non_defect
compass crash error servererror asyncerr environment step runtime error from mod mcl compass in callback environment step inetest games mods items mcl compass init lua attempt to perform arithmetic on field x a nil value error stack traceback error inetest games mods items mcl compass init lua in function error usr share games minetest builtin game register lua in function error stack traceback
0
49,711
13,187,255,049
IssuesEvent
2020-08-13 02:50:15
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
hard-coded path for bzip2 in SimulatonFiltering.py (Trac #1913)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1913">https://code.icecube.wisc.edu/ticket/1913</a>, reported by juancarlos and owned by juancarlos</em></summary> <p> ```json { "status": "closed", "changetime": "2016-11-22T19:13:12", "description": "Line 381 has a hard-coded path to /usr/bin/bzip2. On some systems (Ubuntu) bzip2 is located in /bin/bzip2", "reporter": "juancarlos", "cc": "", "resolution": "fixed", "_ts": "1479841992979744", "component": "combo reconstruction", "summary": "hard-coded path for bzip2 in SimulatonFiltering.py", "priority": "major", "keywords": "filterscripts", "time": "2016-11-22T19:09:43", "milestone": "", "owner": "juancarlos", "type": "defect" } ``` </p> </details>
1.0
hard-coded path for bzip2 in SimulatonFiltering.py (Trac #1913) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1913">https://code.icecube.wisc.edu/ticket/1913</a>, reported by juancarlos and owned by juancarlos</em></summary> <p> ```json { "status": "closed", "changetime": "2016-11-22T19:13:12", "description": "Line 381 has a hard-coded path to /usr/bin/bzip2. On some systems (Ubuntu) bzip2 is located in /bin/bzip2", "reporter": "juancarlos", "cc": "", "resolution": "fixed", "_ts": "1479841992979744", "component": "combo reconstruction", "summary": "hard-coded path for bzip2 in SimulatonFiltering.py", "priority": "major", "keywords": "filterscripts", "time": "2016-11-22T19:09:43", "milestone": "", "owner": "juancarlos", "type": "defect" } ``` </p> </details>
defect
hard coded path for in simulatonfiltering py trac migrated from json status closed changetime description line has a hard coded path to usr bin on some systems ubuntu is located in bin reporter juancarlos cc resolution fixed ts component combo reconstruction summary hard coded path for in simulatonfiltering py priority major keywords filterscripts time milestone owner juancarlos type defect
1
67,913
21,316,629,046
IssuesEvent
2022-04-16 11:49:42
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Threads list empty
T-Defect
### Steps to reproduce 1. Enabled threads beta 2. start thread on 2 messages 3. go "back" from thread panel to threads panel ### Outcome #### What did you expect? list of threads #### What happened instead? - at first the panel was not just empty, but the element that is supposed to hold the list of threads `class="mx_AutoHideScrollbar mx_ScrollPanel mx_RoomView_messagePanel mx_GroupLayout mx_MessagePanel_narrow"` was not there at all(!) - then I tried switching back and forth to another room once. after a sec of being back in the original room, above element loaded, but was empty with the usual "this is where your threads would be if there were any" message - then eventually after dozens more seconds the threads loaded ### Operating system arch ### Application version Element Nightly version: 2022041601 Olm version: 3.2.8 ### How did you install the app? aur ### Homeserver 1.56 ### Will you send logs? No
1.0
Threads list empty - ### Steps to reproduce 1. Enabled threads beta 2. start thread on 2 messages 3. go "back" from thread panel to threads panel ### Outcome #### What did you expect? list of threads #### What happened instead? - at first the panel was not just empty, but the element that is supposed to hold the list of threads `class="mx_AutoHideScrollbar mx_ScrollPanel mx_RoomView_messagePanel mx_GroupLayout mx_MessagePanel_narrow"` was not there at all(!) - then I tried switching back and forth to another room once. after a sec of being back in the original room, above element loaded, but was empty with the usual "this is where your threads would be if there were any" message - then eventually after dozens more seconds the threads loaded ### Operating system arch ### Application version Element Nightly version: 2022041601 Olm version: 3.2.8 ### How did you install the app? aur ### Homeserver 1.56 ### Will you send logs? No
defect
threads list empty steps to reproduce enabled threads beta start thread on messages go back from thread panel to threads panel outcome what did you expect list of threads what happened instead at first the panel was not just empty but the element that is supposed to hold the list of threads class mx autohidescrollbar mx scrollpanel mx roomview messagepanel mx grouplayout mx messagepanel narrow was not there at all then i tried switching back and forth to another room once after a sec of being back in the original room above element loaded but was empty with the usual this is where your threads would be if there were any message then eventually after dozens more seconds the threads loaded operating system arch application version element nightly version olm version how did you install the app aur homeserver will you send logs no
1
81,220
30,756,637,666
IssuesEvent
2023-07-29 06:02:25
colour-science/colour
https://api.github.com/repos/colour-science/colour
closed
[BUG]: `colour.XYZ_to_Luv` definition raises an exception when passed a [1, 1, 3] array.
Defect API Minor
### Description Hello, I noticed an odd behaviour with the `XYZ_to_Luv` function. (see reproduction code) Now using ` numpy.full([2,2,3], ...` instead will not produce any error. As I was using this function in the context of plotting with the CIE Diagrams I cannot use the same array for all the diagrams, though this is a minor annoyance. Cheers. Liam. ### Code for Reproduction ```python import numpy import colour image = numpy.full([1,1,3], [0.33, 0.6, 0.7]) result = colour.XYZ_to_Luv(image) ``` ### Exception Message ```shell File "...", line 141, in XYZ_to_Luv Luv = tstack([L, u, v]) File "...", line 2033, in tstack a = as_array(a, dtype) File "...", line 564, in as_array return np.asarray(a, dtype) ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimension. The detected shape was (3,) + inhomogeneous part. ``` ### Environment Information ```shell =============================================================================== * * * Interpreter : * * python : 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC * * v.1929 64 bit (AMD64)] * * * * colour-science.org : * * colour : 0.4.2 * * * * Runtime : * * imageio : 2.31.1 * * matplotlib : 3.7.2 * * numpy : 1.25.1 * * pandas : 2.0.3 * * scipy : 1.11.1 * * * =============================================================================== defaultdict(<class 'dict'>, {'Interpreter': {'python': '3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)]'}, 'colour-science.org': {'colour': '0.4.2'}, 'Runtime': {'imageio': '2.31.1', 'matplotlib': '3.7.2', 'numpy': '1.25.1', 'pandas': '2.0.3', 'scipy': '1.11.1'}}) ```
1.0
[BUG]: `colour.XYZ_to_Luv` definition raises an exception when passed a [1, 1, 3] array. - ### Description Hello, I noticed an odd behaviour with the `XYZ_to_Luv` function. (see reproduction code) Now using ` numpy.full([2,2,3], ...` instead will not produce any error. As I was using this function in the context of plotting with the CIE Diagrams I cannot use the same array for all the diagrams, though this is a minor annoyance. Cheers. Liam. ### Code for Reproduction ```python import numpy import colour image = numpy.full([1,1,3], [0.33, 0.6, 0.7]) result = colour.XYZ_to_Luv(image) ``` ### Exception Message ```shell File "...", line 141, in XYZ_to_Luv Luv = tstack([L, u, v]) File "...", line 2033, in tstack a = as_array(a, dtype) File "...", line 564, in as_array return np.asarray(a, dtype) ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimension. The detected shape was (3,) + inhomogeneous part. ``` ### Environment Information ```shell =============================================================================== * * * Interpreter : * * python : 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC * * v.1929 64 bit (AMD64)] * * * * colour-science.org : * * colour : 0.4.2 * * * * Runtime : * * imageio : 2.31.1 * * matplotlib : 3.7.2 * * numpy : 1.25.1 * * pandas : 2.0.3 * * scipy : 1.11.1 * * * =============================================================================== defaultdict(<class 'dict'>, {'Interpreter': {'python': '3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)]'}, 'colour-science.org': {'colour': '0.4.2'}, 'Runtime': {'imageio': '2.31.1', 'matplotlib': '3.7.2', 'numpy': '1.25.1', 'pandas': '2.0.3', 'scipy': '1.11.1'}}) ```
defect
colour xyz to luv definition raises an exception when passed a array description hello i noticed an odd behaviour with the xyz to luv function see reproduction code now using numpy full instead will not produce any error as i was using this function in the context of plotting with the cie diagrams i cannot use the same array for all the diagrams though this is a minor annoyance cheers liam code for reproduction python import numpy import colour image numpy full result colour xyz to luv image exception message shell file line in xyz to luv luv tstack file line in tstack a as array a dtype file line in as array return np asarray a dtype valueerror setting an array element with a sequence the requested array has an inhomogeneous shape after dimension the detected shape was inhomogeneous part environment information shell interpreter python tags may msc v bit colour science org colour runtime imageio matplotlib numpy pandas scipy defaultdict interpreter python tags may colour science org colour runtime imageio matplotlib numpy pandas scipy
1
49,340
6,022,874,560
IssuesEvent
2017-06-07 22:10:34
docker/swarmkit
https://api.github.com/repos/docker/swarmkit
closed
Flaky test: TestRootRotationReconciliationNoChanges
area/tests
I saw this failure in CI (https://circleci.com/gh/docker/swarmkit/7002). It does not seem to be related to the changes in that PR. ``` --- FAIL: TestRootRotationReconciliationNoChanges (3.05s) Error Trace: server_test.go:1108 Error: Not equal: []byte{0xa, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x42, 0x45, 0x47, 0x49, 0x4e, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa, 0x4d, 0x48, 0x63, 0x43, 0x41, 0x51, 0x45, 0x45, 0x49, 0x4b, 0x58, 0x6b, 0x76, 0x46, 0x66, 0x55, 0x63, 0x56, 0x62, 0x48, 0x39, 0x55, 0x71, 0x78, 0x6b, 0x64, 0x6f, 0x34, 0x4f, 0x62, 0x77, 0x63, 0x33, 0x52, 0x53, 0x4a, 0x66, 0x45, 0x48, 0x32, 0x32, 0x35, 0x34, 0x73, 0x66, 0x71, 0x6b, 0x78, 0x35, 0x30, 0x78, 0x42, 0x6f, 0x41, 0x6f, 0x47, 0x43, 0x43, 0x71, 0x47, 0x53, 0x4d, 0x34, 0x39, 0xa, 0x41, 0x77, 0x45, 0x48, 0x6f, 0x55, 0x51, 0x44, 0x51, 0x67, 0x41, 0x45, 0x65, 0x62, 0x4a, 0x2b, 0x41, 0x55, 0x6b, 0x75, 0x37, 0x33, 0x67, 0x6a, 0x49, 0x39, 0x68, 0x35, 0x69, 0x2f, 0x2b, 0x56, 0x6f, 0x4e, 0x52, 0x37, 0x70, 0x78, 0x64, 0x78, 0x6c, 0x5a, 0x6b, 0x76, 0x72, 0x5a, 0x31, 0x62, 0x65, 0x32, 0x62, 0x72, 0x56, 0x68, 0x51, 0x4e, 0x46, 0x42, 0x76, 0x6e, 0x76, 0x4d, 0x6e, 0x47, 0xa, 0x39, 0x6f, 0x68, 0x34, 0x65, 0x78, 0x33, 0x73, 0x70, 0x62, 0x52, 0x6a, 0x42, 0x41, 0x52, 0x73, 0x35, 0x36, 0x48, 0x5a, 0x43, 0x6a, 0x52, 0x52, 0x70, 0x76, 0x6a, 0x36, 0x6d, 0x42, 0x4f, 0x79, 0x4a, 0x51, 0x3d, 0x3d, 0xa, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x45, 0x4e, 0x44, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa} (expected) != []byte{0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x42, 0x45, 0x47, 0x49, 0x4e, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa, 0x4d, 0x48, 0x63, 0x43, 0x41, 0x51, 0x45, 0x45, 0x49, 0x50, 0x76, 0x32, 0x71, 0x63, 0x46, 0x6f, 0x4f, 0x79, 0x34, 0x65, 0x6e, 0x4a, 0x59, 0x2b, 0x45, 0x39, 0x7a, 0x62, 0x62, 0x7a, 
0x63, 0x4b, 0x72, 0x5a, 0x4d, 0x6c, 0x73, 0x34, 0x43, 0x44, 0x48, 0x77, 0x62, 0x6e, 0x2f, 0x38, 0x49, 0x74, 0x61, 0x63, 0x79, 0x73, 0x6f, 0x41, 0x6f, 0x47, 0x43, 0x43, 0x71, 0x47, 0x53, 0x4d, 0x34, 0x39, 0xa, 0x41, 0x77, 0x45, 0x48, 0x6f, 0x55, 0x51, 0x44, 0x51, 0x67, 0x41, 0x45, 0x48, 0x54, 0x34, 0x63, 0x6e, 0x55, 0x4d, 0x44, 0x58, 0x7a, 0x33, 0x58, 0x6b, 0x68, 0x37, 0x6c, 0x57, 0x68, 0x49, 0x2b, 0x53, 0x47, 0x56, 0x57, 0x36, 0x4b, 0x41, 0x74, 0x6d, 0x7a, 0x64, 0x76, 0x32, 0x6a, 0x42, 0x6d, 0x42, 0x57, 0x6d, 0x5a, 0x52, 0x32, 0x34, 0x4f, 0x44, 0x77, 0x38, 0x4c, 0x6e, 0x35, 0x44, 0x4b, 0xa, 0x48, 0x56, 0x43, 0x5a, 0x32, 0x67, 0x6d, 0x4c, 0x73, 0x36, 0x67, 0x69, 0x4b, 0x48, 0x73, 0x31, 0x69, 0x31, 0x6a, 0x36, 0x56, 0x49, 0x69, 0x55, 0x62, 0x4b, 0x30, 0x49, 0x31, 0x79, 0x55, 0x45, 0x47, 0x77, 0x3d, 0x3d, 0xa, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x45, 0x4e, 0x44, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa} (actual) Diff: --- Expected +++ Actual @@ -1,17 +1,17 @@ -([]uint8) (len=228 cap=228) { - 00000000 0a 2d 2d 2d 2d 2d 42 45 47 49 4e 20 45 43 20 50 |.-----BEGIN EC P| - 00000010 52 49 56 41 54 45 20 4b 45 59 2d 2d 2d 2d 2d 0a |RIVATE KEY-----.| - 00000020 4d 48 63 43 41 51 45 45 49 4b 58 6b 76 46 66 55 |MHcCAQEEIKXkvFfU| - 00000030 63 56 62 48 39 55 71 78 6b 64 6f 34 4f 62 77 63 |cVbH9Uqxkdo4Obwc| - 00000040 33 52 53 4a 66 45 48 32 32 35 34 73 66 71 6b 78 |3RSJfEH2254sfqkx| - 00000050 35 30 78 42 6f 41 6f 47 43 43 71 47 53 4d 34 39 |50xBoAoGCCqGSM49| - 00000060 0a 41 77 45 48 6f 55 51 44 51 67 41 45 65 62 4a |.AwEHoUQDQgAEebJ| - 00000070 2b 41 55 6b 75 37 33 67 6a 49 39 68 35 69 2f 2b |+AUku73gjI9h5i/+| - 00000080 56 6f 4e 52 37 70 78 64 78 6c 5a 6b 76 72 5a 31 |VoNR7pxdxlZkvrZ1| - 00000090 62 65 32 62 72 56 68 51 4e 46 42 76 6e 76 4d 6e |be2brVhQNFBvnvMn| - 000000a0 47 0a 39 6f 68 34 65 78 33 73 70 62 52 6a 42 41 |G.9oh4ex3spbRjBA| - 000000b0 52 73 35 36 48 5a 43 6a 52 52 70 76 6a 36 6d 42 
|Rs56HZCjRRpvj6mB| - 000000c0 4f 79 4a 51 3d 3d 0a 2d 2d 2d 2d 2d 45 4e 44 20 |OyJQ==.-----END | - 000000d0 45 43 20 50 52 49 56 41 54 45 20 4b 45 59 2d 2d |EC PRIVATE KEY--| - 000000e0 2d 2d 2d 0a |---.| +([]uint8) (len=227 cap=227) { + 00000000 2d 2d 2d 2d 2d 42 45 47 49 4e 20 45 43 20 50 52 |-----BEGIN EC PR| + 00000010 49 56 41 54 45 20 4b 45 59 2d 2d 2d 2d 2d 0a 4d |IVATE KEY-----.M| + 00000020 48 63 43 41 51 45 45 49 50 76 32 71 63 46 6f 4f |HcCAQEEIPv2qcFoO| + 00000030 79 34 65 6e 4a 59 2b 45 39 7a 62 62 7a 63 4b 72 |y4enJY+E9zbbzcKr| + 00000040 5a 4d 6c 73 34 43 44 48 77 62 6e 2f 38 49 74 61 |ZMls4CDHwbn/8Ita| + 00000050 63 79 73 6f 41 6f 47 43 43 71 47 53 4d 34 39 0a |cysoAoGCCqGSM49.| + 00000060 41 77 45 48 6f 55 51 44 51 67 41 45 48 54 34 63 |AwEHoUQDQgAEHT4c| + 00000070 6e 55 4d 44 58 7a 33 58 6b 68 37 6c 57 68 49 2b |nUMDXz3Xkh7lWhI+| + 00000080 53 47 56 57 36 4b 41 74 6d 7a 64 76 32 6a 42 6d |SGVW6KAtmzdv2jBm| + 00000090 42 57 6d 5a 52 32 34 4f 44 77 38 4c 6e 35 44 4b |BWmZR24ODw8Ln5DK| + 000000a0 0a 48 56 43 5a 32 67 6d 4c 73 36 67 69 4b 48 73 |.HVCZ2gmLs6giKHs| + 000000b0 31 69 31 6a 36 56 49 69 55 62 4b 30 49 31 79 55 |1i1j6VIiUbK0I1yU| + 000000c0 45 47 77 3d 3d 0a 2d 2d 2d 2d 2d 45 4e 44 20 45 |EGw==.-----END E| + 000000d0 43 20 50 52 49 56 41 54 45 20 4b 45 59 2d 2d 2d |C PRIVATE KEY---| + 000000e0 2d 2d 0a |--.| } Messages: Nodes already in rotate state, even if they currently have the correct TLS issuer, will be left in the rotate state even if root rotation is aborted because we don't know if they're already in the process of getting a new cert. Even if they're issued by a different issuer, they will be left alone because they'll have an interemdiate that chains up to the old issuer. ```
1.0
Flaky test: TestRootRotationReconciliationNoChanges - I saw this failure in CI (https://circleci.com/gh/docker/swarmkit/7002). It does not seem to be related to the changes in that PR. ``` --- FAIL: TestRootRotationReconciliationNoChanges (3.05s) Error Trace: server_test.go:1108 Error: Not equal: []byte{0xa, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x42, 0x45, 0x47, 0x49, 0x4e, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa, 0x4d, 0x48, 0x63, 0x43, 0x41, 0x51, 0x45, 0x45, 0x49, 0x4b, 0x58, 0x6b, 0x76, 0x46, 0x66, 0x55, 0x63, 0x56, 0x62, 0x48, 0x39, 0x55, 0x71, 0x78, 0x6b, 0x64, 0x6f, 0x34, 0x4f, 0x62, 0x77, 0x63, 0x33, 0x52, 0x53, 0x4a, 0x66, 0x45, 0x48, 0x32, 0x32, 0x35, 0x34, 0x73, 0x66, 0x71, 0x6b, 0x78, 0x35, 0x30, 0x78, 0x42, 0x6f, 0x41, 0x6f, 0x47, 0x43, 0x43, 0x71, 0x47, 0x53, 0x4d, 0x34, 0x39, 0xa, 0x41, 0x77, 0x45, 0x48, 0x6f, 0x55, 0x51, 0x44, 0x51, 0x67, 0x41, 0x45, 0x65, 0x62, 0x4a, 0x2b, 0x41, 0x55, 0x6b, 0x75, 0x37, 0x33, 0x67, 0x6a, 0x49, 0x39, 0x68, 0x35, 0x69, 0x2f, 0x2b, 0x56, 0x6f, 0x4e, 0x52, 0x37, 0x70, 0x78, 0x64, 0x78, 0x6c, 0x5a, 0x6b, 0x76, 0x72, 0x5a, 0x31, 0x62, 0x65, 0x32, 0x62, 0x72, 0x56, 0x68, 0x51, 0x4e, 0x46, 0x42, 0x76, 0x6e, 0x76, 0x4d, 0x6e, 0x47, 0xa, 0x39, 0x6f, 0x68, 0x34, 0x65, 0x78, 0x33, 0x73, 0x70, 0x62, 0x52, 0x6a, 0x42, 0x41, 0x52, 0x73, 0x35, 0x36, 0x48, 0x5a, 0x43, 0x6a, 0x52, 0x52, 0x70, 0x76, 0x6a, 0x36, 0x6d, 0x42, 0x4f, 0x79, 0x4a, 0x51, 0x3d, 0x3d, 0xa, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x45, 0x4e, 0x44, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa} (expected) != []byte{0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x42, 0x45, 0x47, 0x49, 0x4e, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa, 0x4d, 0x48, 0x63, 0x43, 0x41, 0x51, 0x45, 0x45, 0x49, 0x50, 0x76, 0x32, 0x71, 0x63, 0x46, 0x6f, 0x4f, 0x79, 0x34, 0x65, 0x6e, 
0x4a, 0x59, 0x2b, 0x45, 0x39, 0x7a, 0x62, 0x62, 0x7a, 0x63, 0x4b, 0x72, 0x5a, 0x4d, 0x6c, 0x73, 0x34, 0x43, 0x44, 0x48, 0x77, 0x62, 0x6e, 0x2f, 0x38, 0x49, 0x74, 0x61, 0x63, 0x79, 0x73, 0x6f, 0x41, 0x6f, 0x47, 0x43, 0x43, 0x71, 0x47, 0x53, 0x4d, 0x34, 0x39, 0xa, 0x41, 0x77, 0x45, 0x48, 0x6f, 0x55, 0x51, 0x44, 0x51, 0x67, 0x41, 0x45, 0x48, 0x54, 0x34, 0x63, 0x6e, 0x55, 0x4d, 0x44, 0x58, 0x7a, 0x33, 0x58, 0x6b, 0x68, 0x37, 0x6c, 0x57, 0x68, 0x49, 0x2b, 0x53, 0x47, 0x56, 0x57, 0x36, 0x4b, 0x41, 0x74, 0x6d, 0x7a, 0x64, 0x76, 0x32, 0x6a, 0x42, 0x6d, 0x42, 0x57, 0x6d, 0x5a, 0x52, 0x32, 0x34, 0x4f, 0x44, 0x77, 0x38, 0x4c, 0x6e, 0x35, 0x44, 0x4b, 0xa, 0x48, 0x56, 0x43, 0x5a, 0x32, 0x67, 0x6d, 0x4c, 0x73, 0x36, 0x67, 0x69, 0x4b, 0x48, 0x73, 0x31, 0x69, 0x31, 0x6a, 0x36, 0x56, 0x49, 0x69, 0x55, 0x62, 0x4b, 0x30, 0x49, 0x31, 0x79, 0x55, 0x45, 0x47, 0x77, 0x3d, 0x3d, 0xa, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0x45, 0x4e, 0x44, 0x20, 0x45, 0x43, 0x20, 0x50, 0x52, 0x49, 0x56, 0x41, 0x54, 0x45, 0x20, 0x4b, 0x45, 0x59, 0x2d, 0x2d, 0x2d, 0x2d, 0x2d, 0xa} (actual) Diff: --- Expected +++ Actual @@ -1,17 +1,17 @@ -([]uint8) (len=228 cap=228) { - 00000000 0a 2d 2d 2d 2d 2d 42 45 47 49 4e 20 45 43 20 50 |.-----BEGIN EC P| - 00000010 52 49 56 41 54 45 20 4b 45 59 2d 2d 2d 2d 2d 0a |RIVATE KEY-----.| - 00000020 4d 48 63 43 41 51 45 45 49 4b 58 6b 76 46 66 55 |MHcCAQEEIKXkvFfU| - 00000030 63 56 62 48 39 55 71 78 6b 64 6f 34 4f 62 77 63 |cVbH9Uqxkdo4Obwc| - 00000040 33 52 53 4a 66 45 48 32 32 35 34 73 66 71 6b 78 |3RSJfEH2254sfqkx| - 00000050 35 30 78 42 6f 41 6f 47 43 43 71 47 53 4d 34 39 |50xBoAoGCCqGSM49| - 00000060 0a 41 77 45 48 6f 55 51 44 51 67 41 45 65 62 4a |.AwEHoUQDQgAEebJ| - 00000070 2b 41 55 6b 75 37 33 67 6a 49 39 68 35 69 2f 2b |+AUku73gjI9h5i/+| - 00000080 56 6f 4e 52 37 70 78 64 78 6c 5a 6b 76 72 5a 31 |VoNR7pxdxlZkvrZ1| - 00000090 62 65 32 62 72 56 68 51 4e 46 42 76 6e 76 4d 6e |be2brVhQNFBvnvMn| - 000000a0 47 0a 39 6f 68 34 65 78 33 73 70 62 52 6a 42 41 |G.9oh4ex3spbRjBA| - 
000000b0 52 73 35 36 48 5a 43 6a 52 52 70 76 6a 36 6d 42 |Rs56HZCjRRpvj6mB| - 000000c0 4f 79 4a 51 3d 3d 0a 2d 2d 2d 2d 2d 45 4e 44 20 |OyJQ==.-----END | - 000000d0 45 43 20 50 52 49 56 41 54 45 20 4b 45 59 2d 2d |EC PRIVATE KEY--| - 000000e0 2d 2d 2d 0a |---.| +([]uint8) (len=227 cap=227) { + 00000000 2d 2d 2d 2d 2d 42 45 47 49 4e 20 45 43 20 50 52 |-----BEGIN EC PR| + 00000010 49 56 41 54 45 20 4b 45 59 2d 2d 2d 2d 2d 0a 4d |IVATE KEY-----.M| + 00000020 48 63 43 41 51 45 45 49 50 76 32 71 63 46 6f 4f |HcCAQEEIPv2qcFoO| + 00000030 79 34 65 6e 4a 59 2b 45 39 7a 62 62 7a 63 4b 72 |y4enJY+E9zbbzcKr| + 00000040 5a 4d 6c 73 34 43 44 48 77 62 6e 2f 38 49 74 61 |ZMls4CDHwbn/8Ita| + 00000050 63 79 73 6f 41 6f 47 43 43 71 47 53 4d 34 39 0a |cysoAoGCCqGSM49.| + 00000060 41 77 45 48 6f 55 51 44 51 67 41 45 48 54 34 63 |AwEHoUQDQgAEHT4c| + 00000070 6e 55 4d 44 58 7a 33 58 6b 68 37 6c 57 68 49 2b |nUMDXz3Xkh7lWhI+| + 00000080 53 47 56 57 36 4b 41 74 6d 7a 64 76 32 6a 42 6d |SGVW6KAtmzdv2jBm| + 00000090 42 57 6d 5a 52 32 34 4f 44 77 38 4c 6e 35 44 4b |BWmZR24ODw8Ln5DK| + 000000a0 0a 48 56 43 5a 32 67 6d 4c 73 36 67 69 4b 48 73 |.HVCZ2gmLs6giKHs| + 000000b0 31 69 31 6a 36 56 49 69 55 62 4b 30 49 31 79 55 |1i1j6VIiUbK0I1yU| + 000000c0 45 47 77 3d 3d 0a 2d 2d 2d 2d 2d 45 4e 44 20 45 |EGw==.-----END E| + 000000d0 43 20 50 52 49 56 41 54 45 20 4b 45 59 2d 2d 2d |C PRIVATE KEY---| + 000000e0 2d 2d 0a |--.| } Messages: Nodes already in rotate state, even if they currently have the correct TLS issuer, will be left in the rotate state even if root rotation is aborted because we don't know if they're already in the process of getting a new cert. Even if they're issued by a different issuer, they will be left alone because they'll have an interemdiate that chains up to the old issuer. ```
non_defect
flaky test testrootrotationreconciliationnochanges i saw this failure in ci it does not seem to be related to the changes in that pr fail testrootrotationreconciliationnochanges error trace server test go error not equal byte expected byte actual diff expected actual len cap begin ec p rivate key mhccaqeeikxkvffu awehouqdqgaeebj g oyjq end ec private key len cap begin ec pr ivate key m egw end e c private key messages nodes already in rotate state even if they currently have the correct tls issuer will be left in the rotate state even if root rotation is aborted because we don t know if they re already in the process of getting a new cert even if they re issued by a different issuer they will be left alone because they ll have an interemdiate that chains up to the old issuer
0
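One visible difference in the diff above is that the expected blob starts with a stray 0x0a byte (len=228 vs len=227), so the byte-for-byte comparison fails on framing alone, before the differing key material is even reached. A minimal sketch (plain Python, not swarmkit test code; the PEM bytes are placeholders, not the keys from the log) of how a single leading newline breaks `bytes` equality and how normalizing leading whitespace aligns the framing:

```python
# Illustration only: placeholder PEM content, not the keys from the failure.
expected = b"\n-----BEGIN EC PRIVATE KEY-----\nMHcCAQEE...\n-----END EC PRIVATE KEY-----\n"
actual = expected.lstrip(b"\n")

# A raw comparison fails on the single leading 0x0a byte alone.
assert expected != actual
assert len(expected) - len(actual) == 1

# Stripping leading whitespace from both sides makes the framing identical.
assert expected.lstrip() == actual.lstrip()
```

Note this only accounts for the framing mismatch; in the actual failure the base64 key material also differs, which is what the test's Messages block is about.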
63,133
17,391,172,317
IssuesEvent
2021-08-02 07:35:03
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
UI events have no apparent effect, app window doesn't change
A-Electron P-low S-Critical T-Defect X-Needs-Info Z-Platform-Specific
### Description Desktop app crashes on startup. ### Steps to reproduce ``` $ element-desktop &/home/user/.config/Element exists: yes /home/user/.config/Riot exists: no Starting auto update with base URL: https://packages.riot.im/desktop/update/ Auto update not supported on this platform Segmentation fault $ ``` Strace says it crashes after a series of calls to recvmsg; one returns 32 bytes, then four fail with EAGAIN. Gdb says SIGSEGV in thread 1 at 0x000055555a03b652, which points into /usr/bin/element-desktop, v1.7.21 from packages.riot.im/debian/. So, probably not a dependency. Gdb disas says "no function contains program counter for selected frame." It dies with 40 threads running. It pops up a window momentarily before crashing, but has not cleared it before the SIGSEGV. Note that this used to work, in this environment. I.e., it started crashing without an upgrade. Upgrading did not fix it. ### Version information Desktop, amd64, Debian 11, partly sid. 1.7.21 from packages.riot.im/debian/ A VM running under Qubes, with no access to a GPU, so wholly unaccelerated X rendering. ``` Thread 1 "element-desktop" received signal SIGSEGV, Segmentation fault. 0x000055555a03b652 in ?? () (gdb) i thr Id Target Id Frame * 1 Thread 0x7ffff3de1340 (LWP 3631) "element-desktop" 0x000055555a03b652 in ?? 
() 2 Thread 0x7ffff30e1700 (LWP 3635) "sandbox_ipc_thr" 0x00007ffff57314bf in __GI___poll (fds=0x7ffff30e06f0, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 3 Thread 0x7ffff28e0700 (LWP 3641) "element-desktop" 0x00007ffff57092c7 in __GI___wait4 (pid=3638, stat_loc=0x7ffff28df824, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27 4 Thread 0x7ffff20df700 (LWP 3642) "ThreadPoolServi" 0x00007ffff573c1d6 in epoll_wait (epfd=14, events=0x1e6e27c61780, maxevents=32, timeout=45000) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 5 Thread 0x7ffff18de700 (LWP 3643) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7ffff18dd570, clockid=-242363152, expected=0, futex_word=0x7ffff18dd6c8) at ../sysdeps/nptl/futex-internal.h:320 6 Thread 0x7ffff10dd700 (LWP 3644) "Chrome_IOThread" 0x00007ffff5731dd7 in fallocate64 (fd=127, mode=0, offset=0, len=1048576) at ../sysdeps/unix/sysv/linux/fallocate64.c:27 7 Thread 0x7ffff08dc700 (LWP 3645) "MemoryInfra" futex_wait_cancelable (private=0, expected=0, futex_word=0x7ffff08db628) at ../sysdeps/nptl/futex-internal.h:183 8 Thread 0x7ffff002f700 (LWP 3646) "element-desktop" 0x00007ffff573c1d6 in epoll_wait (epfd=36, events=0x7ffff002b6f0, maxevents=1024, timeout=-1) --Type <RET> for more, q to quit, c to continue without paging-- at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 9 Thread 0x7fffef82e700 (LWP 3647) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e27ce8d6c) at ../sysdeps/nptl/futex-internal.h:183 10 Thread 0x7fffef02d700 (LWP 3648) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e27ce8d68) at ../sysdeps/nptl/futex-internal.h:183 11 Thread 0x7fffee82c700 (LWP 3649) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e27ce8d68) at ../sysdeps/nptl/futex-internal.h:183 12 Thread 0x7fffee02b700 (LWP 3650) "element-desktop" futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, 
futex_word=0x55555d36a8a8) at ../sysdeps/nptl/futex-internal.h:320 13 Thread 0x7fffecf6b700 (LWP 3655) "gmain" 0x00007ffff57314bf in __GI___poll (fds=0x1e6e283ef7e0, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 14 Thread 0x7fffec76a700 (LWP 3656) "gdbus" 0x00007ffff57314bf in __GI___poll (fds=0x1e6e2855f780, nfds=3, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 15 Thread 0x7fffebf69700 (LWP 3657) "Bluez D-Bus thr" 0x00007ffff573c1d6 in epoll_wait (epfd=49, events=0x1e6e28430d80, maxevents=32, timeout=24972) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 16 Thread 0x7fffede15700 (LWP 3658) "CrShutdownDetec" __libc_read ( --Type <RET> for more, q to quit, c to continue without paging-- nbytes=4, buf=0x7fffede146fc, fd=50) at ../sysdeps/unix/sysv/linux/read.c:26 17 Thread 0x7fffeb27e700 (LWP 3659) "dconf worker" 0x00007ffff57314bf in __GI___poll (fds=0x1e6e284be260, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 18 Thread 0x7fffeaa7d700 (LWP 3660) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffeaa7c570, clockid=-358103824, expected=0, futex_word=0x7fffeaa7c6c8) at ../sysdeps/nptl/futex-internal.h:320 19 Thread 0x7fffea27c700 (LWP 3661) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffea27b570, clockid=-366496528, expected=0, futex_word=0x7fffea27b6c8) at ../sysdeps/nptl/futex-internal.h:320 20 Thread 0x7fffe9a7b700 (LWP 3662) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffe9a7a570, clockid=-374889232, expected=0, futex_word=0x7fffe9a7a6c8) at ../sysdeps/nptl/futex-internal.h:320 21 Thread 0x7fffe927a700 (LWP 3663) "inotify_reader" 0x00007ffff5733973 in __GI___select (nfds=64, readfds=0x7fffe9279740, writefds=0x0, exceptfds=0x0, timeout=0x0) at ../sysdeps/unix/sysv/linux/select.c:41 22 Thread 0x7fffe8a79700 (LWP 3664) "CompositorTileW" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e2840dd08) at ../sysdeps/nptl/futex-internal.h:183 23 Thread 
0x7fffe8278700 (LWP 3665) "VideoCaptureThr" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe8277628) at ../sysdeps/nptl/futex-internal.h:183 --Type <RET> for more, q to quit, c to continue without paging-- 24 Thread 0x7fffe7a77700 (LWP 3666) "element-desktop" futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x1e6e27c45790) at ../sysdeps/nptl/futex-internal.h:320 25 Thread 0x7fffe6a3b700 (LWP 3668) "ThreadPoolSingl" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe6a3a6a8) at ../sysdeps/nptl/futex-internal.h:183 26 Thread 0x7fffe7256700 (LWP 3667) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffe7255570, clockid=-416983824, expected=0, futex_word=0x7fffe72556c8) at ../sysdeps/nptl/futex-internal.h:320 27 Thread 0x7fffe5a38700 (LWP 3670) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) at ../sysdeps/nptl/futex-internal.h:183 28 Thread 0x7fffe623a700 (LWP 3669) "gpu-process_cra" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe6239628) at ../sysdeps/nptl/futex-internal.h:183 29 Thread 0x7fffe5237700 (LWP 3671) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) at ../sysdeps/nptl/futex-internal.h:183 30 Thread 0x7fffe4a10700 (LWP 3673) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) at ../sysdeps/nptl/futex-internal.h:183 31 Thread 0x7fffe420f700 (LWP 3674) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) --Type <RET> for more, q to quit, c to continue without paging-- at ../sysdeps/nptl/futex-internal.h:183 32 Thread 0x7fffe3a0e700 (LWP 3676) "utility_crash_u" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe3a0d628) at ../sysdeps/nptl/futex-internal.h:183 33 Thread 0x7fffe3154700 (LWP 3678) "CacheThread_Blo" 0x00007ffff573c1d6 in epoll_wait (epfd=95, events=0x1e6e28862d80, 
maxevents=32, timeout=29995) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 34 Thread 0x7fffe2953700 (LWP 3679) "renderer_crash_" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe2952628) at ../sysdeps/nptl/futex-internal.h:183 35 Thread 0x7fffe20b1700 (LWP 3697) "pool-element-de" syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 36 Thread 0x7fffe18b0700 (LWP 3698) "pool-element-de" syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 37 Thread 0x7fffe10af700 (LWP 3699) "ThreadPoolSingl" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe10ae6a8) at ../sysdeps/nptl/futex-internal.h:183 38 Thread 0x7fffe03cb700 (LWP 3709) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffe03ca570, clockid=-532896528, expected=0, futex_word=0x7fffe03ca6c8) at ../sysdeps/nptl/futex-internal.h:320 39 Thread 0x7fffdf3c9700 (LWP 3711) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffdf3c8570, clockid=-549681936, expected=0, futex_word=0x7fffdf3c86c8) at ../sysdeps/nptl/futex-internal.h:320 --Type <RET> for more, q to quit, c to continue without paging-- 40 Thread 0x7fffdfbca700 (LWP 3710) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffdfbc9570, clockid=-541289232, expected=0, futex_word=0x7fffdfbc96c8) at ../sysdeps/nptl/futex-internal.h:320 (gdb) disas No function contains program counter for selected frame. ```
1.0
UI events have no apparent effect, app window doesn't change - ### Description Desktop app crashes on startup. ### Steps to reproduce ``` $ element-desktop &/home/user/.config/Element exists: yes /home/user/.config/Riot exists: no Starting auto update with base URL: https://packages.riot.im/desktop/update/ Auto update not supported on this platform Segmentation fault $ ``` Strace says it crashes after a series of calls to recvmsg; one returns 32 bytes, then four fail with EAGAIN. Gdb says SIGSEGV in thread 1 at 0x000055555a03b652, which points into /usr/bin/element-desktop, v1.7.21 from packages.riot.im/debian/. So, probably not a dependency. Gdb disas says "no function contains program counter for selected frame." It dies with 40 threads running. It pops up a window momentarily before crashing, but has not cleared it before the SIGSEGV. Note that this used to work, in this environment. I.e., it started crashing without an upgrade. Upgrading did not fix it. ### Version information Desktop, amd64, Debian 11, partly sid. 1.7.21 from packages.riot.im/debian/ A VM running under Qubes, with no access to a GPU, so wholly unaccelerated X rendering. ``` Thread 1 "element-desktop" received signal SIGSEGV, Segmentation fault. 0x000055555a03b652 in ?? () (gdb) i thr Id Target Id Frame * 1 Thread 0x7ffff3de1340 (LWP 3631) "element-desktop" 0x000055555a03b652 in ?? 
() 2 Thread 0x7ffff30e1700 (LWP 3635) "sandbox_ipc_thr" 0x00007ffff57314bf in __GI___poll (fds=0x7ffff30e06f0, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 3 Thread 0x7ffff28e0700 (LWP 3641) "element-desktop" 0x00007ffff57092c7 in __GI___wait4 (pid=3638, stat_loc=0x7ffff28df824, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27 4 Thread 0x7ffff20df700 (LWP 3642) "ThreadPoolServi" 0x00007ffff573c1d6 in epoll_wait (epfd=14, events=0x1e6e27c61780, maxevents=32, timeout=45000) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 5 Thread 0x7ffff18de700 (LWP 3643) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7ffff18dd570, clockid=-242363152, expected=0, futex_word=0x7ffff18dd6c8) at ../sysdeps/nptl/futex-internal.h:320 6 Thread 0x7ffff10dd700 (LWP 3644) "Chrome_IOThread" 0x00007ffff5731dd7 in fallocate64 (fd=127, mode=0, offset=0, len=1048576) at ../sysdeps/unix/sysv/linux/fallocate64.c:27 7 Thread 0x7ffff08dc700 (LWP 3645) "MemoryInfra" futex_wait_cancelable (private=0, expected=0, futex_word=0x7ffff08db628) at ../sysdeps/nptl/futex-internal.h:183 8 Thread 0x7ffff002f700 (LWP 3646) "element-desktop" 0x00007ffff573c1d6 in epoll_wait (epfd=36, events=0x7ffff002b6f0, maxevents=1024, timeout=-1) --Type <RET> for more, q to quit, c to continue without paging-- at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 9 Thread 0x7fffef82e700 (LWP 3647) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e27ce8d6c) at ../sysdeps/nptl/futex-internal.h:183 10 Thread 0x7fffef02d700 (LWP 3648) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e27ce8d68) at ../sysdeps/nptl/futex-internal.h:183 11 Thread 0x7fffee82c700 (LWP 3649) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e27ce8d68) at ../sysdeps/nptl/futex-internal.h:183 12 Thread 0x7fffee02b700 (LWP 3650) "element-desktop" futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, 
futex_word=0x55555d36a8a8) at ../sysdeps/nptl/futex-internal.h:320 13 Thread 0x7fffecf6b700 (LWP 3655) "gmain" 0x00007ffff57314bf in __GI___poll (fds=0x1e6e283ef7e0, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 14 Thread 0x7fffec76a700 (LWP 3656) "gdbus" 0x00007ffff57314bf in __GI___poll (fds=0x1e6e2855f780, nfds=3, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 15 Thread 0x7fffebf69700 (LWP 3657) "Bluez D-Bus thr" 0x00007ffff573c1d6 in epoll_wait (epfd=49, events=0x1e6e28430d80, maxevents=32, timeout=24972) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 16 Thread 0x7fffede15700 (LWP 3658) "CrShutdownDetec" __libc_read ( --Type <RET> for more, q to quit, c to continue without paging-- nbytes=4, buf=0x7fffede146fc, fd=50) at ../sysdeps/unix/sysv/linux/read.c:26 17 Thread 0x7fffeb27e700 (LWP 3659) "dconf worker" 0x00007ffff57314bf in __GI___poll (fds=0x1e6e284be260, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 18 Thread 0x7fffeaa7d700 (LWP 3660) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffeaa7c570, clockid=-358103824, expected=0, futex_word=0x7fffeaa7c6c8) at ../sysdeps/nptl/futex-internal.h:320 19 Thread 0x7fffea27c700 (LWP 3661) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffea27b570, clockid=-366496528, expected=0, futex_word=0x7fffea27b6c8) at ../sysdeps/nptl/futex-internal.h:320 20 Thread 0x7fffe9a7b700 (LWP 3662) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffe9a7a570, clockid=-374889232, expected=0, futex_word=0x7fffe9a7a6c8) at ../sysdeps/nptl/futex-internal.h:320 21 Thread 0x7fffe927a700 (LWP 3663) "inotify_reader" 0x00007ffff5733973 in __GI___select (nfds=64, readfds=0x7fffe9279740, writefds=0x0, exceptfds=0x0, timeout=0x0) at ../sysdeps/unix/sysv/linux/select.c:41 22 Thread 0x7fffe8a79700 (LWP 3664) "CompositorTileW" futex_wait_cancelable (private=0, expected=0, futex_word=0x1e6e2840dd08) at ../sysdeps/nptl/futex-internal.h:183 23 Thread 
0x7fffe8278700 (LWP 3665) "VideoCaptureThr" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe8277628) at ../sysdeps/nptl/futex-internal.h:183 --Type <RET> for more, q to quit, c to continue without paging-- 24 Thread 0x7fffe7a77700 (LWP 3666) "element-desktop" futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x1e6e27c45790) at ../sysdeps/nptl/futex-internal.h:320 25 Thread 0x7fffe6a3b700 (LWP 3668) "ThreadPoolSingl" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe6a3a6a8) at ../sysdeps/nptl/futex-internal.h:183 26 Thread 0x7fffe7256700 (LWP 3667) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffe7255570, clockid=-416983824, expected=0, futex_word=0x7fffe72556c8) at ../sysdeps/nptl/futex-internal.h:320 27 Thread 0x7fffe5a38700 (LWP 3670) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) at ../sysdeps/nptl/futex-internal.h:183 28 Thread 0x7fffe623a700 (LWP 3669) "gpu-process_cra" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe6239628) at ../sysdeps/nptl/futex-internal.h:183 29 Thread 0x7fffe5237700 (LWP 3671) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) at ../sysdeps/nptl/futex-internal.h:183 30 Thread 0x7fffe4a10700 (LWP 3673) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) at ../sysdeps/nptl/futex-internal.h:183 31 Thread 0x7fffe420f700 (LWP 3674) "element-desktop" futex_wait_cancelable (private=0, expected=0, futex_word=0x55555d36aa74) --Type <RET> for more, q to quit, c to continue without paging-- at ../sysdeps/nptl/futex-internal.h:183 32 Thread 0x7fffe3a0e700 (LWP 3676) "utility_crash_u" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe3a0d628) at ../sysdeps/nptl/futex-internal.h:183 33 Thread 0x7fffe3154700 (LWP 3678) "CacheThread_Blo" 0x00007ffff573c1d6 in epoll_wait (epfd=95, events=0x1e6e28862d80, 
maxevents=32, timeout=29995) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 34 Thread 0x7fffe2953700 (LWP 3679) "renderer_crash_" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe2952628) at ../sysdeps/nptl/futex-internal.h:183 35 Thread 0x7fffe20b1700 (LWP 3697) "pool-element-de" syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 36 Thread 0x7fffe18b0700 (LWP 3698) "pool-element-de" syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 37 Thread 0x7fffe10af700 (LWP 3699) "ThreadPoolSingl" futex_wait_cancelable (private=0, expected=0, futex_word=0x7fffe10ae6a8) at ../sysdeps/nptl/futex-internal.h:183 38 Thread 0x7fffe03cb700 (LWP 3709) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffe03ca570, clockid=-532896528, expected=0, futex_word=0x7fffe03ca6c8) at ../sysdeps/nptl/futex-internal.h:320 39 Thread 0x7fffdf3c9700 (LWP 3711) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffdf3c8570, clockid=-549681936, expected=0, futex_word=0x7fffdf3c86c8) at ../sysdeps/nptl/futex-internal.h:320 --Type <RET> for more, q to quit, c to continue without paging-- 40 Thread 0x7fffdfbca700 (LWP 3710) "ThreadPoolForeg" futex_abstimed_wait_cancelable (private=0, abstime=0x7fffdfbc9570, clockid=-541289232, expected=0, futex_word=0x7fffdfbc96c8) at ../sysdeps/nptl/futex-internal.h:320 (gdb) disas No function contains program counter for selected frame. ```
defect
ui events have no apparent effect app window doesn t change description desktop app crashes on startup steps to reproduce element desktop home user config element exists yes home user config riot exists no starting auto update with base url auto update not supported on this platform segmentation fault strace says it crashes after a series of calls to recvmsg one returns bytes then four fail with eagain gdb says sigsegv in thread at which points into usr bin element desktop from packages riot im debian so probably not a dependency gdb disas says no function contains program counter for selected frame it dies with threads running it pops up a window momentarily before crashing but has not cleared it before the sigsegv note that this used to work in this environment i e it started crashing without an upgrade upgrading did not fix it version information desktop debian partly sid from packages riot im debian a vm running under qubes with no access to a gpu so wholly unaccelerated x rendering thread element desktop received signal sigsegv segmentation fault in gdb i thr id target id frame thread lwp element desktop in thread lwp sandbox ipc thr in gi poll fds nfds timeout at sysdeps unix sysv linux poll c thread lwp element desktop in gi pid stat loc options usage at sysdeps unix sysv linux c thread lwp threadpoolservi in epoll wait epfd events maxevents timeout at sysdeps unix sysv linux epoll wait c thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp chrome iothread in fd mode offset len at sysdeps unix sysv linux c thread lwp memoryinfra futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp element desktop in epoll wait epfd events maxevents timeout type for more q to quit c to continue without paging at sysdeps unix sysv linux epoll wait c thread lwp element desktop futex wait cancelable private expected futex word at sysdeps nptl 
futex internal h thread lwp element desktop futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp element desktop futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp element desktop futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp gmain in gi poll fds nfds timeout at sysdeps unix sysv linux poll c thread lwp gdbus in gi poll fds nfds timeout at sysdeps unix sysv linux poll c thread lwp bluez d bus thr in epoll wait epfd events maxevents timeout at sysdeps unix sysv linux epoll wait c thread lwp crshutdowndetec libc read type for more q to quit c to continue without paging nbytes buf fd at sysdeps unix sysv linux read c thread lwp dconf worker in gi poll fds nfds timeout at sysdeps unix sysv linux poll c thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp inotify reader in gi select nfds readfds writefds exceptfds timeout at sysdeps unix sysv linux select c thread lwp compositortilew futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp videocapturethr futex wait cancelable private expected futex word at sysdeps nptl futex internal h type for more q to quit c to continue without paging thread lwp element desktop futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp threadpoolsingl futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex 
word at sysdeps nptl futex internal h thread lwp element desktop futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp gpu process cra futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp element desktop futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp element desktop futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp element desktop futex wait cancelable private expected futex word type for more q to quit c to continue without paging at sysdeps nptl futex internal h thread lwp utility crash u futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp cachethread blo in epoll wait epfd events maxevents timeout at sysdeps unix sysv linux epoll wait c thread lwp renderer crash futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp pool element de syscall at sysdeps unix sysv linux syscall s thread lwp pool element de syscall at sysdeps unix sysv linux syscall s thread lwp threadpoolsingl futex wait cancelable private expected futex word at sysdeps nptl futex internal h thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h type for more q to quit c to continue without paging thread lwp threadpoolforeg futex abstimed wait cancelable private abstime clockid expected futex word at sysdeps nptl futex internal h gdb disas no function contains program counter for selected frame
1
52,419
13,224,722,231
IssuesEvent
2020-08-17 19:42:48
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
3D Projections crashes when I3MCTree is included (Trac #2188)
Incomplete Migration Migrated from Trac analysis defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2188">https://code.icecube.wisc.edu/projects/icecube/ticket/2188</a>, reported by icecube</summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "_ts": "1550067323910946", "description": "Using the Mac App to save an event view in Steamshovel is not possible when the I3MCTree is included. \n\nError looks like:\n\n/Users/theoglauch/Software/combo/build/lib/icecube/steamshovel/util/projection.pyc in get_projection(filename, frame, include_xyz, include_colorscale, width, height, dpi, scale, gamma)\n 304 with ProjectionRenderer(window.gl, dpi, scale, gamma) as pr:\n 305 # render main\n--> 306 pivot, loc = get_view(window.gl, frame)\n 307 window.gl.perspectiveView = True\n 308 pr.render(True, pivot, loc, (0,0,1), width, basename+\"_main.png\")\n\n/Users/theoglauch/Software/combo/build/lib/icecube/steamshovel/util/projection.pyc in get_view(gl, frame)\n 131 #from whichever side is closer to the cog of the pulses\n 132 # If this direction happens to be undefined, default to the cog-to-center line\n--> 133 if not math.isnan(track.dir.azimuth) and (math.sin(track.dir.zenith) != 0.):\n 134 x = -track.dir.y\n 135 y = track.dir.x\n\nAttributeError: 'I3MCTree' object has no attribute 'dir'\nERROR (steamshovel): icecube.steamshovel.util.projection.get_projection() failed (ProjectionDialog.cpp:118 in virtual void ProjectionDialog::accept())", "reporter": "icecube", "cc": "", "resolution": "insufficient resources", "time": "2018-09-13T09:39:31", "component": "analysis", "summary": "3D Projections crashes when I3MCTree is included", "priority": "normal", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
1.0
3D Projections crashes when I3MCTree is included (Trac #2188) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2188">https://code.icecube.wisc.edu/projects/icecube/ticket/2188</a>, reported by icecube</summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "_ts": "1550067323910946", "description": "Using the Mac App to save an event view in Steamshovel is not possible when the I3MCTree is included. \n\nError looks like:\n\n/Users/theoglauch/Software/combo/build/lib/icecube/steamshovel/util/projection.pyc in get_projection(filename, frame, include_xyz, include_colorscale, width, height, dpi, scale, gamma)\n 304 with ProjectionRenderer(window.gl, dpi, scale, gamma) as pr:\n 305 # render main\n--> 306 pivot, loc = get_view(window.gl, frame)\n 307 window.gl.perspectiveView = True\n 308 pr.render(True, pivot, loc, (0,0,1), width, basename+\"_main.png\")\n\n/Users/theoglauch/Software/combo/build/lib/icecube/steamshovel/util/projection.pyc in get_view(gl, frame)\n 131 #from whichever side is closer to the cog of the pulses\n 132 # If this direction happens to be undefined, default to the cog-to-center line\n--> 133 if not math.isnan(track.dir.azimuth) and (math.sin(track.dir.zenith) != 0.):\n 134 x = -track.dir.y\n 135 y = track.dir.x\n\nAttributeError: 'I3MCTree' object has no attribute 'dir'\nERROR (steamshovel): icecube.steamshovel.util.projection.get_projection() failed (ProjectionDialog.cpp:118 in virtual void ProjectionDialog::accept())", "reporter": "icecube", "cc": "", "resolution": "insufficient resources", "time": "2018-09-13T09:39:31", "component": "analysis", "summary": "3D Projections crashes when I3MCTree is included", "priority": "normal", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
defect
projections crashes when is included trac migrated from json status closed changetime ts description using the mac app to save an event view in steamshovel is not possible when the is included n nerror looks like n n users theoglauch software combo build lib icecube steamshovel util projection pyc in get projection filename frame include xyz include colorscale width height dpi scale gamma n with projectionrenderer window gl dpi scale gamma as pr n render main n pivot loc get view window gl frame n window gl perspectiveview true n pr render true pivot loc width basename main png n n users theoglauch software combo build lib icecube steamshovel util projection pyc in get view gl frame n from whichever side is closer to the cog of the pulses n if this direction happens to be undefined default to the cog to center line n if not math isnan track dir azimuth and math sin track dir zenith n x track dir y n y track dir x n nattributeerror object has no attribute dir nerror steamshovel icecube steamshovel util projection get projection failed projectiondialog cpp in virtual void projectiondialog accept reporter icecube cc resolution insufficient resources time component analysis summary projections crashes when is included priority normal keywords milestone owner type defect
1
46,439
13,055,912,200
IssuesEvent
2020-07-30 03:05:57
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
[fill-ratio] examples/test scripts need updates (Trac #1200)
Incomplete Migration Migrated from Trac combo reconstruction defect
Migrated from https://code.icecube.wisc.edu/ticket/1200 ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "- FillRatioExample.py should be moved to ../resources/examples\n- FillRatioTest.py should be moved to ../resources/tests\n- python/test_modules/FRTestModule.py should be moved to ../resources/tests or be included into FillRatioTest.py\n- All test and examples scripts need updates; they still use the old tray python syntax\n- More tests are needed", "reporter": "kkrings", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "[fill-ratio] examples/test scripts need updates", "priority": "blocker", "keywords": "", "time": "2015-08-19T18:33:48", "milestone": "", "owner": "mjl5147", "type": "defect" } ```
1.0
[fill-ratio] examples/test scripts need updates (Trac #1200) - Migrated from https://code.icecube.wisc.edu/ticket/1200 ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "- FillRatioExample.py should be moved to ../resources/examples\n- FillRatioTest.py should be moved to ../resources/tests\n- python/test_modules/FRTestModule.py should be moved to ../resources/tests or be included into FillRatioTest.py\n- All test and examples scripts need updates; they still use the old tray python syntax\n- More tests are needed", "reporter": "kkrings", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "[fill-ratio] examples/test scripts need updates", "priority": "blocker", "keywords": "", "time": "2015-08-19T18:33:48", "milestone": "", "owner": "mjl5147", "type": "defect" } ```
defect
examples test scripts need updates trac migrated from json status closed changetime description fillratioexample py should be moved to resources examples n fillratiotest py should be moved to resources tests n python test modules frtestmodule py should be moved to resources tests or be included into fillratiotest py n all test and examples scripts need updates they still use the old tray python syntax n more tests are needed reporter kkrings cc resolution fixed ts component combo reconstruction summary examples test scripts need updates priority blocker keywords time milestone owner type defect
1
35,964
7,838,681,335
IssuesEvent
2018-06-18 11:09:05
Cockatrice/Cockatrice
https://api.github.com/repos/Cockatrice/Cockatrice
opened
updater: reinstall option throws error
App - Cockatrice Defect - Regression UI / UX
<b>System Information:</b> Client Version: 2.6.0 (2018-06-17) Client Operating System: Windows 10 (10.0) Build Architecture: 64-bit Qt Version: 5.9.5 System Locale: de_DE __________________________________________________________________________________________ ![reinstall update](https://user-images.githubusercontent.com/9874850/41532603-fa441b92-72f7-11e8-89f0-7bb6f813efb3.png) It doesn't matter if I select `Stable` or `Beta` channel in settings. With `2.5.1` it works fine!
1.0
updater: reinstall option throws error - <b>System Information:</b> Client Version: 2.6.0 (2018-06-17) Client Operating System: Windows 10 (10.0) Build Architecture: 64-bit Qt Version: 5.9.5 System Locale: de_DE __________________________________________________________________________________________ ![reinstall update](https://user-images.githubusercontent.com/9874850/41532603-fa441b92-72f7-11e8-89f0-7bb6f813efb3.png) It doesn't matter if I select `Stable` or `Beta` channel in settings. With `2.5.1` it works fine!
defect
updater reinstall option throws error system information client version client operating system windows build architecture bit qt version system locale de de it doesn t matter if i select stable or beta channel in settings with it works fine
1
53,070
13,260,860,916
IssuesEvent
2020-08-20 18:53:14
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
"make inspect" fails (Trac #653)
IceTray Migrated from Trac defect
on software.iwe: ```text Traceback (most recent call last): File "/net/software/doc_build/offline-head/build/bin/icetray-inspect", line 140, in <module> module = __import__('icecube.%s' % modname, globals(), locals(), [modname]) File "/net/software/doc_build/offline-head/build/lib/icecube/rootwriter/__init__.py", line 7, in <module> def I3ROOTWriter(tray, name, Output=None, **kwargs): File "/net/software/doc_build/offline-head/build/lib/icecube/icetray/traysegment.py", line 20, in traysegment if inspect.getdoc(function) is None: AttributeError: 'module' object has no attribute 'getdoc' make[3]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect] Error 1 make[2]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect.dir/all] Error 2 make[1]: *** [CMakeFiles/inspect.dir/rule] Error 2 make: *** [inspect] Error 2 ``` on my laptop: ```text [ 0%] Generating html from icetray-inspect of WaveCalibrator Logging configured from file /Users/nega/i3/offline-software/build/log4cplus.conf WARN: Can't load pure-Python bits of WaveCalibrator (No module named tables); you'll only have the C++ library./Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Specification mandate value for attribute DOMSimulatorCalibrator <function DOMSimulatorCalibrator at 0x11069fde8> ^ /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : attributes construct error <function DOMSimulatorCalibrator at 0x11069fde8> ^ /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Couldn't find end of Start Tag function line 4 <function DOMSimulatorCalibrator at 0x11069fde8> ^ unable to parse /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml make[3]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect] Error 6 make[2]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect.dir/all] Error 2 make[1]: *** [CMakeFiles/inspect.dir/rule] Error 2 make: *** [inspect] Error 2 ``` <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/653">https://code.icecube.wisc.edu/projects/icecube/ticket/653</a>, reported by negaand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2011-09-13T14:26:39", "_ts": "1315923999000000", "description": "on software.iwe:\n\n{{{\nTraceback (most recent call last):\n File \"/net/software/doc_build/offline-head/build/bin/icetray-inspect\", line 140, in <module>\n module = __import__('icecube.%s' % modname, globals(), locals(), [modname])\n File \"/net/software/doc_build/offline-head/build/lib/icecube/rootwriter/__init__.py\", line 7, in <module>\n def I3ROOTWriter(tray, name, Output=None, **kwargs):\n File \"/net/software/doc_build/offline-head/build/lib/icecube/icetray/traysegment.py\", line 20, in traysegment\n if inspect.getdoc(function) is None:\nAttributeError: 'module' object has no attribute 'getdoc'\nmake[3]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect] Error 1\nmake[2]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect.dir/all] Error 2\nmake[1]: *** [CMakeFiles/inspect.dir/rule] Error 2\nmake: *** [inspect] Error 2\n}}}\n\non my laptop:\n{{{\n\n[ 0%] Generating html from icetray-inspect of WaveCalibrator\nLogging configured from file /Users/nega/i3/offline-software/build/log4cplus.conf\nWARN: Can't load pure-Python bits of WaveCalibrator (No module named tables); you'll only have the C++ library./Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Specification mandate value for attribute DOMSimulatorCalibrator\n<function DOMSimulatorCalibrator at 0x11069fde8>\n ^\n/Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : attributes construct error\n<function DOMSimulatorCalibrator at 0x11069fde8>\n ^\n/Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Couldn't find end of Start Tag function line 4\n<function DOMSimulatorCalibrator at 0x11069fde8>\n ^\nunable to parse /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml\nmake[3]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect] Error 6\nmake[2]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect.dir/all] Error 2\nmake[1]: *** [CMakeFiles/inspect.dir/rule] Error 2\nmake: *** [inspect] Error 2\n}}}", "reporter": "nega", "cc": "", "resolution": "fixed", "time": "2011-09-13T14:18:03", "component": "IceTray", "summary": "\"make inspect\" fails", "priority": "normal", "keywords": "", "milestone": "", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
1.0
"make inspect" fails (Trac #653) - on software.iwe: ```text Traceback (most recent call last): File "/net/software/doc_build/offline-head/build/bin/icetray-inspect", line 140, in <module> module = __import__('icecube.%s' % modname, globals(), locals(), [modname]) File "/net/software/doc_build/offline-head/build/lib/icecube/rootwriter/__init__.py", line 7, in <module> def I3ROOTWriter(tray, name, Output=None, **kwargs): File "/net/software/doc_build/offline-head/build/lib/icecube/icetray/traysegment.py", line 20, in traysegment if inspect.getdoc(function) is None: AttributeError: 'module' object has no attribute 'getdoc' make[3]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect] Error 1 make[2]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect.dir/all] Error 2 make[1]: *** [CMakeFiles/inspect.dir/rule] Error 2 make: *** [inspect] Error 2 ``` on my laptop: ```text [ 0%] Generating html from icetray-inspect of WaveCalibrator Logging configured from file /Users/nega/i3/offline-software/build/log4cplus.conf WARN: Can't load pure-Python bits of WaveCalibrator (No module named tables); you'll only have the C++ library./Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Specification mandate value for attribute DOMSimulatorCalibrator <function DOMSimulatorCalibrator at 0x11069fde8> ^ /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : attributes construct error <function DOMSimulatorCalibrator at 0x11069fde8> ^ /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Couldn't find end of Start Tag function line 4 <function DOMSimulatorCalibrator at 0x11069fde8> ^ unable to parse /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml make[3]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect] Error 6 make[2]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect.dir/all] 
Error 2 make[1]: *** [CMakeFiles/inspect.dir/rule] Error 2 make: *** [inspect] Error 2 ``` <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/653">https://code.icecube.wisc.edu/projects/icecube/ticket/653</a>, reported by negaand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2011-09-13T14:26:39", "_ts": "1315923999000000", "description": "on software.iwe:\n\n{{{\nTraceback (most recent call last):\n File \"/net/software/doc_build/offline-head/build/bin/icetray-inspect\", line 140, in <module>\n module = __import__('icecube.%s' % modname, globals(), locals(), [modname])\n File \"/net/software/doc_build/offline-head/build/lib/icecube/rootwriter/__init__.py\", line 7, in <module>\n def I3ROOTWriter(tray, name, Output=None, **kwargs):\n File \"/net/software/doc_build/offline-head/build/lib/icecube/icetray/traysegment.py\", line 20, in traysegment\n if inspect.getdoc(function) is None:\nAttributeError: 'module' object has no attribute 'getdoc'\nmake[3]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect] Error 1\nmake[2]: *** [rootwriter/CMakeFiles/rootwriter-rootwriter-inspect.dir/all] Error 2\nmake[1]: *** [CMakeFiles/inspect.dir/rule] Error 2\nmake: *** [inspect] Error 2\n}}}\n\non my laptop:\n{{{\n\n[ 0%] Generating html from icetray-inspect of WaveCalibrator\nLogging configured from file /Users/nega/i3/offline-software/build/log4cplus.conf\nWARN: Can't load pure-Python bits of WaveCalibrator (No module named tables); you'll only have the C++ library./Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Specification mandate value for attribute DOMSimulatorCalibrator\n<function DOMSimulatorCalibrator at 0x11069fde8>\n ^\n/Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : attributes construct error\n<function DOMSimulatorCalibrator at 0x11069fde8>\n 
^\n/Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml:4: parser error : Couldn't find end of Start Tag function line 4\n<function DOMSimulatorCalibrator at 0x11069fde8>\n ^\nunable to parse /Users/nega/i3/offline-software/build/CMakeFiles/WaveCalibrator-inspection.xml\nmake[3]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect] Error 6\nmake[2]: *** [WaveCalibrator/CMakeFiles/WaveCalibrator-WaveCalibrator-inspect.dir/all] Error 2\nmake[1]: *** [CMakeFiles/inspect.dir/rule] Error 2\nmake: *** [inspect] Error 2\n}}}", "reporter": "nega", "cc": "", "resolution": "fixed", "time": "2011-09-13T14:18:03", "component": "IceTray", "summary": "\"make inspect\" fails", "priority": "normal", "keywords": "", "milestone": "", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
defect
make inspect fails trac on software iwe text traceback most recent call last file net software doc build offline head build bin icetray inspect line in module import icecube s modname globals locals file net software doc build offline head build lib icecube rootwriter init py line in def tray name output none kwargs file net software doc build offline head build lib icecube icetray traysegment py line in traysegment if inspect getdoc function is none attributeerror module object has no attribute getdoc make error make error make error make error on my laptop text generating html from icetray inspect of wavecalibrator logging configured from file users nega offline software build conf warn can t load pure python bits of wavecalibrator no module named tables you ll only have the c library users nega offline software build cmakefiles wavecalibrator inspection xml parser error specification mandate value for attribute domsimulatorcalibrator users nega offline software build cmakefiles wavecalibrator inspection xml parser error attributes construct error users nega offline software build cmakefiles wavecalibrator inspection xml parser error couldn t find end of start tag function line unable to parse users nega offline software build cmakefiles wavecalibrator inspection xml make error make error make error make error migrated from json status closed changetime ts description on software iwe n n ntraceback most recent call last n file net software doc build offline head build bin icetray inspect line in n module import icecube s modname globals locals n file net software doc build offline head build lib icecube rootwriter init py line in n def tray name output none kwargs n file net software doc build offline head build lib icecube icetray traysegment py line in traysegment n if inspect getdoc function is none nattributeerror module object has no attribute getdoc nmake error nmake error nmake error nmake error n n non my laptop n n n generating html from icetray inspect of wavecalibrator nlogging configured from file users nega offline software build conf nwarn can t load pure python bits of wavecalibrator no module named tables you ll only have the c library users nega offline software build cmakefiles wavecalibrator inspection xml parser error specification mandate value for attribute domsimulatorcalibrator n n n users nega offline software build cmakefiles wavecalibrator inspection xml parser error attributes construct error n n n users nega offline software build cmakefiles wavecalibrator inspection xml parser error couldn t find end of start tag function line n n nunable to parse users nega offline software build cmakefiles wavecalibrator inspection xml nmake error nmake error nmake error nmake error n reporter nega cc resolution fixed time component icetray summary make inspect fails priority normal keywords milestone owner jvansanten type defect
1
255,798
27,504,346,178
IssuesEvent
2023-03-06 01:10:46
panasalap/linux-4.19.72_1
https://api.github.com/repos/panasalap/linux-4.19.72_1
opened
CVE-2022-36280 (Medium) detected in linux-yoctov5.4.51
security vulnerability
## CVE-2022-36280 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An out-of-bounds(OOB) memory access vulnerability was found in vmwgfx driver in drivers/gpu/vmxgfx/vmxgfx_kms.c in GPU component in the Linux kernel with device file '/dev/dri/renderD128 (or Dxxx)'. This flaw allows a local attacker with a user account on the system to gain privilege, causing a denial of service(DoS). <p>Publish Date: 2022-09-09 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-36280>CVE-2022-36280</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-36280">https://www.linuxkernelcves.com/cves/CVE-2022-36280</a></p> <p>Release Date: 2022-09-09</p> <p>Fix Resolution: v4.9.337,v4.14.303,v4.19.270,v5.4.229,v5.10.163,v5.15.87,v6.0.18,v6.1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-36280 (Medium) detected in linux-yoctov5.4.51 - ## CVE-2022-36280 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary> <p> <p>Yocto Linux Embedded kernel</p> <p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An out-of-bounds(OOB) memory access vulnerability was found in vmwgfx driver in drivers/gpu/vmxgfx/vmxgfx_kms.c in GPU component in the Linux kernel with device file '/dev/dri/renderD128 (or Dxxx)'. This flaw allows a local attacker with a user account on the system to gain privilege, causing a denial of service(DoS). <p>Publish Date: 2022-09-09 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-36280>CVE-2022-36280</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-36280">https://www.linuxkernelcves.com/cves/CVE-2022-36280</a></p> <p>Release Date: 2022-09-09</p> <p>Fix Resolution: v4.9.337,v4.14.303,v4.19.270,v5.4.229,v5.10.163,v5.15.87,v6.0.18,v6.1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linux cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files drivers gpu drm vmwgfx vmwgfx kms c drivers gpu drm vmwgfx vmwgfx kms c vulnerability details an out of bounds oob memory access vulnerability was found in vmwgfx driver in drivers gpu vmxgfx vmxgfx kms c in gpu component in the linux kernel with device file dev dri or dxxx this flaw allows a local attacker with a user account on the system to gain privilege causing a denial of service dos publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
59,603
17,023,173,335
IssuesEvent
2021-07-03 00:42:19
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
better description of what "make edits public" checkbox means
Component: website Priority: major Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 11.05am, Friday, 20th July 2007]** Recently in talk ML: ``` > > (B.t.w. I'm still not certain what it means to "make edits public".) It means that the any edits made by that user will include the user's display name when they are downloaded via the API. The display name will also be shown against any public GPX traces you upload. It also causes you to appear on people "nearby mappers" list if you set a home location that is near them. Tom ``` Can this be added as some hint to the webpage?
1.0
better description of what "make edits public" checkbox means - **[Submitted to the original trac issue database at 11.05am, Friday, 20th July 2007]** Recently in talk ML: ``` > > (B.t.w. I'm still not certain what it means to "make edits public".) It means that the any edits made by that user will include the user's display name when they are downloaded via the API. The display name will also be shown against any public GPX traces you upload. It also causes you to appear on people "nearby mappers" list if you set a home location that is near them. Tom ``` Can this be added as some hint to the webpage?
defect
better description of what make edits public checkbox means recently in talk ml b t w i m still not certain what it means to make edits public it means that the any edits made by that user will include the user s display name when they are downloaded via the api the display name will also be shown against any public gpx traces you upload it also causes you to appear on people nearby mappers list if you set a home location that is near them tom can this be added as some hint to the webpage
1
55,215
14,281,540,891
IssuesEvent
2020-11-23 08:14:16
hazelcast/hazelcast-jet
https://api.github.com/repos/hazelcast/hazelcast-jet
opened
User-closed SqlResult.iterator doesn't throw
defect sql
```java sqlResult.close(); for (SqlRow sqlRow : sqlResult) { // gets stuck here, should throw } ```
1.0
User-closed SqlResult.iterator doesn't throw - ```java sqlResult.close(); for (SqlRow sqlRow : sqlResult) { // gets stuck here, should throw } ```
defect
user closed sqlresult iterator doesn t throw java sqlresult close for sqlrow sqlrow sqlresult gets stuck here should throw
1
123,772
16,536,207,217
IssuesEvent
2021-05-27 12:10:18
wordpress-mobile/WordPress-Android
https://api.github.com/repos/wordpress-mobile/WordPress-Android
opened
Scheduled Posts: Calendar date/time picker unintuitive
Design Needed [Type] Enhancement
The experience of scheduling a post for a future date or time is a bit confusing at various points. After tapping “Date and Time” I got a calendar-style date picker, but [it seemed] there was no way to choose a time. ...it wasn’t immediately clear to me that choosing a date would open a time picker. From beta-testing: p5T066-2hM-p2#comment-8392 ### Steps to reproduce the behavior 1. Create a new post 2. Tap Publish, then "immediately" to open the scheduling options 3. Tap "immediately" again beneath the date to open the date picker 4. Tap "ok" to continue, then choose a time ### Issues to address * It's unclear that tapping "Date and Time" will show the time-picker after showing the date-picker * On the initial Publish dialogue, there's no clear path to scheduling. You have to know to tap on "immediately" to start the process, and then on "immediately" again to change the date/time. ##### Tested on Pixel 5 with Android 11, WPAndroid 17.4-rc-2
1.0
Scheduled Posts: Calendar date/time picker unintuitive - The experience of scheduling a post for a future date or time is a bit confusing at various points. After tapping “Date and Time” I got a calendar-style date picker, but [it seemed] there was no way to choose a time. ...it wasn’t immediately clear to me that choosing a date would open a time picker. From beta-testing: p5T066-2hM-p2#comment-8392 ### Steps to reproduce the behavior 1. Create a new post 2. Tap Publish, then "immediately" to open the scheduling options 3. Tap "immediately" again beneath the date to open the date picker 4. Tap "ok" to continue, then choose a time ### Issues to address * It's unclear that tapping "Date and Time" will show the time-picker after showing the date-picker * On the initial Publish dialogue, there's no clear path to scheduling. You have to know to tap on "immediately" to start the process, and then on "immediately" again to change the date/time. ##### Tested on Pixel 5 with Android 11, WPAndroid 17.4-rc-2
non_defect
scheduled posts calendar date time picker unintuitive the experience of scheduling a post for a future date or time is a bit confusing at various points after tapping “date and time” i got a calendar style date picker but there was no way to choose a time it wasn’t immediately clear to me that choosing a date would open a time picker from beta testing comment steps to reproduce the behavior create a new post tap publish then immediately to open the scheduling options tap immediately again beneath the date to open the date picker tap ok to continue then choose a time issues to address it s unclear that tapping date and time will show the time picker after showing the date picker on the initial publish dialogue there s no clear path to scheduling you have to know to tap on immediately to start the process and then on immediately again to change the date time tested on pixel with android wpandroid rc
0
31,339
4,703,294,053
IssuesEvent
2016-10-13 07:23:48
odlgroup/odl
https://api.github.com/repos/odlgroup/odl
opened
Change almost_equal parameter places to significant_digits
api change testing
Minor style change that helps readability of the tests.
1.0
Change almost_equal parameter places to significant_digits - Minor style change that helps readability of the tests.
non_defect
change almost equal parameter places to significant digits minor style change that helps readability of the tests
0
277,321
30,611,983,887
IssuesEvent
2023-07-23 18:18:32
hinoshiba/news
https://api.github.com/repos/hinoshiba/news
closed
[SecurityWeek] Tech Titans Promise Watermarks to Expose AI Creations
SecurityWeek Stale
Amazon, Google, Meta, Microsoft, OpenAI and other tech firms have voluntary agreed to AI safeguards set by the White House. The post [Tech Titans Promise Watermarks to Expose AI Creations](https://www.securityweek.com/tech-titans-promise-watermarks-to-expose-ai-creations/) appeared first on [SecurityWeek](https://www.securityweek.com). <https://www.securityweek.com/tech-titans-promise-watermarks-to-expose-ai-creations/>
True
[SecurityWeek] Tech Titans Promise Watermarks to Expose AI Creations - Amazon, Google, Meta, Microsoft, OpenAI and other tech firms have voluntary agreed to AI safeguards set by the White House. The post [Tech Titans Promise Watermarks to Expose AI Creations](https://www.securityweek.com/tech-titans-promise-watermarks-to-expose-ai-creations/) appeared first on [SecurityWeek](https://www.securityweek.com). <https://www.securityweek.com/tech-titans-promise-watermarks-to-expose-ai-creations/>
non_defect
tech titans promise watermarks to expose ai creations amazon google meta microsoft openai and other tech firms have voluntary agreed to ai safeguards set by the white house the post appeared first on
0
14,142
2,789,933,919
IssuesEvent
2015-05-08 22:31:33
google/google-visualization-api-issues
https://api.github.com/repos/google/google-visualization-api-issues
opened
Issue: Piechart and Treemap don't work when deploying to IIS6 & 7
Priority-Medium Type-Defect
Original [issue 562](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=562) created by orwant on 2011-03-25T13:24:47.000Z: Hi, I really like your visualization library and it works really nicely for my project. I'm using your annotated timeline, gauges, piechart and the beautiful treemap. Anway, all these visualizations are brilliant, you guys have done a great job! I'm working with Microsoft ASP.NET MVC with JQuery. All graphs run fine in visual studio debug mode. When deploying to my live server running IIS6 or my local server on IIS7.5, the annotated timeline and gauges still work fine. The piechart and treemap don't work and show empty results. I tried debugging with firebug but can't see any issues. Here's some of my code: --- google.load('visualization', '1', { 'packages': ['treemap'] }); function drawVisualization() { $(&quot;.PleaseWait&quot;).show(); var exclude_portfolios = getIds(&quot;PortfolioIdList&quot;); var query = new google.visualization.Query(' &lt;%: Url.Action(&quot;GetDashboardSectorTreeMap&quot;,&quot;Dashboards&quot;, new { someParam = &quot;paramValue&quot; }) %&gt;' + '&amp;exclude_portfolios=' + exclude_portfolios); query.send(handleQueryResponse); $(&quot;.PleaseWait&quot;).hide(); $.get('&lt;%= Url.Action(&quot;ShowPositions&quot;,&quot;Dashboards&quot;, new { someParam = &quot;paramValue&quot; }) %&gt;' + '&amp;exclude_portfolios=' + exclude_portfolios + '&amp;date=' + date, function (data) { if (data.match(/id=&quot;ajaxexpiry&quot;/gi) != null) // pattern to locate { window.location.href = &quot;&lt;%= Url.Action(&quot;Logon&quot;,&quot;Account&quot;) %&gt;&quot;; } else { $('#positions_list').html(data); } }); } function handleQueryResponse(response) { if (response.isError()) { alert('Error in query: ' + response.getMessage() + ' ' + response.getDetailedMessage()); return; } $(&quot;#visualization&quot;).empty(); var data = response.getDataTable(); var chart = new google.visualization.TreeMap( document.getElementById('visualization')); chart.draw(data, { minColor: '#f00', midColor: '#ddd', maxColor: '#&nbsp;0d0', headerHeight: 15, fontColor: 'black', showScale: true}); } google.setOnLoadCallback(drawVisualization); --- It seems to just step over the query part and never steps in the handlequeryresponse function. All this works without any issue on the visual studio webserver but not on IIS; I have no idea what IIS has to do with this as the code should be running off the client if Iunderstand well. I can't find any help on google for this issue but would be grateful if you could help. Kind regards Olivier
1.0
Issue: Piechart and Treemap don't work when deploying to IIS6 & 7 - Original [issue 562](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=562) created by orwant on 2011-03-25T13:24:47.000Z: Hi, I really like your visualization library and it works really nicely for my project. I'm using your annotated timeline, gauges, piechart and the beautiful treemap. Anway, all these visualizations are brilliant, you guys have done a great job! I'm working with Microsoft ASP.NET MVC with JQuery. All graphs run fine in visual studio debug mode. When deploying to my live server running IIS6 or my local server on IIS7.5, the annotated timeline and gauges still work fine. The piechart and treemap don't work and show empty results. I tried debugging with firebug but can't see any issues. Here's some of my code: --- google.load('visualization', '1', { 'packages': ['treemap'] }); function drawVisualization() { $(&quot;.PleaseWait&quot;).show(); var exclude_portfolios = getIds(&quot;PortfolioIdList&quot;); var query = new google.visualization.Query(' &lt;%: Url.Action(&quot;GetDashboardSectorTreeMap&quot;,&quot;Dashboards&quot;, new { someParam = &quot;paramValue&quot; }) %&gt;' + '&amp;exclude_portfolios=' + exclude_portfolios); query.send(handleQueryResponse); $(&quot;.PleaseWait&quot;).hide(); $.get('&lt;%= Url.Action(&quot;ShowPositions&quot;,&quot;Dashboards&quot;, new { someParam = &quot;paramValue&quot; }) %&gt;' + '&amp;exclude_portfolios=' + exclude_portfolios + '&amp;date=' + date, function (data) { if (data.match(/id=&quot;ajaxexpiry&quot;/gi) != null) // pattern to locate { window.location.href = &quot;&lt;%= Url.Action(&quot;Logon&quot;,&quot;Account&quot;) %&gt;&quot;; } else { $('#positions_list').html(data); } }); } function handleQueryResponse(response) { if (response.isError()) { alert('Error in query: ' + response.getMessage() + ' ' + response.getDetailedMessage()); return; } $(&quot;#visualization&quot;).empty(); var data = response.getDataTable(); var chart = new google.visualization.TreeMap( document.getElementById('visualization')); chart.draw(data, { minColor: '#f00', midColor: '#ddd', maxColor: '#&nbsp;0d0', headerHeight: 15, fontColor: 'black', showScale: true}); } google.setOnLoadCallback(drawVisualization); --- It seems to just step over the query part and never steps in the handlequeryresponse function. All this works without any issue on the visual studio webserver but not on IIS; I have no idea what IIS has to do with this as the code should be running off the client if Iunderstand well. I can't find any help on google for this issue but would be grateful if you could help. Kind regards Olivier
defect
issue piechart and treemap don t work when deploying to original created by orwant on hi i really like your visualization library and it works really nicely for my project i m using your annotated timeline gauges piechart and the beautiful treemap anway all these visualizations are brilliant you guys have done a great job i m working with microsoft asp net mvc with jquery all graphs run fine in visual studio debug mode when deploying to my live server running or my local server on the annotated timeline and gauges still work fine the piechart and treemap don t work and show empty results i tried debugging with firebug but can t see any issues here s some of my code google load visualization packages function drawvisualization quot pleasewait quot show var exclude portfolios getids quot portfolioidlist quot var query new google visualization query lt url action quot getdashboardsectortreemap quot quot dashboards quot new someparam quot paramvalue quot gt amp exclude portfolios exclude portfolios query send handlequeryresponse quot pleasewait quot hide get lt url action quot showpositions quot quot dashboards quot new someparam quot paramvalue quot gt amp exclude portfolios exclude portfolios amp date date function data if data match id quot ajaxexpiry quot gi null pattern to locate window location href quot lt url action quot logon quot quot account quot gt quot else positions list html data function handlequeryresponse response if response iserror alert error in query response getmessage response getdetailedmessage return quot visualization quot empty var data response getdatatable var chart new google visualization treemap document getelementbyid visualization chart draw data mincolor midcolor ddd maxcolor nbsp headerheight fontcolor black showscale true google setonloadcallback drawvisualization it seems to just step over the query part and never steps in the handlequeryresponse function all this works without any issue on the visual studio webserver but not on iis i have no idea what iis has to do with this as the code should be running off the client if iunderstand well i can t find any help on google for this issue but would be grateful if you could help kind regards olivier
1
11,120
2,633,131,864
IssuesEvent
2015-03-08 21:13:13
AsyncHttpClient/async-http-client
https://api.github.com/repos/AsyncHttpClient/async-http-client
closed
Netty ChannelManager.tryToOfferChannelToPool makes channel available for selection prematurely
Defect Netty
Hello, the Netty ChannelManager(AHC 1.9.11) seems to have an issue on tryToOfferChannelToPool. ```java public final void tryToOfferChannelToPool(Channel channel, boolean keepAlive, String partition) { if (channel.isConnected() && keepAlive && channel.isReadable()) { LOGGER.debug("Adding key: {} for channel {}", partition, channel); channelPool.offer(channel, partition); if (maxConnectionsPerHostEnabled) channelId2KeyPool.putIfAbsent(channel.getId(), partition); Channels.setDiscard(channel); } else { // not offered closeChannel(channel); } } ``` Looks like channelPool.offer should happen after Channels.setDiscard, otherwise the channel becomes available for selection while the previous request cleanup is still happening. This will cause a "java.util.concurrent.TimeoutException: Request timed out" under certain loads. This is the snippet I used to reproduce the issue: ```java final AsyncCompletionHandler<Response> handler = new AsyncCompletionHandler<Response>() { @Override public void onThrowable(Throwable t) { System.out.println(t.toString()); } @Override public Response onCompleted(Response response) throws Exception { return response; } }; Request request = httpClient.prepareGet("http://...").build(); for (int i = 0; i < 500; ++i) { httpClient.executeRequest(request, handler); } ``` I think the fix should be something similar to: ```java Channels.setDiscard(channel); channelPool.offer(channel, partition); ``` instead of : ```java channelPool.offer(channel, partition); Channels.setDiscard(channel); ```
1.0
Netty ChannelManager.tryToOfferChannelToPool makes channel available for selection prematurely - Hello, the Netty ChannelManager(AHC 1.9.11) seems to have an issue on tryToOfferChannelToPool. ```java public final void tryToOfferChannelToPool(Channel channel, boolean keepAlive, String partition) { if (channel.isConnected() && keepAlive && channel.isReadable()) { LOGGER.debug("Adding key: {} for channel {}", partition, channel); channelPool.offer(channel, partition); if (maxConnectionsPerHostEnabled) channelId2KeyPool.putIfAbsent(channel.getId(), partition); Channels.setDiscard(channel); } else { // not offered closeChannel(channel); } } ``` Looks like channelPool.offer should happen after Channels.setDiscard, otherwise the channel becomes available for selection while the previous request cleanup is still happening. This will cause a "java.util.concurrent.TimeoutException: Request timed out" under certain loads. This is the snippet I used to reproduce the issue: ```java final AsyncCompletionHandler<Response> handler = new AsyncCompletionHandler<Response>() { @Override public void onThrowable(Throwable t) { System.out.println(t.toString()); } @Override public Response onCompleted(Response response) throws Exception { return response; } }; Request request = httpClient.prepareGet("http://...").build(); for (int i = 0; i < 500; ++i) { httpClient.executeRequest(request, handler); } ``` I think the fix should be something similar to: ```java Channels.setDiscard(channel); channelPool.offer(channel, partition); ``` instead of : ```java channelPool.offer(channel, partition); Channels.setDiscard(channel); ```
defect
netty channelmanager trytoofferchanneltopool makes channel available for selection prematurely hello the netty channelmanager ahc seems to have an issue on trytoofferchanneltopool java public final void trytoofferchanneltopool channel channel boolean keepalive string partition if channel isconnected keepalive channel isreadable logger debug adding key for channel partition channel channelpool offer channel partition if maxconnectionsperhostenabled putifabsent channel getid partition channels setdiscard channel else not offered closechannel channel looks like channelpool offer should happen after channels setdiscard otherwise the channel becomes available for selection while the previous request cleanup is still happening this will cause a java util concurrent timeoutexception request timed out under certain loads this is the snippet i used to reproduce the issue java final asynccompletionhandler handler new asynccompletionhandler override public void onthrowable throwable t system out println t tostring override public response oncompleted response response throws exception return response request request httpclient prepareget for int i i i httpclient executerequest request handler i think the fix should be something similar to java channels setdiscard channel channelpool offer channel partition instead of java channelpool offer channel partition channels setdiscard channel
1
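The ordering fix proposed in this record (run `Channels.setDiscard` before `channelPool.offer`) can be illustrated with a toy model. The `Channel`, `ChannelPool`, and `ChannelManager` classes below are simplified stand-ins, not the real Netty/AHC types, and the cleanup step is modeled as a plain boolean flag:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy channel: 'discarded' stands in for "previous request's handler detached".
class Channel {
    boolean discarded;
}

// Toy pool that records whether cleanup was already done at offer time.
class ChannelPool {
    private final Deque<Channel> idle = new ArrayDeque<>();
    boolean lastOfferWasClean;

    void offer(Channel c) {
        lastOfferWasClean = c.discarded;
        idle.push(c);
    }

    Channel poll() { return idle.poll(); }
}

class ChannelManager {
    final ChannelPool pool = new ChannelPool();

    // Buggy order from the report: the channel is visible to other
    // borrowers before the old request's cleanup has run.
    void offerThenDiscard(Channel c) {
        pool.offer(c);
        c.discarded = true;
    }

    // Proposed fix: finish cleanup first, then publish to the pool.
    void discardThenOffer(Channel c) {
        c.discarded = true;
        pool.offer(c);
    }
}
```

In the buggy order a concurrent borrower can observe a channel whose previous request is still being torn down; the fixed order guarantees every pooled channel is already clean when it becomes selectable.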
29,353
5,657,679,809
IssuesEvent
2017-04-10 07:57:22
BOINC/boinc
https://api.github.com/repos/BOINC/boinc
closed
Cppcheck full report
C: Undetermined P: Undetermined T: Defect
**Reported by serval2412 on 21 Feb 42735874 06:05 UTC** Hello, I runned cppcheck (git updated version 2 days ago) on svn Boinc sources 2 days ago. I attached the complete report. There may be some false positives but I think it should help. Julien Migrated-From: http://boinc.berkeley.edu/trac/ticket/1211
1.0
Cppcheck full report - **Reported by serval2412 on 21 Feb 42735874 06:05 UTC** Hello, I runned cppcheck (git updated version 2 days ago) on svn Boinc sources 2 days ago. I attached the complete report. There may be some false positives but I think it should help. Julien Migrated-From: http://boinc.berkeley.edu/trac/ticket/1211
defect
cppcheck full report reported by on feb utc hello i runned cppcheck git updated version days ago on svn boinc sources days ago i attached the complete report there may be some false positives but i think it should help julien migrated from
1
5,166
2,610,182,373
IssuesEvent
2015-02-26 18:58:07
chrsmith/quchuseban
https://api.github.com/repos/chrsmith/quchuseban
opened
指南怎么样才能去除色斑
auto-migrated Priority-Medium Type-Defect
``` 《摘要》 落雨萧疏,秋风送爽,凉意十足的秋风快乐浪漫地拂过诗意�� �然的花丛。天上美丽动人的白云轻纱般地遮挡着广阔的天宇� ��麻雀儿吱吱喳喳地飞来跳去,寂静的院子里月季花依然十分 的妩媚,粉红色的花瓣散发出脉脉的香味。葡萄今年没有结�� �一颗,也许是因为关爱不周,丧失了结果的心思。绿色的葡� ��叶似乎有些失落地在风中摇曳。院子里摆着十几盆美丽可爱 的花,娇小的红花无畏地绽放着自己的笑容。月亮又快圆的�� �候每一株花儿都期盼着团圆的快乐。年年岁岁盼月圆,八月� ��五幸福暖。人间处处多吉祥,明月千里送安康。黄褐斑是由 于组织细胞间的微细循环受淤阻,细胞溶解死亡,黑色素增�� �形成色斑沉着所造成的,脸部的表皮层薄,毛细血管丰富,� ��易形成色素沉着。色素沉着部位主要在表皮基底层,黑色素 颗粒明显增多,较为严重者真皮层的噬黑素细胞内也有较多�� �色素。与正常相比,色素细胞的数目,黑色素形成以及黑色� ��颗粒的活性都有不同的增长。那么该怎么治疗色斑呢怎么样 才能去除色斑, 《客户案例》   侯女士 30岁<br>   生完小孩后,我的脸上长了很多黄褐斑,朋友见到我都�� �我:“准备成黄脸婆了,看来日子过得不协调啊,要注意保� ��哦。哎,那时候我天天与这些可恶的色斑做斗争,还是没能 把色斑从脸上移开。自从使用了老公送给我的「黛芙薇尔精�� �液」,色斑就奇迹般的消失了!<br>   在结婚之前我的皮肤比较白,所以平时也不太注意保养�� �生完小孩后,脸上就出现了一些色斑,皮肤变得又黑又暗,� ��怕变成男人口中的黄脸婆,我就开始寻找美容的“药石”, 下定决心一定要把这张脸整美丽了。<br>   因为自己没有什么保养的知识,第一想到的就是去美容�� �,感觉她们的专业知识会比较丰富,于是就开始到美容院去� ��斑,每周去两次,不是给我用一种去斑的膏膏,就是给我一 些自己调配的面膜让我每天回家做。去了有四次,我就有点�� �慌了,感觉不对劲,皮肤开始脱皮,脸上也总感觉火辣辣的� ��又坚持去了两次,不好了,脸上开始出现红血丝了,那几天 连洗脸都感到皮肤疼。不能再去了。<br>   几天以后,老公下班回来,非常神秘,好像藏了什么东�� �,过晚饭,他才拿出来,原来是一款祛除色斑的产品「黛芙薇 尔精华液」。我问,这管用吗?老公说:不知道,别人说好用� ��不知道你用行不行,试试吧。在老公的鼓励下,我鼓起勇气 用了一段时间,「黛芙薇尔精华液」是纯天然植物产品,从�� �到外解决色斑、暗斑、色素沉着等皮肤问题,还原皮肤色素� ��祛除深层斑点。能够排出体内毒素,增强机体免疫能力,加 强肌肤对日光的耐受力,成就无斑雪白肌肤。效果真的非常�� �!<br>   用了两个周期下来,色斑彻底被清除了。脸色更好了,�� �肤没有那么干燥了,也变白了很多,不像以前那样蜡黄蜡黄� ��。我变得比以前更美丽更自信了,我感觉「黛芙薇尔精华液 」是结婚以来,老公送我的最好的礼物 阅读了怎么样才能去除色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   
二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么样才能去除色斑,同时为您分享祛斑小方法 1、葛根羹 原料:葛粉9克,葡萄干9粒。 做法:将葛粉、葡萄干放入碗中,加少量饮用净水调匀,再�� �沸水冲泡。边冲边搅拌成糊状即可 2、葛根小排汤 原料:葛根100克,山药50克,猪小排250克,食盐2克。 做法:将小排洗净、过水,再放入煮沸的汤水中,加葛根、�� �药同煮,先用旺火再改用文火煲1小时,加入食盐调味即成 以上就是简单介绍了如何美容祛斑,这只能淡化色斑,要想�� �底祛斑还是中药内调好。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:47
1.0
指南怎么样才能去除色斑 - ``` 《摘要》 落雨萧疏,秋风送爽,凉意十足的秋风快乐浪漫地拂过诗意�� �然的花丛。天上美丽动人的白云轻纱般地遮挡着广阔的天宇� ��麻雀儿吱吱喳喳地飞来跳去,寂静的院子里月季花依然十分 的妩媚,粉红色的花瓣散发出脉脉的香味。葡萄今年没有结�� �一颗,也许是因为关爱不周,丧失了结果的心思。绿色的葡� ��叶似乎有些失落地在风中摇曳。院子里摆着十几盆美丽可爱 的花,娇小的红花无畏地绽放着自己的笑容。月亮又快圆的�� �候每一株花儿都期盼着团圆的快乐。年年岁岁盼月圆,八月� ��五幸福暖。人间处处多吉祥,明月千里送安康。黄褐斑是由 于组织细胞间的微细循环受淤阻,细胞溶解死亡,黑色素增�� �形成色斑沉着所造成的,脸部的表皮层薄,毛细血管丰富,� ��易形成色素沉着。色素沉着部位主要在表皮基底层,黑色素 颗粒明显增多,较为严重者真皮层的噬黑素细胞内也有较多�� �色素。与正常相比,色素细胞的数目,黑色素形成以及黑色� ��颗粒的活性都有不同的增长。那么该怎么治疗色斑呢怎么样 才能去除色斑, 《客户案例》   侯女士 30岁<br>   生完小孩后,我的脸上长了很多黄褐斑,朋友见到我都�� �我:“准备成黄脸婆了,看来日子过得不协调啊,要注意保� ��哦。哎,那时候我天天与这些可恶的色斑做斗争,还是没能 把色斑从脸上移开。自从使用了老公送给我的「黛芙薇尔精�� �液」,色斑就奇迹般的消失了!<br>   在结婚之前我的皮肤比较白,所以平时也不太注意保养�� �生完小孩后,脸上就出现了一些色斑,皮肤变得又黑又暗,� ��怕变成男人口中的黄脸婆,我就开始寻找美容的“药石”, 下定决心一定要把这张脸整美丽了。<br>   因为自己没有什么保养的知识,第一想到的就是去美容�� �,感觉她们的专业知识会比较丰富,于是就开始到美容院去� ��斑,每周去两次,不是给我用一种去斑的膏膏,就是给我一 些自己调配的面膜让我每天回家做。去了有四次,我就有点�� �慌了,感觉不对劲,皮肤开始脱皮,脸上也总感觉火辣辣的� ��又坚持去了两次,不好了,脸上开始出现红血丝了,那几天 连洗脸都感到皮肤疼。不能再去了。<br>   几天以后,老公下班回来,非常神秘,好像藏了什么东�� �,过晚饭,他才拿出来,原来是一款祛除色斑的产品「黛芙薇 尔精华液」。我问,这管用吗?老公说:不知道,别人说好用� ��不知道你用行不行,试试吧。在老公的鼓励下,我鼓起勇气 用了一段时间,「黛芙薇尔精华液」是纯天然植物产品,从�� �到外解决色斑、暗斑、色素沉着等皮肤问题,还原皮肤色素� ��祛除深层斑点。能够排出体内毒素,增强机体免疫能力,加 强肌肤对日光的耐受力,成就无斑雪白肌肤。效果真的非常�� �!<br>   用了两个周期下来,色斑彻底被清除了。脸色更好了,�� �肤没有那么干燥了,也变白了很多,不像以前那样蜡黄蜡黄� ��。我变得比以前更美丽更自信了,我感觉「黛芙薇尔精华液 」是结婚以来,老公送我的最好的礼物 阅读了怎么样才能去除色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 
还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么样才能去除色斑,同时为您分享祛斑小方法 1、葛根羹 原料:葛粉9克,葡萄干9粒。 做法:将葛粉、葡萄干放入碗中,加少量饮用净水调匀,再�� �沸水冲泡。边冲边搅拌成糊状即可 2、葛根小排汤 原料:葛根100克,山药50克,猪小排250克,食盐2克。 做法:将小排洗净、过水,再放入煮沸的汤水中,加葛根、�� �药同煮,先用旺火再改用文火煲1小时,加入食盐调味即成 以上就是简单介绍了如何美容祛斑,这只能淡化色斑,要想�� �底祛斑还是中药内调好。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:47
defect
指南怎么样才能去除色斑 《摘要》 落雨萧疏,秋风送爽,凉意十足的秋风快乐浪漫地拂过诗意�� �然的花丛。天上美丽动人的白云轻纱般地遮挡着广阔的天宇� ��麻雀儿吱吱喳喳地飞来跳去,寂静的院子里月季花依然十分 的妩媚,粉红色的花瓣散发出脉脉的香味。葡萄今年没有结�� �一颗,也许是因为关爱不周,丧失了结果的心思。绿色的葡� ��叶似乎有些失落地在风中摇曳。院子里摆着十几盆美丽可爱 的花,娇小的红花无畏地绽放着自己的笑容。月亮又快圆的�� �候每一株花儿都期盼着团圆的快乐。年年岁岁盼月圆,八月� ��五幸福暖。人间处处多吉祥,明月千里送安康。黄褐斑是由 于组织细胞间的微细循环受淤阻,细胞溶解死亡,黑色素增�� �形成色斑沉着所造成的,脸部的表皮层薄,毛细血管丰富,� ��易形成色素沉着。色素沉着部位主要在表皮基底层,黑色素 颗粒明显增多,较为严重者真皮层的噬黑素细胞内也有较多�� �色素。与正常相比,色素细胞的数目,黑色素形成以及黑色� ��颗粒的活性都有不同的增长。那么该怎么治疗色斑呢怎么样 才能去除色斑, 《客户案例》   侯女士   生完小孩后,我的脸上长了很多黄褐斑,朋友见到我都�� �我:“准备成黄脸婆了,看来日子过得不协调啊,要注意保� ��哦。哎,那时候我天天与这些可恶的色斑做斗争,还是没能 把色斑从脸上移开。自从使用了老公送给我的「黛芙薇尔精�� �液」,色斑就奇迹般的消失了   在结婚之前我的皮肤比较白,所以平时也不太注意保养�� �生完小孩后,脸上就出现了一些色斑,皮肤变得又黑又暗,� ��怕变成男人口中的黄脸婆,我就开始寻找美容的“药石”, 下定决心一定要把这张脸整美丽了。   因为自己没有什么保养的知识,第一想到的就是去美容�� �,感觉她们的专业知识会比较丰富,于是就开始到美容院去� ��斑,每周去两次,不是给我用一种去斑的膏膏,就是给我一 些自己调配的面膜让我每天回家做。去了有四次,我就有点�� �慌了,感觉不对劲,皮肤开始脱皮,脸上也总感觉火辣辣的� ��又坚持去了两次,不好了,脸上开始出现红血丝了,那几天 连洗脸都感到皮肤疼。不能再去了。   几天以后,老公下班回来,非常神秘,好像藏了什么东�� �,过晚饭 他才拿出来,原来是一款祛除色斑的产品「黛芙薇 尔精华液」。我问,这管用吗 老公说:不知道,别人说好用� ��不知道你用行不行,试试吧。在老公的鼓励下,我鼓起勇气 用了一段时间,「黛芙薇尔精华液」是纯天然植物产品,从�� �到外解决色斑、暗斑、色素沉着等皮肤问题,还原皮肤色素� ��祛除深层斑点。能够排出体内毒素,增强机体免疫能力,加 强肌肤对日光的耐受力,成就无斑雪白肌肤。效果真的非常�� �   用了两个周期下来,色斑彻底被清除了。脸色更好了,�� �肤没有那么干燥了,也变白了很多,不像以前那样蜡黄蜡黄� ��。我变得比以前更美丽更自信了,我感觉「黛芙薇尔精华液 」是结婚以来,老公送我的最好的礼物 阅读了怎么样才能去除色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》    黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗   答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来    ,服用黛芙薇尔美白,会伤身体吗 有副作用吗   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖    ,去除黄褐斑之后,会反弹吗   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗    ,你们的价格有点贵,能不能便宜一点   答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗    ,我适合用黛芙薇尔精华液吗   答:黛芙薇尔适用人群:    、生理紊乱引起的黄褐斑人群    、生育引起的妊娠斑人群    、年纪增长引起的老年斑人群    、化妆品色素沉积、辐射斑人群    、长期日照引起的日晒斑人群    、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么样才能去除色斑,同时为您分享祛斑小方法 、葛根羹 原料: , 。 做法:将葛粉、葡萄干放入碗中,加少量饮用净水调匀,再�� �沸水冲泡。边冲边搅拌成糊状即可 、葛根小排汤 原料: , , , 。 做法:将小排洗净、过水,再放入煮沸的汤水中,加葛根、�� �药同煮, ,加入食盐调味即成 以上就是简单介绍了如何美容祛斑,这只能淡化色斑,要想�� �底祛斑还是中药内调好。 original issue reported on code google com by additive gmail com on jul at
1
770,475
27,041,481,528
IssuesEvent
2023-02-13 05:57:01
hypersign-protocol/hid-node
https://api.github.com/repos/hypersign-protocol/hid-node
closed
Implement `blockchainAccountId` field in Verificaition method
hid-node ssi did diddoc high-priority need-to-come-back
- https://www.w3.org/TR/did-spec-registries/#blockchainaccountid - https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-10.md ``` account_id: chain_id + ":" + account_address chain_id: [:-a-zA-Z0-9]{5,41} account_address: [a-zA-Z0-9]{1,64} ```
1.0
Implement `blockchainAccountId` field in Verificaition method - - https://www.w3.org/TR/did-spec-registries/#blockchainaccountid - https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-10.md ``` account_id: chain_id + ":" + account_address chain_id: [:-a-zA-Z0-9]{5,41} account_address: [a-zA-Z0-9]{1,64} ```
non_defect
implement blockchainaccountid field in verificaition method account id chain id account address chain id account address
0
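The CAIP-10 grammar quoted in this record can be checked with a small validator. This is a sketch, not the hid-node implementation; it assumes the account address is everything after the last colon, which is consistent with CAIP-10 since `chain_id` itself is `namespace:reference`:

```java
import java.util.regex.Pattern;

final class Caip10 {
    // Grammar from the record; the hyphen is placed last in the character
    // class so it is matched literally rather than forming a range.
    private static final Pattern CHAIN_ID = Pattern.compile("[a-zA-Z0-9:-]{5,41}");
    private static final Pattern ADDRESS = Pattern.compile("[a-zA-Z0-9]{1,64}");

    // account_id = chain_id + ":" + account_address; chain_id contains a
    // colon of its own, so split on the LAST colon.
    static boolean isValidAccountId(String accountId) {
        int cut = accountId.lastIndexOf(':');
        if (cut < 0) return false;
        return CHAIN_ID.matcher(accountId.substring(0, cut)).matches()
            && ADDRESS.matcher(accountId.substring(cut + 1)).matches();
    }
}
```

For example, `eip155:1:0xab16a96D359eC26a11e2C2b3d8f8B8942d5Bfcdb` splits into chain_id `eip155:1` and a 42-character alphanumeric address, both of which satisfy the quoted patterns.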
15,509
2,858,498,001
IssuesEvent
2015-06-03 03:06:12
michaelcdillon/ice4j
https://api.github.com/repos/michaelcdillon/ice4j
closed
Move project to Github
auto-migrated Priority-Medium Type-Defect
``` Github is more popular and contains more productive efficient tools for contributing. Could you export project to github? ``` Original issue reported on code.google.com by `stok...@gmail.com` on 8 Jan 2015 at 9:48
1.0
Move project to Github - ``` Github is more popular and contains more productive efficient tools for contributing. Could you export project to github? ``` Original issue reported on code.google.com by `stok...@gmail.com` on 8 Jan 2015 at 9:48
defect
move project to github github is more popular and contains more productive efficient tools for contributing could you export project to github original issue reported on code google com by stok gmail com on jan at
1
Unnamed: 0: 18,719
id: 3,081,424,551
type: IssuesEvent
created_at: 2015-08-22 18:16:25
repo: WildBamaBoy/minecraft-comes-alive
repo_url: https://api.github.com/repos/WildBamaBoy/minecraft-comes-alive
action: closed
title: [mca] No mapping found for requested phrase ID: sleep.invalid
labels: 1.7.10 1.8 defect
body: These errors keep popping up on server log with 5.0.7.2 on 2.0.2 Radix-Core for 1.7.10 with forge 10.13.4.1448. Something to do with the villagers trying to sleep? [19:38:44 ERROR]: [mca] No mapping found for requested phrase ID: sleep.invalid [19:38:44 ERROR]: catching java.lang.Throwable at radixcore.lang.LanguageManager.getString(LanguageManager.java:185) [LanguageManager.class:?] at mca.entity.EntityHuman.say(EntityHuman.java:683) [EntityHuman.class:?] at mca.entity.EntityHuman.say(EntityHuman.java:697) [EntityHuman.class:?] at mca.ai.AISleep.onUpdateServer(AISleep.java:97) [AISleep.class:?] at mca.ai.AIManager.onUpdate(AIManager.java:45) [AIManager.class:?] at mca.entity.EntityHuman.func_70071_h_(EntityHuman.java:307) [EntityHuman.class:?] at net.minecraft.world.World.func_72866_a(World.java:2655) [ahb.class:?] at net.minecraft.world.WorldServer.func_72866_a(WorldServer.java:837) [mt.class:?] at net.minecraft.world.World.func_72870_g(World.java:2607) [ahb.class:?] at net.minecraft.world.World.func_72939_s(World.java:2423) [ahb.class:?] at net.minecraft.world.WorldServer.func_72939_s(WorldServer.java:669) [mt.class:?] at net.minecraft.server.MinecraftServer.func_71190_q(MinecraftServer.java:954) [MinecraftServer.class:?] at net.minecraft.server.dedicated.DedicatedServer.func_71190_q(DedicatedServer.java:431) [lt.class:?] at net.minecraft.server.MinecraftServer.func_71217_p(MinecraftServer.java:809) [MinecraftServer.class:?] at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:669) [MinecraftServer.class:?] at java.lang.Thread.run(Unknown Source) [?:1.8.0_51] [19:38:44 ERROR]: Unexpected exception/(Stacktrace for non-fatal error.). null
index: 1.0
text_combine: [mca] No mapping found for requested phrase ID: sleep.invalid - These errors keep popping up on server log with 5.0.7.2 on 2.0.2 Radix-Core for 1.7.10 with forge 10.13.4.1448. Something to do with the villagers trying to sleep? [19:38:44 ERROR]: [mca] No mapping found for requested phrase ID: sleep.invalid [19:38:44 ERROR]: catching java.lang.Throwable at radixcore.lang.LanguageManager.getString(LanguageManager.java:185) [LanguageManager.class:?] at mca.entity.EntityHuman.say(EntityHuman.java:683) [EntityHuman.class:?] at mca.entity.EntityHuman.say(EntityHuman.java:697) [EntityHuman.class:?] at mca.ai.AISleep.onUpdateServer(AISleep.java:97) [AISleep.class:?] at mca.ai.AIManager.onUpdate(AIManager.java:45) [AIManager.class:?] at mca.entity.EntityHuman.func_70071_h_(EntityHuman.java:307) [EntityHuman.class:?] at net.minecraft.world.World.func_72866_a(World.java:2655) [ahb.class:?] at net.minecraft.world.WorldServer.func_72866_a(WorldServer.java:837) [mt.class:?] at net.minecraft.world.World.func_72870_g(World.java:2607) [ahb.class:?] at net.minecraft.world.World.func_72939_s(World.java:2423) [ahb.class:?] at net.minecraft.world.WorldServer.func_72939_s(WorldServer.java:669) [mt.class:?] at net.minecraft.server.MinecraftServer.func_71190_q(MinecraftServer.java:954) [MinecraftServer.class:?] at net.minecraft.server.dedicated.DedicatedServer.func_71190_q(DedicatedServer.java:431) [lt.class:?] at net.minecraft.server.MinecraftServer.func_71217_p(MinecraftServer.java:809) [MinecraftServer.class:?] at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:669) [MinecraftServer.class:?] at java.lang.Thread.run(Unknown Source) [?:1.8.0_51] [19:38:44 ERROR]: Unexpected exception/(Stacktrace for non-fatal error.). null
label: defect
text: no mapping found for requested phrase id sleep invalid these errors keep popping up on server log with on radix core for with forge something to do with the villagers trying to sleep no mapping found for requested phrase id sleep invalid catching java lang throwable at radixcore lang languagemanager getstring languagemanager java at mca entity entityhuman say entityhuman java at mca entity entityhuman say entityhuman java at mca ai aisleep onupdateserver aisleep java at mca ai aimanager onupdate aimanager java at mca entity entityhuman func h entityhuman java at net minecraft world world func a world java at net minecraft world worldserver func a worldserver java at net minecraft world world func g world java at net minecraft world world func s world java at net minecraft world worldserver func s worldserver java at net minecraft server minecraftserver func q minecraftserver java at net minecraft server dedicated dedicatedserver func q dedicatedserver java at net minecraft server minecraftserver func p minecraftserver java at net minecraft server minecraftserver run minecraftserver java at java lang thread run unknown source unexpected exception stacktrace for non fatal error null
binary_label: 1

Unnamed: 0: 57,071
id: 15,648,197,159
type: IssuesEvent
created_at: 2021-03-23 05:11:21
repo: vector-im/element-web
repo_url: https://api.github.com/repos/vector-im/element-web
action: closed
title: Add enabled delay to 'complete' button in discovery section too
labels: A-Identity-Server Help Wanted P3 S-Tolerable T-Defect
body: We have a few second delay on enabling the 'continue' button when adding an email address, but this delay is not present in the discovery section - we should just copy/paste it there
index: 1.0
text_combine: Add enabled delay to 'complete' button in discovery section too - We have a few second delay on enabling the 'continue' button when adding an email address, but this delay is not present in the discovery section - we should just copy/paste it there
label: defect
text: add enabled delay to complete button in discovery section too we have a few second delay on enabling the continue button when adding an email address but this delay is not present in the discovery section we should just copy paste it there
binary_label: 1

Unnamed: 0: 36,345
id: 7,893,880,841
type: IssuesEvent
created_at: 2018-06-28 19:33:25
repo: primefaces/primefaces
repo_url: https://api.github.com/repos/primefaces/primefaces
action: closed
title: Timeline: groupsWidth attribute not working
labels: defect
body: # IMPORTANT - !!! If you open an issue, fill every item. Otherwise the issue might be closed as invalid. !!! - Before you open an issue, test it with the current/newest version. - Try to find an explanation to your problem by yourself, by simply debugging. This will help us to solve your issue 10x faster - Clone this repository https://github.com/primefaces/primefaces-test.git in order to reproduce your problem, you'll have better chance to receive an answer and a solution. - Otherwise the example must be as small and simple as possible! It must be runnable without any other dependencies (like Spring,..., or project/company internal classes)! - Feel free to provide a PR (Primefaces is an open-source project, any fixes or improvements are welcome.) ## 1) Environment - PrimeFaces version: 6.2.1 - Does it work on the newest released PrimeFaces version? NO Version? 6.2.1 - Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) - Application server + version: Tomcat 7.0.34 - Affected browsers: Chrome ## 2) Expected behavior the groupsWidth attribute for timeline component should set the width of the group column ... ## 3) Actual behavior groupsWidth attribute is not working and the browser consol shows an error related to groupsWidth .. ## 4) Steps to reproduce ..
## 5) Sample XHTML <p:timeline id="PN_timel" value="#{beanEsempio.model}" var="attivita" varGroup="group" eventMargin="5" eventMarginAxis="0" showMajorLabels="false" axisOnTop="true" selectable="false" editable="false" timeChangeable="false" groupsChangeable="false" groupsOnRight="false" start="#{beanEsempio.fil.start}" min="#{beanEsempio.fil.data}" max="#{beanEsempio.fil.finegg}" zoomMin="36000000" groupsWidth="200px" widgetVar="PNtimel" zoomable="#{beanEsempio.zoomable}" > <p:ajax event="rangechange" oncomplete="onrangechange1()" rendered="#{beanEsempio.timelineGruppo}"/> <p:ajax event="rangechange" oncomplete="onrangechange3()" rendered="#{!beanEsempio.timelineGruppo}"/> <p:ajax event="add" update="" listener="#{beanEsempio.preparaNuovaTime}" oncomplete="PF('PNNuovo').show()"/> <p:ajax event="edit" update="" listener="#{beanEsempio.onChangeTim}" oncomplete="PF('PNNuovo').show()"/> <p:ajax event="delete" update="" listener="#{beanEsempio.onDelete}" onstart="PF('PNtimel').cancelDelete()" oncomplete="PF('dialogCancella').show()"/> <p:ajax event="change" update="@none" listener="#{beanEsempio.onChange}"/> <f:facet name="group"> <h:outputText value="#{group.descrizione}" style="font-weight:bold;"/> </f:facet> <h:outputText value="#{attivita.descrizioneCompletaSCProg}" escape="false"/> </p:timeline> .. ## 6) Sample bean ..
index: 1.0
text_combine: Timeline: groupsWidth attribute not working - # IMPORTANT - !!! If you open an issue, fill every item. Otherwise the issue might be closed as invalid. !!! - Before you open an issue, test it with the current/newest version. - Try to find an explanation to your problem by yourself, by simply debugging. This will help us to solve your issue 10x faster - Clone this repository https://github.com/primefaces/primefaces-test.git in order to reproduce your problem, you'll have better chance to receive an answer and a solution. - Otherwise the example must be as small and simple as possible! It must be runnable without any other dependencies (like Spring,..., or project/company internal classes)! - Feel free to provide a PR (Primefaces is an open-source project, any fixes or improvements are welcome.) ## 1) Environment - PrimeFaces version: 6.2.1 - Does it work on the newest released PrimeFaces version? NO Version? 6.2.1 - Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) - Application server + version: Tomcat 7.0.34 - Affected browsers: Chrome ## 2) Expected behavior the groupsWidth attribute for timeline component should set the width of the group column ... ## 3) Actual behavior groupsWidth attribute is not working and the browser consol shows an error related to groupsWidth .. ## 4) Steps to reproduce ..
## 5) Sample XHTML <p:timeline id="PN_timel" value="#{beanEsempio.model}" var="attivita" varGroup="group" eventMargin="5" eventMarginAxis="0" showMajorLabels="false" axisOnTop="true" selectable="false" editable="false" timeChangeable="false" groupsChangeable="false" groupsOnRight="false" start="#{beanEsempio.fil.start}" min="#{beanEsempio.fil.data}" max="#{beanEsempio.fil.finegg}" zoomMin="36000000" groupsWidth="200px" widgetVar="PNtimel" zoomable="#{beanEsempio.zoomable}" > <p:ajax event="rangechange" oncomplete="onrangechange1()" rendered="#{beanEsempio.timelineGruppo}"/> <p:ajax event="rangechange" oncomplete="onrangechange3()" rendered="#{!beanEsempio.timelineGruppo}"/> <p:ajax event="add" update="" listener="#{beanEsempio.preparaNuovaTime}" oncomplete="PF('PNNuovo').show()"/> <p:ajax event="edit" update="" listener="#{beanEsempio.onChangeTim}" oncomplete="PF('PNNuovo').show()"/> <p:ajax event="delete" update="" listener="#{beanEsempio.onDelete}" onstart="PF('PNtimel').cancelDelete()" oncomplete="PF('dialogCancella').show()"/> <p:ajax event="change" update="@none" listener="#{beanEsempio.onChange}"/> <f:facet name="group"> <h:outputText value="#{group.descrizione}" style="font-weight:bold;"/> </f:facet> <h:outputText value="#{attivita.descrizioneCompletaSCProg}" escape="false"/> </p:timeline> .. ## 6) Sample bean ..
label: defect
text: timeline groupswidth attribute not working important if you open an issue fill every item otherwise the issue might be closed as invalid before you open an issue test it with the current newest version try to find an explanation to your problem by yourself by simply debugging this will help us to solve your issue faster clone this repository in order to reproduce your problem you ll have better chance to receive an answer and a solution otherwise the example must be as small and simple as possible it must be runnable without any other dependencies like spring or project company internal classes feel free to provide a pr primefaces is an open source project any fixes or improvements are welcome environment primefaces version does it work on the newest released primefaces version no version does it work on the newest sources in github build by source application server version tomcat affected browsers chrome expected behavior the groupswidth attribute for timeline component should set the width of the group column actual behavior groupswidth attribute is not working and the browser consol shows an error related to groupswidth steps to reproduce sample xhtml p timeline id pn timel value beanesempio model var attivita vargroup group eventmargin eventmarginaxis showmajorlabels false axisontop true selectable false editable false timechangeable false groupschangeable false groupsonright false start beanesempio fil start min beanesempio fil data max beanesempio fil finegg zoommin groupswidth widgetvar pntimel zoomable beanesempio zoomable sample bean
binary_label: 1

Unnamed: 0: 65,219
id: 19,275,337,266
type: IssuesEvent
created_at: 2021-12-10 11:08:26
repo: SeleniumHQ/selenium
repo_url: https://api.github.com/repos/SeleniumHQ/selenium
action: closed
title: [🐛 Bug]: NetworkInterface.getByName("en0") throw NullPointException
labels: C-java I-defect
body: ### What happened? NetworkInterface.getByName("en0") throw NullPointException ### How can we reproduce the issue? ```shell I use a proxy options.addArguments("--proxy-server=http://" + proxy); ``` ### Relevant log output ```shell WARNING: Failed to resolve host address java.lang.NullPointerException at org.openqa.selenium.net.HostIdentifier.resolveHostAddress(HostIdentifier.java:92) at org.openqa.selenium.net.HostIdentifier.getHostAddress(HostIdentifier.java:123) at org.openqa.selenium.WebDriverException.getSystemInformation(WebDriverException.java:95) at org.openqa.selenium.WebDriverException.createMessage(WebDriverException.java:87) at org.openqa.selenium.WebDriverException.getMessage(WebDriverException.java:65) at java.lang.Throwable.getLocalizedMessage(Throwable.java:391) at java.lang.Throwable.toString(Throwable.java:480) at java.lang.String.valueOf(String.java:2994) at java.io.PrintStream.println(PrintStream.java:821) at java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748) at java.lang.Throwable.printStackTrace(Throwable.java:655) at java.lang.Throwable.printStackTrace(Throwable.java:643) at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1061) at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052) at java.lang.Thread.dispatchUncaughtException(Thread.java:1959) ``` ### Operating System macOS Monterey 12.0.1 ### Selenium version java 4.1.0 ### What are the browser(s) and version(s) where you see this issue? chrome 96.0.4664.55 ### What are the browser driver(s) and version(s) where you see this issue? chromedriver 95 ### Are you using Selenium Grid? _No response_
index: 1.0
text_combine: [🐛 Bug]: NetworkInterface.getByName("en0") throw NullPointException - ### What happened? NetworkInterface.getByName("en0") throw NullPointException ### How can we reproduce the issue? ```shell I use a proxy options.addArguments("--proxy-server=http://" + proxy); ``` ### Relevant log output ```shell WARNING: Failed to resolve host address java.lang.NullPointerException at org.openqa.selenium.net.HostIdentifier.resolveHostAddress(HostIdentifier.java:92) at org.openqa.selenium.net.HostIdentifier.getHostAddress(HostIdentifier.java:123) at org.openqa.selenium.WebDriverException.getSystemInformation(WebDriverException.java:95) at org.openqa.selenium.WebDriverException.createMessage(WebDriverException.java:87) at org.openqa.selenium.WebDriverException.getMessage(WebDriverException.java:65) at java.lang.Throwable.getLocalizedMessage(Throwable.java:391) at java.lang.Throwable.toString(Throwable.java:480) at java.lang.String.valueOf(String.java:2994) at java.io.PrintStream.println(PrintStream.java:821) at java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748) at java.lang.Throwable.printStackTrace(Throwable.java:655) at java.lang.Throwable.printStackTrace(Throwable.java:643) at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1061) at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052) at java.lang.Thread.dispatchUncaughtException(Thread.java:1959) ``` ### Operating System macOS Monterey 12.0.1 ### Selenium version java 4.1.0 ### What are the browser(s) and version(s) where you see this issue? chrome 96.0.4664.55 ### What are the browser driver(s) and version(s) where you see this issue? chromedriver 95 ### Are you using Selenium Grid? _No response_
label: defect
text: networkinterface getbyname throw nullpointexception what happened networkinterface getbyname throw nullpointexception how can we reproduce the issue shell i use a proxy options addarguments proxy server proxy relevant log output shell warning failed to resolve host address java lang nullpointerexception at org openqa selenium net hostidentifier resolvehostaddress hostidentifier java at org openqa selenium net hostidentifier gethostaddress hostidentifier java at org openqa selenium webdriverexception getsysteminformation webdriverexception java at org openqa selenium webdriverexception createmessage webdriverexception java at org openqa selenium webdriverexception getmessage webdriverexception java at java lang throwable getlocalizedmessage throwable java at java lang throwable tostring throwable java at java lang string valueof string java at java io printstream println printstream java at java lang throwable wrappedprintstream println throwable java at java lang throwable printstacktrace throwable java at java lang throwable printstacktrace throwable java at java lang threadgroup uncaughtexception threadgroup java at java lang threadgroup uncaughtexception threadgroup java at java lang thread dispatchuncaughtexception thread java operating system macos monterey selenium version java what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no response
binary_label: 1

Unnamed: 0: 13,944
id: 8,743,488,890
type: IssuesEvent
created_at: 2018-12-12 19:19:00
repo: mercycorps/TolaActivity
repo_url: https://api.github.com/repos/mercycorps/TolaActivity
action: closed
title: Add indicator success message: include a link to the program page (quick fix)
labels: usability
body: **Note:** A longer term preferred solution is explored in #796. This is a quick fix for the short term... Right now, after creating an indicator, you just sit there. Why not give you an easy way to the program page? # Affects "Form data saved" message that you see after adding an indicator AND completing all required fields. (This does NOT apply to the "Basic indicator created" message you see immediately after adding, prior to completion of targets, etc.) ![image](https://user-images.githubusercontent.com/33670923/49319258-c321e500-f4b0-11e8-8986-e76f2cd357c1.png) # Acceptance criteria - [x] Replace the text in the "Success, form data saved" with the following: Success! View your indicator on the program page. - [x] "View your indicator on the program page." is a link to the program page where the new indicator appears. ## Housekeeping (where applicable) - [ ] Unit test/s written - [x] i18n strings included - [ ] Is accessible to people using screen readers
index: True
text_combine: Add indicator success message: include a link to the program page (quick fix) - **Note:** A longer term preferred solution is explored in #796. This is a quick fix for the short term... Right now, after creating an indicator, you just sit there. Why not give you an easy way to the program page? # Affects "Form data saved" message that you see after adding an indicator AND completing all required fields. (This does NOT apply to the "Basic indicator created" message you see immediately after adding, prior to completion of targets, etc.) ![image](https://user-images.githubusercontent.com/33670923/49319258-c321e500-f4b0-11e8-8986-e76f2cd357c1.png) # Acceptance criteria - [x] Replace the text in the "Success, form data saved" with the following: Success! View your indicator on the program page. - [x] "View your indicator on the program page." is a link to the program page where the new indicator appears. ## Housekeeping (where applicable) - [ ] Unit test/s written - [x] i18n strings included - [ ] Is accessible to people using screen readers
label: non_defect
text: add indicator success message include a link to the program page quick fix note a longer term preferred solution is explored in this is a quick fix for the short term right now after creating an indicator you just sit there why not give you an easy way to the program page affects form data saved message that you see after adding an indicator and completing all required fields this does not apply to the basic indicator created message you see immediately after adding prior to completion of targets etc acceptance criteria replace the text in the success form data saved with the following success view your indicator on the program page view your indicator on the program page is a link to the program page where the new indicator appears housekeeping where applicable unit test s written strings included is accessible to people using screen readers
binary_label: 0

Unnamed: 0: 8,882
id: 2,612,923,710
type: IssuesEvent
created_at: 2015-02-27 17:32:17
repo: chrsmith/windows-package-manager
repo_url: https://api.github.com/repos/chrsmith/windows-package-manager
action: closed
title: All Packages are deleted from registry on Npackd start
labels: auto-migrated Milestone-1.18 Type-Defect
body: ``` What steps will reproduce the problem? 1. Install any package, for example 7-Zip 2. Close Npackd 3. Open Npackd What is the expected output? What do you see instead? Expected: I should see 7-Zip listed as "Installed" Actual: 7-Zip is listed as "Not Installed" What version of the product are you using? On what operating system? - Npackd 1.17.9 64 bit - Windows 8 64 bit Please provide any additional information below. I have a fairly complicated / non-standard set up. For example, my `C:\Program Files` is symlinked to `D:\Program Files`. My Npackd is installed to `D:\Npackd` and the Npackd installation directory is also set to `D:\Npackd` My "HKCU_Class" registry (`%LocalAppData%\Microsoft\Windows\UsrClass.dat`) is encrypted via EFS. I diagnosed the problem with Process Monitor a bit. It appears that after installation, Npackd managed to save the package entries to `HKEY_LOCAL_MACHINE\SOFTWARE\Npacked\Packages`. However, the 7-Zip entry's `DetectionInfo` value is an empty string. Upon starting Npackd again, Npackd deletes the 7-Zip's `Package` entry. This is after a reinstallation of Windows. I had basically the same set up (with the symlinks and encryption stuff) working fine before. Apparently something changed in this installation. Any pointers how to debug this? You can also point me to the relevant part in the source code. Thanks ``` Original issue reported on code.google.com by `kiz...@gmail.com` on 9 Jul 2013 at 10:38
index: 1.0
text_combine: All Packages are deleted from registry on Npackd start - ``` What steps will reproduce the problem? 1. Install any package, for example 7-Zip 2. Close Npackd 3. Open Npackd What is the expected output? What do you see instead? Expected: I should see 7-Zip listed as "Installed" Actual: 7-Zip is listed as "Not Installed" What version of the product are you using? On what operating system? - Npackd 1.17.9 64 bit - Windows 8 64 bit Please provide any additional information below. I have a fairly complicated / non-standard set up. For example, my `C:\Program Files` is symlinked to `D:\Program Files`. My Npackd is installed to `D:\Npackd` and the Npackd installation directory is also set to `D:\Npackd` My "HKCU_Class" registry (`%LocalAppData%\Microsoft\Windows\UsrClass.dat`) is encrypted via EFS. I diagnosed the problem with Process Monitor a bit. It appears that after installation, Npackd managed to save the package entries to `HKEY_LOCAL_MACHINE\SOFTWARE\Npacked\Packages`. However, the 7-Zip entry's `DetectionInfo` value is an empty string. Upon starting Npackd again, Npackd deletes the 7-Zip's `Package` entry. This is after a reinstallation of Windows. I had basically the same set up (with the symlinks and encryption stuff) working fine before. Apparently something changed in this installation. Any pointers how to debug this? You can also point me to the relevant part in the source code. Thanks ``` Original issue reported on code.google.com by `kiz...@gmail.com` on 9 Jul 2013 at 10:38
label: defect
text: all packages are deleted from registry on npackd start what steps will reproduce the problem install any package for example zip close npackd open npackd what is the expected output what do you see instead expected i should see zip listed as installed actual zip is listed as not installed what version of the product are you using on what operating system npackd bit windows bit please provide any additional information below i have a fairly complicated non standard set up for example my c program files is symlinked to d program files my npackd is installed to d npackd and the npackd installation directory is also set to d npackd my hkcu class registry localappdata microsoft windows usrclass dat is encrypted via efs i diagnosed the problem with process monitor a bit it appears that after installation npackd managed to save the package entries to hkey local machine software npacked packages however the zip entry s detectioninfo value is an empty string upon starting npackd again npackd deletes the zip s package entry this is after a reinstallation of windows i had basically the same set up with the symlinks and encryption stuff working fine before apparently something changed in this installation any pointers how to debug this you can also point me to the relevant part in the source code thanks original issue reported on code google com by kiz gmail com on jul at
binary_label: 1

Unnamed: 0: 22,601
id: 3,670,879,373
type: IssuesEvent
created_at: 2016-02-22 02:15:37
repo: plv8/plv8
repo_url: https://api.github.com/repos/plv8/plv8
action: closed
title: Custom error error
labels: auto-migrated Priority-Medium Type-Defect
body: ``` Custom error loose properties What steps will reproduce the problem 1. create function CREATE OR REPLACE FUNCTION public.utils ( ) RETURNS void AS $body$ this.dbError = function(message){ this.message = (message || ''); }; dbError.prototype = Error.prototype; dbError.prototype.value = function(key, value){ if (typeof value !== 'undefined') { this[key] = value; } else { return this[key]; } }; $body$ LANGUAGE 'plv8' VOLATILE RETURNS NULL ON NULL INPUT SECURITY DEFINER COST 100; 2. create trigger function CREATE OR REPLACE FUNCTION public.test_trigger func ( ) RETURNS trigger AS $body$ var fn = plv8.find_function('public.utils'); fn(); var err = new dbError('this is a dbError'); err.someProp = 'lalala'; throw err; return NEW; $body$ LANGUAGE 'plv8' VOLATILE CALLED ON NULL INPUT SECURITY DEFINER COST 100; 3. set this trigger on any table on insteadof event and try to execute it DO $$ plv8.find_function('public.utils')(); try{ plv8.execute('insert into Temp (field) value ($1)', [100]); } catch(ex){ plv8.elog(NOTICE, ex instanceof dbError); plv8.elog(NOTICE, ex.message); plv8.elog(NOTICE, ex.someProp); } we will see true this is a dbError undefined What is the expected output? What do you see instead? true this is a dbError lalala What version of the product are you using? On what operating system? PG 9.3 Windows Server R2 64 ``` Original issue reported on code.google.com by `vovapjat...@gmail.com` on 28 Jan 2014 at 6:25
index: 1.0
text_combine: Custom error error - ``` Custom error loose properties What steps will reproduce the problem 1. create function CREATE OR REPLACE FUNCTION public.utils ( ) RETURNS void AS $body$ this.dbError = function(message){ this.message = (message || ''); }; dbError.prototype = Error.prototype; dbError.prototype.value = function(key, value){ if (typeof value !== 'undefined') { this[key] = value; } else { return this[key]; } }; $body$ LANGUAGE 'plv8' VOLATILE RETURNS NULL ON NULL INPUT SECURITY DEFINER COST 100; 2. create trigger function CREATE OR REPLACE FUNCTION public.test_trigger func ( ) RETURNS trigger AS $body$ var fn = plv8.find_function('public.utils'); fn(); var err = new dbError('this is a dbError'); err.someProp = 'lalala'; throw err; return NEW; $body$ LANGUAGE 'plv8' VOLATILE CALLED ON NULL INPUT SECURITY DEFINER COST 100; 3. set this trigger on any table on insteadof event and try to execute it DO $$ plv8.find_function('public.utils')(); try{ plv8.execute('insert into Temp (field) value ($1)', [100]); } catch(ex){ plv8.elog(NOTICE, ex instanceof dbError); plv8.elog(NOTICE, ex.message); plv8.elog(NOTICE, ex.someProp); } we will see true this is a dbError undefined What is the expected output? What do you see instead? true this is a dbError lalala What version of the product are you using? On what operating system? PG 9.3 Windows Server R2 64 ``` Original issue reported on code.google.com by `vovapjat...@gmail.com` on 28 Jan 2014 at 6:25
label: defect
text: custom error error custom error loose properties what steps will reproduce the problem create function create or replace function public utils returns void as body this dberror function message this message message dberror prototype error prototype dberror prototype value function key value if typeof value undefined this value else return this body language volatile returns null on null input security definer cost create trigger function create or replace function public test trigger func returns trigger as body var fn find function public utils fn var err new dberror this is a dberror err someprop lalala throw err return new body language volatile called on null input security definer cost set this trigger on any table on insteadof event and try to execute it do find function public utils try execute insert into temp field value catch ex elog notice ex instanceof dberror elog notice ex message elog notice ex someprop we will see true this is a dberror undefined what is the expected output what do you see instead true this is a dberror lalala what version of the product are you using on what operating system pg windows server original issue reported on code google com by vovapjat gmail com on jan at
binary_label: 1

Unnamed: 0: 12,250
id: 2,685,534,378
type: IssuesEvent
created_at: 2015-03-30 02:19:39
repo: IssueMigrationTest/Test5
repo_url: https://api.github.com/repos/IssueMigrationTest/Test5
action: closed
title: unpack_int() may only return unsigned integers for short and long
labels: auto-migrated Priority-Medium Type-Defect
body: **Issue by RealGran...@gmail.com** _24 Feb 2013 at 1:11 GMT_ _Originally opened on Google Code_ ---- ``` What steps will reproduce the problem? 1. try to struct.unpack() a negative short What is the expected output? What do you see instead? short is unsigned rather than negative What version of the product are you using? On what operating system? git mainline Please provide any additional information below. unpacking should respect real data size and signed/unsigned, also it would be nice to use platform-independent integer size names ```
index: 1.0
text_combine: unpack_int() may only return unsigned integers for short and long - **Issue by RealGran...@gmail.com** _24 Feb 2013 at 1:11 GMT_ _Originally opened on Google Code_ ---- ``` What steps will reproduce the problem? 1. try to struct.unpack() a negative short What is the expected output? What do you see instead? short is unsigned rather than negative What version of the product are you using? On what operating system? git mainline Please provide any additional information below. unpacking should respect real data size and signed/unsigned, also it would be nice to use platform-independent integer size names ```
label: defect
text: unpack int may only return unsigned integers for short and long issue by realgran gmail com feb at gmt originally opened on google code what steps will reproduce the problem try to struct unpack a negative short what is the expected output what do you see instead short is unsigned rather than negative what version of the product are you using on what operating system git mainline please provide any additional information below unpacking should respect real data size and signed unsigned also it would be nice to use platform independent integer size names
binary_label: 1

Unnamed: 0: 215,594
id: 16,680,496,669
type: IssuesEvent
created_at: 2021-06-07 22:42:05
repo: ethereum/eth2.0-specs
repo_url: https://api.github.com/repos/ethereum/eth2.0-specs
action: closed
title: test_process_sync_committee.py does not have the right logic
labels: scope:CI/tests/pyspec
body: The current tests for Sync Committee Participation rewards do not reflect the current logic in `specs/altair/beacon-chain.md`. In particular, rewards in the former are dependent of the participant index via its effective balance, while the rewards in the latter are independent of this.
index: 1.0
text_combine: test_process_sync_committee.py does not have the right logic - The current tests for Sync Committee Participation rewards do not reflect the current logic in `specs/altair/beacon-chain.md`. In particular, rewards in the former are dependent of the participant index via its effective balance, while the rewards in the latter are independent of this.
label: non_defect
text: test process sync committee py does not have the right logic the current tests for sync committee participation rewards do not reflect the current logic in specs altair beacon chain md in particular rewards in the former are dependent of the participant index via its effective balance while the rewards in the latter are independent of this
binary_label: 0

Unnamed: 0: 15,254
id: 2,850,478,850
type: IssuesEvent
created_at: 2015-05-31 16:21:29
repo: damonkohler/android-scripting
repo_url: https://api.github.com/repos/damonkohler/android-scripting
action: closed
title: LuaSocket segfault after some amount of I/O
labels: auto-migrated Priority-Medium Type-Defect
body: ``` What device(s) are you experiencing the problem on? Iconia Tab A200 What firmware version are you running on the device? CyanogenMod 9 What steps will reproduce the problem? 1. Create a Lua script which opens a socket for listening 2. Connect to the socket and send to it 3. After a while, the receiver segfaults. Example receiver script: require "android" local listener = assert(socket.bind('localhost', 9999)) local sock = assert(listener:accept()) while true do sock:receive('*l') end sender script: require "android" local sock = assert(socket.connect('localhost', 9999)) for i = 1, 1000000 do if not sock:send('.') then error("failed at " .. i) end end print("finished") Start receiver, then sender, and receiver should segfault after a few seconds. What is the expected output? What do you see instead? Expected: receiver never terminates; sender runs for a few seconds then prints "finished". Actual: receiver segfaults after about 90,000 messages received. What version of the product are you using? On what operating system? Release 6 on CM9. Please provide any additional information below. The number of messages before segfault varies between about 60,000 and 95,000, and is reduced by sending longer messages. ``` Original issue reported on code.google.com by `hyperhac...@gmail.com` on 15 Jul 2012 at 7:21
index: 1.0
text_combine: LuaSocket segfault after some amount of I/O - ``` What device(s) are you experiencing the problem on? Iconia Tab A200 What firmware version are you running on the device? CyanogenMod 9 What steps will reproduce the problem? 1. Create a Lua script which opens a socket for listening 2. Connect to the socket and send to it 3. After a while, the receiver segfaults. Example receiver script: require "android" local listener = assert(socket.bind('localhost', 9999)) local sock = assert(listener:accept()) while true do sock:receive('*l') end sender script: require "android" local sock = assert(socket.connect('localhost', 9999)) for i = 1, 1000000 do if not sock:send('.') then error("failed at " .. i) end end print("finished") Start receiver, then sender, and receiver should segfault after a few seconds. What is the expected output? What do you see instead? Expected: receiver never terminates; sender runs for a few seconds then prints "finished". Actual: receiver segfaults after about 90,000 messages received. What version of the product are you using? On what operating system? Release 6 on CM9. Please provide any additional information below. The number of messages before segfault varies between about 60,000 and 95,000, and is reduced by sending longer messages. ``` Original issue reported on code.google.com by `hyperhac...@gmail.com` on 15 Jul 2012 at 7:21
label: defect
text: luasocket segfault after some amount of i o what device s are you experiencing the problem on iconia tab what firmware version are you running on the device cyanogenmod what steps will reproduce the problem create a lua script which opens a socket for listening connect to the socket and send to it after a while the receiver segfaults example receiver script require android local listener assert socket bind localhost local sock assert listener accept while true do sock receive l end sender script require android local sock assert socket connect localhost for i do if not sock send then error failed at i end end print finished start receiver then sender and receiver should segfault after a few seconds what is the expected output what do you see instead expected receiver never terminates sender runs for a few seconds then prints finished actual receiver segfaults after about messages received what version of the product are you using on what operating system release on please provide any additional information below the number of messages before segfault varies between about and and is reduced by sending longer messages original issue reported on code google com by hyperhac gmail com on jul at
1
78,520
27,571,677,650
IssuesEvent
2023-03-08 09:47:17
openslide/openslide
https://api.github.com/repos/openslide/openslide
closed
ERROR when installing in 'error__use__openslide_fclose_instead' function
defect
### Operating system Ubuntu 22.04 ### Platform x86_64 ### OpenSlide version Github ### Slide format None ### Issue details When trying to install the GitHub version, using either meson or make, I get an error when compiling; here is the error pasted after `meson compile -C builddir`: ``` ninja: Entering directory `/home/joan/CLAM/openslide/builddir' [7/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide.c.o -MF src/libopenslide.so.0.4.1.p/openslide.c.o.d -o src/libopenslide.so.0.4.1.p/openslide.c.o -c ../src/openslide.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide.c:25: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a
function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [9/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o -MF src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o -c ../src/openslide-decode-xml.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-decode-xml.c:23: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [11/56] Compiling C object 
src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o -c ../src/openslide-vendor-philips.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-philips.c:30: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [16/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o cc 
-Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o -c ../src/openslide-vendor-leica.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-leica.c:30: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [18/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. 
-I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o -c ../src/openslide-vendor-ventana.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-ventana.c:31: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [20/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. 
-I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o -c ../src/openslide-vendor-synthetic.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-synthetic.c:32: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [32/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-mirax.c.o ninja: build stopped: subcommand failed. ``` As stated, using this [reply](https://github.com/openslide/openslide/issues/351#issuecomment-1303132190) I got a similar error
1.0
ERROR when installing in 'error__use__openslide_fclose_instead' function - ### Operating system Ubuntu 22.04 ### Platform x86_64 ### OpenSlide version Github ### Slide format None ### Issue details When try to install the github version both using meson or make I got an error when compiling, here it is pasted the error after `meson compile -C builddir`: ``` ninja: Entering directory `/home/joan/CLAM/openslide/builddir' [7/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide.c.o -MF src/libopenslide.so.0.4.1.p/openslide.c.o.d -o src/libopenslide.so.0.4.1.p/openslide.c.o -c ../src/openslide.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide.c:25: 
../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [9/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o -MF src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-decode-xml.c.o -c ../src/openslide-decode-xml.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-decode-xml.c:23: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) 
error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [11/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-philips.c.o -c ../src/openslide-vendor-philips.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-philips.c:30: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [16/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o 
FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. -I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-leica.c.o -c ../src/openslide-vendor-leica.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-leica.c:30: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [18/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. 
-I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-ventana.c.o -c ../src/openslide-vendor-ventana.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-ventana.c:31: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [20/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o FAILED: src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o cc -Isrc/libopenslide.so.0.4.1.p -Isrc -I../src -I. -I.. 
-I/home/joan/anaconda3/envs/CLAM/include/glib-2.0 -I/home/joan/anaconda3/envs/CLAM/lib/glib-2.0/include -I/home/joan/anaconda3/envs/CLAM/include -I/home/joan/anaconda3/envs/CLAM/include/gdk-pixbuf-2.0 -I/home/joan/anaconda3/envs/CLAM/include/libpng16 -I/home/joan/anaconda3/envs/CLAM/include/cairo -I/home/joan/anaconda3/envs/CLAM/include/pixman-1 -I/home/joan/anaconda3/envs/CLAM/include/freetype2 -I/home/joan/anaconda3/envs/CLAM/include/libxml2 -I/home/joan/anaconda3/envs/CLAM/include/openjpeg-2.5 -fvisibility=hidden -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu99 -O2 -g -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-common -DG_DISABLE_SINGLE_INCLUDES -DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_56 -DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_MIN_REQUIRED -fPIC -pthread -D_OPENSLIDE_BUILDING_DLL '-DG_LOG_DOMAIN="Openslide"' -MD -MQ src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o -MF src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o.d -o src/libopenslide.so.0.4.1.p/openslide-vendor-synthetic.c.o -c ../src/openslide-vendor-synthetic.c In file included from /usr/include/features.h:486, from /usr/include/x86_64-linux-gnu/bits/libc-header-start.h:33, from /usr/include/stdint.h:26, from /usr/lib/gcc/x86_64-linux-gnu/11/include/stdint.h:9, from ../src/openslide.h:37, from ../src/openslide-private.h:25, from ../src/openslide-vendor-synthetic.c:32: ../src/openslide-private.h:396:40: error: ‘error__use__openslide_fclose_instead’ undeclared here (not in a function) 396 | #define _OPENSLIDE_POISON(replacement) error__use_ ## replacement ## _instead | ^~~~~~~~~~~ [32/56] Compiling C object src/libopenslide.so.0.4.1.p/openslide-vendor-mirax.c.o ninja: build stopped: subcommand failed. ``` As stated, using this [reply](https://github.com/openslide/openslide/issues/351#issuecomment-1303132190) I got a similar error
defect
error when installing in error use openslide fclose instead function operating system ubuntu platform openslide version github slide format none issue details when try to install the github version both using meson or make i got an error when compiling here it is pasted the error after meson compile c builddir ninja entering directory home joan clam openslide builddir compiling c object src libopenslide so p openslide c o failed src libopenslide so p openslide c o cc isrc libopenslide so p isrc i src i i i home joan envs clam include glib i home joan envs clam lib glib include i home joan envs clam include i home joan envs clam include gdk pixbuf i home joan envs clam include i home joan envs clam include cairo i home joan envs clam include pixman i home joan envs clam include i home joan envs clam include i home joan envs clam include openjpeg fvisibility hidden fdiagnostics color always d file offset bits wall winvalid pch wextra std g wstrict prototypes wmissing prototypes wmissing declarations wnested externs fno common dg disable single includes dglib version min required glib version dglib version max allowed glib version min required fpic pthread d openslide building dll dg log domain openslide md mq src libopenslide so p openslide c o mf src libopenslide so p openslide c o d o src libopenslide so p openslide c o c src openslide c in file included from usr include features h from usr include linux gnu bits libc header start h from usr include stdint h from usr lib gcc linux gnu include stdint h from src openslide h from src openslide private h from src openslide c src openslide private h error ‘error use openslide fclose instead’ undeclared here not in a function define openslide poison replacement error use replacement instead compiling c object src libopenslide so p openslide decode xml c o failed src libopenslide so p openslide decode xml c o cc isrc libopenslide so p isrc i src i i i home joan envs clam include glib i home joan envs clam lib glib include 
i home joan envs clam include i home joan envs clam include gdk pixbuf i home joan envs clam include i home joan envs clam include cairo i home joan envs clam include pixman i home joan envs clam include i home joan envs clam include i home joan envs clam include openjpeg fvisibility hidden fdiagnostics color always d file offset bits wall winvalid pch wextra std g wstrict prototypes wmissing prototypes wmissing declarations wnested externs fno common dg disable single includes dglib version min required glib version dglib version max allowed glib version min required fpic pthread d openslide building dll dg log domain openslide md mq src libopenslide so p openslide decode xml c o mf src libopenslide so p openslide decode xml c o d o src libopenslide so p openslide decode xml c o c src openslide decode xml c in file included from usr include features h from usr include linux gnu bits libc header start h from usr include stdint h from usr lib gcc linux gnu include stdint h from src openslide h from src openslide private h from src openslide decode xml c src openslide private h error ‘error use openslide fclose instead’ undeclared here not in a function define openslide poison replacement error use replacement instead compiling c object src libopenslide so p openslide vendor philips c o failed src libopenslide so p openslide vendor philips c o cc isrc libopenslide so p isrc i src i i i home joan envs clam include glib i home joan envs clam lib glib include i home joan envs clam include i home joan envs clam include gdk pixbuf i home joan envs clam include i home joan envs clam include cairo i home joan envs clam include pixman i home joan envs clam include i home joan envs clam include i home joan envs clam include openjpeg fvisibility hidden fdiagnostics color always d file offset bits wall winvalid pch wextra std g wstrict prototypes wmissing prototypes wmissing declarations wnested externs fno common dg disable single includes dglib version min required glib 
version dglib version max allowed glib version min required fpic pthread d openslide building dll dg log domain openslide md mq src libopenslide so p openslide vendor philips c o mf src libopenslide so p openslide vendor philips c o d o src libopenslide so p openslide vendor philips c o c src openslide vendor philips c in file included from usr include features h from usr include linux gnu bits libc header start h from usr include stdint h from usr lib gcc linux gnu include stdint h from src openslide h from src openslide private h from src openslide vendor philips c src openslide private h error ‘error use openslide fclose instead’ undeclared here not in a function define openslide poison replacement error use replacement instead compiling c object src libopenslide so p openslide vendor leica c o failed src libopenslide so p openslide vendor leica c o cc isrc libopenslide so p isrc i src i i i home joan envs clam include glib i home joan envs clam lib glib include i home joan envs clam include i home joan envs clam include gdk pixbuf i home joan envs clam include i home joan envs clam include cairo i home joan envs clam include pixman i home joan envs clam include i home joan envs clam include i home joan envs clam include openjpeg fvisibility hidden fdiagnostics color always d file offset bits wall winvalid pch wextra std g wstrict prototypes wmissing prototypes wmissing declarations wnested externs fno common dg disable single includes dglib version min required glib version dglib version max allowed glib version min required fpic pthread d openslide building dll dg log domain openslide md mq src libopenslide so p openslide vendor leica c o mf src libopenslide so p openslide vendor leica c o d o src libopenslide so p openslide vendor leica c o c src openslide vendor leica c in file included from usr include features h from usr include linux gnu bits libc header start h from usr include stdint h from usr lib gcc linux gnu include stdint h from src openslide h 
from src openslide private h from src openslide vendor leica c src openslide private h error ‘error use openslide fclose instead’ undeclared here not in a function define openslide poison replacement error use replacement instead compiling c object src libopenslide so p openslide vendor ventana c o failed src libopenslide so p openslide vendor ventana c o cc isrc libopenslide so p isrc i src i i i home joan envs clam include glib i home joan envs clam lib glib include i home joan envs clam include i home joan envs clam include gdk pixbuf i home joan envs clam include i home joan envs clam include cairo i home joan envs clam include pixman i home joan envs clam include i home joan envs clam include i home joan envs clam include openjpeg fvisibility hidden fdiagnostics color always d file offset bits wall winvalid pch wextra std g wstrict prototypes wmissing prototypes wmissing declarations wnested externs fno common dg disable single includes dglib version min required glib version dglib version max allowed glib version min required fpic pthread d openslide building dll dg log domain openslide md mq src libopenslide so p openslide vendor ventana c o mf src libopenslide so p openslide vendor ventana c o d o src libopenslide so p openslide vendor ventana c o c src openslide vendor ventana c in file included from usr include features h from usr include linux gnu bits libc header start h from usr include stdint h from usr lib gcc linux gnu include stdint h from src openslide h from src openslide private h from src openslide vendor ventana c src openslide private h error ‘error use openslide fclose instead’ undeclared here not in a function define openslide poison replacement error use replacement instead compiling c object src libopenslide so p openslide vendor synthetic c o failed src libopenslide so p openslide vendor synthetic c o cc isrc libopenslide so p isrc i src i i i home joan envs clam include glib i home joan envs clam lib glib include i home joan envs clam 
include i home joan envs clam include gdk pixbuf i home joan envs clam include i home joan envs clam include cairo i home joan envs clam include pixman i home joan envs clam include i home joan envs clam include i home joan envs clam include openjpeg fvisibility hidden fdiagnostics color always d file offset bits wall winvalid pch wextra std g wstrict prototypes wmissing prototypes wmissing declarations wnested externs fno common dg disable single includes dglib version min required glib version dglib version max allowed glib version min required fpic pthread d openslide building dll dg log domain openslide md mq src libopenslide so p openslide vendor synthetic c o mf src libopenslide so p openslide vendor synthetic c o d o src libopenslide so p openslide vendor synthetic c o c src openslide vendor synthetic c in file included from usr include features h from usr include linux gnu bits libc header start h from usr include stdint h from usr lib gcc linux gnu include stdint h from src openslide h from src openslide private h from src openslide vendor synthetic c src openslide private h error ‘error use openslide fclose instead’ undeclared here not in a function define openslide poison replacement error use replacement instead compiling c object src libopenslide so p openslide vendor mirax c o ninja build stopped subcommand failed as stated using this i got a similar error
1
538,165
15,763,970,214
IssuesEvent
2021-03-31 12:47:08
neuropoly/axondeepseg
https://api.github.com/repos/neuropoly/axondeepseg
opened
Create and share Linux images with ADS pre-installed for releases?
feature good first issue installation priority:LOW
Following up on @Stoyan-I-A 's instructions for running ADS with FSLeyes on a Windows computer ([see here](https://github.com/neuropoly/axondeepseg/discussions/495#discussioncomment-550544)), I'm wondering if it may be worth it to create and share a Linux OS image with ADS & FSLeyes already installed. This could lower the hurdle for users to install this (they would more or less simply need to download VirtualBox, download our image, and run it), as one user already hinted that it sounds a bit complicated to do from scratch [here](https://github.com/neuropoly/axondeepseg/discussions/495#discussioncomment-551677). Thoughts?
1.0
Create and share Linux images with ADS pre-installed for releases? - Following up on @Stoyan-I-A 's instructions for running ADS with FSLeyes on a Windows computer ([see here](https://github.com/neuropoly/axondeepseg/discussions/495#discussioncomment-550544)), I'm wondering if it may be worth it to create and share a Linux OS image with ADS & FSLeyes already installed. This could lower the hurdle for users to install this (they would more or less simply need to download VirtualBox, download our image, and run it), as one user already hinted that it sounds a bit complicated to do from scratch [here](https://github.com/neuropoly/axondeepseg/discussions/495#discussioncomment-551677). Thoughts?
non_defect
create and share linux images with ads pre installed for releases following up on stoyan i a s instructions for running ads with fsleyes on a windows computer i m wondering if it may be worth it to create and share a linux os image with ads fsleyes already installed this could lower the hurdle for users to install this they would more or less simply need to download virtualbox download our image and run it as one user already hinted that it sounds a bit complicated to do from scratch thoughts
0
2,748
2,607,938,275
IssuesEvent
2015-02-26 00:29:45
chrsmithdemos/minify
https://api.github.com/repos/chrsmithdemos/minify
opened
Cite tag considered block/undisplayed
auto-migrated Priority-Medium Release-2.1.5 Type-Defect
``` Because the cite tag is listed in the pregreplace for block/undisplayed content, spaces before cite tags are removed. Are you sure this is not a problem with your configuration? (ask on the Google Group) Yes Minify commit/version: Current PHP version: Doesn't matter What steps will reproduce the problem? 1. Using a <cite> tag per w3.org HTML5 spec (http://www.w3.org/html/wg/drafts/html/master/text-level-semantics.html#the-cite -element) Code: <p>Fizz-pop band Eternal Summers is following up 2012′s <cite>Correct Behavior</cite> with <cite>Beneath The Drop</cite>, out March 4 via Kanine.<p> Expected output: Fizz-pop band Eternal Summers is following up 2012′s Correct Behavior with Beneath The Drop, out March 4 via Kanine. Actual output: Fizz-pop band Eternal Summers is following up 2012′sCorrect Behavior withBeneath The Drop, out March 4 via Kanine. ``` ----- Original issue reported on code.google.com by `dvelasq...@cmj.com` on 13 Jan 2014 at 7:38
1.0
Cite tag considered block/undisplayed - ``` Because the cite tag is listed in the pregreplace for block/undisplayed content, spaces before cite tags are removed. Are you sure this is not a problem with your configuration? (ask on the Google Group) Yes Minify commit/version: Current PHP version: Doesn't matter What steps will reproduce the problem? 1. Using a <cite> tag per w3.org HTML5 spec (http://www.w3.org/html/wg/drafts/html/master/text-level-semantics.html#the-cite -element) Code: <p>Fizz-pop band Eternal Summers is following up 2012′s <cite>Correct Behavior</cite> with <cite>Beneath The Drop</cite>, out March 4 via Kanine.<p> Expected output: Fizz-pop band Eternal Summers is following up 2012′s Correct Behavior with Beneath The Drop, out March 4 via Kanine. Actual output: Fizz-pop band Eternal Summers is following up 2012′sCorrect Behavior withBeneath The Drop, out March 4 via Kanine. ``` ----- Original issue reported on code.google.com by `dvelasq...@cmj.com` on 13 Jan 2014 at 7:38
defect
cite tag considered block undisplayed because the cite tag is listed in the pregreplace for block undisplayed content spaces before cite tags are removed are you sure this is not a problem with your configuration ask on the google group yes minify commit version current php version doesn t matter what steps will reproduce the problem using a tag per org spec element code fizz pop band eternal summers is following up ′s correct behavior with beneath the drop out march via kanine expected output fizz pop band eternal summers is following up ′s correct behavior with beneath the drop out march via kanine actual output fizz pop band eternal summers is following up ′scorrect behavior withbeneath the drop out march via kanine original issue reported on code google com by dvelasq cmj com on jan at
1
1,401
15,836,090,275
IssuesEvent
2021-04-06 18:50:59
emmamei/cdkey
https://api.github.com/repos/emmamei/cdkey
closed
Rework transformLockExpire setting and management
bug reliabilityfix simplification
The variable `transformLockExpire` is a time value, and is used to prevent a dolly from changing types before the lock time. There's no reason to have a way to set the expire time, as it is set once and is basically hard-coded with the duration in code. So the value should be set once in `lmSetConfig()` either to zero or the predetermined value. This is an outgrowth of issue #639 .
True
Rework transformLockExpire setting and management - The variable `transformLockExpire` is a time value, and is used to prevent a dolly from changing types before the lock time. There's no reason to have a way to set the expire time, as it is set once and is basically hard-coded with the duration in code. So the value should be set once in `lmSetConfig()` either to zero or the predetermined value. This is an outgrowth of issue #639 .
non_defect
rework transformlockexpire setting and management the variable transformlockexpire is a time value and is used to prevent a dolly from changing types before the lock time there s no reason to have a way to set the expire time as it is set once and is basically hard coded with the duration in code so the value should be set once in lmsetconfig either to zero or the predetermined value this is an outgrowth of issue
0
188,544
22,046,610,191
IssuesEvent
2022-05-30 02:59:08
sshivananda/ts-sqs-consumer
https://api.github.com/repos/sshivananda/ts-sqs-consumer
closed
CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - autoclosed
security vulnerability
## CVE-2020-7774 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary> <p>the bare-bones internationalization library used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/y18n/package.json</p> <p> Dependency Hierarchy: - jest-26.3.0.tgz (Root Library) - jest-cli-26.3.0.tgz - yargs-15.4.1.tgz - :x: **y18n-4.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sshivananda/ts-sqs-consumer/commit/a2e850096afda1f130ad4fac1c8fb2eb555f463f">a2e850096afda1f130ad4fac1c8fb2eb555f463f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true <p>Publish Date: 2020-11-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1654">https://www.npmjs.com/advisories/1654</a></p> <p>Release Date: 2020-11-17</p> <p>Fix Resolution: 3.2.2, 4.0.1, 5.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - autoclosed - ## CVE-2020-7774 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary> <p>the bare-bones internationalization library used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/y18n/package.json</p> <p> Dependency Hierarchy: - jest-26.3.0.tgz (Root Library) - jest-cli-26.3.0.tgz - yargs-15.4.1.tgz - :x: **y18n-4.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sshivananda/ts-sqs-consumer/commit/a2e850096afda1f130ad4fac1c8fb2eb555f463f">a2e850096afda1f130ad4fac1c8fb2eb555f463f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. 
PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true <p>Publish Date: 2020-11-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1654">https://www.npmjs.com/advisories/1654</a></p> <p>Release Date: 2020-11-17</p> <p>Fix Resolution: 3.2.2, 4.0.1, 5.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in tgz autoclosed cve high severity vulnerability vulnerable library tgz the bare bones internationalization library used by yargs library home page a href path to dependency file package json path to vulnerable library node modules package json dependency hierarchy jest tgz root library jest cli tgz yargs tgz x tgz vulnerable library found in head commit a href vulnerability details this affects the package before and poc by const require setlocale proto updatelocale polluted true console log polluted true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
796,483
28,115,094,249
IssuesEvent
2023-03-31 10:07:06
googleapis/python-automl
https://api.github.com/repos/googleapis/python-automl
closed
tests.system.gapic.v1beta1.test_system_tables_client_v1.TestSystemTablesClient: test_get_model_evaluation[rest] failed
type: bug priority: p1 api: automl flakybot: issue flakybot: flaky
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: fc93dce1791e37c12a12b4f1cc97306f62a471a1 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/36be1f68-67ff-47c6-b2b0-07bcb9f5cac7), [Sponge](http://sponge2/36be1f68-67ff-47c6-b2b0-07bcb9f5cac7) status: failed <details><summary>Test output</summary><br><pre>self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> conn = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' timeout = Timeout(connect=5.0, read=5.0, total=None), chunked = False httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'...TWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'}} timeout_obj = Timeout(connect=5.0, read=5.0, total=None), read_timeout = 5.0 def _make_request( self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw ): """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param timeout: Socket timeout in seconds for the request. This can be a float or integer, which will set the same timeout value for the socket connect and the socket read, or an instance of :class:`urllib3.util.Timeout`, which gives you more fine-grained control over your timeouts. """ self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = timeout_obj.connect_timeout # Trigger any extra validation we need to do. 
try: self._validate_conn(conn) except (SocketTimeout, BaseSSLError) as e: # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) raise # conn.request() calls http.client.*.request, not the method in # urllib3.request. It also calls makefile (recv) on the socket. try: if chunked: conn.request_chunked(method, url, **httplib_request_kw) else: conn.request(method, url, **httplib_request_kw) # We are swallowing BrokenPipeError (errno.EPIPE) since the server is # legitimately able to close the connection after sending a valid response. # With this behaviour, the received response is still readable. except BrokenPipeError: # Python 3 pass except IOError as e: # Python 2 and macOS/Linux # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno not in { errno.EPIPE, errno.ESHUTDOWN, errno.EPROTOTYPE, }: raise # Reset the timeout for the recv() on the socket read_timeout = timeout_obj.read_timeout # App Engine doesn't have a sock attr if getattr(conn, "sock", None): # In Python 3 socket.py will catch EAGAIN and return None when you # try and read into the file pointer created by http.client, which # instead raises a BadStatusLine exception. Instead of catching # the exception and assuming all BadStatusLine exceptions are read # timeouts, check for a zero timeout before making the request. if read_timeout == 0: raise ReadTimeoutError( self, url, "Read timed out. 
(read timeout=%s)" % read_timeout ) if read_timeout is Timeout.DEFAULT_TIMEOUT: conn.sock.settimeout(socket.getdefaulttimeout()) else: # None or a value conn.sock.settimeout(read_timeout) # Receive the response from the server try: try: # Python 2.7, use buffering of HTTP responses httplib_response = conn.getresponse(buffering=True) except TypeError: # Python 3 try: httplib_response = conn.getresponse() except BaseException as e: # Remove the TypeError from the exception chain in # Python 3 (including for exceptions like SystemExit). # Otherwise it looks like a bug in the code. > six.raise_from(e, None) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:449: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None, from_value = None > ??? <string>:3: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> conn = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' timeout = Timeout(connect=5.0, read=5.0, total=None), chunked = False httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'...TWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'}} timeout_obj = Timeout(connect=5.0, read=5.0, total=None), read_timeout = 5.0 def _make_request( self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw ): """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param timeout: Socket timeout in seconds for the request. 
This can be a float or integer, which will set the same timeout value for the socket connect and the socket read, or an instance of :class:`urllib3.util.Timeout`, which gives you more fine-grained control over your timeouts. """ self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = timeout_obj.connect_timeout # Trigger any extra validation we need to do. try: self._validate_conn(conn) except (SocketTimeout, BaseSSLError) as e: # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) raise # conn.request() calls http.client.*.request, not the method in # urllib3.request. It also calls makefile (recv) on the socket. try: if chunked: conn.request_chunked(method, url, **httplib_request_kw) else: conn.request(method, url, **httplib_request_kw) # We are swallowing BrokenPipeError (errno.EPIPE) since the server is # legitimately able to close the connection after sending a valid response. # With this behaviour, the received response is still readable. except BrokenPipeError: # Python 3 pass except IOError as e: # Python 2 and macOS/Linux # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno not in { errno.EPIPE, errno.ESHUTDOWN, errno.EPROTOTYPE, }: raise # Reset the timeout for the recv() on the socket read_timeout = timeout_obj.read_timeout # App Engine doesn't have a sock attr if getattr(conn, "sock", None): # In Python 3 socket.py will catch EAGAIN and return None when you # try and read into the file pointer created by http.client, which # instead raises a BadStatusLine exception. Instead of catching # the exception and assuming all BadStatusLine exceptions are read # timeouts, check for a zero timeout before making the request. if read_timeout == 0: raise ReadTimeoutError( self, url, "Read timed out. 
(read timeout=%s)" % read_timeout ) if read_timeout is Timeout.DEFAULT_TIMEOUT: conn.sock.settimeout(socket.getdefaulttimeout()) else: # None or a value conn.sock.settimeout(read_timeout) # Receive the response from the server try: try: # Python 2.7, use buffering of HTTP responses httplib_response = conn.getresponse(buffering=True) except TypeError: # Python 3 try: > httplib_response = conn.getresponse() .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:444: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> def getresponse(self): """Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. """ # if a prior response has been completed, then forget about it. if self.__response and self.__response.isclosed(): self.__response = None # if a prior response exists, then it must be completed (otherwise, we # cannot read this response's header to determine the connection-close # behavior) # # note: if a prior response existed, but was connection-close, then the # socket and response were made independent of this HTTPConnection # object since a new request requires that we open a whole new # connection # # this means the prior response had one of two states: # 1) will_close: this connection was reset and the prior socket and # response operate independently # 2) persistent: the response was retained and we await its # isclosed() status to become true. 
# if self.__state != _CS_REQ_SENT or self.__response: raise ResponseNotReady(self.__state) if self.debuglevel > 0: response = self.response_class(self.sock, self.debuglevel, method=self._method) else: response = self.response_class(self.sock, method=self._method) try: try: > response.begin() /usr/local/lib/python3.8/http/client.py:1348: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <http.client.HTTPResponse object at 0x7f0908264c70> def begin(self): if self.headers is not None: # we've already started reading the response return # read until we get a non-100 response while True: > version, status, reason = self._read_status() /usr/local/lib/python3.8/http/client.py:316: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <http.client.HTTPResponse object at 0x7f0908264c70> def _read_status(self): > line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") /usr/local/lib/python3.8/http/client.py:277: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <socket.SocketIO object at 0x7f0908264370> b = <memory at 0x7f0908265b80> def readinto(self, b): """Read up to len(b) bytes into the writable buffer *b* and return the number of bytes read. If the socket is non-blocking and no bytes are available, None is returned. If *b* is non-empty, a 0 return value indicates that the connection was shutdown at the other end. 
""" self._checkClosed() self._checkReadable() if self._timeout_occurred: raise OSError("cannot read from timed out object") while True: try: > return self._sock.recv_into(b) /usr/local/lib/python3.8/socket.py:669: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6> buffer = <memory at 0x7f0908265b80>, nbytes = 8192, flags = 0 def recv_into(self, buffer, nbytes=None, flags=0): self._checkClosed() if buffer and (nbytes is None): nbytes = len(buffer) elif nbytes is None: nbytes = 1024 if self._sslobj is not None: if flags != 0: raise ValueError( "non-zero flags not allowed in calls to recv_into() on %s" % self.__class__) > return self.read(nbytes, buffer) /usr/local/lib/python3.8/ssl.py:1241: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6> len = 8192, buffer = <memory at 0x7f0908265b80> def read(self, len=1024, buffer=None): """Read up to LEN bytes and return them. Return zero-length string on EOF.""" self._checkClosed() if self._sslobj is None: raise ValueError("Read on closed or unwrapped SSL socket.") try: if buffer is not None: > return self._sslobj.read(len, buffer) E socket.timeout: The read operation timed out /usr/local/lib/python3.8/ssl.py:1099: timeout During handling of the above exception, another exception occurred: self = <requests.adapters.HTTPAdapter object at 0x7f09081fbd60> request = <PreparedRequest [GET]>, stream = False timeout = Timeout(connect=5.0, read=5.0, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. 
:param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection(request.url, proxies) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: if not chunked: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, ) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/adapters.py:489: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' body = None headers = {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-aliv...TTWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=5.0, read=5.0, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/v1beta1/projects/precise-truck-742/locations/us-central1/models', query='%24alt=json%3Benum-encoding%3Dint', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw ): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. 
note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. 
It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. :param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get("preload_content", True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = six.ensure_str(_encode_target(url)) else: url = six.ensure_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] <https://github.com/urllib3/urllib3/issues/651> release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr( conn, "sock", None ) if is_new_proxy_conn and http_tunnel_required: self._prepare_proxy(conn) # Make the request on the httplib connection object. httplib_response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, ) # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. 
response_conn = conn if not release_conn else None # Pass method to Response for length checking response_kw["request_method"] = method # Import httplib's response into our own wrapper object response = self.ResponseCls.from_httplib( httplib_response, pool=self, connection=response_conn, retries=retries, **response_kw ) # Everything went great! clean_exit = True except EmptyPoolError: # Didn't get a connection from the pool, no need to clean up clean_exit = True release_this_conn = False raise except ( TimeoutError, HTTPException, SocketError, ProtocolError, BaseSSLError, SSLError, CertificateError, ) as e: # Discard the connection for these exceptions. It will be # replaced during the next _get_conn() call. clean_exit = False def _is_ssl_error_message_from_http_proxy(ssl_error): # We're trying to detect the message 'WRONG_VERSION_NUMBER' but # SSLErrors are kinda all over the place when it comes to the message, # so we try to cover our bases here! message = " ".join(re.split("[^a-z]", str(ssl_error).lower())) return ( "wrong version number" in message or "unknown protocol" in message ) # Try to detect a common user error with proxies which is to # set an HTTP proxy to be HTTPS when it should be 'http://' # (ie {'http': 'http://proxy', 'https': 'https://proxy'}) # Instead we add a nice error message and point to a URL. if ( isinstance(e, BaseSSLError) and self.proxy and _is_ssl_error_message_from_http_proxy(e) and conn.proxy and conn.proxy.scheme == "https" ): e = ProxyError( "Your proxy appears to only use HTTP and not HTTPS, " "try changing your proxy URL to be HTTP. 
See: " "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" "#https-proxy-error-http-proxy", SSLError(e), ) elif isinstance(e, (BaseSSLError, CertificateError)): e = SSLError(e) elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: e = ProxyError("Cannot connect to proxy.", e) elif isinstance(e, (SocketError, HTTPException)): e = ProtocolError("Connection aborted.", e) > retries = retries.increment( method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] ) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:787: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' response = None error = ReadTimeoutError("HTTPSConnectionPool(host='automl.googleapis.com', port=443): Read timed out. (read timeout=5.0)") _pool = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> _stacktrace = <traceback object at 0x7f09081ee980> def increment( self, method=None, url=None, response=None, error=None, _pool=None, _stacktrace=None, ): """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.HTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise six.reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? 
if connect is False: raise six.reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or not self._is_method_retryable(method): > raise six.reraise(type(error), error, _stacktrace) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/util/retry.py:550: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tp = <class 'urllib3.exceptions.ReadTimeoutError'>, value = None, tb = None def reraise(tp, value, tb=None): try: if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) > raise value .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/packages/six.py:770: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' body = None headers = {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-aliv...TTWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=5.0, read=5.0, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/v1beta1/projects/precise-truck-742/locations/us-central1/models', query='%24alt=json%3Benum-encoding%3Dint', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( self, method, url, body=None, headers=None, retries=None, 
redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw ): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. 
:param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. 
:param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get("preload_content", True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = six.ensure_str(_encode_target(url)) else: url = six.ensure_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] <https://github.com/urllib3/urllib3/issues/651> release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. 
body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr( conn, "sock", None ) if is_new_proxy_conn and http_tunnel_required: self._prepare_proxy(conn) # Make the request on the httplib connection object. > httplib_response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, ) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:703: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> conn = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' timeout = Timeout(connect=5.0, read=5.0, total=None), chunked = False httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'...TWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'}} timeout_obj = Timeout(connect=5.0, read=5.0, total=None), read_timeout = 5.0 def _make_request( self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw ): """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param timeout: Socket timeout in seconds for the request. This can be a float or integer, which will set the same timeout value for the socket connect and the socket read, or an instance of :class:`urllib3.util.Timeout`, which gives you more fine-grained control over your timeouts. 
""" self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = timeout_obj.connect_timeout # Trigger any extra validation we need to do. try: self._validate_conn(conn) except (SocketTimeout, BaseSSLError) as e: # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) raise # conn.request() calls http.client.*.request, not the method in # urllib3.request. It also calls makefile (recv) on the socket. try: if chunked: conn.request_chunked(method, url, **httplib_request_kw) else: conn.request(method, url, **httplib_request_kw) # We are swallowing BrokenPipeError (errno.EPIPE) since the server is # legitimately able to close the connection after sending a valid response. # With this behaviour, the received response is still readable. except BrokenPipeError: # Python 3 pass except IOError as e: # Python 2 and macOS/Linux # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno not in { errno.EPIPE, errno.ESHUTDOWN, errno.EPROTOTYPE, }: raise # Reset the timeout for the recv() on the socket read_timeout = timeout_obj.read_timeout # App Engine doesn't have a sock attr if getattr(conn, "sock", None): # In Python 3 socket.py will catch EAGAIN and return None when you # try and read into the file pointer created by http.client, which # instead raises a BadStatusLine exception. Instead of catching # the exception and assuming all BadStatusLine exceptions are read # timeouts, check for a zero timeout before making the request. if read_timeout == 0: raise ReadTimeoutError( self, url, "Read timed out. 
(read timeout=%s)" % read_timeout ) if read_timeout is Timeout.DEFAULT_TIMEOUT: conn.sock.settimeout(socket.getdefaulttimeout()) else: # None or a value conn.sock.settimeout(read_timeout) # Receive the response from the server try: try: # Python 2.7, use buffering of HTTP responses httplib_response = conn.getresponse(buffering=True) except TypeError: # Python 3 try: httplib_response = conn.getresponse() except BaseException as e: # Remove the TypeError from the exception chain in # Python 3 (including for exceptions like SystemExit). # Otherwise it looks like a bug in the code. six.raise_from(e, None) except (SocketTimeout, BaseSSLError, SocketError) as e: > self._raise_timeout(err=e, url=url, timeout_value=read_timeout) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:451: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> err = timeout('The read operation timed out') url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' timeout_value = 5.0 def _raise_timeout(self, err, url, timeout_value): """Is the error actually a timeout? Will raise a ReadTimeout or pass""" if isinstance(err, SocketTimeout): > raise ReadTimeoutError( self, url, "Read timed out. (read timeout=%s)" % timeout_value ) E urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='automl.googleapis.com', port=443): Read timed out. 
(read timeout=5.0) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:340: ReadTimeoutError During handling of the above exception, another exception occurred: self = <test_system_tables_client_v1.TestSystemTablesClient object at 0x7f0909b5f370> transport = 'rest' @vpcsc_config.skip_if_inside_vpcsc @pytest.mark.parametrize("transport", ["grpc", "rest"]) def test_get_model_evaluation(self, transport): client = automl_v1beta1.TablesClient( project=PROJECT, region=REGION, transport=transport ) > model = self.ensure_model_online(client) tests/system/gapic/v1beta1/test_system_tables_client_v1.py:284: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/system/gapic/v1beta1/test_system_tables_client_v1.py:319: in ensure_model_online model = self.ensure_model_ready(client) tests/system/gapic/v1beta1/test_system_tables_client_v1.py:327: in ensure_model_ready return client.get_model(model_display_name=STATIC_MODEL) google/cloud/automl_v1beta1/services/tables/tables_client.py:2576: in get_model self.list_models(project=project, region=region), google/cloud/automl_v1beta1/services/tables/tables_client.py:2136: in list_models return self.auto_ml_client.list_models(request=request, **method_kwargs) google/cloud/automl_v1beta1/services/auto_ml/client.py:2544: in list_models response = rpc( .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:113: in __call__ return wrapped_func(*args, **kwargs) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:349: in retry_wrapped_func return retry_target( .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:191: in retry_target return target() .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/timeout.py:120: in func_with_timeout return func(*args, **kwargs) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:72: in error_remapped_callable return 
callable_(*args, **kwargs) google/cloud/automl_v1beta1/services/auto_ml/transports/rest.py:2731: in __call__ response = getattr(self._session, method)( .nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/sessions.py:600: in get return self.request("GET", url, **kwargs) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/auth/transport/requests.py:549: in request response = super(AuthorizedSession, self).request( .nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/sessions.py:587: in request resp = self.send(prep, **send_kwargs) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/sessions.py:701: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <requests.adapters.HTTPAdapter object at 0x7f09081fbd60> request = <PreparedRequest [GET]>, stream = False timeout = Timeout(connect=5.0, read=5.0, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. 
:rtype: requests.Response """ try: conn = self.get_connection(request.url, proxies) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: if not chunked: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, ) # Send the request. else: if hasattr(conn, "proxy_pool"): conn = conn.proxy_pool low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) try: skip_host = "Host" in request.headers low_conn.putrequest( request.method, url, skip_accept_encoding=True, skip_host=skip_host, ) for header, value in request.headers.items(): low_conn.putheader(header, value) low_conn.endheaders() for i in request.body: low_conn.send(hex(len(i))[2:].encode("utf-8")) low_conn.send(b"\r\n") low_conn.send(i) low_conn.send(b"\r\n") low_conn.send(b"0\r\n\r\n") # Receive the response from the server r = low_conn.getresponse() resp = HTTPResponse.from_httplib( r, pool=conn, connection=low_conn, preload_content=False, decode_content=False, ) except Exception: # If we hit any problems here, clean up the connection. # Then, raise so that we can handle the actual exception. 
low_conn.close() raise except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. raise SSLError(e, request=request) raise ConnectionError(e, request=request) except ClosedPoolError as e: raise ConnectionError(e, request=request) except _ProxyError as e: raise ProxyError(e) except (_SSLError, _HTTPError) as e: if isinstance(e, _SSLError): # This branch is for urllib3 versions earlier than v1.22 raise SSLError(e, request=request) elif isinstance(e, ReadTimeoutError): > raise ReadTimeout(e, request=request) E requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='automl.googleapis.com', port=443): Read timed out. (read timeout=5.0) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/adapters.py:578: ReadTimeout</pre></details>
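The traceback above comes down to one mechanic: urllib3 switches the socket from the connect timeout to the read timeout just before `recv()` (`conn.sock.settimeout(read_timeout)` in `_make_request`), and a server that accepts the connection but sends nothing within that window raises the `ReadTimeoutError` seen here; with `Retry(total=0, read=False)` the error is re-raised instead of retried. A minimal standard-library sketch of that failure mode (the throwaway local server and the `fetch_with_timeouts` helper are illustrative, not part of the client library):

```python
import socket
import threading

def fetch_with_timeouts(host, port, connect_timeout, read_timeout):
    """Connect under one timeout, then wait for a reply under another."""
    sock = socket.create_connection((host, port), timeout=connect_timeout)
    try:
        # Same switch urllib3 performs before recv(): the connect timeout is
        # done, and the read timeout takes over for the response.
        sock.settimeout(read_timeout)
        sock.sendall(b"GET / HTTP/1.0\r\nHost: example\r\n\r\n")
        return sock.recv(4096)
    except socket.timeout:
        raise TimeoutError("Read timed out. (read timeout=%s)" % read_timeout)
    finally:
        sock.close()

# Throwaway server that accepts the connection but never sends a byte,
# mimicking the stalled backend in this test run.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def hold_open():
    conn, _ = server.accept()
    threading.Event().wait(2)  # keep the connection open, send nothing
    conn.close()

threading.Thread(target=hold_open, daemon=True).start()

try:
    fetch_with_timeouts("127.0.0.1", port, connect_timeout=1.0, read_timeout=0.2)
except TimeoutError as exc:
    print(exc)  # Read timed out. (read timeout=0.2)
```

For the actual test, the usual remedies are a longer read timeout on the `requests` call (the `(connect, read)` tuple parsed by the adapter code above) or a `Retry` policy with read retries enabled, rather than the `Retry(total=0, read=False)` shown in the traceback.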
tests.system.gapic.v1beta1.test_system_tables_client_v1.TestSystemTablesClient: test_get_model_evaluation[rest] failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: fc93dce1791e37c12a12b4f1cc97306f62a471a1 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/36be1f68-67ff-47c6-b2b0-07bcb9f5cac7), [Sponge](http://sponge2/36be1f68-67ff-47c6-b2b0-07bcb9f5cac7) status: failed <details><summary>Test output</summary><br><pre>self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> conn = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' timeout = Timeout(connect=5.0, read=5.0, total=None), chunked = False httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'...TWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'}} timeout_obj = Timeout(connect=5.0, read=5.0, total=None), read_timeout = 5.0 def _make_request( self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw ): """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param timeout: Socket timeout in seconds for the request. This can be a float or integer, which will set the same timeout value for the socket connect and the socket read, or an instance of :class:`urllib3.util.Timeout`, which gives you more fine-grained control over your timeouts. 
""" self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = timeout_obj.connect_timeout # Trigger any extra validation we need to do. try: self._validate_conn(conn) except (SocketTimeout, BaseSSLError) as e: # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) raise # conn.request() calls http.client.*.request, not the method in # urllib3.request. It also calls makefile (recv) on the socket. try: if chunked: conn.request_chunked(method, url, **httplib_request_kw) else: conn.request(method, url, **httplib_request_kw) # We are swallowing BrokenPipeError (errno.EPIPE) since the server is # legitimately able to close the connection after sending a valid response. # With this behaviour, the received response is still readable. except BrokenPipeError: # Python 3 pass except IOError as e: # Python 2 and macOS/Linux # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno not in { errno.EPIPE, errno.ESHUTDOWN, errno.EPROTOTYPE, }: raise # Reset the timeout for the recv() on the socket read_timeout = timeout_obj.read_timeout # App Engine doesn't have a sock attr if getattr(conn, "sock", None): # In Python 3 socket.py will catch EAGAIN and return None when you # try and read into the file pointer created by http.client, which # instead raises a BadStatusLine exception. Instead of catching # the exception and assuming all BadStatusLine exceptions are read # timeouts, check for a zero timeout before making the request. if read_timeout == 0: raise ReadTimeoutError( self, url, "Read timed out. 
(read timeout=%s)" % read_timeout ) if read_timeout is Timeout.DEFAULT_TIMEOUT: conn.sock.settimeout(socket.getdefaulttimeout()) else: # None or a value conn.sock.settimeout(read_timeout) # Receive the response from the server try: try: # Python 2.7, use buffering of HTTP responses httplib_response = conn.getresponse(buffering=True) except TypeError: # Python 3 try: httplib_response = conn.getresponse() except BaseException as e: # Remove the TypeError from the exception chain in # Python 3 (including for exceptions like SystemExit). # Otherwise it looks like a bug in the code. > six.raise_from(e, None) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:449: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None, from_value = None > ??? <string>:3: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> conn = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' timeout = Timeout(connect=5.0, read=5.0, total=None), chunked = False httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'...TWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'}} timeout_obj = Timeout(connect=5.0, read=5.0, total=None), read_timeout = 5.0 def _make_request( self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw ): """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param timeout: Socket timeout in seconds for the request. 
This can be a float or integer, which will set the same timeout value for the socket connect and the socket read, or an instance of :class:`urllib3.util.Timeout`, which gives you more fine-grained control over your timeouts. """ self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = timeout_obj.connect_timeout # Trigger any extra validation we need to do. try: self._validate_conn(conn) except (SocketTimeout, BaseSSLError) as e: # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) raise # conn.request() calls http.client.*.request, not the method in # urllib3.request. It also calls makefile (recv) on the socket. try: if chunked: conn.request_chunked(method, url, **httplib_request_kw) else: conn.request(method, url, **httplib_request_kw) # We are swallowing BrokenPipeError (errno.EPIPE) since the server is # legitimately able to close the connection after sending a valid response. # With this behaviour, the received response is still readable. except BrokenPipeError: # Python 3 pass except IOError as e: # Python 2 and macOS/Linux # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno not in { errno.EPIPE, errno.ESHUTDOWN, errno.EPROTOTYPE, }: raise # Reset the timeout for the recv() on the socket read_timeout = timeout_obj.read_timeout # App Engine doesn't have a sock attr if getattr(conn, "sock", None): # In Python 3 socket.py will catch EAGAIN and return None when you # try and read into the file pointer created by http.client, which # instead raises a BadStatusLine exception. Instead of catching # the exception and assuming all BadStatusLine exceptions are read # timeouts, check for a zero timeout before making the request. if read_timeout == 0: raise ReadTimeoutError( self, url, "Read timed out. 
(read timeout=%s)" % read_timeout ) if read_timeout is Timeout.DEFAULT_TIMEOUT: conn.sock.settimeout(socket.getdefaulttimeout()) else: # None or a value conn.sock.settimeout(read_timeout) # Receive the response from the server try: try: # Python 2.7, use buffering of HTTP responses httplib_response = conn.getresponse(buffering=True) except TypeError: # Python 3 try: > httplib_response = conn.getresponse() .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:444: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0> def getresponse(self): """Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. """ # if a prior response has been completed, then forget about it. if self.__response and self.__response.isclosed(): self.__response = None # if a prior response exists, then it must be completed (otherwise, we # cannot read this response's header to determine the connection-close # behavior) # # note: if a prior response existed, but was connection-close, then the # socket and response were made independent of this HTTPConnection # object since a new request requires that we open a whole new # connection # # this means the prior response had one of two states: # 1) will_close: this connection was reset and the prior socket and # response operate independently # 2) persistent: the response was retained and we await its # isclosed() status to become true. 
# if self.__state != _CS_REQ_SENT or self.__response: raise ResponseNotReady(self.__state) if self.debuglevel > 0: response = self.response_class(self.sock, self.debuglevel, method=self._method) else: response = self.response_class(self.sock, method=self._method) try: try: > response.begin() /usr/local/lib/python3.8/http/client.py:1348: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <http.client.HTTPResponse object at 0x7f0908264c70> def begin(self): if self.headers is not None: # we've already started reading the response return # read until we get a non-100 response while True: > version, status, reason = self._read_status() /usr/local/lib/python3.8/http/client.py:316: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <http.client.HTTPResponse object at 0x7f0908264c70> def _read_status(self): > line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") /usr/local/lib/python3.8/http/client.py:277: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <socket.SocketIO object at 0x7f0908264370> b = <memory at 0x7f0908265b80> def readinto(self, b): """Read up to len(b) bytes into the writable buffer *b* and return the number of bytes read. If the socket is non-blocking and no bytes are available, None is returned. If *b* is non-empty, a 0 return value indicates that the connection was shutdown at the other end. 
""" self._checkClosed() self._checkReadable() if self._timeout_occurred: raise OSError("cannot read from timed out object") while True: try: > return self._sock.recv_into(b) /usr/local/lib/python3.8/socket.py:669: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6> buffer = <memory at 0x7f0908265b80>, nbytes = 8192, flags = 0 def recv_into(self, buffer, nbytes=None, flags=0): self._checkClosed() if buffer and (nbytes is None): nbytes = len(buffer) elif nbytes is None: nbytes = 1024 if self._sslobj is not None: if flags != 0: raise ValueError( "non-zero flags not allowed in calls to recv_into() on %s" % self.__class__) > return self.read(nbytes, buffer) /usr/local/lib/python3.8/ssl.py:1241: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6> len = 8192, buffer = <memory at 0x7f0908265b80> def read(self, len=1024, buffer=None): """Read up to LEN bytes and return them. Return zero-length string on EOF.""" self._checkClosed() if self._sslobj is None: raise ValueError("Read on closed or unwrapped SSL socket.") try: if buffer is not None: > return self._sslobj.read(len, buffer) E socket.timeout: The read operation timed out /usr/local/lib/python3.8/ssl.py:1099: timeout During handling of the above exception, another exception occurred: self = <requests.adapters.HTTPAdapter object at 0x7f09081fbd60> request = <PreparedRequest [GET]>, stream = False timeout = Timeout(connect=5.0, read=5.0, total=None), verify = True, cert = None proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. 
:param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection(request.url, proxies) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." 
) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: if not chunked: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, ) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/adapters.py:489: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' body = None headers = {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-aliv...TTWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=5.0, read=5.0, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/v1beta1/projects/precise-truck-742/locations/us-central1/models', query='%24alt=json%3Benum-encoding%3Dint', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( self, method, url, body=None, headers=None, retries=None, redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw ): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. 
note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. 
It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. :param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get("preload_content", True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = six.ensure_str(_encode_target(url)) else: url = six.ensure_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. 
Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] <https://github.com/urllib3/urllib3/issues/651> release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout is_new_proxy_conn = self.proxy is not None and not getattr( conn, "sock", None ) if is_new_proxy_conn and http_tunnel_required: self._prepare_proxy(conn) # Make the request on the httplib connection object. httplib_response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, ) # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. 
response_conn = conn if not release_conn else None # Pass method to Response for length checking response_kw["request_method"] = method # Import httplib's response into our own wrapper object response = self.ResponseCls.from_httplib( httplib_response, pool=self, connection=response_conn, retries=retries, **response_kw ) # Everything went great! clean_exit = True except EmptyPoolError: # Didn't get a connection from the pool, no need to clean up clean_exit = True release_this_conn = False raise except ( TimeoutError, HTTPException, SocketError, ProtocolError, BaseSSLError, SSLError, CertificateError, ) as e: # Discard the connection for these exceptions. It will be # replaced during the next _get_conn() call. clean_exit = False def _is_ssl_error_message_from_http_proxy(ssl_error): # We're trying to detect the message 'WRONG_VERSION_NUMBER' but # SSLErrors are kinda all over the place when it comes to the message, # so we try to cover our bases here! message = " ".join(re.split("[^a-z]", str(ssl_error).lower())) return ( "wrong version number" in message or "unknown protocol" in message ) # Try to detect a common user error with proxies which is to # set an HTTP proxy to be HTTPS when it should be 'http://' # (ie {'http': 'http://proxy', 'https': 'https://proxy'}) # Instead we add a nice error message and point to a URL. if ( isinstance(e, BaseSSLError) and self.proxy and _is_ssl_error_message_from_http_proxy(e) and conn.proxy and conn.proxy.scheme == "https" ): e = ProxyError( "Your proxy appears to only use HTTP and not HTTPS, " "try changing your proxy URL to be HTTP. 
See: " "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" "#https-proxy-error-http-proxy", SSLError(e), ) elif isinstance(e, (BaseSSLError, CertificateError)): e = SSLError(e) elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: e = ProxyError("Cannot connect to proxy.", e) elif isinstance(e, (SocketError, HTTPException)): e = ProtocolError("Connection aborted.", e) > retries = retries.increment( method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] ) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:787: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' response = None error = ReadTimeoutError("HTTPSConnectionPool(host='automl.googleapis.com', port=443): Read timed out. (read timeout=5.0)") _pool = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> _stacktrace = <traceback object at 0x7f09081ee980> def increment( self, method=None, url=None, response=None, error=None, _pool=None, _stacktrace=None, ): """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.HTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise six.reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? 
if connect is False: raise six.reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or not self._is_method_retryable(method): > raise six.reraise(type(error), error, _stacktrace) .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/util/retry.py:550: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tp = <class 'urllib3.exceptions.ReadTimeoutError'>, value = None, tb = None def reraise(tp, value, tb=None): try: if value is None: value = tp() if value.__traceback__ is not tb: raise value.with_traceback(tb) > raise value .nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/packages/six.py:770: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340> method = 'GET' url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint' body = None headers = {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-aliv...TTWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=5.0, read=5.0, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None response_kw = {'decode_content': False, 'preload_content': False} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/v1beta1/projects/precise-truck-742/locations/us-central1/models', query='%24alt=json%3Benum-encoding%3Dint', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( self, method, url, body=None, headers=None, retries=None, 
redirect=True, assert_same_host=True, timeout=_Default, pool_timeout=None, release_conn=None, chunked=False, body_pos=None, **response_kw ): """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method provided by :class:`.RequestMethods`, such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. 
:param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``response_kw.get('preload_content', True)``. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. 
:param \\**response_kw: Additional parameters are passed to :meth:`urllib3.response.HTTPResponse.from_httplib` """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = response_kw.get("preload_content", True) # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = six.ensure_str(_encode_target(url)) else: url = six.ensure_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] <https://github.com/urllib3/urllib3/issues/651> release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() headers.update(self.proxy_headers) # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. 
    body_pos = set_file_position(body, body_pos)

    try:
        # Request a connection from the queue.
        timeout_obj = self._get_timeout(timeout)
        conn = self._get_conn(timeout=pool_timeout)

        conn.timeout = timeout_obj.connect_timeout

        is_new_proxy_conn = self.proxy is not None and not getattr(
            conn, "sock", None
        )
        if is_new_proxy_conn and http_tunnel_required:
            self._prepare_proxy(conn)

        # Make the request on the httplib connection object.
>       httplib_response = self._make_request(
            conn,
            method,
            url,
            timeout=timeout_obj,
            body=body,
            headers=headers,
            chunked=chunked,
        )

.nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:703:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340>
conn = <urllib3.connection.HTTPSConnection object at 0x7f0908264dc0>
method = 'GET'
url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint'
timeout = Timeout(connect=5.0, read=5.0, total=None), chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.28.2', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*'...TWDbutDoAShNYmfulVItHxSHce0ivOnBs3dOYKBrcOOSMqhqlI-zEv6zeIVGKCnLrBNlI13BKctyJhqEOhkTRzCBU_u0avvGJQR1iryG4sEUq29SDgpA'}}
timeout_obj = Timeout(connect=5.0, read=5.0, total=None), read_timeout = 5.0

    def _make_request(
        self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw
    ):
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param timeout:
            Socket timeout in seconds for the request. This can be a
            float or integer, which will set the same timeout value for
            the socket connect and the socket read, or an instance of
            :class:`urllib3.util.Timeout`, which gives you more fine-grained
            control over your timeouts.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = timeout_obj.connect_timeout

        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # conn.request() calls http.client.*.request, not the method in
        # urllib3.request. It also calls makefile (recv) on the socket.
        try:
            if chunked:
                conn.request_chunked(method, url, **httplib_request_kw)
            else:
                conn.request(method, url, **httplib_request_kw)

        # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
        # legitimately able to close the connection after sending a valid response.
        # With this behaviour, the received response is still readable.
        except BrokenPipeError:
            # Python 3
            pass
        except IOError as e:
            # Python 2 and macOS/Linux
            # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS
            # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/
            if e.errno not in {
                errno.EPIPE,
                errno.ESHUTDOWN,
                errno.EPROTOTYPE,
            }:
                raise

        # Reset the timeout for the recv() on the socket
        read_timeout = timeout_obj.read_timeout

        # App Engine doesn't have a sock attr
        if getattr(conn, "sock", None):
            # In Python 3 socket.py will catch EAGAIN and return None when you
            # try and read into the file pointer created by http.client, which
            # instead raises a BadStatusLine exception. Instead of catching
            # the exception and assuming all BadStatusLine exceptions are read
            # timeouts, check for a zero timeout before making the request.
            if read_timeout == 0:
                raise ReadTimeoutError(
                    self, url, "Read timed out. (read timeout=%s)" % read_timeout
                )
            if read_timeout is Timeout.DEFAULT_TIMEOUT:
                conn.sock.settimeout(socket.getdefaulttimeout())
            else:  # None or a value
                conn.sock.settimeout(read_timeout)

        # Receive the response from the server
        try:
            try:
                # Python 2.7, use buffering of HTTP responses
                httplib_response = conn.getresponse(buffering=True)
            except TypeError:
                # Python 3
                try:
                    httplib_response = conn.getresponse()
                except BaseException as e:
                    # Remove the TypeError from the exception chain in
                    # Python 3 (including for exceptions like SystemExit).
                    # Otherwise it looks like a bug in the code.
                    six.raise_from(e, None)
        except (SocketTimeout, BaseSSLError, SocketError) as e:
>           self._raise_timeout(err=e, url=url, timeout_value=read_timeout)

.nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:451:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7f0908264340>
err = timeout('The read operation timed out')
url = '/v1beta1/projects/precise-truck-742/locations/us-central1/models?%24alt=json%3Benum-encoding%3Dint'
timeout_value = 5.0

    def _raise_timeout(self, err, url, timeout_value):
        """Is the error actually a timeout? Will raise a ReadTimeout or pass"""

        if isinstance(err, SocketTimeout):
>           raise ReadTimeoutError(
                self, url, "Read timed out. (read timeout=%s)" % timeout_value
            )
E           urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='automl.googleapis.com', port=443): Read timed out. (read timeout=5.0)

.nox/prerelease_deps-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:340: ReadTimeoutError

During handling of the above exception, another exception occurred:

self = <test_system_tables_client_v1.TestSystemTablesClient object at 0x7f0909b5f370>
transport = 'rest'

    @vpcsc_config.skip_if_inside_vpcsc
    @pytest.mark.parametrize("transport", ["grpc", "rest"])
    def test_get_model_evaluation(self, transport):
        client = automl_v1beta1.TablesClient(
            project=PROJECT, region=REGION, transport=transport
        )
>       model = self.ensure_model_online(client)

tests/system/gapic/v1beta1/test_system_tables_client_v1.py:284:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

tests/system/gapic/v1beta1/test_system_tables_client_v1.py:319: in ensure_model_online
    model = self.ensure_model_ready(client)
tests/system/gapic/v1beta1/test_system_tables_client_v1.py:327: in ensure_model_ready
    return client.get_model(model_display_name=STATIC_MODEL)
google/cloud/automl_v1beta1/services/tables/tables_client.py:2576: in get_model
    self.list_models(project=project, region=region),
google/cloud/automl_v1beta1/services/tables/tables_client.py:2136: in list_models
    return self.auto_ml_client.list_models(request=request, **method_kwargs)
google/cloud/automl_v1beta1/services/auto_ml/client.py:2544: in list_models
    response = rpc(
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:113: in __call__
    return wrapped_func(*args, **kwargs)
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:349: in retry_wrapped_func
    return retry_target(
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/retry.py:191: in retry_target
    return target()
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/timeout.py:120: in func_with_timeout
    return func(*args, **kwargs)
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:72: in error_remapped_callable
    return callable_(*args, **kwargs)
google/cloud/automl_v1beta1/services/auto_ml/transports/rest.py:2731: in __call__
    response = getattr(self._session, method)(
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/sessions.py:600: in get
    return self.request("GET", url, **kwargs)
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/google/auth/transport/requests.py:549: in request
    response = super(AuthorizedSession, self).request(
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/sessions.py:587: in request
    resp = self.send(prep, **send_kwargs)
.nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/sessions.py:701: in send
    r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.adapters.HTTPAdapter object at 0x7f09081fbd60>
request = <PreparedRequest [GET]>, stream = False
timeout = Timeout(connect=5.0, read=5.0, total=None), verify = True, cert = None
proxies = OrderedDict()

    def send(
        self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
    ):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) <timeouts>` tuple.
        :type timeout: float or tuple or urllib3 Timeout object
        :param verify: (optional) Either a boolean, in which case it controls whether
            we verify the server's TLS certificate, or a string, in which case it
            must be a path to a CA bundle to use
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """

        try:
            conn = self.get_connection(request.url, proxies)
        except LocationValueError as e:
            raise InvalidURL(e, request=request)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(
            request,
            stream=stream,
            timeout=timeout,
            verify=verify,
            cert=cert,
            proxies=proxies,
        )

        chunked = not (request.body is None or "Content-Length" in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
                    f"or a single float to set both timeouts to the same value."
                )
        elif isinstance(timeout, TimeoutSauce):
            pass
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            if not chunked:
                resp = conn.urlopen(
                    method=request.method,
                    url=url,
                    body=request.body,
                    headers=request.headers,
                    redirect=False,
                    assert_same_host=False,
                    preload_content=False,
                    decode_content=False,
                    retries=self.max_retries,
                    timeout=timeout,
                )

            # Send the request.
            else:
                if hasattr(conn, "proxy_pool"):
                    conn = conn.proxy_pool

                low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)

                try:
                    skip_host = "Host" in request.headers
                    low_conn.putrequest(
                        request.method,
                        url,
                        skip_accept_encoding=True,
                        skip_host=skip_host,
                    )

                    for header, value in request.headers.items():
                        low_conn.putheader(header, value)

                    low_conn.endheaders()

                    for i in request.body:
                        low_conn.send(hex(len(i))[2:].encode("utf-8"))
                        low_conn.send(b"\r\n")
                        low_conn.send(i)
                        low_conn.send(b"\r\n")
                    low_conn.send(b"0\r\n\r\n")

                    # Receive the response from the server
                    r = low_conn.getresponse()

                    resp = HTTPResponse.from_httplib(
                        r,
                        pool=conn,
                        connection=low_conn,
                        preload_content=False,
                        decode_content=False,
                    )
                except Exception:
                    # If we hit any problems here, clean up the connection.
                    # Then, raise so that we can handle the actual exception.
                    low_conn.close()
                    raise

        except (ProtocolError, OSError) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

            if isinstance(e.reason, _SSLError):
                # This branch is for urllib3 v1.22 and later.
                raise SSLError(e, request=request)

            raise ConnectionError(e, request=request)

        except ClosedPoolError as e:
            raise ConnectionError(e, request=request)

        except _ProxyError as e:
            raise ProxyError(e)

        except (_SSLError, _HTTPError) as e:
            if isinstance(e, _SSLError):
                # This branch is for urllib3 versions earlier than v1.22
                raise SSLError(e, request=request)
            elif isinstance(e, ReadTimeoutError):
>               raise ReadTimeout(e, request=request)
E               requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='automl.googleapis.com', port=443): Read timed out. (read timeout=5.0)

.nox/prerelease_deps-3-8/lib/python3.8/site-packages/requests/adapters.py:578: ReadTimeout
</pre></details>
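The root cause above is a read timeout: the run passed a plain `5.0`, which `HTTPAdapter.send` (quoted in the traceback) expands into *both* a 5-second connect timeout and a 5-second read timeout, and the `list_models` response took longer than 5 seconds to arrive. The following is a minimal sketch of that normalization logic; `normalize_timeout` is a hypothetical helper written for illustration, not part of the requests API.

```python
# Simplified sketch of how requests' HTTPAdapter.send (see traceback above)
# turns the `timeout` argument into separate connect/read values before
# handing them to urllib3. `normalize_timeout` is illustrative only.
def normalize_timeout(timeout):
    """Return a (connect, read) pair from a float or a 2-tuple."""
    if isinstance(timeout, tuple):
        try:
            connect, read = timeout
        except ValueError:
            # Mirrors requests' error for tuples of the wrong length.
            raise ValueError(
                f"Invalid timeout {timeout!r}. Pass a (connect, read) tuple "
                "or a single float."
            )
        return (connect, read)
    # A single number sets both timeouts to the same value --
    # exactly what produced the 5s read timeout in this failure.
    return (timeout, timeout)

print(normalize_timeout(5.0))          # (5.0, 5.0)
print(normalize_timeout((5.0, 60.0)))  # (5.0, 60.0)
```

If the API call is expected to be slow, passing a `(connect, read)` tuple (or a larger single value) to whatever sets the transport timeout would keep the 5-second connect budget while allowing a longer read.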
position for future rewinds in the event of a redirect retry body pos set file position body body pos try request a connection from the queue timeout obj self get timeout timeout conn self get conn timeout pool timeout conn timeout timeout obj connect timeout is new proxy conn self proxy is not none and not getattr conn sock none if is new proxy conn and http tunnel required self prepare proxy conn make the request on the httplib connection object httplib response self make request conn method url timeout timeout obj body body headers headers chunked chunked nox prerelease deps lib site packages connectionpool py self conn method get url projects precise truck locations us models json encoding timeout timeout connect read total none chunked false httplib request kw body none headers user agent python requests accept encoding gzip deflate accept timeout obj timeout connect read total none read timeout def make request self conn method url timeout default chunked false httplib request kw perform a request on a given urllib connection object taken from our pool param conn a connection from one of our connection pools param timeout socket timeout in seconds for the request this can be a float or integer which will set the same timeout value for the socket connect and the socket read or an instance of class util timeout which gives you more fine grained control over your timeouts self num requests timeout obj self get timeout timeout timeout obj start connect conn timeout timeout obj connect timeout trigger any extra validation we need to do try self validate conn conn except sockettimeout basesslerror as e raises this as a basesslerror raises it as socket timeout self raise timeout err e url url timeout value conn timeout raise conn request calls http client request not the method in request it also calls makefile recv on the socket try if chunked conn request chunked method url httplib request kw else conn request method url httplib request kw we are swallowing 
brokenpipeerror errno epipe since the server is legitimately able to close the connection after sending a valid response with this behaviour the received response is still readable except brokenpipeerror python pass except ioerror as e python and macos linux epipe and eshutdown are brokenpipeerror on python and eprototype is needed on macos if e errno not in errno epipe errno eshutdown errno eprototype raise reset the timeout for the recv on the socket read timeout timeout obj read timeout app engine doesn t have a sock attr if getattr conn sock none in python socket py will catch eagain and return none when you try and read into the file pointer created by http client which instead raises a badstatusline exception instead of catching the exception and assuming all badstatusline exceptions are read timeouts check for a zero timeout before making the request if read timeout raise readtimeouterror self url read timed out read timeout s read timeout if read timeout is timeout default timeout conn sock settimeout socket getdefaulttimeout else none or a value conn sock settimeout read timeout receive the response from the server try try python use buffering of http responses httplib response conn getresponse buffering true except typeerror python try httplib response conn getresponse except baseexception as e remove the typeerror from the exception chain in python including for exceptions like systemexit otherwise it looks like a bug in the code six raise from e none except sockettimeout basesslerror socketerror as e self raise timeout err e url url timeout value read timeout nox prerelease deps lib site packages connectionpool py self err timeout the read operation timed out url projects precise truck locations us models json encoding timeout value def raise timeout self err url timeout value is the error actually a timeout will raise a readtimeout or pass if isinstance err sockettimeout raise readtimeouterror self url read timed out read timeout s timeout value e 
exceptions readtimeouterror httpsconnectionpool host automl googleapis com port read timed out read timeout nox prerelease deps lib site packages connectionpool py readtimeouterror during handling of the above exception another exception occurred self transport rest vpcsc config skip if inside vpcsc pytest mark parametrize transport def test get model evaluation self transport client automl tablesclient project project region region transport transport model self ensure model online client tests system gapic test system tables client py tests system gapic test system tables client py in ensure model online model self ensure model ready client tests system gapic test system tables client py in ensure model ready return client get model model display name static model google cloud automl services tables tables client py in get model self list models project project region region google cloud automl services tables tables client py in list models return self auto ml client list models request request method kwargs google cloud automl services auto ml client py in list models response rpc nox prerelease deps lib site packages google api core gapic method py in call return wrapped func args kwargs nox prerelease deps lib site packages google api core retry py in retry wrapped func return retry target nox prerelease deps lib site packages google api core retry py in retry target return target nox prerelease deps lib site packages google api core timeout py in func with timeout return func args kwargs nox prerelease deps lib site packages google api core grpc helpers py in error remapped callable return callable args kwargs google cloud automl services auto ml transports rest py in call response getattr self session method nox prerelease deps lib site packages requests sessions py in get return self request get url kwargs nox prerelease deps lib site packages google auth transport requests py in request response super authorizedsession self request nox prerelease deps lib 
site packages requests sessions py in request resp self send prep send kwargs nox prerelease deps lib site packages requests sessions py in send r adapter send request kwargs self request stream false timeout timeout connect read total none verify true cert none proxies ordereddict def send self request stream false timeout none verify true cert none proxies none sends preparedrequest object returns response object param request the class preparedrequest being sent param stream optional whether to stream the request content param timeout optional how long to wait for the server to send data before giving up as a float or a ref connect timeout read timeout tuple type timeout float or tuple or timeout object param verify optional either a boolean in which case it controls whether we verify the server s tls certificate or a string in which case it must be a path to a ca bundle to use param cert optional any user provided ssl certificate to be trusted param proxies optional the proxies dictionary to apply to the request rtype requests response try conn self get connection request url proxies except locationvalueerror as e raise invalidurl e request request self cert verify conn request url verify cert url self request url request proxies self add headers request stream stream timeout timeout verify verify cert cert proxies proxies chunked not request body is none or content length in request headers if isinstance timeout tuple try connect read timeout timeout timeoutsauce connect connect read read except valueerror raise valueerror f invalid timeout timeout pass a connect read timeout tuple f or a single float to set both timeouts to the same value elif isinstance timeout timeoutsauce pass else timeout timeoutsauce connect timeout read timeout try if not chunked resp conn urlopen method request method url url body request body headers request headers redirect false assert same host false preload content false decode content false retries self max retries timeout 
timeout send the request else if hasattr conn proxy pool conn conn proxy pool low conn conn get conn timeout default pool timeout try skip host host in request headers low conn putrequest request method url skip accept encoding true skip host skip host for header value in request headers items low conn putheader header value low conn endheaders for i in request body low conn send hex len i encode utf low conn send b r n low conn send i low conn send b r n low conn send b r n r n receive the response from the server r low conn getresponse resp httpresponse from httplib r pool conn connection low conn preload content false decode content false except exception if we hit any problems here clean up the connection then raise so that we can handle the actual exception low conn close raise except protocolerror oserror as err raise connectionerror err request request except maxretryerror as e if isinstance e reason connecttimeouterror todo remove this in see if not isinstance e reason newconnectionerror raise connecttimeout e request request if isinstance e reason responseerror raise retryerror e request request if isinstance e reason proxyerror raise proxyerror e request request if isinstance e reason sslerror this branch is for and later raise sslerror e request request raise connectionerror e request request except closedpoolerror as e raise connectionerror e request request except proxyerror as e raise proxyerror e except sslerror httperror as e if isinstance e sslerror this branch is for versions earlier than raise sslerror e request request elif isinstance e readtimeouterror raise readtimeout e request request e requests exceptions readtimeout httpsconnectionpool host automl googleapis com port read timed out read timeout nox prerelease deps lib site packages requests adapters py readtimeout
0
62,181
17,023,867,888
IssuesEvent
2021-07-03 04:16:12
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
regression: URL isn't displayed as full link for permalinks
Component: website Priority: major Resolution: invalid Type: defect
**[Submitted to the original trac issue database at 10.01am, Monday, 15th July 2013]** The browser URL isn't updated anymore after you click on "permalink" (it remains http://www.openstreetmap.org and is missing coords, layer and zoom information in the address bar). Discovered this just now, so I guess this is a recent regression (until some short time ago it worked fine). This is an important feature as it allows to take the position and see it on another map. The permalink in general works (it is created correctly ("copy link" works), it is only a usability issue for the address bar which remains "short".
1.0
regression: URL isn't displayed as full link for permalinks - **[Submitted to the original trac issue database at 10.01am, Monday, 15th July 2013]** The browser URL isn't updated anymore after you click on "permalink" (it remains http://www.openstreetmap.org and is missing coords, layer and zoom information in the address bar). Discovered this just now, so I guess this is a recent regression (until some short time ago it worked fine). This is an important feature as it allows to take the position and see it on another map. The permalink in general works (it is created correctly ("copy link" works), it is only a usability issue for the address bar which remains "short".
defect
regression url isn t displayed as full link for permalinks the browser url isn t updated anymore after you click on permalink it remains and is missing coords layer and zoom information in the address bar discovered this just now so i guess this is a recent regression until some short time ago it worked fine this is an important feature as it allows to take the position and see it on another map the permalink in general works it is created correctly copy link works it is only a usability issue for the address bar which remains short
1
71,715
23,773,645,407
IssuesEvent
2022-09-01 18:40:27
SAP/fundamental-ngx
https://api.github.com/repos/SAP/fundamental-ngx
closed
Defect Hunting: Pagination bug
bug Defect Hunting
#### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. - [ ] No keyboard navigation for pagination. Once you click on a page you can't go to another page using only the keyboard. <img width="1046" alt="Screen Shot 2021-12-13 at 4 25 53 PM" src="https://user-images.githubusercontent.com/39598672/145891442-428f449f-aa88-4a15-8ae0-fa96c268e127.png">
1.0
Defect Hunting: Pagination bug - #### Is this a bug, enhancement, or feature request? bug #### Briefly describe your proposal. - [ ] No keyboard navigation for pagination. Once you click on a page you can't go to another page using only the keyboard. <img width="1046" alt="Screen Shot 2021-12-13 at 4 25 53 PM" src="https://user-images.githubusercontent.com/39598672/145891442-428f449f-aa88-4a15-8ae0-fa96c268e127.png">
defect
defect hunting pagination bug is this a bug enhancement or feature request bug briefly describe your proposal no keyboard navigation for pagination once you click on a page you can t go to another page using only the keyboard img width alt screen shot at pm src
1
23,368
3,801,110,653
IssuesEvent
2016-03-23 21:33:51
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
recursor packetcache ignores edns size
defect rec
The Recursor packet cache appears to ignore EDNS bufsize, meaning that if client A sends a query with a big bufsize, client B (who does not even use EDNS) could get the big response too.
1.0
recursor packetcache ignores edns size - The Recursor packet cache appears to ignore EDNS bufsize, meaning that if client A sends a query with a big bufsize, client B (who does not even use EDNS) could get the big response too.
defect
recursor packetcache ignores edns size the recursor packet cache appears to ignore edns bufsize meaning that if client a sends a query with a big bufsize client b who does not even use edns could get the big response too
1
76,849
26,629,403,025
IssuesEvent
2023-01-24 16:42:30
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
closed
Send voice message should not be allowed during a voice broadcast recording
T-Defect A-Voice Broadcast
### Steps to reproduce 1 start a voice broadcast 2 send some voice messages ### Outcome #### What did you expect? <img width="720" alt="image" src="https://user-images.githubusercontent.com/8969772/210623968-d985a71e-ca5e-4f8d-95af-62483b86ebc5.png"> #### What did you observe? the voice message are sent in parallel to the VB NOK ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store _No response_ ### Homeserver _No response_ ### Will you send logs? No ### Are you willing to provide a PR? Yes
1.0
Send voice message should not be allowed during a voice broadcast recording - ### Steps to reproduce 1 start a voice broadcast 2 send some voice messages ### Outcome #### What did you expect? <img width="720" alt="image" src="https://user-images.githubusercontent.com/8969772/210623968-d985a71e-ca5e-4f8d-95af-62483b86ebc5.png"> #### What did you observe? the voice message are sent in parallel to the VB NOK ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store _No response_ ### Homeserver _No response_ ### Will you send logs? No ### Are you willing to provide a PR? Yes
defect
send voice message should not be allowed during a voice broadcast recording steps to reproduce start a voice broadcast send some voice messages outcome what did you expect img width alt image src what did you observe the voice message are sent in parallel to the vb nok your phone model no response operating system version no response application version and app store no response homeserver no response will you send logs no are you willing to provide a pr yes
1
51,366
13,635,111,796
IssuesEvent
2020-09-25 01:55:49
nasifimtiazohi/openmrs-module-reporting-1.20.0
https://api.github.com/repos/nasifimtiazohi/openmrs-module-reporting-1.20.0
opened
CVE-2018-12023 (High) detected in jackson-databind-2.9.0.jar
security vulnerability
## CVE-2018-12023 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.0.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: openmrs-module-reporting-1.20.0/api-2.2/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.0/jackson-databind-2.9.0.jar</p> <p> Dependency Hierarchy: - openmrs-api-2.2.0.jar (Root Library) - :x: **jackson-databind-2.9.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-reporting-1.20.0/commit/43757d56a9ab9f7202e297fea95f1861af41888c">43757d56a9ab9f7202e297fea95f1861af41888c</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Oracle JDBC jar in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-03-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12023>CVE-2018-12023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022</a></p> <p>Release Date: 2019-03-21</p> <p>Fix Resolution: 2.7.9.4, 2.8.11.2, 2.9.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-12023 (High) detected in jackson-databind-2.9.0.jar - ## CVE-2018-12023 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.0.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: openmrs-module-reporting-1.20.0/api-2.2/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.0/jackson-databind-2.9.0.jar</p> <p> Dependency Hierarchy: - openmrs-api-2.2.0.jar (Root Library) - :x: **jackson-databind-2.9.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-reporting-1.20.0/commit/43757d56a9ab9f7202e297fea95f1861af41888c">43757d56a9ab9f7202e297fea95f1861af41888c</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Oracle JDBC jar in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-03-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12023>CVE-2018-12023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-12022</a></p> <p>Release Date: 2019-03-21</p> <p>Fix Resolution: 2.7.9.4, 2.8.11.2, 2.9.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file openmrs module reporting api pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy openmrs api jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in fasterxml jackson databind prior to and when default typing is enabled either globally or for a specific property the service has the oracle jdbc jar in the classpath and an attacker can provide an ldap service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
246,394
18,843,735,580
IssuesEvent
2021-11-11 12:41:26
tarantool/cartridge-java-testcontainers
https://api.github.com/repos/tarantool/cartridge-java-testcontainers
closed
Reuse of testcontaines requires additional setup.
documentation
On windows environment you need to add the following .testcontainers.properties file in your home directory to get 'withReuse' option working: ``` testcontainers.reuse.enable=true ``` See https://stackoverflow.com/questions/62425598/how-to-reuse-testcontainers-between-multiple-springboottests
1.0
Reuse of testcontaines requires additional setup. - On windows environment you need to add the following .testcontainers.properties file in your home directory to get 'withReuse' option working: ``` testcontainers.reuse.enable=true ``` See https://stackoverflow.com/questions/62425598/how-to-reuse-testcontainers-between-multiple-springboottests
non_defect
reuse of testcontaines requires additional setup on windows environment you need to add the following testcontainers properties file in your home directory to get withreuse option working testcontainers reuse enable true see
0
53,275
13,261,335,451
IssuesEvent
2020-08-20 19:42:34
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
errors when running L2 processing on noise generation files (Trac #1105)
Migrated from Trac cmake defect
I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs. python: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed. /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1105">https://code.icecube.wisc.edu/projects/icecube/ticket/1105</a>, reported by elimsand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2015-08-11T19:24:51", "_ts": "1439321091027396", "description": "I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs.\n\npython: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed.\n/data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV\n", "reporter": "elims", "cc": "", "resolution": "duplicate", "time": "2015-08-11T16:35:51", "component": "cmake", "summary": "errors when running L2 processing on noise generation files", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
errors when running L2 processing on noise generation files (Trac #1105) - I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs. python: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed. /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1105">https://code.icecube.wisc.edu/projects/icecube/ticket/1105</a>, reported by elimsand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2015-08-11T19:24:51", "_ts": "1439321091027396", "description": "I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. 
Other machines can successfully run my jobs.\n\npython: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed.\n/data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV\n", "reporter": "elims", "cc": "", "resolution": "duplicate", "time": "2015-08-11T16:35:51", "component": "cmake", "summary": "errors when running L2 processing on noise generation files", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
errors when running processing on noise generation files trac i tried to do some processing on vuvuzela noise files by running master py from the project std processing i ran into the following error from cobol machines and at umd other machines can successfully run my jobs python data condor builds users elims software src dataclasses private dataclasses payload cxx void load archive assertion width it width end failed data condor builds users elims software build env shell sh line aborted new shell argv migrated from json status closed changetime ts description i tried to do some processing on vuvuzela noise files by running master py from the project std processing i ran into the following error from cobol machines and at umd other machines can successfully run my jobs n npython data condor builds users elims software src dataclasses private dataclasses payload cxx void load archive assertion width it width end failed n data condor builds users elims software build env shell sh line aborted new shell argv n reporter elims cc resolution duplicate time component cmake summary errors when running processing on noise generation files priority normal keywords milestone owner nega type defect
1
180,368
14,762,749,427
IssuesEvent
2021-01-09 05:36:29
luizribeiro/mariner
https://api.github.com/repos/luizribeiro/mariner
closed
[INFO] Works with Mars 2 Pro
documentation good first issue
Not a bug, just confirming that mariner works fine with the Mars 2 Pro, though I cannot recommend putting the Pi Zero inside the printer as the wifi is not strong enough for a reliable connection.
1.0
[INFO] Works with Mars 2 Pro - Not a bug, just confirming that mariner works fine with the Mars 2 Pro, though I cannot recommend putting the Pi Zero inside the printer as the wifi is not strong enough for a reliable connection.
non_defect
works with mars pro not a bug just confirming that mariner works fine with the mars pro though i cannot recommend putting the pi zero inside the printer as the wifi is not strong enough for a reliable connection
0
30,701
6,229,814,943
IssuesEvent
2017-07-11 05:50:54
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
signal.resample is broken for even-length inputs
defect duplicate scipy.signal
signal.resample is broken for even-length inputs. ### Reproducing code example: ``` >>> from scipy.signal import resample >>> resample(resample([1, 2, 3, 4], 6), 4) array([ 1.25, 1.75, 3.25, 3.75]) ``` I would expect resampling to a higher number of samples and back to be the identity operation as a consequence of Fourier resampling being "perfect" for band-limited signals. This appears to be due to incorrect handling of the Nyquist frequency component. ### Scipy/Numpy/Python version information: ``` >>> import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info) ('0.14.1', '1.8.2', sys.version_info(major=2, minor=7, micro=10, releaselevel='final', serial=0)) ```
1.0
signal.resample is broken for even-length inputs - signal.resample is broken for even-length inputs. ### Reproducing code example: ``` >>> from scipy.signal import resample >>> resample(resample([1, 2, 3, 4], 6), 4) array([ 1.25, 1.75, 3.25, 3.75]) ``` I would expect resampling to a higher number of samples and back to be the identity operation as a consequence of Fourier resampling being "perfect" for band-limited signals. This appears to be due to incorrect handling of the Nyquist frequency component. ### Scipy/Numpy/Python version information: ``` >>> import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info) ('0.14.1', '1.8.2', sys.version_info(major=2, minor=7, micro=10, releaselevel='final', serial=0)) ```
defect
signal resample is broken for even length inputs signal resample is broken for even length inputs reproducing code example from scipy signal import resample resample resample array i would expect resampling to a higher number of samples and back to be the identity operation as a consequence of fourier resampling being perfect for band limited signals this appears to be due to incorrect handling of the nyquist frequency component scipy numpy python version information import sys scipy numpy print scipy version numpy version sys version info sys version info major minor micro releaselevel final serial
1
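The scipy record above attributes the broken round trip to mishandling of the Nyquist frequency bin. As a sketch of the underlying fix (not scipy's actual implementation), a minimal Fourier resampler that splits the Nyquist bin in two on upsampling and folds the ±Nyquist pair back together on downsampling makes the up/down round trip exact, which is the identity the report expects:

```python
import numpy as np

def fourier_resample(x, num):
    """Minimal Fourier resampling of a real signal to `num` points,
    treating the shared Nyquist bin symmetrically so that band-limited
    round trips are exact (sketch of the behavior the report expects)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x)
    Y = np.zeros(num, dtype=complex)
    m = min(n, num)
    Y[: (m + 1) // 2] = X[: (m + 1) // 2]   # DC and positive-frequency bins
    k = (m - 1) // 2                        # mirrored negative-frequency bins
    if k:
        Y[num - k:] = X[n - k:]
    if m % 2 == 0:                          # a shared Nyquist bin exists
        nyq = m // 2
        if num > n:      # upsampling: split the input Nyquist bin in half
            Y[nyq] = Y[num - nyq] = X[nyq] / 2
        elif num < n:    # downsampling: fold the +/- Nyquist pair together
            Y[nyq] = X[nyq] + X[n - nyq]
        else:
            Y[nyq] = X[nyq]
    return np.fft.ifft(Y).real * num / n
```

With this treatment, `fourier_resample(fourier_resample([1, 2, 3, 4], 6), 4)` recovers `[1, 2, 3, 4]` exactly, instead of the `[1.25, 1.75, 3.25, 3.75]` shown in the record.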
53,924
13,262,518,477
IssuesEvent
2020-08-20 21:58:07
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
[PROPOSAL] post-restructuring PROPOSAL output confuses I3MuonSlicer (Trac #2340)
Migrated from Trac combo simulation defect
I3MuonSlicer was designed to reverse-engineer the MMC output format to recover the energy of the muon between stochastic losses. When run against new PROPOSAL simulation, I3MuonSlicer inserts a muon segment ''after'' a muon decays, like so: ```text 23 MuPlus (1117.15m, 60.9397m, 1949.98m) (30.0081deg, 359.703deg) 11.5111ns 429.205GeV 1546.08m 73 MuPlus (453.863m, 64.374m, 801.499m) (30.0081deg, 359.703deg) 4435.44ns 50.5399GeV 28.1962m 15 DeltaE (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4523.72ns 1.73634GeV 0m 74 MuPlus (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4529.49ns 46.994GeV 76.9354m 16 DeltaE (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4780.35ns 0.625515GeV 0m 75 MuPlus (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4786.12ns 41.4309GeV 114.689m 17 EPlus (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0504133GeV 0m 18 NuE (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0393494GeV 0m 19 NuMuBar (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0158957GeV 0m 78 MuPlus (343.927m, 64.9432m, 611.144m) (30.0081deg, 359.703deg) 5168.69ns 33.9646GeV 527.572m ``` This really screws up stopping muons, since instead of stopping, they burst back to life. A likely culprit is MMCTrack. For MMC (and PROPOSAL Classic), stopping tracks had negative values for Ef, which were interpreted as -1 times the distance from the decay to trajectory exit point in cm (hooray, units!). 
For example ```text [I3MMCTrack = [ (xi, yi, zi, ti, Ei) = (404.223 ,-0.996252 ,800 ,4522.09 ,0.338673) (xc, yc, zc, tc, Ec) = (-44.3509 ,-0.849819 ,25.6933 ,5404.34 ,0) (xf, yf, zf, tf, Ef) = (-522.694 ,-0.693668 ,-800 ,5404.34 ,-1330.06) Elost = 0.338673 Particle = [ I3Particle MajorID : 2817278544529359988 MinorID : 22 Zenith : 0.525077 Azimuth : 6.28286 X : 1070.44 Y : -1.21373 Z : 1949.99 Time : 88.7298 Energy : 407.132 Speed : 0.299792 Length : 1330.06 Type : MuPlus PDG encoding : -13 Shape : StartingTrack Status : NotSet Location : InIce ]] ``` For reasons that probably seemed reasonable at the time, new PROPOSAL sets the checkpoint energies to the muon rest mass: ```text [I3MMCTrack = [ (xi, yi, zi, ti, Ei) = (452.997 ,64.3785 ,800 ,4435.44 ,50.5399) (xc, yc, zc, tc, Ec) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658) (xf, yf, zf, tf, Ef) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658) Elost = 50.4343 Particle = [ I3Particle MajorID : 2817278544529359988 MinorID : 23 Zenith : 0.52374 Azimuth : 6.27801 X : 1117.15 Y : 60.9397 Z : 1949.98 Time : 11.5111 Energy : 429.205 Speed : 0.299792 Length : 1546.08 Type : MuPlus PDG encoding : -13 Shape : StartingTrack Status : NotSet Location : InIce ]] ``` Since there are multiple places in IceSim where I3MMCTracks are used to infer the energies of muons, and no general way to detect which edition of PROPOSAL created a given I3MMCTrack, it would best to make the new output values consistent with MMC. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2340">https://code.icecube.wisc.edu/projects/icecube/ticket/2340</a>, reported by jvansantenand owned by jsoedingrekso</em></summary> <p> ```json { "status": "closed", "changetime": "2019-09-16T09:08:57", "_ts": "1568624937075517", "description": "I3MuonSlicer was designed to reverse-engineer the MMC output format to recover the energy of the muon between stochastic losses. 
When run against new PROPOSAL simulation, I3MuonSlicer inserts a muon segment ''after'' a muon decays, like so:\n\n{{{\n 23 MuPlus (1117.15m, 60.9397m, 1949.98m) (30.0081deg, 359.703deg) 11.5111ns 429.205GeV 1546.08m\n 73 MuPlus (453.863m, 64.374m, 801.499m) (30.0081deg, 359.703deg) 4435.44ns 50.5399GeV 28.1962m\n 15 DeltaE (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4523.72ns 1.73634GeV 0m\n 74 MuPlus (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4529.49ns 46.994GeV 76.9354m\n 16 DeltaE (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4780.35ns 0.625515GeV 0m\n 75 MuPlus (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4786.12ns 41.4309GeV 114.689m\n 17 EPlus (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0504133GeV 0m\n 18 NuE (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0393494GeV 0m\n 19 NuMuBar (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0158957GeV 0m\n 78 MuPlus (343.927m, 64.9432m, 611.144m) (30.0081deg, 359.703deg) 5168.69ns 33.9646GeV 527.572m\n}}}\n\nThis really screws up stopping muons, since instead of stopping, they burst back to life. A likely culprit is MMCTrack. For MMC (and PROPOSAL Classic), stopping tracks had negative values for Ef, which were interpreted as -1 times the distance from the decay to trajectory exit point in cm (hooray, units!). 
For example\n\n{{{\n[I3MMCTrack = [\n (xi, yi, zi, ti, Ei) = (404.223 ,-0.996252 ,800 ,4522.09 ,0.338673)\n (xc, yc, zc, tc, Ec) = (-44.3509 ,-0.849819 ,25.6933 ,5404.34 ,0)\n (xf, yf, zf, tf, Ef) = (-522.694 ,-0.693668 ,-800 ,5404.34 ,-1330.06)\n Elost = 0.338673\n Particle = [ I3Particle MajorID : 2817278544529359988\n MinorID : 22\n Zenith : 0.525077\n Azimuth : 6.28286\n X : 1070.44\n Y : -1.21373\n Z : 1949.99\n Time : 88.7298\n Energy : 407.132\n Speed : 0.299792\n Length : 1330.06\n Type : MuPlus\n PDG encoding : -13\n Shape : StartingTrack\n Status : NotSet\n Location : InIce\n]]\n}}}\n\nFor reasons that probably seemed reasonable at the time, new PROPOSAL sets the checkpoint energies to the muon rest mass:\n\n{{{\n[I3MMCTrack = [\n (xi, yi, zi, ti, Ei) = (452.997 ,64.3785 ,800 ,4435.44 ,50.5399)\n (xc, yc, zc, tc, Ec) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658)\n (xf, yf, zf, tf, Ef) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658)\n Elost = 50.4343\n Particle = [ I3Particle MajorID : 2817278544529359988\n MinorID : 23\n Zenith : 0.52374\n Azimuth : 6.27801\n X : 1117.15\n Y : 60.9397\n Z : 1949.98\n Time : 11.5111\n Energy : 429.205\n Speed : 0.299792\n Length : 1546.08\n Type : MuPlus\n PDG encoding : -13\n Shape : StartingTrack\n Status : NotSet\n Location : InIce\n]]\n}}}\n\nSince there are multiple places in IceSim where I3MMCTracks are used to infer the energies of muons, and no general way to detect which edition of PROPOSAL created a given I3MMCTrack, it would best to make the new output values consistent with MMC.", "reporter": "jvansanten", "cc": "kjmeagher", "resolution": "fixed", "time": "2019-08-07T15:57:56", "component": "combo simulation", "summary": "[PROPOSAL] post-restructuring PROPOSAL output confuses I3MuonSlicer", "priority": "blocker", "keywords": "", "milestone": "Autumnal Equinox 2019", "owner": "jsoedingrekso", "type": "defect" } ``` </p> </details>
1.0
[PROPOSAL] post-restructuring PROPOSAL output confuses I3MuonSlicer (Trac #2340) - I3MuonSlicer was designed to reverse-engineer the MMC output format to recover the energy of the muon between stochastic losses. When run against new PROPOSAL simulation, I3MuonSlicer inserts a muon segment ''after'' a muon decays, like so: ```text 23 MuPlus (1117.15m, 60.9397m, 1949.98m) (30.0081deg, 359.703deg) 11.5111ns 429.205GeV 1546.08m 73 MuPlus (453.863m, 64.374m, 801.499m) (30.0081deg, 359.703deg) 4435.44ns 50.5399GeV 28.1962m 15 DeltaE (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4523.72ns 1.73634GeV 0m 74 MuPlus (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4529.49ns 46.994GeV 76.9354m 16 DeltaE (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4780.35ns 0.625515GeV 0m 75 MuPlus (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4786.12ns 41.4309GeV 114.689m 17 EPlus (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0504133GeV 0m 18 NuE (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0393494GeV 0m 19 NuMuBar (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0158957GeV 0m 78 MuPlus (343.927m, 64.9432m, 611.144m) (30.0081deg, 359.703deg) 5168.69ns 33.9646GeV 527.572m ``` This really screws up stopping muons, since instead of stopping, they burst back to life. A likely culprit is MMCTrack. For MMC (and PROPOSAL Classic), stopping tracks had negative values for Ef, which were interpreted as -1 times the distance from the decay to trajectory exit point in cm (hooray, units!). 
For example ```text [I3MMCTrack = [ (xi, yi, zi, ti, Ei) = (404.223 ,-0.996252 ,800 ,4522.09 ,0.338673) (xc, yc, zc, tc, Ec) = (-44.3509 ,-0.849819 ,25.6933 ,5404.34 ,0) (xf, yf, zf, tf, Ef) = (-522.694 ,-0.693668 ,-800 ,5404.34 ,-1330.06) Elost = 0.338673 Particle = [ I3Particle MajorID : 2817278544529359988 MinorID : 22 Zenith : 0.525077 Azimuth : 6.28286 X : 1070.44 Y : -1.21373 Z : 1949.99 Time : 88.7298 Energy : 407.132 Speed : 0.299792 Length : 1330.06 Type : MuPlus PDG encoding : -13 Shape : StartingTrack Status : NotSet Location : InIce ]] ``` For reasons that probably seemed reasonable at the time, new PROPOSAL sets the checkpoint energies to the muon rest mass: ```text [I3MMCTrack = [ (xi, yi, zi, ti, Ei) = (452.997 ,64.3785 ,800 ,4435.44 ,50.5399) (xc, yc, zc, tc, Ec) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658) (xf, yf, zf, tf, Ef) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658) Elost = 50.4343 Particle = [ I3Particle MajorID : 2817278544529359988 MinorID : 23 Zenith : 0.52374 Azimuth : 6.27801 X : 1117.15 Y : 60.9397 Z : 1949.98 Time : 11.5111 Energy : 429.205 Speed : 0.299792 Length : 1546.08 Type : MuPlus PDG encoding : -13 Shape : StartingTrack Status : NotSet Location : InIce ]] ``` Since there are multiple places in IceSim where I3MMCTracks are used to infer the energies of muons, and no general way to detect which edition of PROPOSAL created a given I3MMCTrack, it would best to make the new output values consistent with MMC. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2340">https://code.icecube.wisc.edu/projects/icecube/ticket/2340</a>, reported by jvansantenand owned by jsoedingrekso</em></summary> <p> ```json { "status": "closed", "changetime": "2019-09-16T09:08:57", "_ts": "1568624937075517", "description": "I3MuonSlicer was designed to reverse-engineer the MMC output format to recover the energy of the muon between stochastic losses. 
When run against new PROPOSAL simulation, I3MuonSlicer inserts a muon segment ''after'' a muon decays, like so:\n\n{{{\n 23 MuPlus (1117.15m, 60.9397m, 1949.98m) (30.0081deg, 359.703deg) 11.5111ns 429.205GeV 1546.08m\n 73 MuPlus (453.863m, 64.374m, 801.499m) (30.0081deg, 359.703deg) 4435.44ns 50.5399GeV 28.1962m\n 15 DeltaE (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4523.72ns 1.73634GeV 0m\n 74 MuPlus (439.762m, 64.447m, 777.083m) (30.0081deg, 359.703deg) 4529.49ns 46.994GeV 76.9354m\n 16 DeltaE (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4780.35ns 0.625515GeV 0m\n 75 MuPlus (401.285m, 64.6462m, 710.46m) (30.0081deg, 359.703deg) 4786.12ns 41.4309GeV 114.689m\n 17 EPlus (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0504133GeV 0m\n 18 NuE (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0393494GeV 0m\n 19 NuMuBar (343.927m, 64.9432m, 611.144m) (180deg, 180deg) 6928.48ns 0.0158957GeV 0m\n 78 MuPlus (343.927m, 64.9432m, 611.144m) (30.0081deg, 359.703deg) 5168.69ns 33.9646GeV 527.572m\n}}}\n\nThis really screws up stopping muons, since instead of stopping, they burst back to life. A likely culprit is MMCTrack. For MMC (and PROPOSAL Classic), stopping tracks had negative values for Ef, which were interpreted as -1 times the distance from the decay to trajectory exit point in cm (hooray, units!). 
For example\n\n{{{\n[I3MMCTrack = [\n (xi, yi, zi, ti, Ei) = (404.223 ,-0.996252 ,800 ,4522.09 ,0.338673)\n (xc, yc, zc, tc, Ec) = (-44.3509 ,-0.849819 ,25.6933 ,5404.34 ,0)\n (xf, yf, zf, tf, Ef) = (-522.694 ,-0.693668 ,-800 ,5404.34 ,-1330.06)\n Elost = 0.338673\n Particle = [ I3Particle MajorID : 2817278544529359988\n MinorID : 22\n Zenith : 0.525077\n Azimuth : 6.28286\n X : 1070.44\n Y : -1.21373\n Z : 1949.99\n Time : 88.7298\n Energy : 407.132\n Speed : 0.299792\n Length : 1330.06\n Type : MuPlus\n PDG encoding : -13\n Shape : StartingTrack\n Status : NotSet\n Location : InIce\n]]\n}}}\n\nFor reasons that probably seemed reasonable at the time, new PROPOSAL sets the checkpoint energies to the muon rest mass:\n\n{{{\n[I3MMCTrack = [\n (xi, yi, zi, ti, Ei) = (452.997 ,64.3785 ,800 ,4435.44 ,50.5399)\n (xc, yc, zc, tc, Ec) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658)\n (xf, yf, zf, tf, Ef) = (343.927 ,64.9432 ,611.144 ,6928.48 ,0.105658)\n Elost = 50.4343\n Particle = [ I3Particle MajorID : 2817278544529359988\n MinorID : 23\n Zenith : 0.52374\n Azimuth : 6.27801\n X : 1117.15\n Y : 60.9397\n Z : 1949.98\n Time : 11.5111\n Energy : 429.205\n Speed : 0.299792\n Length : 1546.08\n Type : MuPlus\n PDG encoding : -13\n Shape : StartingTrack\n Status : NotSet\n Location : InIce\n]]\n}}}\n\nSince there are multiple places in IceSim where I3MMCTracks are used to infer the energies of muons, and no general way to detect which edition of PROPOSAL created a given I3MMCTrack, it would best to make the new output values consistent with MMC.", "reporter": "jvansanten", "cc": "kjmeagher", "resolution": "fixed", "time": "2019-08-07T15:57:56", "component": "combo simulation", "summary": "[PROPOSAL] post-restructuring PROPOSAL output confuses I3MuonSlicer", "priority": "blocker", "keywords": "", "milestone": "Autumnal Equinox 2019", "owner": "jsoedingrekso", "type": "defect" } ``` </p> </details>
defect
post restructuring proposal output confuses trac was designed to reverse engineer the mmc output format to recover the energy of the muon between stochastic losses when run against new proposal simulation inserts a muon segment after a muon decays like so text muplus muplus deltae muplus deltae muplus eplus nue numubar muplus this really screws up stopping muons since instead of stopping they burst back to life a likely culprit is mmctrack for mmc and proposal classic stopping tracks had negative values for ef which were interpreted as times the distance from the decay to trajectory exit point in cm hooray units for example text xi yi zi ti ei xc yc zc tc ec xf yf zf tf ef elost particle majorid minorid zenith azimuth x y z time energy speed length type muplus pdg encoding shape startingtrack status notset location inice for reasons that probably seemed reasonable at the time new proposal sets the checkpoint energies to the muon rest mass text xi yi zi ti ei xc yc zc tc ec xf yf zf tf ef elost particle majorid minorid zenith azimuth x y z time energy speed length type muplus pdg encoding shape startingtrack status notset location inice since there are multiple places in icesim where are used to infer the energies of muons and no general way to detect which edition of proposal created a given it would best to make the new output values consistent with mmc migrated from json status closed changetime ts description was designed to reverse engineer the mmc output format to recover the energy of the muon between stochastic losses when run against new proposal simulation inserts a muon segment after a muon decays like so n n n muplus n muplus n deltae n muplus n deltae n muplus n eplus n nue n numubar n muplus n n nthis really screws up stopping muons since instead of stopping they burst back to life a likely culprit is mmctrack for mmc and proposal classic stopping tracks had negative values for ef which were interpreted as times the distance from the decay to 
trajectory exit point in cm hooray units for example n n n n n nfor reasons that probably seemed reasonable at the time new proposal sets the checkpoint energies to the muon rest mass n n n n n nsince there are multiple places in icesim where are used to infer the energies of muons and no general way to detect which edition of proposal created a given it would best to make the new output values consistent with mmc reporter jvansanten cc kjmeagher resolution fixed time component combo simulation summary post restructuring proposal output confuses priority blocker keywords milestone autumnal equinox owner jsoedingrekso type defect
1
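The record above hinges on the MMC / PROPOSAL-Classic convention that a negative Ef marks a stopping track, encoding −1 times the decay-to-exit distance rather than an exit energy. A hypothetical decoder for that convention (the helper name and returned fields are assumptions for illustration, not IceTray API) might look like:

```python
def decode_mmc_ef(ef):
    """Decode the MMC-style Ef field described in the record above
    (hypothetical helper, not part of IceTray). A non-negative value is
    the muon's energy at the trajectory exit point; a negative value
    marks a stopping track, with the magnitude giving the distance from
    the decay point to the exit point (the record says the unit is cm in
    MMC's convention, though its sample dump happens to match metres)."""
    if ef >= 0:
        return {"stopped": False, "exit_energy": ef}
    return {"stopped": True, "decay_to_exit_distance": -ef}
```

Under this reading, the ticket's complaint is that new PROPOSAL instead writes the muon rest mass (0.105658 GeV) into Ef for stopping tracks, so the decoder's first branch fires and downstream consumers like I3MuonSlicer never learn the track stopped.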
96,940
16,174,131,227
IssuesEvent
2021-05-03 01:20:05
xmidt-org/docs
https://api.github.com/repos/xmidt-org/docs
opened
CVE-2021-28965 (High) detected in rexml-3.2.4.gem
security vulnerability
## CVE-2021-28965 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rexml-3.2.4.gem</b></p></summary> <p>An XML toolkit for Ruby</p> <p>Library home page: <a href="https://rubygems.org/gems/rexml-3.2.4.gem">https://rubygems.org/gems/rexml-3.2.4.gem</a></p> <p> Dependency Hierarchy: - kramdown-2.3.0.gem (Root Library) - :x: **rexml-3.2.4.gem** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The REXML gem before 3.2.5 in Ruby before 2.6.7, 2.7.x before 2.7.3, and 3.x before 3.0.1 does not properly address XML round-trip issues. An incorrect document can be produced after parsing and serializing. <p>Publish Date: 2021-04-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28965>CVE-2021-28965</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-8cr8-4vfw-mr7h">https://github.com/advisories/GHSA-8cr8-4vfw-mr7h</a></p> <p>Release Date: 2021-04-21</p> <p>Fix Resolution: rexml - 3.2.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-28965 (High) detected in rexml-3.2.4.gem - ## CVE-2021-28965 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rexml-3.2.4.gem</b></p></summary> <p>An XML toolkit for Ruby</p> <p>Library home page: <a href="https://rubygems.org/gems/rexml-3.2.4.gem">https://rubygems.org/gems/rexml-3.2.4.gem</a></p> <p> Dependency Hierarchy: - kramdown-2.3.0.gem (Root Library) - :x: **rexml-3.2.4.gem** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The REXML gem before 3.2.5 in Ruby before 2.6.7, 2.7.x before 2.7.3, and 3.x before 3.0.1 does not properly address XML round-trip issues. An incorrect document can be produced after parsing and serializing. <p>Publish Date: 2021-04-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28965>CVE-2021-28965</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-8cr8-4vfw-mr7h">https://github.com/advisories/GHSA-8cr8-4vfw-mr7h</a></p> <p>Release Date: 2021-04-21</p> <p>Fix Resolution: rexml - 3.2.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in rexml gem cve high severity vulnerability vulnerable library rexml gem an xml toolkit for ruby library home page a href dependency hierarchy kramdown gem root library x rexml gem vulnerable library found in base branch main vulnerability details the rexml gem before in ruby before x before and x before does not properly address xml round trip issues an incorrect document can be produced after parsing and serializing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rexml step up your open source security game with whitesource
0
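The advisory record above gives 3.2.5 as the rexml release that fixes CVE-2021-28965. A quick sketch of flagging affected versions by dotted-version comparison (the fixed version is taken from the record; the helper name is an assumption, and the naive parse ignores pre-release suffixes a real gem-version comparator would handle):

```python
def is_vulnerable_rexml(version, fixed=(3, 2, 5)):
    """Return True when a dotted rexml version string predates the
    3.2.5 release in which CVE-2021-28965 was fixed. Compares numeric
    components as tuples; pre-release tags are not handled."""
    parts = tuple(int(p) for p in version.split("."))
    return parts < fixed
```

For the library in the record, `is_vulnerable_rexml("3.2.4")` returns `True`, while `"3.2.5"` and any later release return `False`.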
56,958
15,530,632,383
IssuesEvent
2021-03-13 19:54:14
Cockatrice/Cockatrice
https://api.github.com/repos/Cockatrice/Cockatrice
closed
Feature Request: Always reveal top card to me.
Defect - Game Rules Compliance Medium Priority
Newer cards like Experimental Frenzy and Mystic Forge allow you to look at the top card of your library at any time, essentially meaning it is always revealed to you. The only way to handle this in the client currently is to do `Library > View top cards of library... > 1` every time the top card changes. It would be nice for players to have an option to enable always revealing the top card to just themselves.
1.0
Feature Request: Always reveal top card to me. - Newer cards like Experimental Frenzy and Mystic Forge allow you to look at the top card of your library at any time, essentially meaning it is always revealed to you. The only way to handle this in the client currently is to do `Library > View top cards of library... > 1` every time the top card changes. It would be nice for players to have an option to enable always revealing the top card to just themselves.
defect
feature request always reveal top card to me newer cards like experimental frenzy and mystic forge allow you to look at the top card of your library at any time essentially meaning it is always revealed to you the only way to handle this in the client currently is to do library view top cards of library every time the top card changes it would be nice for players to have an option to enable always revealing the top card to just themselves
1
51,762
13,211,303,478
IssuesEvent
2020-08-15 22:10:13
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
Steamshovel in Python 3 (Trac #1005)
Incomplete Migration Migrated from Trac combo core defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1005">https://code.icecube.wisc.edu/projects/icecube/ticket/1005</a>, reported by moriah.tobinand owned by hdembinski</em></summary> <p> ```json { "status": "closed", "changetime": "2015-10-12T14:11:44", "_ts": "1444659104505675", "description": "{{{\n[ 86%] Building CXX object steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:210:\n error: \u2018PyString_Check\u2019 was not declared in this scope\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void scripting::shovelart\n ::QStringConversion::construct(\n PyObject *, boost::python::converter::rvalue_from_python_stage1_data\n *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:215:\n error: \u2018PyString_AsString\u2019 was not declared in this scope\nmake[3]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o] Error 1\nmake[2]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/all] Error 2\nmake[1]: *** [steamshovel/CMakeFiles/steamshovel.dir/rule] Error 2\nmake: *** [steamshovel] Error 2\n}}}", "reporter": "moriah.tobin", "cc": "david.schultz", "resolution": "fixed", "time": "2015-05-28T20:57:28", "component": "combo core", "summary": "Steamshovel in Python 3", "priority": "blocker", "keywords": "", "milestone": "Long-Term Future", "owner": "hdembinski", "type": "defect" } ``` </p> </details>
1.0
Steamshovel in Python 3 (Trac #1005) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1005">https://code.icecube.wisc.edu/projects/icecube/ticket/1005</a>, reported by moriah.tobinand owned by hdembinski</em></summary> <p> ```json { "status": "closed", "changetime": "2015-10-12T14:11:44", "_ts": "1444659104505675", "description": "{{{\n[ 86%] Building CXX object steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:210:\n error: \u2018PyString_Check\u2019 was not declared in this scope\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void scripting::shovelart\n ::QStringConversion::construct(\n PyObject *, boost::python::converter::rvalue_from_python_stage1_data\n *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:215:\n error: \u2018PyString_AsString\u2019 was not declared in this scope\nmake[3]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o] Error 1\nmake[2]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/all] Error 2\nmake[1]: *** [steamshovel/CMakeFiles/steamshovel.dir/rule] Error 2\nmake: *** [steamshovel] Error 2\n}}}", "reporter": "moriah.tobin", "cc": "david.schultz", "resolution": "fixed", "time": "2015-05-28T20:57:28", "component": "combo core", "summary": "Steamshovel in Python 3", "priority": "blocker", "keywords": "", "milestone": "Long-Term Future", "owner": "hdembinski", "type": "defect" } ``` </p> </details>
defect
steamshovel in python trac migrated from json status closed changetime ts description n building cxx object steamshovel cmakefiles shovelart pybindings dir private shovelart pybindings types cpp o n home mntobin icerec src steamshovel private shovelart pybindings types cpp in n static member function void scripting shovelart n qstringconversion convertible pyobject n home mntobin icerec src steamshovel private shovelart pybindings types cpp n error check was not declared in this scope n home mntobin icerec src steamshovel private shovelart pybindings types cpp in n static member function void scripting shovelart n qstringconversion construct n pyobject boost python converter rvalue from python data n n home mntobin icerec src steamshovel private shovelart pybindings types cpp n error asstring was not declared in this scope nmake error nmake error nmake error nmake error n reporter moriah tobin cc david schultz resolution fixed time component combo core summary steamshovel in python priority blocker keywords milestone long term future owner hdembinski type defect
1
566,450
16,821,866,588
IssuesEvent
2021-06-17 13:59:29
Aggelowe/thunderbolt-server-manager
https://api.github.com/repos/Aggelowe/thunderbolt-server-manager
opened
Add system tray
medium priority
A system tray will be added in order to make the servers able to run at the background
1.0
Add system tray - A system tray will be added in order to make the servers able to run at the background
non_defect
add system tray a system tray will be added in order to make the servers able to run at the background
0
422,754
28,479,426,943
IssuesEvent
2023-04-18 00:28:33
SeidKorac/Necronomiconet
https://api.github.com/repos/SeidKorac/Necronomiconet
opened
Initial project documentation and repo setup
documentation
Setup initial project documentation: - Wiki File/Folder structure - Project specifications - Workflow system - Rules and procedures - ...
1.0
Initial project documentation and repo setup - Setup initial project documentation: - Wiki File/Folder structure - Project specifications - Workflow system - Rules and procedures - ...
non_defect
initial project documentation and repo setup setup initial project documentation wiki file folder structure project specifications workflow system rules and procedures
0
318,702
9,696,430,935
IssuesEvent
2019-05-25 07:26:53
zephyrproject-rtos/west
https://api.github.com/repos/zephyrproject-rtos/west
closed
Add `west forall` options to filter repositories
enhancement priority: low
It would be nice to have some filters on the repositories considered by `west forall`. For example, if we had a filter that restricted the forall to projects which had a local branch checked out, the following would be an easy way to sync up a local forest with its remotes: ``` west forall --branch-only -c "git push" ``` (The above assumes the local branches have upstreams properly configured for plain `git push` to work.)
1.0
Add `west forall` options to filter repositories - It would be nice to have some filters on the repositories considered by `west forall`. For example, if we had a filter that restricted the forall to projects which had a local branch checked out, the following would be an easy way to sync up a local forest with its remotes: ``` west forall --branch-only -c "git push" ``` (The above assumes the local branches have upstreams properly configured for plain `git push` to work.)
non_defect
add west forall options to filter repositories it would be nice to have some filters on the repositories considered by west forall for example if we had a filter that restricted the forall to projects which had a local branch checked out the following would be an easy way to sync up a local forest with its remotes west forall branch only c git push the above assumes the local branches have upstreams properly configured for plain git push to work
0
16,678
2,615,121,703
IssuesEvent
2015-03-01 05:48:43
chrsmith/google-api-java-client
https://api.github.com/repos/chrsmith/google-api-java-client
closed
How to pass the parameters to the request in OAuthentication in java
auto-migrated Priority-Medium Type-Sample
``` Hello, I have been trying to pass the parameters to an OAuth request since a week,But i am totally help less now.My application desires the parameters to be passed which is not at all supported by OAuth request.Actually i am using Scribe jar for Authentication purpose and i got the access Token successfully.But to make api calls i need a support of passing the parameters in to the request.So please tell me how to pass the parameters in to a request.And also let me know whether i am using the correct jar or else i should go for another frame work to build such an applications in java. Thanks and regards. ``` Original issue reported on code.google.com by `farhana....@gmail.com` on 12 Jul 2011 at 12:32
1.0
How to pass the parameters to the request in OAuthentication in java - ``` Hello, I have been trying to pass the parameters to an OAuth request since a week,But i am totally help less now.My application desires the parameters to be passed which is not at all supported by OAuth request.Actually i am using Scribe jar for Authentication purpose and i got the access Token successfully.But to make api calls i need a support of passing the parameters in to the request.So please tell me how to pass the parameters in to a request.And also let me know whether i am using the correct jar or else i should go for another frame work to build such an applications in java. Thanks and regards. ``` Original issue reported on code.google.com by `farhana....@gmail.com` on 12 Jul 2011 at 12:32
non_defect
how to pass the parameters to the request in oauthentication in java hello i have been trying to pass the parameters to an oauth request since a week but i am totally help less now my application desires the parameters to be passed which is not at all supported by oauth request actually i am using scribe jar for authentication purpose and i got the access token successfully but to make api calls i need a support of passing the parameters in to the request so please tell me how to pass the parameters in to a request and also let me know whether i am using the correct jar or else i should go for another frame work to build such an applications in java thanks and regards original issue reported on code google com by farhana gmail com on jul at
0
647,939
21,160,446,461
IssuesEvent
2022-04-07 08:53:16
School-Simplified/Timmy-SchoolSimplified
https://api.github.com/repos/School-Simplified/Timmy-SchoolSimplified
closed
Replace raw IDs with those in common.py and configcat
bug Priority: High
It's mandatory to use the IDs in common.py by using configcat since raw IDs are **not descriptive**.
1.0
Replace raw IDs with those in common.py and configcat - It's mandatory to use the IDs in common.py by using configcat since raw IDs are **not descriptive**.
non_defect
replace raw ids with those in common py and configcat it s mandatory to use the ids in common py by using configcat since raw ids are not descriptive
0
58,290
16,473,354,977
IssuesEvent
2021-05-23 21:16:20
Questie/Questie
https://api.github.com/repos/Questie/Questie
closed
bug when using guild roles & permissions
Type - Defect
## Bug description Hi, I noticed that when I have Questie activated, managing Guild's role & permission is bugged. When I use role selection to change permissions, it renames the first role on the role list and apply permission to it instead the selected role. ## Questie version Questie v6.3.11 (Beta 4)
1.0
bug when using guild roles & permissions - ## Bug description Hi, I noticed that when I have Questie activated, managing Guild's role & permission is bugged. When I use role selection to change permissions, it renames the first role on the role list and apply permission to it instead the selected role. ## Questie version Questie v6.3.11 (Beta 4)
defect
bug when using guild roles permissions bug description hi i noticed that when i have questie activated managing guild s role permission is bugged when i use role selection to change permissions it renames the first role on the role list and apply permission to it instead the selected role questie version questie beta
1
66,853
20,733,728,176
IssuesEvent
2022-03-14 11:51:14
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Room counter is acting up with threads server support
T-Defect
### Steps to reproduce Uploading Screen Recording 2022-03-14 at 11.49.16.mov… ### Outcome #### What did you expect? Not that #### What happened instead? See the video ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
Room counter is acting up with threads server support - ### Steps to reproduce Uploading Screen Recording 2022-03-14 at 11.49.16.mov… ### Outcome #### What did you expect? Not that #### What happened instead? See the video ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
room counter is acting up with threads server support steps to reproduce uploading screen recording at mov… outcome what did you expect not that what happened instead see the video operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no
1
9,642
3,297,119,985
IssuesEvent
2015-11-02 06:06:15
asciidoctor/asciidoctor-pdf
https://api.github.com/repos/asciidoctor/asciidoctor-pdf
opened
Migrate notes in README about development to CONTRIBUTING-CODE guide
documentation
Migrate the notes in the README.adoc file that pertain to development to a CONTRIBUTING-CODE.adoc guide. This shortens the README and gives us an opportunity to cover more details about development. We are following the model used in Asciidoctor.js.
1.0
Migrate notes in README about development to CONTRIBUTING-CODE guide - Migrate the notes in the README.adoc file that pertain to development to a CONTRIBUTING-CODE.adoc guide. This shortens the README and gives us an opportunity to cover more details about development. We are following the model used in Asciidoctor.js.
non_defect
migrate notes in readme about development to contributing code guide migrate the notes in the readme adoc file that pertain to development to a contributing code adoc guide this shortens the readme and gives us an opportunity to cover more details about development we are following the model used in asciidoctor js
0
48,462
13,382,425,797
IssuesEvent
2020-09-02 08:48:01
parj/SampleSpringBootApp
https://api.github.com/repos/parj/SampleSpringBootApp
closed
CVE-2019-16335 (High) detected in jackson-databind-2.9.9.jar
security vulnerability
## CVE-2019-16335 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/SampleSpringBootApp/bin/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.1.7.RELEASE.jar (Root Library) - spring-boot-starter-json-2.1.7.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/parj/SampleSpringBootApp/commit/6f1820a24e1325743c2bb6b201177dba792653ec">6f1820a24e1325743c2bb6b201177dba792653ec</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540. <p>Publish Date: 2019-09-15 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16335>CVE-2019-16335</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p> <p>Release Date: 2019-09-15</p> <p>Fix Resolution: 2.9.10</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-16335 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2019-16335 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/SampleSpringBootApp/bin/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.1.7.RELEASE.jar (Root Library) - spring-boot-starter-json-2.1.7.RELEASE.jar - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/parj/SampleSpringBootApp/commit/6f1820a24e1325743c2bb6b201177dba792653ec">6f1820a24e1325743c2bb6b201177dba792653ec</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540. <p>Publish Date: 2019-09-15 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16335>CVE-2019-16335</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p> <p>Release Date: 2019-09-15</p> <p>Fix Resolution: 2.9.10</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm samplespringbootapp bin pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
29,816
5,906,024,607
IssuesEvent
2017-05-19 14:16:04
jasongaylord/markdownsharp
https://api.github.com/repos/jasongaylord/markdownsharp
closed
Migrate to GitHub!
auto-migrated Priority-Medium Type-Defect
``` Right, now, there are dozens of unofficial GitHub repos for MarkdownSharp, because of the lack of an official one. You should create an official repo on GitHub to avoid that. Google Code is about to die, and migration to Github is just one click away. If this project is not dead, please migrate it! ``` Original issue reported on code.google.com by `thomas.l...@gmail.com` on 4 Jul 2015 at 1:33
1.0
Migrate to GitHub! - ``` Right, now, there are dozens of unofficial GitHub repos for MarkdownSharp, because of the lack of an official one. You should create an official repo on GitHub to avoid that. Google Code is about to die, and migration to Github is just one click away. If this project is not dead, please migrate it! ``` Original issue reported on code.google.com by `thomas.l...@gmail.com` on 4 Jul 2015 at 1:33
defect
migrate to github right now there are dozens of unofficial github repos for markdownsharp because of the lack of an official one you should create an official repo on github to avoid that google code is about to die and migration to github is just one click away if this project is not dead please migrate it original issue reported on code google com by thomas l gmail com on jul at
1
41,055
10,278,730,314
IssuesEvent
2019-08-25 16:46:27
joshuaulrich/IBrokers
https://api.github.com/repos/joshuaulrich/IBrokers
closed
Connection problem
Priority-Medium Type-Defect auto-migrated
``` What steps will reproduce the problem? 1. tws <- twsConnect() What is the expected output? What do you see instead? Error in structure(list(s, clientId = clientId, port = port, server.version = SERVER_VERSION, : object 'SERVER_VERSION' not found What version of the product are you using? On what operating system? R version 2.10.0 (2009-10-26) Ibrokers 0.2-4 (2009-08-11) OS: Debian Testing Please provide any additional information below. ``` Original issue reported on code.google.com by `arnaudba...@gmail.com` on 12 Jan 2010 at 12:49
1.0
Connection problem - ``` What steps will reproduce the problem? 1. tws <- twsConnect() What is the expected output? What do you see instead? Error in structure(list(s, clientId = clientId, port = port, server.version = SERVER_VERSION, : object 'SERVER_VERSION' not found What version of the product are you using? On what operating system? R version 2.10.0 (2009-10-26) Ibrokers 0.2-4 (2009-08-11) OS: Debian Testing Please provide any additional information below. ``` Original issue reported on code.google.com by `arnaudba...@gmail.com` on 12 Jan 2010 at 12:49
defect
connection problem what steps will reproduce the problem tws twsconnect what is the expected output what do you see instead error in structure list s clientid clientid port port server version server version object server version not found what version of the product are you using on what operating system r version ibrokers os debian testing please provide any additional information below original issue reported on code google com by arnaudba gmail com on jan at
1
70,885
7,202,144,684
IssuesEvent
2018-02-06 02:11:53
kcigeospatial/SWMFAC-Enhancements
https://api.github.com/repos/kcigeospatial/SWMFAC-Enhancements
opened
Export Function - Working but system returning error message
SHA Dev - Post UAT Testing
The export function is functioning in SHA Dev (not functioning in KCI Dev) however system returns an IO error pop up message and a runtime error in a new browser after user executes the export workflow. Errors not occurring in production
1.0
Export Function - Working but system returning error message - The export function is functioning in SHA Dev (not functioning in KCI Dev) however system returns an IO error pop up message and a runtime error in a new browser after user executes the export workflow. Errors not occurring in production
non_defect
export function working but system returning error message the export function is functioning in sha dev not functioning in kci dev however system returns an io error pop up message and a runtime error in a new browser after user executes the export workflow errors not occurring in production
0
80,251
30,193,495,706
IssuesEvent
2023-07-04 17:47:08
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
KotlinGenerator does not respect not null fields on view
T: Defect
### Expected behavior KotlinGenerator should generate nonnullable fields for views the same way it does so for tables. ### Actual behavior KotlinGenerator generates all fields as nullable in view pojos/records regardless of view columns nullability in db ### Steps to reproduce the problem Create a view: ```sql create table foos ( id uuid not null constraint "FOOS_pk" primary key, title text not null ); create table bars ( id uuid not null constraint "BARS_pk" primary key, foo_id uuid not null constraint "BARS_foos_id_fk" references foos ); create view foobars as select foos.*, count(foos.id) from foos left join bars m on foos.id = m.foo_id group by foos.id; ``` Run jooq codegen with following configuration: ``` isPojos = true isImmutablePojos = true isSerializablePojos = false isPojosEqualsAndHashCode = false isPojosToString = false isPojosAsKotlinDataClasses = true isKotlinNotNullPojoAttributes = true ``` ### jOOQ Version jOOQ Open Source 3.18.4 ### Database product and version PostgreSQL 15.2 via docker image postgres:15.2-alpine ### Java Version openjdk 17.0.3 2022-04-19 LTS ### OS Version Microsoft Windows 11 Pro Version 10.0.22621 Build 22621 ### JDBC driver name and version (include name if unofficial driver) org.postgresql:postgresql:42.6.0
1.0
KotlinGenerator does not respect not null fields on view - ### Expected behavior KotlinGenerator should generate nonnullable fields for views the same way it does so for tables. ### Actual behavior KotlinGenerator generates all fields as nullable in view pojos/records regardless of view columns nullability in db ### Steps to reproduce the problem Create a view: ```sql create table foos ( id uuid not null constraint "FOOS_pk" primary key, title text not null ); create table bars ( id uuid not null constraint "BARS_pk" primary key, foo_id uuid not null constraint "BARS_foos_id_fk" references foos ); create view foobars as select foos.*, count(foos.id) from foos left join bars m on foos.id = m.foo_id group by foos.id; ``` Run jooq codegen with following configuration: ``` isPojos = true isImmutablePojos = true isSerializablePojos = false isPojosEqualsAndHashCode = false isPojosToString = false isPojosAsKotlinDataClasses = true isKotlinNotNullPojoAttributes = true ``` ### jOOQ Version jOOQ Open Source 3.18.4 ### Database product and version PostgreSQL 15.2 via docker image postgres:15.2-alpine ### Java Version openjdk 17.0.3 2022-04-19 LTS ### OS Version Microsoft Windows 11 Pro Version 10.0.22621 Build 22621 ### JDBC driver name and version (include name if unofficial driver) org.postgresql:postgresql:42.6.0
defect
kotlingenerator does not respect not null fields on view expected behavior kotlingenerator should generate nonnullable fields for views the same way it does so for tables actual behavior kotlingenerator generates all fields as nullable in view pojos records regardless of view columns nullability in db steps to reproduce the problem create a view sql create table foos id uuid not null constraint foos pk primary key title text not null create table bars id uuid not null constraint bars pk primary key foo id uuid not null constraint bars foos id fk references foos create view foobars as select foos count foos id from foos left join bars m on foos id m foo id group by foos id run jooq codegen with following configuration ispojos true isimmutablepojos true isserializablepojos false ispojosequalsandhashcode false ispojostostring false ispojosaskotlindataclasses true iskotlinnotnullpojoattributes true jooq version jooq open source database product and version postgresql via docker image postgres alpine java version openjdk lts os version microsoft windows pro version build jdbc driver name and version include name if unofficial driver org postgresql postgresql
1
71,200
23,488,646,514
IssuesEvent
2022-08-17 16:26:23
zed-industries/feedback
https://api.github.com/repos/zed-industries/feedback
opened
Crash on cmd+shift+w
defect triage
### Check for existing issues - [X] Completed ### Describe the bug After closing a project window, Zed crashed ### To reproduce I can't reproduce it again, but right after manually installing Zed 0.51.1 1. Open the project 2. Close window via shortcut cmd+shift+w ### Expected behavior The window closes without crash ### Environment Zed 0.51.1 – /Applications/Zed.app macOS 12.1 architecture arm64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue 16:19:46 [ERROR] thread 'main' panicked at 'already borrowed: BorrowMutError': crates/gpui/src/platform/mac/window.rs:947 0: backtrace::capture::Backtrace::new 1: Zed::init_panic_hook::{{closure}} 2: std::panicking::rust_panic_with_hook 3: std::panicking::begin_panic_handler::{{closure}} 4: std::sys_common::backtrace::__rust_end_short_backtrace 5: _rust_begin_unwind 6: core::panicking::panic_fmt 7: core::result::unwrap_failed 8: gpui::platform::mac::window::window_fullscreen_changed 9: <unknown> 10: <unknown> 11: <unknown> 12: <unknown> 13: <unknown> 14: <unknown> 15: <unknown> 16: <unknown> 17: <unknown> 18: <unknown> 19: <unknown> 20: <unknown> 21: <unknown> 22: gpui::platform::mac::window::close_window 23: core::ptr::drop_in_place<gpui::platform::mac::window::Window> 24: core::ptr::drop_in_place<core::option::Option<(alloc::rc::Rc<core::cell::RefCell<gpui::presenter::Presenter>>,alloc::boxed::Box<dyn gpui::platform::Window>)>> 25: gpui::app::MutableAppContext::remove_window 26: <gpui::app::MutableAppContext as gpui::app::UpdateView>::update_view 27: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 28: <async_task::runnable::spawn_local::Checked<F> as core::future::future::Future>::poll 29: async_task::raw::RawTask<F,T,S>::run 30: <unknown> 31: <unknown> 32: <unknown> 33: <unknown> 34: <unknown> 35: <unknown> 36: <unknown> 37: <unknown> 38: <unknown> 39: <unknown> 40: <unknown> 41: <gpui::platform::mac::platform::MacForegroundPlatform as gpui::platform::ForegroundPlatform>::run 42: gpui::app::App::run 43: Zed::main 44: std::sys_common::backtrace::__rust_begin_short_backtrace 45: std::rt::lang_start::{{closure}} 46: std::rt::lang_start_internal 47: _main
1.0
Crash on cmd+shift+w - ### Check for existing issues - [X] Completed ### Describe the bug After closing a project window, Zed crashed ### To reproduce I can't reproduce it again, but right after manually installing Zed 0.51.1 1. Open the project 2. Close window via shortcut cmd+shift+w ### Expected behavior The window closes without crash ### Environment Zed 0.51.1 – /Applications/Zed.app macOS 12.1 architecture arm64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue 16:19:46 [ERROR] thread 'main' panicked at 'already borrowed: BorrowMutError': crates/gpui/src/platform/mac/window.rs:947 0: backtrace::capture::Backtrace::new 1: Zed::init_panic_hook::{{closure}} 2: std::panicking::rust_panic_with_hook 3: std::panicking::begin_panic_handler::{{closure}} 4: std::sys_common::backtrace::__rust_end_short_backtrace 5: _rust_begin_unwind 6: core::panicking::panic_fmt 7: core::result::unwrap_failed 8: gpui::platform::mac::window::window_fullscreen_changed 9: <unknown> 10: <unknown> 11: <unknown> 12: <unknown> 13: <unknown> 14: <unknown> 15: <unknown> 16: <unknown> 17: <unknown> 18: <unknown> 19: <unknown> 20: <unknown> 21: <unknown> 22: gpui::platform::mac::window::close_window 23: core::ptr::drop_in_place<gpui::platform::mac::window::Window> 24: core::ptr::drop_in_place<core::option::Option<(alloc::rc::Rc<core::cell::RefCell<gpui::presenter::Presenter>>,alloc::boxed::Box<dyn gpui::platform::Window>)>> 25: gpui::app::MutableAppContext::remove_window 26: <gpui::app::MutableAppContext as gpui::app::UpdateView>::update_view 27: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 28: <async_task::runnable::spawn_local::Checked<F> as core::future::future::Future>::poll 29: async_task::raw::RawTask<F,T,S>::run 30: <unknown> 31: <unknown> 32: <unknown> 33: <unknown> 34: <unknown> 35: <unknown> 36: <unknown> 37: <unknown> 38: <unknown> 39: <unknown> 40: <unknown> 41: <gpui::platform::mac::platform::MacForegroundPlatform as gpui::platform::ForegroundPlatform>::run 42: gpui::app::App::run 43: Zed::main 44: std::sys_common::backtrace::__rust_begin_short_backtrace 45: std::rt::lang_start::{{closure}} 46: std::rt::lang_start_internal 47: _main
defect
crash on cmd shift w check for existing issues completed describe the bug after closing a project window zed crashed to reproduce i can t reproduce it again but right after manually installing zed open the project close window via shortcut cmd shift w expected behavior the window closes without crash environment zed – applications zed app macos architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue thread main panicked at already borrowed borrowmuterror crates gpui src platform mac window rs backtrace capture backtrace new zed init panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core result unwrap failed gpui platform mac window window fullscreen changed gpui platform mac window close window core ptr drop in place core ptr drop in place alloc boxed box gpui app mutableappcontext remove window update view as core future future future poll as core future future future poll async task raw rawtask run run gpui app app run zed main std sys common backtrace rust begin short backtrace std rt lang start closure std rt lang start internal main
1
228,853
18,267,081,721
IssuesEvent
2021-10-04 09:41:35
ckeditor/ckeditor4
https://api.github.com/repos/ckeditor/ckeditor4
closed
Failing test: tests/core/htmldataprocessor/htmldataprocessor
type:bug status:confirmed browser:safari browser:firefox browser:ie8 browser:ie9 browser:ie11 browser:android type:failingtest
Failing test: `test protected source in iframe` ``` Editor data does not match. Expected: <!doctype html><html><head><title>foo</title></head><body><p><iframe name="aa">[[mytag]]</iframe></p></body></html> (string) Actual: <!doctype html><html><head><title>foo</title></head><body><p><iframe name="aa"></iframe></p></body></html> (string) ``` ## Other details * Browser: Chrome 92.0.4515.131 * OS: Android 9
1.0
Failing test: tests/core/htmldataprocessor/htmldataprocessor - Failing test: `test protected source in iframe` ``` Editor data does not match. Expected: <!doctype html><html><head><title>foo</title></head><body><p><iframe name="aa">[[mytag]]</iframe></p></body></html> (string) Actual: <!doctype html><html><head><title>foo</title></head><body><p><iframe name="aa"></iframe></p></body></html> (string) ``` ## Other details * Browser: Chrome 92.0.4515.131 * OS: Android 9
non_defect
failing test tests core htmldataprocessor htmldataprocessor failing test test protected source in iframe editor data does not match expected foo string actual foo string other details browser chrome os android
0
167
2,517,153,227
IssuesEvent
2015-01-16 12:16:27
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
CacheConfig(CacheSimpleConfig simpleConfig) constructor broken in 3.4
Team: Core Type: Defect
Looks like @asimarslan forgot to fix variable assignments in [this](https://github.com/hazelcast/hazelcast/commit/2b274e6521ea45ba3f56fc50cbbe51d807794e3c#diff-da4f4c8466d2549f67ffb738b98cf597R97) commit. This prevents JCache declarative initialization given a configuration XML containing entries for `cache-writer-factory` or `expiry-policy-factory`: ``` java.lang.ClassCastException: javax.cache.expiry.AccessedExpiryPolicy cannot be cast to javax.cache.integration.CacheLoader at com.hazelcast.cache.impl.AbstractCacheProxyBase.<init>(AbstractCacheProxyBase.java:82) at com.hazelcast.cache.impl.AbstractInternalCacheProxy.<init>(AbstractInternalCacheProxy.java:76) at com.hazelcast.cache.impl.AbstractCacheProxy.<init>(AbstractCacheProxy.java:61) at com.hazelcast.cache.impl.CacheProxy.<init>(CacheProxy.java:79) at com.hazelcast.cache.impl.HazelcastServerCacheManager.createCacheProxy(HazelcastServerCacheManager.java:130) at com.hazelcast.cache.impl.AbstractHazelcastCacheManager.getCacheUnchecked(AbstractHazelcastCacheManager.java:202) at com.hazelcast.cache.impl.AbstractHazelcastCacheManager.getCache(AbstractHazelcastCacheManager.java:152) at com.hazelcast.cache.impl.AbstractHazelcastCacheManager.getCache(AbstractHazelcastCacheManager.java:44) ``` Here's the broken code in question: ```java public CacheConfig(CacheSimpleConfig simpleConfig) throws Exception { ... if (simpleConfig.getCacheLoaderFactory() != null) { this.cacheLoaderFactory = ClassLoaderUtil.newInstance(null, simpleConfig.getCacheLoaderFactory()); } if (simpleConfig.getCacheWriterFactory() != null) { this.cacheLoaderFactory = ClassLoaderUtil.newInstance(null, simpleConfig.getCacheWriterFactory()); } if (simpleConfig.getExpiryPolicyFactory() != null) { this.cacheLoaderFactory = ClassLoaderUtil.newInstance(null, simpleConfig.getExpiryPolicyFactory()); } ... } ```
1.0
CacheConfig(CacheSimpleConfig simpleConfig) constructor broken in 3.4 - Looks like @asimarslan forgot to fix variable assignments in [this](https://github.com/hazelcast/hazelcast/commit/2b274e6521ea45ba3f56fc50cbbe51d807794e3c#diff-da4f4c8466d2549f67ffb738b98cf597R97) commit. This prevents JCache declarative initialization given a configuration XML containing entries for `cache-writer-factory` or `expiry-policy-factory`: ``` java.lang.ClassCastException: javax.cache.expiry.AccessedExpiryPolicy cannot be cast to javax.cache.integration.CacheLoader at com.hazelcast.cache.impl.AbstractCacheProxyBase.<init>(AbstractCacheProxyBase.java:82) at com.hazelcast.cache.impl.AbstractInternalCacheProxy.<init>(AbstractInternalCacheProxy.java:76) at com.hazelcast.cache.impl.AbstractCacheProxy.<init>(AbstractCacheProxy.java:61) at com.hazelcast.cache.impl.CacheProxy.<init>(CacheProxy.java:79) at com.hazelcast.cache.impl.HazelcastServerCacheManager.createCacheProxy(HazelcastServerCacheManager.java:130) at com.hazelcast.cache.impl.AbstractHazelcastCacheManager.getCacheUnchecked(AbstractHazelcastCacheManager.java:202) at com.hazelcast.cache.impl.AbstractHazelcastCacheManager.getCache(AbstractHazelcastCacheManager.java:152) at com.hazelcast.cache.impl.AbstractHazelcastCacheManager.getCache(AbstractHazelcastCacheManager.java:44) ``` Here's the broken code in question: ```java public CacheConfig(CacheSimpleConfig simpleConfig) throws Exception { ... if (simpleConfig.getCacheLoaderFactory() != null) { this.cacheLoaderFactory = ClassLoaderUtil.newInstance(null, simpleConfig.getCacheLoaderFactory()); } if (simpleConfig.getCacheWriterFactory() != null) { this.cacheLoaderFactory = ClassLoaderUtil.newInstance(null, simpleConfig.getCacheWriterFactory()); } if (simpleConfig.getExpiryPolicyFactory() != null) { this.cacheLoaderFactory = ClassLoaderUtil.newInstance(null, simpleConfig.getExpiryPolicyFactory()); } ... } ```
defect
cacheconfig cachesimpleconfig simpleconfig constructor broken in looks like asimarslan forgot to fix variable assignments in commit this prevents jcache declarative initialization given a configuration xml containing entries for cache writer factory or expiry policy factory java lang classcastexception javax cache expiry accessedexpirypolicy cannot be cast to javax cache integration cacheloader at com hazelcast cache impl abstractcacheproxybase abstractcacheproxybase java at com hazelcast cache impl abstractinternalcacheproxy abstractinternalcacheproxy java at com hazelcast cache impl abstractcacheproxy abstractcacheproxy java at com hazelcast cache impl cacheproxy cacheproxy java at com hazelcast cache impl hazelcastservercachemanager createcacheproxy hazelcastservercachemanager java at com hazelcast cache impl abstracthazelcastcachemanager getcacheunchecked abstracthazelcastcachemanager java at com hazelcast cache impl abstracthazelcastcachemanager getcache abstracthazelcastcachemanager java at com hazelcast cache impl abstracthazelcastcachemanager getcache abstracthazelcastcachemanager java here s the broken code in question java public cacheconfig cachesimpleconfig simpleconfig throws exception if simpleconfig getcacheloaderfactory null this cacheloaderfactory classloaderutil newinstance null simpleconfig getcacheloaderfactory if simpleconfig getcachewriterfactory null this cacheloaderfactory classloaderutil newinstance null simpleconfig getcachewriterfactory if simpleconfig getexpirypolicyfactory null this cacheloaderfactory classloaderutil newinstance null simpleconfig getexpirypolicyfactory
1
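The Hazelcast record above describes a classic copy-paste defect: three `if` blocks all assign to `cacheLoaderFactory`, so the writer and expiry-policy factories overwrite the loader field and later casts fail. A minimal sketch of the corrected assignment pattern, using hypothetical simplified fields (`Object` stands in for the real factory types, and `CacheConfigSketch` is an illustrative name, not Hazelcast's actual class):

```java
// Hypothetical minimal reconstruction of the fix: each factory must be
// assigned to its own field, not to cacheLoaderFactory three times.
public class CacheConfigSketch {
    Object cacheLoaderFactory;
    Object cacheWriterFactory;
    Object expiryPolicyFactory;

    void apply(Object loader, Object writer, Object expiry) {
        if (loader != null) {
            this.cacheLoaderFactory = loader;   // correct target
        }
        if (writer != null) {
            this.cacheWriterFactory = writer;   // was wrongly this.cacheLoaderFactory
        }
        if (expiry != null) {
            this.expiryPolicyFactory = expiry;  // was wrongly this.cacheLoaderFactory
        }
    }

    public static void main(String[] args) {
        CacheConfigSketch c = new CacheConfigSketch();
        c.apply("loader", "writer", "expiry");
        System.out.println(c.cacheLoaderFactory);
        System.out.println(c.cacheWriterFactory);
        System.out.println(c.expiryPolicyFactory);
    }
}
```

With the broken version, the last assignment wins and `cacheLoaderFactory` would end up holding the expiry policy, which is exactly the `ClassCastException` the stack trace shows.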
31,166
4,696,373,078
IssuesEvent
2016-10-12 04:00:06
samuelmolinski/issueTests
https://api.github.com/repos/samuelmolinski/issueTests
opened
Facebook Authentication Method should use authenticationmethod attribute
EE-1344 Ready To Test Sprint 27 - Capture The Flag Story
The portal currently uses an attribute called "link" to detect authentication coming from Facebook. This needs to be changed to use authenticationmethod attribute.
1.0
Facebook Authentication Method should use authenticationmethod attribute - The portal currently uses an attribute called "link" to detect authentication coming from Facebook. This needs to be changed to use authenticationmethod attribute.
non_defect
facebook authentication method should use authenticationmethod attribute the portal currently uses an attribute called link to detect authentication coming from facebook this needs to be changed to use authenticationmethod attribute
0
317,544
9,666,476,732
IssuesEvent
2019-05-21 10:55:21
canonical-web-and-design/conjure-up.io
https://api.github.com/repos/canonical-web-and-design/conjure-up.io
closed
Install instructions single out “Ubuntu Xenial 16.04”
Priority: Medium Type: Bug
1\. Find [the Conjure-up install instructions](https://docs.conjure-up.io/2.4.0/en/). What you see: - “`conjure-up` is available on Ubuntu Xenial 16.04 LTS and macOS” What’s wrong with this: - It’s technically true, but misleading. Since it’s a snap, conjure-up is also available for Ubuntu 14.04, 17.04, 17.10, and 18.04. (Whether it actually _works_ on 14.04, I don’t know.) - “Ubuntu Xenial 16.04 LTS” is unusual. Usual would be “Ubuntu 16.04 LTS”. - There’s no full stop at the end of the sentence. It may save time to fix #23 at the same time as this issue.
1.0
Install instructions single out “Ubuntu Xenial 16.04” - 1\. Find [the Conjure-up install instructions](https://docs.conjure-up.io/2.4.0/en/). What you see: - “`conjure-up` is available on Ubuntu Xenial 16.04 LTS and macOS” What’s wrong with this: - It’s technically true, but misleading. Since it’s a snap, conjure-up is also available for Ubuntu 14.04, 17.04, 17.10, and 18.04. (Whether it actually _works_ on 14.04, I don’t know.) - “Ubuntu Xenial 16.04 LTS” is unusual. Usual would be “Ubuntu 16.04 LTS”. - There’s no full stop at the end of the sentence. It may save time to fix #23 at the same time as this issue.
non_defect
install instructions single out “ubuntu xenial ” find what you see “ conjure up is available on ubuntu xenial lts and macos” what’s wrong with this it’s technically true but misleading since it’s a snap conjure up is also available for ubuntu and whether it actually works on i don’t know “ubuntu xenial lts” is unusual usual would be “ubuntu lts” there’s no full stop at the end of the sentence it may save time to fix at the same time as this issue
0
439,206
30,684,450,460
IssuesEvent
2023-07-26 11:22:23
linrunner/TLP
https://api.github.com/repos/linrunner/TLP
closed
Exceptions from RUNTIME_PM_ON_*="" are not possible atm
documentation change feature request committed
TLP 1.5.0 What I want: Using the Bios defaults for all PCI devices in AC and BAT mode with one exception (Nvidia dGPU should always be in PM-mode, because the Intel iGPU alone is used). How to accomplish that: Method 1a (works): ``` RUNTIME_PM_ON_AC="auto" RUNTIME_PM_ON_BAT="auto" RUNTIME_PM_DISABLE="..." # exclude all devices from PM that aren't in PM-mode by Bios default except the dGPU -> list 10 devices ``` Method 1b (works): ``` RUNTIME_PM_ON_AC="on" RUNTIME_PM_ON_BAT="on" RUNTIME_PM_ENABLE="..." # include all devices that are in PM-mode by Bios default, adding the dGPU -> list 13 devices ``` Method 2 (preferred, much more elegant, but it doesn't work because RUNTIME_PM_ENABLE is ignored): ``` RUNTIME_PM_ON_AC="" # Bios defaults RUNTIME_PM_ON_BAT="" # Bios defaults RUNTIME_PM_ENABLE="01:00.0" # always enable PM for the dGPU -> list 1 device ``` The "faulty" code part is in /usr/share/tlp/func.d/05-tlp-func-pm: ``` if [ -z "$ccontrol" ]; then # do nothing if unconfigured echo_debug "pm" "set_runtime_pm($1).not_configured" return 0 fi ``` I'm sure it's there for a reason, but for me disabling the "return 0" does the trick. What do you think? Should method 2 be allowed, or rather why must there be a return if RUNTIME_PM_ON_* is unconfigured?
1.0
Exceptions from RUNTIME_PM_ON_*="" are not possible atm - TLP 1.5.0 What I want: Using the Bios defaults for all PCI devices in AC and BAT mode with one exception (Nvidia dGPU should always be in PM-mode, because the Intel iGPU alone is used). How to accomplish that: Method 1a (works): ``` RUNTIME_PM_ON_AC="auto" RUNTIME_PM_ON_BAT="auto" RUNTIME_PM_DISABLE="..." # exclude all devices from PM that aren't in PM-mode by Bios default except the dGPU -> list 10 devices ``` Method 1b (works): ``` RUNTIME_PM_ON_AC="on" RUNTIME_PM_ON_BAT="on" RUNTIME_PM_ENABLE="..." # include all devices that are in PM-mode by Bios default, adding the dGPU -> list 13 devices ``` Method 2 (preferred, much more elegant, but it doesn't work because RUNTIME_PM_ENABLE is ignored): ``` RUNTIME_PM_ON_AC="" # Bios defaults RUNTIME_PM_ON_BAT="" # Bios defaults RUNTIME_PM_ENABLE="01:00.0" # always enable PM for the dGPU -> list 1 device ``` The "faulty" code part is in /usr/share/tlp/func.d/05-tlp-func-pm: ``` if [ -z "$ccontrol" ]; then # do nothing if unconfigured echo_debug "pm" "set_runtime_pm($1).not_configured" return 0 fi ``` I'm sure it's there for a reason, but for me disabling the "return 0" does the trick. What do you think? Should method 2 be allowed, or rather why must there be a return if RUNTIME_PM_ON_* is unconfigured?
non_defect
exceptions from runtime pm on are not possible atm tlp what i want using the bios defaults for all pci devices in ac and bat mode with one exception nvidia dgpu should always be in pm mode because the intel igpu alone is used how to accomplish that method works runtime pm on ac auto runtime pm on bat auto runtime pm disable exclude all devices from pm that aren t in pm mode by bios default except the dgpu list devices method works runtime pm on ac on runtime pm on bat on runtime pm enable include all devices that are in pm mode by bios default adding the dgpu list devices method preferred much more elegant but it doesn t work because runtime pm enable is ignored runtime pm on ac bios defaults runtime pm on bat bios defaults runtime pm enable always enable pm for the dgpu list device the faulty code part is in usr share tlp func d tlp func pm if then do nothing if unconfigured echo debug pm set runtime pm not configured return fi i m sure it s there for a reason but for me disabling the return does the trick what do you think should method be allowed or rather why must there be a return if runtime pm on is unconfigured
0
197,142
22,575,828,474
IssuesEvent
2022-06-28 07:14:53
bitbar/android-gradle-plugin
https://api.github.com/repos/bitbar/android-gradle-plugin
closed
CVE-2021-22569 (Medium) detected in protobuf-java-3.0.0.jar - autoclosed
security vulnerability
## CVE-2021-22569 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-3.0.0.jar</b></p></summary> <p>Core Protocol Buffers library. Protocol Buffers are a way of encoding structured data in an efficient yet extensible format.</p> <p>Library home page: <a href="https://developers.google.com/protocol-buffers/">https://developers.google.com/protocol-buffers/</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.0.0/6d325aa7c921661d84577c0a93d82da4df9fa4c8/protobuf-java-3.0.0.jar</p> <p> Dependency Hierarchy: - gradle-core-2.3.0.jar (Root Library) - :x: **protobuf-java-3.0.0.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions. 
<p>Publish Date: 2022-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22569>CVE-2021-22569</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wrvw-hg22-4m67">https://github.com/advisories/GHSA-wrvw-hg22-4m67</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution: com.google.protobuf:protobuf-java:3.16.1,3.18.2,3.19.2; com.google.protobuf:protobuf-kotlin:3.18.2,3.19.2; google-protobuf - 3.19.2</p> </p> </details> <p></p>
True
CVE-2021-22569 (Medium) detected in protobuf-java-3.0.0.jar - autoclosed - ## CVE-2021-22569 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>protobuf-java-3.0.0.jar</b></p></summary> <p>Core Protocol Buffers library. Protocol Buffers are a way of encoding structured data in an efficient yet extensible format.</p> <p>Library home page: <a href="https://developers.google.com/protocol-buffers/">https://developers.google.com/protocol-buffers/</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protobuf-java/3.0.0/6d325aa7c921661d84577c0a93d82da4df9fa4c8/protobuf-java-3.0.0.jar</p> <p> Dependency Hierarchy: - gradle-core-2.3.0.jar (Root Library) - :x: **protobuf-java-3.0.0.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions. 
<p>Publish Date: 2022-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22569>CVE-2021-22569</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-wrvw-hg22-4m67">https://github.com/advisories/GHSA-wrvw-hg22-4m67</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution: com.google.protobuf:protobuf-java:3.16.1,3.18.2,3.19.2; com.google.protobuf:protobuf-kotlin:3.18.2,3.19.2; google-protobuf - 3.19.2</p> </p> </details> <p></p>
non_defect
cve medium detected in protobuf java jar autoclosed cve medium severity vulnerability vulnerable library protobuf java jar core protocol buffers library protocol buffers are a way of encoding structured data in an efficient yet extensible format library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files com google protobuf protobuf java protobuf java jar dependency hierarchy gradle core jar root library x protobuf java jar vulnerable library found in base branch master vulnerability details an issue in protobuf java allowed the interleaving of com google protobuf unknownfieldset fields in such a way that would be processed out of order a small malicious payload can occupy the parser for several minutes by creating large numbers of short lived objects that cause frequent repeated pauses we recommend upgrading libraries beyond the vulnerable versions publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com google protobuf protobuf java com google protobuf protobuf kotlin google protobuf
0
34,056
7,331,323,299
IssuesEvent
2018-03-05 13:08:21
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
opened
[3.6.0-beta1] Failure to load "cascading plugins bootstrap files"
Defect plugins
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.6.0-beta1 * Platform and Target: PHP 7.1 / MacOS X / No DB. ### What you did I'm working on an app that relies on a plugin (let's call it `PluginA`) which itself relies on another plugin (`PluginB`). Both those plugins have a bootstrap file (mainly to load other plugins and to hook to some events). When `PluginA` loads `PluginB`, the bootstrap file of `PluginB` is not loaded, therefore, the handlers are not hooked. I'm leaving a test app zip folder for you to test. Just `composer install` it and fire it up with a `bin/cake server`. There are `debug` calls in the `routes.php` and `bootstrap.php` files for both plugins. You will see that routes are correctly called, but only the bootstrap of `PluginA` is loaded. Here is the app : [test-bootstrap-plugins.zip](https://github.com/cakephp/cakephp/files/1780869/test-bootstrap-plugins.zip) My knowledge of the internal is not really up to date but I suspect this is a side-effect of this PR : https://github.com/cakephp/cakephp/pull/11785 Note that in my real app scenario, the `PluginA` would be loaded using the new "plugin class" concept but the `PluginB` is loaded using the `Plugin::load()` method since I do not control how this last plugin is implemented. Right now I'm avoiding the issue by manually calling the bootstrap files needed.
1.0
[3.6.0-beta1] Failure to load "cascading plugins bootstrap files" - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.6.0-beta1 * Platform and Target: PHP 7.1 / MacOS X / No DB. ### What you did I'm working on an app that relies on a plugin (let's call it `PluginA`) which itself relies on another plugin (`PluginB`). Both those plugins have a bootstrap file (mainly to load other plugins and to hook to some events). When `PluginA` loads `PluginB`, the bootstrap file of `PluginB` is not loaded, therefore, the handlers are not hooked. I'm leaving a test app zip folder for you to test. Just `composer install` it and fire it up with a `bin/cake server`. There are `debug` calls in the `routes.php` and `bootstrap.php` files for both plugins. You will see that routes are correctly called, but only the bootstrap of `PluginA` is loaded. Here is the app : [test-bootstrap-plugins.zip](https://github.com/cakephp/cakephp/files/1780869/test-bootstrap-plugins.zip) My knowledge of the internal is not really up to date but I suspect this is a side-effect of this PR : https://github.com/cakephp/cakephp/pull/11785 Note that in my real app scenario, the `PluginA` would be loaded using the new "plugin class" concept but the `PluginB` is loaded using the `Plugin::load()` method since I do not control how this last plugin is implemented. Right now I'm avoiding the issue by manually calling the bootstrap files needed.
defect
failure to load cascading plugins bootstrap files this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target php macos x no db what you did i m working on an app that relies on a plugin let s call it plugina which itself relies on another plugin pluginb both those plugins have a bootstrap file mainly to load other plugins and to hook to some events when plugina loads pluginb the bootstrap file of pluginb is not loaded therefore the handlers are not hooked i m leaving a test app zip folder for you to test just composer install it and fire it up with a bin cake server there are debug calls in the routes php and bootstrap php files for both plugins you will see that routes are correctly called but only the bootstrap of plugina is loaded here is the app my knowledge of the internal is not really up to date but i suspect this is a side effect of this pr note that in my real app scenario the plugina would be loaded using the new plugin class concept but the pluginb is loaded using the plugin load method since i do not control how this last plugin is implemented right now i m avoiding the issue by manually calling the bootstrap files needed
1