Dataset schema (per-column dtype and value/length statistics):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string (classes) | 1 value |
| created_at | string (lengths) | 19 – 19 |
| repo | string (lengths) | 5 – 112 |
| repo_url | string (lengths) | 34 – 141 |
| action | string (classes) | 3 values |
| title | string (lengths) | 1 – 757 |
| labels | string (lengths) | 4 – 664 |
| body | string (lengths) | 3 – 261k |
| index | string (classes) | 10 values |
| text_combine | string (lengths) | 96 – 261k |
| label | string (classes) | 2 values |
| text | string (lengths) | 96 – 232k |
| binary_label | int64 | 0 – 1 |
Unnamed: 0 = 666,480
id = 22,357,247,364
type = IssuesEvent
created_at = 2022-06-15 16:44:42
repo = xlg8/APMIS-Project
repo_url = https://api.github.com/repos/xlg8/APMIS-Project
action = opened
title = Country Request and Prioritization
labels = functional concern country priority
body =
<!-- If you've never submitted an issue to the SORMAS repository before or this is your first time using this template, please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) for an explanation of the information we need you to provide. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden. --> ### Feature Description <!-- Mandatory --> - [ ] User managed password - [ ] Dashboard discussion face to face to discuss indicators - [ ] Email notification in APMIS. - [ ] Create campaign for specific regions. Hide campaign from regions not implementing - [ ] Pick data automatically from Days 1-3 within Day 4, pass already entered data in database back to form without re-entering them - [ ] Export needs further adjustment. We need more information about export??/ (ccode as validator, add to template instructions- use dcode, pcode - [ ] Data entry validation rules- same cluster data twice or thrice, needs to be restricted and inform user - [ ] Length of form very high, open and close sections- expand and hide options for easy data entry use - [ ] optimization of application to work on low band width internet (mobile app should solve this) offline data entry - [ ] Algorithm for cluster selection is not filtered, cascade should work up to cluster filter. to reduce the slowness of the system - [ ] Use radio button for yes/no option. - [ ] Report menu- how much data entry done today, how many checklists, responsible officer. - [ ] Responsiveness of system to work on mobile devices- mobile app and also web access via mobile - [ ] API to connect to DHIS2, EPI system, UNICEF and other data sources. - [ ] Admin coverage by day= should be disaggregated. - [ ] offline mode- can only work on mobile app- we might have to prioritize mobile app development. - [ ] Ability to change password within the system instead of just after log out. - [ ] 18. 
Dashboard tables - [ ] 19. Mobile browser offline capability- responsiveness of the web app on web browser - [ ] ### Problem Description <!-- Mandatory --> ### Proposed Change <!-- Mandatory --> ### Possible Alternatives <!-- Optional --> ### Additional Information <!-- Optional -->
index = 1.0
text_combine =
Country Request and Prioritization - <!-- If you've never submitted an issue to the SORMAS repository before or this is your first time using this template, please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) for an explanation of the information we need you to provide. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden. --> ### Feature Description <!-- Mandatory --> - [ ] User managed password - [ ] Dashboard discussion face to face to discuss indicators - [ ] Email notification in APMIS. - [ ] Create campaign for specific regions. Hide campaign from regions not implementing - [ ] Pick data automatically from Days 1-3 within Day 4, pass already entered data in database back to form without re-entering them - [ ] Export needs further adjustment. We need more information about export??/ (ccode as validator, add to template instructions- use dcode, pcode - [ ] Data entry validation rules- same cluster data twice or thrice, needs to be restricted and inform user - [ ] Length of form very high, open and close sections- expand and hide options for easy data entry use - [ ] optimization of application to work on low band width internet (mobile app should solve this) offline data entry - [ ] Algorithm for cluster selection is not filtered, cascade should work up to cluster filter. to reduce the slowness of the system - [ ] Use radio button for yes/no option. - [ ] Report menu- how much data entry done today, how many checklists, responsible officer. - [ ] Responsiveness of system to work on mobile devices- mobile app and also web access via mobile - [ ] API to connect to DHIS2, EPI system, UNICEF and other data sources. - [ ] Admin coverage by day= should be disaggregated. - [ ] offline mode- can only work on mobile app- we might have to prioritize mobile app development. 
- [ ] Ability to change password within the system instead of just after log out. - [ ] 18. Dashboard tables - [ ] 19. Mobile browser offline capability- responsiveness of the web app on web browser - [ ] ### Problem Description <!-- Mandatory --> ### Proposed Change <!-- Mandatory --> ### Possible Alternatives <!-- Optional --> ### Additional Information <!-- Optional -->
label = non_defect
text =
country request and prioritization if you ve never submitted an issue to the sormas repository before or this is your first time using this template please read the contributing guidelines for an explanation of the information we need you to provide you don t have to remove this comment or any other comment from this issue as they will automatically be hidden feature description user managed password dashboard discussion face to face to discuss indicators email notification in apmis create campaign for specific regions hide campaign from regions not implementing pick data automatically from days within day pass already entered data in database back to form without re entering them export needs further adjustment we need more information about export ccode as validator add to template instructions use dcode pcode data entry validation rules same cluster data twice or thrice needs to be restricted and inform user length of form very high open and close sections expand and hide options for easy data entry use optimization of application to work on low band width internet mobile app should solve this offline data entry algorithm for cluster selection is not filtered cascade should work up to cluster filter to reduce the slowness of the system use radio button for yes no option report menu how much data entry done today how many checklists responsible officer responsiveness of system to work on mobile devices mobile app and also web access via mobile api to connect to epi system unicef and other data sources admin coverage by day should be disaggregated offline mode can only work on mobile app we might have to prioritize mobile app development ability to change password within the system instead of just after log out dashboard tables mobile browser offline capability responsiveness of the web app on web browser problem description proposed change possible alternatives additional information
binary_label = 0
Unnamed: 0 = 276,349
id = 20,980,545,934
type = IssuesEvent
created_at = 2022-03-28 19:27:33
repo = NicolasDuciaume/SpeechBot
repo_url = https://api.github.com/repos/NicolasDuciaume/SpeechBot
action = closed
title = Update README
labels = documentation
body =
- add how to's for each component i.e. how to train the STT and Chatbot, how to run STT, Chatbot, TTS, GUI - how to run the whole project (TBD since we haven't completed integration yet) - expected outputs - member roles
index = 1.0
text_combine =
Update README - - add how to's for each component i.e. how to train the STT and Chatbot, how to run STT, Chatbot, TTS, GUI - how to run the whole project (TBD since we haven't completed integration yet) - expected outputs - member roles
label = non_defect
text =
update readme add how to s for each component i e how to train the stt and chatbot how to run stt chatbot tts gui how to run the whole project tbd since we haven t completed integration yet expected outputs member roles
binary_label = 0
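Comparing `text_combine` with `text` in the rows above suggests the `text` column was produced by lowercasing, dropping URLs, and replacing punctuation and digits with whitespace. A rough re-implementation of that normalization; this is an inference from the preview rows, not the dataset's actual cleaning code:

```python
import re

def normalize_text(raw: str) -> str:
    """Approximate the text_combine -> text cleaning seen in the samples:
    lowercase, strip URLs, keep only ASCII letters, collapse whitespace.
    Inferred from the preview rows; the real pipeline may differ."""
    s = raw.lower()
    s = re.sub(r"https?://\S+", " ", s)  # drop URLs before punctuation stripping
    s = re.sub(r"[^a-z]+", " ", s)       # punctuation and digits become spaces
    return re.sub(r"\s+", " ", s).strip()

print(normalize_text("Update README - add how to's for each component i.e."))
# -> update readme add how to s for each component i e
```

Applied to the row above, this reproduces fragments such as "how to s" and "i e" in the `text` column, though the real pipeline evidently keeps some non-ASCII characters (e.g. the box-drawing characters in the docker row below) that this sketch strips.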
Unnamed: 0 = 50,738
id = 13,187,703,290
type = IssuesEvent
created_at = 2020-08-13 04:17:36
repo = icecube-trac/tix3
repo_url = https://api.github.com/repos/icecube-trac/tix3
action = closed
title = [millipede] documentation incomplete (Trac #1254)
labels = Migrated from Trac combo reconstruction defect
body =
the documentation in index.rst does not list all the options of millipede, e.g. the option ReadoutWindow is missing (there may be more missing, please check). <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1254">https://code.icecube.wisc.edu/ticket/1254</a>, reported by hdembinski and owned by jbraun</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "the documentation in index.rst does not list all the options of millipede, e.g. the option ReadoutWindow is missing (there may be more missing, please check).", "reporter": "hdembinski", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "[millipede] documentation incomplete", "priority": "blocker", "keywords": "", "time": "2015-08-20T19:13:30", "milestone": "", "owner": "jbraun", "type": "defect" } ``` </p> </details>
index = 1.0
text_combine =
[millipede] documentation incomplete (Trac #1254) - the documentation in index.rst does not list all the options of millipede, e.g. the option ReadoutWindow is missing (there may be more missing, please check). <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1254">https://code.icecube.wisc.edu/ticket/1254</a>, reported by hdembinski and owned by jbraun</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "the documentation in index.rst does not list all the options of millipede, e.g. the option ReadoutWindow is missing (there may be more missing, please check).", "reporter": "hdembinski", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "[millipede] documentation incomplete", "priority": "blocker", "keywords": "", "time": "2015-08-20T19:13:30", "milestone": "", "owner": "jbraun", "type": "defect" } ``` </p> </details>
label = defect
text =
documentation incomplete trac the documentation in index rst does not list all the options of millipede e g the option readoutwindow is missing there may be more missing please check migrated from json status closed changetime description the documentation in index rst does not list all the options of millipede e g the option readoutwindow is missing there may be more missing please check reporter hdembinski cc resolution fixed ts component combo reconstruction summary documentation incomplete priority blocker keywords time milestone owner jbraun type defect
binary_label = 1
Unnamed: 0 = 13,795
id = 5,451,888,718
type = IssuesEvent
created_at = 2017-03-08 00:47:07
repo = docker/docker
repo_url = https://api.github.com/repos/docker/docker
action = opened
title = File cannot be excluded in .dockerignore when parent directory is included as an exception
labels = area/builder version/unsupported
body =
<!-- If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information. For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues --------------------------------------------------- GENERAL SUPPORT INFORMATION --------------------------------------------------- The GitHub issue tracker is for bug reports and feature requests. General support can be found at the following locations: - Docker Support Forums - https://forums.docker.com - IRC - irc.freenode.net #docker channel - Post a question on StackOverflow, using the Docker tag --------------------------------------------------- BUG REPORT INFORMATION --------------------------------------------------- Use the commands below to provide key information from your environment: You do NOT have to include this information if this is a FEATURE REQUEST --> **Description** In `.dockerignore`, if a directory is included as an exception, files under this directory cannot be excluded afterwards. **Steps to reproduce the issue:** Given the following directory structure: ``` $ tree -a . ├── data │   ├── dir │   │   ├── file2 │   │   └── file3 │   └── file1 ├── .dockerignore └── Dockerfile ``` The following `Dockerfile`: ``` FROM busybox:1.26 COPY data /data/ ``` The following `.dockerignore`: ``` * !data data/file2 ``` When the image is built and inspected in a container: ``` $ docker build -t test -q . 
&& docker run --rm test find /data ``` **Describe the results you received:** The files copied include the file being excluded: ``` sha256:b310efa080739767c793cf2058984958c6d9659187de6e6b36f898ea46650471 /data /data/dir /data/dir/file2 /data/dir/file3 /data/file1 ``` **Describe the results you expected:** `/data/dir/file2` should not be copied to the image because it should take priority over the `!data` inclusion since the exclusion is declared afterwards. **Additional information you deem important (e.g. issue happens only occasionally):** Possibly related to https://github.com/docker/docker/issues/30018 **Output of `docker version`:** ``` Client: Version: 17.03.0-ce API version: 1.26 Go version: go1.7.5 Git commit: 60ccb22 Built: Thu Feb 23 10:40:59 2017 OS/Arch: darwin/amd64 Server: Version: 17.03.0-ce API version: 1.26 (minimum version 1.12) Go version: go1.7.5 Git commit: 3a232c8 Built: Tue Feb 28 07:52:04 2017 OS/Arch: linux/amd64 Experimental: true ``` **Output of `docker info`:** ``` Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 7 Server Version: 17.03.0-ce Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 977c511eda0925a723debdc94d09459af49d082a runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70 init version: 949e6fa Security Options: seccomp Profile: default Kernel Version: 4.9.12-moby Operating System: Alpine Linux v3.5 OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 1.952 GiB Name: moby ID: V3CL:X4RL:O5MJ:BSJZ:JRNG:SVCE:672T:XPMF:5EDA:2SXO:NPMX:Y7LV Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): true File Descriptors: 17 Goroutines: 28 System Time: 2017-03-08T00:44:35.903208479Z EventsListeners: 1 No Proxy: *.local, 169.254/16 
Username: diwo Registry: https://index.docker.io/v1/ Experimental: true Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false ```
index = 1.0
text_combine =
File cannot be excluded in .dockerignore when parent directory is included as an exception - <!-- If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information. For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues --------------------------------------------------- GENERAL SUPPORT INFORMATION --------------------------------------------------- The GitHub issue tracker is for bug reports and feature requests. General support can be found at the following locations: - Docker Support Forums - https://forums.docker.com - IRC - irc.freenode.net #docker channel - Post a question on StackOverflow, using the Docker tag --------------------------------------------------- BUG REPORT INFORMATION --------------------------------------------------- Use the commands below to provide key information from your environment: You do NOT have to include this information if this is a FEATURE REQUEST --> **Description** In `.dockerignore`, if a directory is included as an exception, files under this directory cannot be excluded afterwards. **Steps to reproduce the issue:** Given the following directory structure: ``` $ tree -a . 
├── data │   ├── dir │   │   ├── file2 │   │   └── file3 │   └── file1 ├── .dockerignore └── Dockerfile ``` The following `Dockerfile`: ``` FROM busybox:1.26 COPY data /data/ ``` The following `.dockerignore`: ``` * !data data/file2 ``` When the image is built and inspected in a container: ``` $ docker build -t test -q . && docker run --rm test find /data ``` **Describe the results you received:** The files copied include the file being excluded: ``` sha256:b310efa080739767c793cf2058984958c6d9659187de6e6b36f898ea46650471 /data /data/dir /data/dir/file2 /data/dir/file3 /data/file1 ``` **Describe the results you expected:** `/data/dir/file2` should not be copied to the image because it should take priority over the `!data` inclusion since the exclusion is declared afterwards. **Additional information you deem important (e.g. issue happens only occasionally):** Possibly related to https://github.com/docker/docker/issues/30018 **Output of `docker version`:** ``` Client: Version: 17.03.0-ce API version: 1.26 Go version: go1.7.5 Git commit: 60ccb22 Built: Thu Feb 23 10:40:59 2017 OS/Arch: darwin/amd64 Server: Version: 17.03.0-ce API version: 1.26 (minimum version 1.12) Go version: go1.7.5 Git commit: 3a232c8 Built: Tue Feb 28 07:52:04 2017 OS/Arch: linux/amd64 Experimental: true ``` **Output of `docker info`:** ``` Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 7 Server Version: 17.03.0-ce Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 977c511eda0925a723debdc94d09459af49d082a runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70 init version: 949e6fa Security Options: seccomp Profile: default Kernel Version: 4.9.12-moby Operating System: Alpine Linux v3.5 OSType: linux Architecture: x86_64 CPUs: 
2 Total Memory: 1.952 GiB Name: moby ID: V3CL:X4RL:O5MJ:BSJZ:JRNG:SVCE:672T:XPMF:5EDA:2SXO:NPMX:Y7LV Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): true File Descriptors: 17 Goroutines: 28 System Time: 2017-03-08T00:44:35.903208479Z EventsListeners: 1 No Proxy: *.local, 169.254/16 Username: diwo Registry: https://index.docker.io/v1/ Experimental: true Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false ```
label = non_defect
text =
file cannot be excluded in dockerignore when parent directory is included as an exception if you are reporting a new issue make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead if you suspect your issue is a bug please edit your issue description to include the bug report information shown below if you fail to provide this information within days we cannot debug your issue and will close it we will however reopen it if you later provide the information for more information about reporting issues see general support information the github issue tracker is for bug reports and feature requests general support can be found at the following locations docker support forums irc irc freenode net docker channel post a question on stackoverflow using the docker tag bug report information use the commands below to provide key information from your environment you do not have to include this information if this is a feature request description in dockerignore if a directory is included as an exception files under this directory cannot be excluded afterwards steps to reproduce the issue given the following directory structure tree a ├── data │   ├── dir │   │   ├── │   │   └── │   └── ├── dockerignore └── dockerfile the following dockerfile from busybox copy data data the following dockerignore data data when the image is built and inspected in a container docker build t test q docker run rm test find data describe the results you received the files copied include the file being excluded data data dir data dir data dir data describe the results you expected data dir should not be copied to the image because it should take priority over the data inclusion since the exclusion is declared afterwards additional information you deem important e g issue happens only occasionally possibly related to output of docker 
version client version ce api version go version git commit built thu feb os arch darwin server version ce api version minimum version go version git commit built tue feb os arch linux experimental true output of docker info containers running paused stopped images server version ce storage driver backing filesystem extfs supports d type true native overlay diff true logging driver json file cgroup driver cgroupfs plugins volume local network bridge host ipvlan macvlan null overlay swarm inactive runtimes runc default runtime runc init binary docker init containerd version runc version init version security options seccomp profile default kernel version moby operating system alpine linux ostype linux architecture cpus total memory gib name moby id bsjz jrng svce xpmf npmx docker root dir var lib docker debug mode client false debug mode server true file descriptors goroutines system time eventslisteners no proxy local username diwo registry experimental true insecure registries live restore enabled false
binary_label = 0
Unnamed: 0 = 17,924
id = 3,013,785,192
type = IssuesEvent
created_at = 2015-07-29 11:12:52
repo = yawlfoundation/yawl
repo_url = https://api.github.com/repos/yawlfoundation/yawl
action = closed
title = Task appears as 'unnamed' in control center
labels = auto-migrated Priority-Medium Type-Defect
body =
``` What steps will reproduce the problem? 1. Create a workflow (see attachment) 2. Declare a local net variable a of type string 3. Create a task A using decompose to direct data transfer 4. Use variable a as input and output 5. Load the workflow into engine and start it What is the expected output? What do you see instead? I would expect that the task name 'A' appears in the worklist. Instead the work item appears as 'unnamed' What version of the product are you using? On what operating system? YAWL 2.3.5 with two recent classes. Please provide any additional information below. Using Linux OS. The problem occured sporadically in many workflows recently. ``` Original issue reported on code.google.com by `andreas....@gmail.com` on 3 Dec 2013 at 1:17 Attachments: * [wf.yawl](https://storage.googleapis.com/google-code-attachments/yawl/issue-490/comment-0/wf.yawl)
index = 1.0
text_combine =
Task appears as 'unnamed' in control center - ``` What steps will reproduce the problem? 1. Create a workflow (see attachment) 2. Declare a local net variable a of type string 3. Create a task A using decompose to direct data transfer 4. Use variable a as input and output 5. Load the workflow into engine and start it What is the expected output? What do you see instead? I would expect that the task name 'A' appears in the worklist. Instead the work item appears as 'unnamed' What version of the product are you using? On what operating system? YAWL 2.3.5 with two recent classes. Please provide any additional information below. Using Linux OS. The problem occured sporadically in many workflows recently. ``` Original issue reported on code.google.com by `andreas....@gmail.com` on 3 Dec 2013 at 1:17 Attachments: * [wf.yawl](https://storage.googleapis.com/google-code-attachments/yawl/issue-490/comment-0/wf.yawl)
label = defect
text =
task appears as unnamed in control center what steps will reproduce the problem create a workflow see attachment declare a local net variable a of type string create a task a using decompose to direct data transfer use variable a as input and output load the workflow into engine and start it what is the expected output what do you see instead i would expect that the task name a appears in the worklist instead the work item appears as unnamed what version of the product are you using on what operating system yawl with two recent classes please provide any additional information below using linux os the problem occured sporadically in many workflows recently original issue reported on code google com by andreas gmail com on dec at attachments
binary_label = 1
Unnamed: 0 = 38,292
id = 8,736,975,655
type = IssuesEvent
created_at = 2018-12-11 21:08:28
repo = telus/tds-core
repo_url = https://api.github.com/repos/telus/tds-core
action = closed
title = [Bug] TDS PriceLockup - Component center alignment issues
labels = owner: contributor priority: medium status: in progress type: defect :bug:
body =
<!-- ### IMPORTANT SECURITY NOTE ### When opening issues, be sure NOT to include any private or personal information such as secrets, passwords, or any source code that involves data retrieval. Also, do not include links to sites on staging. --> ## Description TDS PriceLockup does not centre correctly when wrapped by a centrally aligned component. PriceLockup was implemented in Site Builder - wrapped in a block (BlockPriceLockup). When BlockPriceLockup gets used in a centrally aligned fashion, the top and bottom text get centred while the price, sign, and rate text remain left aligned. This was replicated in the TDS catalogue. ## Reproduction Steps - go here https://tds.telus.com/components/index.html#pricelockup - In the view code section enter this snippet: ``` <FlexGrid> <FlexGrid.Row horizontalAlign="center"> <FlexGrid.Col xs={12} md={12}> <PriceLockup size="medium" topText="Starting at" price="25" rateText="/month" bottomText="$68 /month after 3 months" signDirection="left" /> </FlexGrid.Col> </FlexGrid.Row> </FlexGrid> ``` ## Meta - TDS version: v1.0.0 - Willing to develop solution: Willing to get help - Has workaround: Yes (wrap in left aligned components) - High impact: No ## Screenshots ![image](https://user-images.githubusercontent.com/29206760/48962130-cfe28e00-ef30-11e8-8816-49c8f06f11ff.png)
index = 1.0
text_combine =
[Bug] TDS PriceLockup - Component center alignment issues - <!-- ### IMPORTANT SECURITY NOTE ### When opening issues, be sure NOT to include any private or personal information such as secrets, passwords, or any source code that involves data retrieval. Also, do not include links to sites on staging. --> ## Description TDS PriceLockup does not centre correctly when wrapped by a centrally aligned component. PriceLockup was implemented in Site Builder - wrapped in a block (BlockPriceLockup). When BlockPriceLockup gets used in a centrally aligned fashion, the top and bottom text get centred while the price, sign, and rate text remain left aligned. This was replicated in the TDS catalogue. ## Reproduction Steps - go here https://tds.telus.com/components/index.html#pricelockup - In the view code section enter this snippet: ``` <FlexGrid> <FlexGrid.Row horizontalAlign="center"> <FlexGrid.Col xs={12} md={12}> <PriceLockup size="medium" topText="Starting at" price="25" rateText="/month" bottomText="$68 /month after 3 months" signDirection="left" /> </FlexGrid.Col> </FlexGrid.Row> </FlexGrid> ``` ## Meta - TDS version: v1.0.0 - Willing to develop solution: Willing to get help - Has workaround: Yes (wrap in left aligned components) - High impact: No ## Screenshots ![image](https://user-images.githubusercontent.com/29206760/48962130-cfe28e00-ef30-11e8-8816-49c8f06f11ff.png)
label = defect
text =
tds pricelockup component center alignment issues important security note when opening issues be sure not to include any private or personal information such as secrets passwords or any source code that involves data retrieval also do not include links to sites on staging description tds pricelockup does not centre correctly when wrapped by a centrally aligned component pricelockup was implemented in site builder wrapped in a block blockpricelockup when blockpricelockup gets used in a centrally aligned fashion the top and bottom text get centred while the price sign and rate text remain left aligned this was replicated in the tds catalogue reproduction steps go here in the view code section enter this snippet pricelockup size medium toptext starting at price ratetext month bottomtext month after months signdirection left meta tds version willing to develop solution willing to get help has workaround yes wrap in left aligned components high impact no screenshots
binary_label = 1
Unnamed: 0 = 337,608
id = 30,251,278,114
type = IssuesEvent
created_at = 2023-07-06 20:47:05
repo = unifyai/ivy
repo_url = https://api.github.com/repos/unifyai/ivy
action = opened
title = Fix math.test_jax_positive
labels = JAX Frontend Sub Task Failing Test
body =
| | | |---|---| |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5475017266"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5475588308"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5479301929"><img src=https://img.shields.io/badge/-failure-red></a>
index = 1.0
text_combine =
Fix math.test_jax_positive - | | | |---|---| |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5475017266"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5475588308"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5479301929"><img src=https://img.shields.io/badge/-failure-red></a>
label = non_defect
text =
fix math test jax positive paddle a href src numpy a href src tensorflow a href src
binary_label = 0
Unnamed: 0 = 49,424
id = 13,186,703,970
type = IssuesEvent
created_at = 2020-08-13 01:02:46
repo = icecube-trac/tix3
repo_url = https://api.github.com/repos/icecube-trac/tix3
action = opened
title = I3TRandomService does not compile with ROOT 6.05/02 (Trac #1363)
labels = Incomplete Migration Migrated from Trac combo core defect
body =
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1363">https://code.icecube.wisc.edu/ticket/1363</a>, reported by chraab and owned by </em></summary> <p> ```json { "status": "closed", "changetime": "2015-09-22T13:28:23", "description": "Compiling I3TRandomService \n- in offline-software release V15-08-00\n- configured with cmake -DSYSTEM_PACKAGES=True\n- and ROOT 6.05/02 installed from source into /usr/local/\n- on Linux Mint 17.2, with GCC 4.8.4-2ubuntu1~14.04 and matching libstdc++\ngives the error\n\n{{{\nIn file included from /usr/local/include/TNamed.h:29:0,\n from /usr/local/include/TRandom.h:26,\n from /usr/local/include/TRandom3.h:26,\n from /home/chris/Software/offline/src/phys-services/public/phys-services/I3TRandomService.h:5,\n from /home/chris/Software/offline/src/phys-services/private/phys-services/I3TRandomService.cxx:11:\n/usr/local/include/TString.h:787:32: error: \u2018string_view\u2019 in namespace \u2018std\u2019 does not name a type\n std::string printValue(const std::string_view &val);\n}}}\n\n(full error message attached)", "reporter": "chraab", "cc": "", "resolution": "wontfix", "_ts": "1442928503192644", "component": "combo core", "summary": "I3TRandomService does not compile with ROOT 6.05/02", "priority": "normal", "keywords": "offline,root,std,string_view", "time": "2015-09-22T09:40:23", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
index = 1.0
text_combine =
I3TRandomService does not compile with ROOT 6.05/02 (Trac #1363) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1363">https://code.icecube.wisc.edu/ticket/1363</a>, reported by chraab and owned by </em></summary> <p> ```json { "status": "closed", "changetime": "2015-09-22T13:28:23", "description": "Compiling I3TRandomService \n- in offline-software release V15-08-00\n- configured with cmake -DSYSTEM_PACKAGES=True\n- and ROOT 6.05/02 installed from source into /usr/local/\n- on Linux Mint 17.2, with GCC 4.8.4-2ubuntu1~14.04 and matching libstdc++\ngives the error\n\n{{{\nIn file included from /usr/local/include/TNamed.h:29:0,\n from /usr/local/include/TRandom.h:26,\n from /usr/local/include/TRandom3.h:26,\n from /home/chris/Software/offline/src/phys-services/public/phys-services/I3TRandomService.h:5,\n from /home/chris/Software/offline/src/phys-services/private/phys-services/I3TRandomService.cxx:11:\n/usr/local/include/TString.h:787:32: error: \u2018string_view\u2019 in namespace \u2018std\u2019 does not name a type\n std::string printValue(const std::string_view &val);\n}}}\n\n(full error message attached)", "reporter": "chraab", "cc": "", "resolution": "wontfix", "_ts": "1442928503192644", "component": "combo core", "summary": "I3TRandomService does not compile with ROOT 6.05/02", "priority": "normal", "keywords": "offline,root,std,string_view", "time": "2015-09-22T09:40:23", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
defect
does not compile with root trac migrated from json status closed changetime description compiling n in offline software release n configured with cmake dsystem packages true n and root installed from source into usr local n on linux mint with gcc and matching libstdc ngives the error n n nin file included from usr local include tnamed h n from usr local include trandom h n from usr local include h n from home chris software offline src phys services public phys services h n from home chris software offline src phys services private phys services cxx n usr local include tstring h error view in namespace does not name a type n std string printvalue const std string view val n n n full error message attached reporter chraab cc resolution wontfix ts component combo core summary does not compile with root priority normal keywords offline root std string view time milestone owner type defect
1
48,649
25,736,719,440
IssuesEvent
2022-12-08 01:30:39
WDscholia/scholia
https://api.github.com/repos/WDscholia/scholia
opened
On curation page for ```use```, add more ```LIMIT```s to query for missing author name strings
SPARQL missing-data performance P4510-describes-project-that-uses
**What query is this about** https://github.com/WDscholia/scholia/blob/master/scholia/app/templates/use-curation_missing-author-items.sparql , e.g. as per https://scholia.toolforge.org/use/Q1659584/curation#missing-author-items : ![image](https://user-images.githubusercontent.com/465923/206333620-5fd020b3-4fa7-494b-b05b-45a9a6f1a78c.png) **What change do you propose, and why?** We should add some more LIMITs to the query, to enhance performance. **Any other considerations?**
True
On curation page for ```use```, add more ```LIMIT```s to query for missing author name strings - **What query is this about** https://github.com/WDscholia/scholia/blob/master/scholia/app/templates/use-curation_missing-author-items.sparql , e.g. as per https://scholia.toolforge.org/use/Q1659584/curation#missing-author-items : ![image](https://user-images.githubusercontent.com/465923/206333620-5fd020b3-4fa7-494b-b05b-45a9a6f1a78c.png) **What change do you propose, and why?** We should add some more LIMITs to the query, to enhance performance. **Any other considerations?**
non_defect
on curation page for use add more limit s to query for missing author name strings what query is this about e g as per what change do you propose and why we should add some more limits to the query to enhance performance any other considerations
0
21,669
11,308,487,654
IssuesEvent
2020-01-19 06:09:52
CaiJingLong/flutter_photo_manager
https://api.github.com/repos/CaiJingLong/flutter_photo_manager
closed
[BUG] performance problem with 0.4.6
ios performance optimization
After upgrading to 0.4.6 I've noticed a massive performance loss when iterating through all my photos on my device (about 12.000 photos) with version 0.4.6 it takes about 2 minutes with version 0.4.5 it took about 2 seconds this is the code I am using var result = await PhotoManager.requestPermission(); if (result) { List<AssetEntity> imageList = []; List<AssetPathEntity> list = await PhotoManager.getImageAsset(); if (list!=null) for(AssetPathEntity path in list) imageList.addAll(await path.getAssetListRange(start: 0, end: path.assetCount)); if (imageList.isNotEmpty) { imageList.shuffle(); List<AssetEntity> imageListNew = imageList.length > count ? imageList.sublist(0, count) : imageList; List<AssetEntity> data = []; for (AssetEntity assetEntity in imageListNew) data.add(assetEntity); return data; } }
True
[BUG] performance problem with 0.4.6 - After upgrading to 0.4.6 I've noticed a massive performance loss when iterating through all my photos on my device (about 12.000 photos) with version 0.4.6 it takes about 2 minutes with version 0.4.5 it took about 2 seconds this is the code I am using var result = await PhotoManager.requestPermission(); if (result) { List<AssetEntity> imageList = []; List<AssetPathEntity> list = await PhotoManager.getImageAsset(); if (list!=null) for(AssetPathEntity path in list) imageList.addAll(await path.getAssetListRange(start: 0, end: path.assetCount)); if (imageList.isNotEmpty) { imageList.shuffle(); List<AssetEntity> imageListNew = imageList.length > count ? imageList.sublist(0, count) : imageList; List<AssetEntity> data = []; for (AssetEntity assetEntity in imageListNew) data.add(assetEntity); return data; } }
non_defect
performance problem with after upgrading to i ve noticed a massive performance loss when iterating through all my photos on my device about photos with version it takes about minutes with version it took about seconds this is the code i am using var result await photomanager requestpermission if result list imagelist list list await photomanager getimageasset if list null for assetpathentity path in list imagelist addall await path getassetlistrange start end path assetcount if imagelist isnotempty imagelist shuffle list imagelistnew imagelist length count imagelist sublist count imagelist list data for assetentity assetentity in imagelistnew data add assetentity return data
0
29,051
5,514,006,752
IssuesEvent
2017-03-17 14:09:09
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
RPZ TTL and defttl is ignored unless defpol is also set
defect rec
- Program: Recursor - Issue type: Bug report ### Short description When loading an RPZ file using `rpzFile("local.rpz", {})`, all returned answers get a TTL of zero. Setting `defttl=60` does not change that. ### Environment - Operating system: Ubuntu 16.10 - Software version: master c5aa7800c87c5f869ca83809fa1d7f53f371297c - Software source: git ### Steps to reproduce 1. put `rpzFile("local.rpz", {})` in the lua configure script 2. populate local.rpz as below 3. start recursor 4. `dig +noad ttltest.` and `dig +noad ttltest2.` local.rpz: ``` $TTL 600 $ORIGIN localrpz. @ 600 IN SOA localrpz. hostmaster.t-ipnet.net. ( 2017012300 ; Serial YYYYMMDDxx 600 ; Refresh 300 ; Retry 7257600 ; Expire 3600 ) ; NX-TTL @ 600 NS localhost. ttltest 600 IN A 1.2.3.4 ttltest2 700 IN CNAME www.example.com. ``` ### Expected behaviour To see the 600/700 TTL on the two tests. ### Actual behaviour A 0 TTL is returned. ### Other information The reason `defttl` does not work is that rec-lua-conf.cc only picks up `defttl` if `defpol` is also set. The reason the TTLs from the file are ignored is that rpzloader.cc `RPZRecordToPolicy` only takes the TTL from the file if the default it sees in the policy is `<0`, but a freshly created policy has `d_ttl` set to 0. This patch fixes it for me but I'm unsure about side-effects: ```diff diff --git a/pdns/filterpo.hh b/pdns/filterpo.hh index 283ea5b..073a71b 100644 --- a/pdns/filterpo.hh +++ b/pdns/filterpo.hh @@ -67,7 +67,7 @@ public: enum class PolicyKind { NoAction, Drop, NXDOMAIN, NODATA, Truncate, Custom}; struct Policy { - Policy(): d_kind(PolicyKind::NoAction), d_custom(nullptr), d_name(nullptr), d_ttl(0) + Policy(): d_kind(PolicyKind::NoAction), d_custom(nullptr), d_name(nullptr), d_ttl(-1) { } bool operator==(const Policy& rhs) const ```
1.0
RPZ TTL and defttl is ignored unless defpol is also set - - Program: Recursor - Issue type: Bug report ### Short description When loading an RPZ file using `rpzFile("local.rpz", {})`, all returned answers get a TTL of zero. Setting `defttl=60` does not change that. ### Environment - Operating system: Ubuntu 16.10 - Software version: master c5aa7800c87c5f869ca83809fa1d7f53f371297c - Software source: git ### Steps to reproduce 1. put `rpzFile("local.rpz", {})` in the lua configure script 2. populate local.rpz as below 3. start recursor 4. `dig +noad ttltest.` and `dig +noad ttltest2.` local.rpz: ``` $TTL 600 $ORIGIN localrpz. @ 600 IN SOA localrpz. hostmaster.t-ipnet.net. ( 2017012300 ; Serial YYYYMMDDxx 600 ; Refresh 300 ; Retry 7257600 ; Expire 3600 ) ; NX-TTL @ 600 NS localhost. ttltest 600 IN A 1.2.3.4 ttltest2 700 IN CNAME www.example.com. ``` ### Expected behaviour To see the 600/700 TTL on the two tests. ### Actual behaviour A 0 TTL is returned. ### Other information The reason `defttl` does not work is that rec-lua-conf.cc only picks up `defttl` if `defpol` is also set. The reason the TTLs from the file are ignored is that rpzloader.cc `RPZRecordToPolicy` only takes the TTL from the file if the default it sees in the policy is `<0`, but a freshly created policy has `d_ttl` set to 0. This patch fixes it for me but I'm unsure about side-effects: ```diff diff --git a/pdns/filterpo.hh b/pdns/filterpo.hh index 283ea5b..073a71b 100644 --- a/pdns/filterpo.hh +++ b/pdns/filterpo.hh @@ -67,7 +67,7 @@ public: enum class PolicyKind { NoAction, Drop, NXDOMAIN, NODATA, Truncate, Custom}; struct Policy { - Policy(): d_kind(PolicyKind::NoAction), d_custom(nullptr), d_name(nullptr), d_ttl(0) + Policy(): d_kind(PolicyKind::NoAction), d_custom(nullptr), d_name(nullptr), d_ttl(-1) { } bool operator==(const Policy& rhs) const ```
defect
rpz ttl and defttl is ignored unless defpol is also set program recursor issue type bug report short description when loading an rpz file using rpzfile local rpz all returned answers get a ttl of zero setting defttl does not change that environment operating system ubuntu software version master software source git steps to reproduce put rpzfile local rpz in the lua configure script populate local rpz as below start recursor dig noad ttltest and dig noad local rpz ttl origin localrpz in soa localrpz hostmaster t ipnet net serial yyyymmddxx refresh retry expire nx ttl ns localhost ttltest in a in cname expected behaviour to see the ttl on the two tests actual behaviour a ttl is returned other information the reason defttl does not work is that rec lua conf cc only picks up defttl if defpol is also set the reason the ttls from the file are ignored is that rpzloader cc rpzrecordtopolicy only takes the ttl from the file if the default it sees in the policy is but a freshly created policy has d ttl set to this patch fixes it for me but i m unsure about side effects diff diff git a pdns filterpo hh b pdns filterpo hh index a pdns filterpo hh b pdns filterpo hh public enum class policykind noaction drop nxdomain nodata truncate custom struct policy policy d kind policykind noaction d custom nullptr d name nullptr d ttl policy d kind policykind noaction d custom nullptr d name nullptr d ttl bool operator const policy rhs const
1
11,218
2,641,932,454
IssuesEvent
2015-03-11 20:35:54
chrsmith/html5rocks
https://api.github.com/repos/chrsmith/html5rocks
closed
Homepage Lacking <meta charset="utf-8">
Priority-Medium Type-Defect
Original [issue 128](https://code.google.com/p/html5rocks/issues/detail?id=128) created by chrsmith on 2010-08-02T21:22:10.000Z: Homepage should specify the character set via &lt;meta charset=&quot;utf-8&quot;&gt;
1.0
Homepage Lacking <meta charset="utf-8"> - Original [issue 128](https://code.google.com/p/html5rocks/issues/detail?id=128) created by chrsmith on 2010-08-02T21:22:10.000Z: Homepage should specify the character set via &lt;meta charset=&quot;utf-8&quot;&gt;
defect
homepage lacking original created by chrsmith on homepage should specify the character set via lt meta charset quot utf quot gt
1
30,657
6,218,147,487
IssuesEvent
2017-07-08 21:57:33
networkx/networkx
https://api.github.com/repos/networkx/networkx
closed
label in gml output should be quoted
Defect Easy-Fix or Beginner Needs PR
The output of the `write_gml` function currently (1.11) does not quote the label values, but I do think it should according to the [GML spec.](http://www.fim.uni-passau.de/fileadmin/files/lehrstuhl/brandenburg/projekte/gml/gml-technical-report.pdf). Because of this, tools like Cytoscape, jhive and gml2gv fails to read the gml file. A node from my output: ``` node [ id 0 label 1203 ] ``` that should be ``` node [ id 0 label "1203" ] ```
1.0
label in gml output should be quoted - The output of the `write_gml` function currently (1.11) does not quote the label values, but I do think it should according to the [GML spec.](http://www.fim.uni-passau.de/fileadmin/files/lehrstuhl/brandenburg/projekte/gml/gml-technical-report.pdf). Because of this, tools like Cytoscape, jhive and gml2gv fails to read the gml file. A node from my output: ``` node [ id 0 label 1203 ] ``` that should be ``` node [ id 0 label "1203" ] ```
defect
label in gml output should be quoted the output of the write gml function currently does not quote the label values but i do think it should according to the because of this tools like cytoscape jhive and fails to read the gml file a node from my output node id label that should be node id label
1
365,298
10,780,519,555
IssuesEvent
2019-11-04 13:10:18
CESARBR/knot-setup-android
https://api.github.com/repos/CESARBR/knot-setup-android
opened
Develop connectivity with KNoT Cloud
priority: medium
The KNoT Cloud uses WebSockets as its means of communication. In order to GET and POST data from the Cloud, one should have a networking infrastructure to communicate with the server that takes cares of connection and error handling.
1.0
Develop connectivity with KNoT Cloud - The KNoT Cloud uses WebSockets as its means of communication. In order to GET and POST data from the Cloud, one should have a networking infrastructure to communicate with the server that takes cares of connection and error handling.
non_defect
develop connectivity with knot cloud the knot cloud uses websockets as its means of communication in order to get and post data from the cloud one should have a networking infrastructure to communicate with the server that takes cares of connection and error handling
0
57,092
15,682,395,914
IssuesEvent
2021-03-25 07:14:24
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Leaving a space doesn't make it disappear from the LLP
A-Spaces T-Defect Z-M2
I've left a space I've created but it's still on my LLP and still shows notification counters etc - refreshing fixed it
1.0
Leaving a space doesn't make it disappear from the LLP - I've left a space I've created but it's still on my LLP and still shows notification counters etc - refreshing fixed it
defect
leaving a space doesn t make it disappear from the llp i ve left a space i ve created but it s still on my llp and still shows notification counters etc refreshing fixed it
1
189,610
22,047,076,612
IssuesEvent
2022-05-30 03:50:43
Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
closed
CVE-2021-3760 (High) detected in linuxlinux-4.19.88 - autoclosed
security vulnerability
## CVE-2021-3760 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/nfc/nci/rsp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/nfc/nci/rsp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/nfc/nci/rsp.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Linux kernel. A use-after-free vulnerability in the NFC stack can lead to a threat to confidentiality, integrity, and system availability. 
<p>Publish Date: 2022-02-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3760>CVE-2021-3760</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3760">https://www.linuxkernelcves.com/cves/CVE-2021-3760</a></p> <p>Release Date: 2022-02-16</p> <p>Fix Resolution: v4.4.290,v4.9.288,v4.14.253,v4.19.214,v5.4.156,v5.10.76,v5.14.15,v5.15-rc6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3760 (High) detected in linuxlinux-4.19.88 - autoclosed - ## CVE-2021-3760 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/nfc/nci/rsp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/nfc/nci/rsp.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/net/nfc/nci/rsp.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in the Linux kernel. A use-after-free vulnerability in the NFC stack can lead to a threat to confidentiality, integrity, and system availability. 
<p>Publish Date: 2022-02-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3760>CVE-2021-3760</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3760">https://www.linuxkernelcves.com/cves/CVE-2021-3760</a></p> <p>Release Date: 2022-02-16</p> <p>Fix Resolution: v4.4.290,v4.9.288,v4.14.253,v4.19.214,v5.4.156,v5.10.76,v5.14.15,v5.15-rc6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files linux net nfc nci rsp c linux net nfc nci rsp c linux net nfc nci rsp c vulnerability details a flaw was found in the linux kernel a use after free vulnerability in the nfc stack can lead to a threat to confidentiality integrity and system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
32,398
13,798,142,432
IssuesEvent
2020-10-10 00:10:40
microsoft/botframework-solutions
https://api.github.com/repos/microsoft/botframework-solutions
closed
Creating Visual Studio 2019 Project using the Virtual Assistant Template fails
Bot Services Needs Triage Support Type: Bug customer-replied-to customer-reported
I was following the tutorial in order to create a Virtual Assistant. I downloaded and installed all necessary resources as described here: https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/csharp/2-download-and-install/ In the nextstep one is asked here https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/csharp/3-create-project/ to create a Virtual Assistant project in Visual Studio using the Virtual Assistant template. The creation fails with an error message that the project file cannot be migrated. ![image](https://user-images.githubusercontent.com/72129017/94666738-23cd1380-030e-11eb-84c4-ad4a40e73a5e.png) A similar issue is described here: https://github.com/microsoft/botframework-solutions/issues/2635 Hence I installed the additional Microsoft.NETCore.App as described there - to no avail... As it seemed to be a problem connected with Visual Studio I opened then a thread in the Visual Studio Community, see: https://developercommunity.visualstudio.com/content/problem/1194245/cannot-create-virtual-assistant-project.html Jeff Kelly from Microsoft analysed the error logs and said that VS cannot find the .NET Core SDK installed on my machine. ![image](https://user-images.githubusercontent.com/72129017/94665738-d13f2780-030c-11eb-9ab1-5e123d0ce45f.png) But the resources are there, see this output from powershell commands "dotnet --info" and "node -v": ![image](https://user-images.githubusercontent.com/72129017/94666162-64785d00-030d-11eb-9c29-ecfb4595e528.png) Jeff assumes that there is something wrong with the template and recommended to open a new issue here... Thanks for your help!
1.0
Creating Visual Studio 2019 Project using the Virtual Assistant Template fails - I was following the tutorial in order to create a Virtual Assistant. I downloaded and installed all necessary resources as described here: https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/csharp/2-download-and-install/ In the nextstep one is asked here https://microsoft.github.io/botframework-solutions/virtual-assistant/tutorials/create-assistant/csharp/3-create-project/ to create a Virtual Assistant project in Visual Studio using the Virtual Assistant template. The creation fails with an error message that the project file cannot be migrated. ![image](https://user-images.githubusercontent.com/72129017/94666738-23cd1380-030e-11eb-84c4-ad4a40e73a5e.png) A similar issue is described here: https://github.com/microsoft/botframework-solutions/issues/2635 Hence I installed the additional Microsoft.NETCore.App as described there - to no avail... As it seemed to be a problem connected with Visual Studio I opened then a thread in the Visual Studio Community, see: https://developercommunity.visualstudio.com/content/problem/1194245/cannot-create-virtual-assistant-project.html Jeff Kelly from Microsoft analysed the error logs and said that VS cannot find the .NET Core SDK installed on my machine. ![image](https://user-images.githubusercontent.com/72129017/94665738-d13f2780-030c-11eb-9ab1-5e123d0ce45f.png) But the resources are there, see this output from powershell commands "dotnet --info" and "node -v": ![image](https://user-images.githubusercontent.com/72129017/94666162-64785d00-030d-11eb-9c29-ecfb4595e528.png) Jeff assumes that there is something wrong with the template and recommended to open a new issue here... Thanks for your help!
non_defect
creating visual studio project using the virtual assistant template fails i was following the tutorial in order to create a virtual assistant i downloaded and installed all necessary resources as described here in the nextstep one is asked here to create a virtual assistant project in visual studio using the virtual assistant template the creation fails with an error message that the project file cannot be migrated a similar issue is described here hence i installed the additional microsoft netcore app as described there to no avail as it seemed to be a problem connected with visual studio i opened then a thread in the visual studio community see jeff kelly from microsoft analysed the error logs and said that vs cannot find the net core sdk installed on my machine but the resources are there see this output from powershell commands dotnet info and node v jeff assumes that there is something wrong with the template and recommended to open a new issue here thanks for your help
0
16,556
2,917,641,027
IssuesEvent
2015-06-24 00:00:16
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Add "error" and "info"-method to Logger
Area-Pkg Pkg-Logging Priority-Unassigned Triaged Type-Defect
*This issue was originally filed by off...&#064;mikemitterer.at* _____ **What steps will reproduce the problem?** final Logger logger = new Logger(&quot;rest.DefeaultRESTErrorProcessor&quot;); logger.error(&quot;Does not work&quot;); **Please provide any additional information below.** Most Logging-Frameworks use &quot;error&quot; to print out error-messages. You use &quot;severe&quot;, this is OK but give us &quot;error&quot; to. The same with &quot;info&quot; - I don't know who is using fine, finer, finest - means nothing. fine, finer, finest???? - please provide us with &quot;info&quot; in addition to &quot;fine&quot;
1.0
Add "error" and "info"-method to Logger - *This issue was originally filed by off...&#064;mikemitterer.at* _____ **What steps will reproduce the problem?** final Logger logger = new Logger(&quot;rest.DefeaultRESTErrorProcessor&quot;); logger.error(&quot;Does not work&quot;); **Please provide any additional information below.** Most Logging-Frameworks use &quot;error&quot; to print out error-messages. You use &quot;severe&quot;, this is OK but give us &quot;error&quot; to. The same with &quot;info&quot; - I don't know who is using fine, finer, finest - means nothing. fine, finer, finest???? - please provide us with &quot;info&quot; in addition to &quot;fine&quot;
defect
add error and info method to logger this issue was originally filed by off mikemitterer at what steps will reproduce the problem final logger logger new logger quot rest defeaultresterrorprocessor quot logger error quot does not work quot please provide any additional information below most logging frameworks use quot error quot to print out error messages you use quot severe quot this is ok but give us quot error quot to the same with quot info quot i don t know who is using fine finer finest means nothing fine finer finest please provide us with quot info quot in addition to quot fine quot
1
81,357
10,130,327,569
IssuesEvent
2019-08-01 16:40:55
opencollective/opencollective
https://api.github.com/repos/opencollective/opencollective
opened
Collective Page→ Feedback Navigation Collective Page
design design → UI design → UX figma → Collectives
## User story Here under we have option A and B for Navigation in te collective page. This is to collect feedback for both options pros and cons. ## Referenced Issues #1928 #
3.0
Collective Page→ Feedback Navigation Collective Page - ## User story Here under we have option A and B for Navigation in te collective page. This is to collect feedback for both options pros and cons. ## Referenced Issues #1928 #
non_defect
collective page→ feedback navigation collective page user story here under we have option a and b for navigation in te collective page this is to collect feedback for both options pros and cons referenced issues
0
69,655
13,304,240,333
IssuesEvent
2020-08-25 16:37:39
Abbassihraf/P-curiosity-LAB
https://api.github.com/repos/Abbassihraf/P-curiosity-LAB
closed
Team's functionality
Code In progress back-end
### Team's back-end - [x] Populate the table with data - [x] Functionality for displaying team members - [x] Functionality for adding team members to the db - [x] Functionality for updating team members from the db - [x] Functionality for deleting team members from the db - [x] Security and refactor code
1.0
Team's functionality - ### Team's back-end - [x] Populate the table with data - [x] Functionality for displaying team members - [x] Functionality for adding team members to the db - [x] Functionality for updating team members from the db - [x] Functionality for deleting team members from the db - [x] Security and refactor code
non_defect
team s functionality team s back end populate the table with data functionality for displaying team members functionality for adding team members to the db functionality for updating team members from the db functionality for deleting team members from the db security and refactor code
0
19,542
25,864,263,203
IssuesEvent
2022-12-13 19:25:24
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
DISABLED test_success_non_blocking (__main__.ForkTest)
module: multiprocessing triaged module: flaky-tests skipped
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_success_non_blocking&suite=ForkTest&file=test_multiprocessing_spawn.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7286770748). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green. cc @VitalyFedyunin
1.0
DISABLED test_success_non_blocking (__main__.ForkTest) - Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_success_non_blocking&suite=ForkTest&file=test_multiprocessing_spawn.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7286770748). Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green. cc @VitalyFedyunin
non_defect
disabled test success non blocking main forktest platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green cc vitalyfedyunin
0
21,851
3,573,319,807
IssuesEvent
2016-01-27 05:25:32
ariya/phantomjs
https://api.github.com/repos/ariya/phantomjs
closed
Infinite loop when setting page.content in page.open callback
old.Priority-Medium old.Status-New old.Type-Defect
_**[jare...@gmail.com](http://code.google.com/u/116019651359833698002/) commented:**_ > <b>Which version of PhantomJS are you using? Tip: run 'phantomjs --version'.</b> 1.5 > > <b>What steps will reproduce the problem?</b> var page = require('webpage').create(); > page.open('http://google.com', function(status){ > page.content = &quot;&quot;; > console.log('status'); > }); > > <b>What is the expected output? What do you see instead?</b> expected: run once > result: infinite loop of callback > > <b>Which operating system are you using?</b> OSX > <b>Did you use binary PhantomJS or did you compile it from source?</b> Binary > <b>Please provide any additional information below.</b> --- **Disclaimer:** This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #568](http://code.google.com/p/phantomjs/issues/detail?id=568). :star2: &nbsp; **2** people had starred this issue at the time of migration.
1.0
Infinite loop when setting page.content in page.open callback - _**[jare...@gmail.com](http://code.google.com/u/116019651359833698002/) commented:**_ > <b>Which version of PhantomJS are you using? Tip: run 'phantomjs --version'.</b> 1.5 > > <b>What steps will reproduce the problem?</b> var page = require('webpage').create(); > page.open('http://google.com', function(status){ > page.content = &quot;&quot;; > console.log('status'); > }); > > <b>What is the expected output? What do you see instead?</b> expected: run once > result: infinite loop of callback > > <b>Which operating system are you using?</b> OSX > <b>Did you use binary PhantomJS or did you compile it from source?</b> Binary > <b>Please provide any additional information below.</b> --- **Disclaimer:** This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #568](http://code.google.com/p/phantomjs/issues/detail?id=568). :star2: &nbsp; **2** people had starred this issue at the time of migration.
defect
infinite loop when setting page content in page open callback commented which version of phantomjs are you using tip run phantomjs version what steps will reproduce the problem var page require webpage create page open function status page content quot quot console log status what is the expected output what do you see instead expected run once result infinite loop of callback which operating system are you using osx did you use binary phantomjs or did you compile it from source binary please provide any additional information below disclaimer this issue was migrated on from the project s former issue tracker on google code nbsp people had starred this issue at the time of migration
1
29,683
5,815,120,541
IssuesEvent
2017-05-05 07:29:23
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
hazelcast 3.7.2 NullPointerException ClientPartitionServiceImpl$RefreshTaskCallback
Team: Client Type: Defect
We are seeing intermittent null pointer exceptions in a background task coming from our hazelcast client using version 3.7.2. ``` ERROR com.hazelcast.client.spi.ClientInvocationService hz.client_0 [dev] [3.7.2] Failed asynchronous execution of execution callback: com.hazelcast.client.spi.impl.ClientPartitionServiceImpl$RefreshTaskCallback@28143d08for call ClientMessage{length=22, correlationId=200151, messageType=8, partitionId=-1, isComplete=true, isRetryable=false, isEvent=false, writeOffset=0} java.lang.NullPointerException: null at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl.processPartitionResponse(ClientPartitionServiceImpl.java:153) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl.access$800(ClientPartitionServiceImpl.java:48) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl$RefreshTaskCallback.onResponse(ClientPartitionServiceImpl.java:268) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl$RefreshTaskCallback.onResponse(ClientPartitionServiceImpl.java:259) at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:251) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)\n at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) at 
com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)\n ```
1.0
hazelcast 3.7.2 NullPointerException ClientPartitionServiceImpl$RefreshTaskCallback - We are seeing intermittent null pointer exceptions in a background task coming from our hazelcast client using version 3.7.2. ``` ERROR com.hazelcast.client.spi.ClientInvocationService hz.client_0 [dev] [3.7.2] Failed asynchronous execution of execution callback: com.hazelcast.client.spi.impl.ClientPartitionServiceImpl$RefreshTaskCallback@28143d08for call ClientMessage{length=22, correlationId=200151, messageType=8, partitionId=-1, isComplete=true, isRetryable=false, isEvent=false, writeOffset=0} java.lang.NullPointerException: null at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl.processPartitionResponse(ClientPartitionServiceImpl.java:153) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl.access$800(ClientPartitionServiceImpl.java:48) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl$RefreshTaskCallback.onResponse(ClientPartitionServiceImpl.java:268) at com.hazelcast.client.spi.impl.ClientPartitionServiceImpl$RefreshTaskCallback.onResponse(ClientPartitionServiceImpl.java:259) at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:251) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)\n at 
com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)\n ```
defect
hazelcast nullpointerexception clientpartitionserviceimpl refreshtaskcallback we are seeing intermittent null pointer exceptions in a background task coming from our hazelcast client using version error com hazelcast client spi clientinvocationservice hz client failed asynchronous execution of execution callback com hazelcast client spi impl clientpartitionserviceimpl refreshtaskcallback call clientmessage length correlationid messagetype partitionid iscomplete true isretryable false isevent false writeoffset java lang nullpointerexception null at java util concurrent concurrenthashmap putval concurrenthashmap java at java util concurrent concurrenthashmap put concurrenthashmap java at com hazelcast client spi impl clientpartitionserviceimpl processpartitionresponse clientpartitionserviceimpl java at com hazelcast client spi impl clientpartitionserviceimpl access clientpartitionserviceimpl java at com hazelcast client spi impl clientpartitionserviceimpl refreshtaskcallback onresponse clientpartitionserviceimpl java at com hazelcast client spi impl clientpartitionserviceimpl refreshtaskcallback onresponse clientpartitionserviceimpl java at com hazelcast spi impl abstractinvocationfuture run abstractinvocationfuture java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java n at com hazelcast util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast util executor hazelcastmanagedthread run hazelcastmanagedthread java n
1
66,442
20,199,345,574
IssuesEvent
2022-02-11 13:49:18
decentraland/unity-renderer
https://api.github.com/repos/decentraland/unity-renderer
opened
Userprefs are not removed when uninstalling the application
defect
When the app is reinstalled, the settings are still as the last time the app was used for example if the audio volume was left at 0, the new installed app will still have no audio.
1.0
Userprefs are not removed when uninstalling the application - When the app is reinstalled, the settings are still as the last time the app was used for example if the audio volume was left at 0, the new installed app will still have no audio.
defect
userprefs are not removed when uninstalling the application when the app is reinstalled the settings are still as the last time the app was used for example if the audio volume was left at the new installed app will still have no audio
1
54,020
13,312,292,084
IssuesEvent
2020-08-26 09:31:53
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
cluster split sql hd idx IllegalArgumentException Size must be positive AbstractPoolingMemoryManager.java:175
Module: IMap Module: SQL Source: Internal Team: Core Type: Defect
sh hz-bench-run-jenkins-split hz/stable/hd/pool/hd-idx http://jenkins.hazelcast.com/view/split/job/split-x/27/console /disk1/jenkins/workspace/split-x/4.1-SNAPSHOT/2020_07_09-03_42_21/hd-idx Failed fail HzClient4HZBB fn hzcmd.distributed.PutRandPerson threadId=0 java.lang.IllegalArgumentException: Size must be positive: -409714372 http://54.147.27.51/~jenkins/workspace/split-x/4.1-SNAPSHOT/2020_07_09-03_42_21/hd-idx output/HZ/HzMember3HZBB/exception.txt output/HZ/HzClient5HZBB/exception.txt output/HZ/HzClient3HZBB/exception.txt output/HZ/HzClient2HZAA/exception.txt output/HZ/HzMember2HZAA/exception.txt output/HZ/HzMember5HZBB/exception.txt output/HZ/HzMember4HZBB/exception.txt output/HZ/HzClient1HZAA/exception.txt output/HZ/HzClient4HZBB/exception.txt output/HZ/HzMember1HZAA/exception.txt ``` more output/HZ/HzClient4HZBB/exception.txt java.lang.IllegalArgumentException: Size must be positive: -409714372 at com.hazelcast.internal.memory.AbstractPoolingMemoryManager.getAddressQueue(AbstractPoolingMemoryManager.java:175) at com.hazelcast.internal.memory.AbstractPoolingMemoryManager.allocate(AbstractPoolingMemoryManager.java:57) at com.hazelcast.internal.monitor.impl.HDGlobalPerIndexStats$MemoryAllocatorWithStats.allocate(HDGlobalPerIndexStats.java:70) at com.hazelcast.internal.bplustree.DefaultBPlusTreeKeyAccessor.cloneNativeMemory(DefaultBPlusTreeKeyAccessor.java:81) at com.hazelcast.internal.bplustree.HDBTreeInnerNodeAccessor.clonedEntryKeyAddr(HDBTreeInnerNodeAccessor.java:232) at com.hazelcast.internal.bplustree.HDBPlusTree.splitLockedLeafWithParentLocked(HDBPlusTree.java:457) at com.hazelcast.internal.bplustree.HDBPlusTree.splitLockedNodeWithParentLocked(HDBPlusTree.java:407) at com.hazelcast.internal.bplustree.HDBPlusTree.insert(HDBPlusTree.java:305) at com.hazelcast.internal.bplustree.HDBPlusTree.insert(HDBPlusTree.java:257) at com.hazelcast.query.impl.HDBPlusTreeIndex.put(HDBPlusTreeIndex.java:40) at 
com.hazelcast.query.impl.HDOrderedConcurrentIndexStore.mapAttributeToEntry(HDOrderedConcurrentIndexStore.java:150) at com.hazelcast.query.impl.HDOrderedConcurrentIndexStore.insertInternal(HDOrderedConcurrentIndexStore.java:68) at com.hazelcast.query.impl.BaseSingleValueIndexStore.unwrapAndInsertToIndex(BaseSingleValueIndexStore.java:128) at com.hazelcast.query.impl.BaseSingleValueIndexStore.insert(BaseSingleValueIndexStore.java:85) at com.hazelcast.query.impl.AbstractIndex.putEntry(AbstractIndex.java:141) at com.hazelcast.query.impl.Indexes.putEntry(Indexes.java:276) at com.hazelcast.map.impl.recordstore.IndexingMutationObserver.saveIndex(IndexingMutationObserver.java:169) at com.hazelcast.map.impl.recordstore.IndexingMutationObserver.onPutRecord(IndexingMutationObserver.java:54) at com.hazelcast.map.impl.recordstore.CompositeMutationObserver.onPutRecord(CompositeMutationObserver.java:61) at com.hazelcast.map.impl.recordstore.AbstractRecordStore.putNewRecord(AbstractRecordStore.java:216) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.putInternal(DefaultRecordStore.java:771) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.put(DefaultRecordStore.java:753) at com.hazelcast.map.impl.operation.PutOperation.runInternal(PutOperation.java:36) at com.hazelcast.map.impl.operation.MapOperation.run(MapOperation.java:112) at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:227) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:216) at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:597) at 
com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:582) at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:541) at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:238) at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processInternal(AbstractPartitionMessageTask.java:51) at com.hazelcast.client.impl.protocol.task.AbstractAsyncMessageTask.processMessage(AbstractAsyncMessageTask.java:71) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:153) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:116) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:180) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:172) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:140) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ```
1.0
cluster split sql hd idx IllegalArgumentException Size must be positive AbstractPoolingMemoryManager.java:175 - sh hz-bench-run-jenkins-split hz/stable/hd/pool/hd-idx http://jenkins.hazelcast.com/view/split/job/split-x/27/console /disk1/jenkins/workspace/split-x/4.1-SNAPSHOT/2020_07_09-03_42_21/hd-idx Failed fail HzClient4HZBB fn hzcmd.distributed.PutRandPerson threadId=0 java.lang.IllegalArgumentException: Size must be positive: -409714372 http://54.147.27.51/~jenkins/workspace/split-x/4.1-SNAPSHOT/2020_07_09-03_42_21/hd-idx output/HZ/HzMember3HZBB/exception.txt output/HZ/HzClient5HZBB/exception.txt output/HZ/HzClient3HZBB/exception.txt output/HZ/HzClient2HZAA/exception.txt output/HZ/HzMember2HZAA/exception.txt output/HZ/HzMember5HZBB/exception.txt output/HZ/HzMember4HZBB/exception.txt output/HZ/HzClient1HZAA/exception.txt output/HZ/HzClient4HZBB/exception.txt output/HZ/HzMember1HZAA/exception.txt ``` more output/HZ/HzClient4HZBB/exception.txt java.lang.IllegalArgumentException: Size must be positive: -409714372 at com.hazelcast.internal.memory.AbstractPoolingMemoryManager.getAddressQueue(AbstractPoolingMemoryManager.java:175) at com.hazelcast.internal.memory.AbstractPoolingMemoryManager.allocate(AbstractPoolingMemoryManager.java:57) at com.hazelcast.internal.monitor.impl.HDGlobalPerIndexStats$MemoryAllocatorWithStats.allocate(HDGlobalPerIndexStats.java:70) at com.hazelcast.internal.bplustree.DefaultBPlusTreeKeyAccessor.cloneNativeMemory(DefaultBPlusTreeKeyAccessor.java:81) at com.hazelcast.internal.bplustree.HDBTreeInnerNodeAccessor.clonedEntryKeyAddr(HDBTreeInnerNodeAccessor.java:232) at com.hazelcast.internal.bplustree.HDBPlusTree.splitLockedLeafWithParentLocked(HDBPlusTree.java:457) at com.hazelcast.internal.bplustree.HDBPlusTree.splitLockedNodeWithParentLocked(HDBPlusTree.java:407) at com.hazelcast.internal.bplustree.HDBPlusTree.insert(HDBPlusTree.java:305) at com.hazelcast.internal.bplustree.HDBPlusTree.insert(HDBPlusTree.java:257) at 
com.hazelcast.query.impl.HDBPlusTreeIndex.put(HDBPlusTreeIndex.java:40) at com.hazelcast.query.impl.HDOrderedConcurrentIndexStore.mapAttributeToEntry(HDOrderedConcurrentIndexStore.java:150) at com.hazelcast.query.impl.HDOrderedConcurrentIndexStore.insertInternal(HDOrderedConcurrentIndexStore.java:68) at com.hazelcast.query.impl.BaseSingleValueIndexStore.unwrapAndInsertToIndex(BaseSingleValueIndexStore.java:128) at com.hazelcast.query.impl.BaseSingleValueIndexStore.insert(BaseSingleValueIndexStore.java:85) at com.hazelcast.query.impl.AbstractIndex.putEntry(AbstractIndex.java:141) at com.hazelcast.query.impl.Indexes.putEntry(Indexes.java:276) at com.hazelcast.map.impl.recordstore.IndexingMutationObserver.saveIndex(IndexingMutationObserver.java:169) at com.hazelcast.map.impl.recordstore.IndexingMutationObserver.onPutRecord(IndexingMutationObserver.java:54) at com.hazelcast.map.impl.recordstore.CompositeMutationObserver.onPutRecord(CompositeMutationObserver.java:61) at com.hazelcast.map.impl.recordstore.AbstractRecordStore.putNewRecord(AbstractRecordStore.java:216) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.putInternal(DefaultRecordStore.java:771) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.put(DefaultRecordStore.java:753) at com.hazelcast.map.impl.operation.PutOperation.runInternal(PutOperation.java:36) at com.hazelcast.map.impl.operation.MapOperation.run(MapOperation.java:112) at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:227) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:216) at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) at 
com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:597) at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:582) at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:541) at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:238) at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processInternal(AbstractPartitionMessageTask.java:51) at com.hazelcast.client.impl.protocol.task.AbstractAsyncMessageTask.processMessage(AbstractAsyncMessageTask.java:71) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:153) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:116) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:180) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:172) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:140) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ```
defect
cluster split sql hd idx illegalargumentexception size must be positive abstractpoolingmemorymanager java sh hz bench run jenkins split hz stable hd pool hd idx jenkins workspace split x snapshot hd idx failed fail fn hzcmd distributed putrandperson threadid java lang illegalargumentexception size must be positive output hz exception txt output hz exception txt output hz exception txt output hz exception txt output hz exception txt output hz exception txt output hz exception txt output hz exception txt output hz exception txt output hz exception txt more output hz exception txt java lang illegalargumentexception size must be positive at com hazelcast internal memory abstractpoolingmemorymanager getaddressqueue abstractpoolingmemorymanager java at com hazelcast internal memory abstractpoolingmemorymanager allocate abstractpoolingmemorymanager java at com hazelcast internal monitor impl hdglobalperindexstats memoryallocatorwithstats allocate hdglobalperindexstats java at com hazelcast internal bplustree defaultbplustreekeyaccessor clonenativememory defaultbplustreekeyaccessor java at com hazelcast internal bplustree hdbtreeinnernodeaccessor clonedentrykeyaddr hdbtreeinnernodeaccessor java at com hazelcast internal bplustree hdbplustree splitlockedleafwithparentlocked hdbplustree java at com hazelcast internal bplustree hdbplustree splitlockednodewithparentlocked hdbplustree java at com hazelcast internal bplustree hdbplustree insert hdbplustree java at com hazelcast internal bplustree hdbplustree insert hdbplustree java at com hazelcast query impl hdbplustreeindex put hdbplustreeindex java at com hazelcast query impl hdorderedconcurrentindexstore mapattributetoentry hdorderedconcurrentindexstore java at com hazelcast query impl hdorderedconcurrentindexstore insertinternal hdorderedconcurrentindexstore java at com hazelcast query impl basesinglevalueindexstore unwrapandinserttoindex basesinglevalueindexstore java at com hazelcast query impl basesinglevalueindexstore 
insert basesinglevalueindexstore java at com hazelcast query impl abstractindex putentry abstractindex java at com hazelcast query impl indexes putentry indexes java at com hazelcast map impl recordstore indexingmutationobserver saveindex indexingmutationobserver java at com hazelcast map impl recordstore indexingmutationobserver onputrecord indexingmutationobserver java at com hazelcast map impl recordstore compositemutationobserver onputrecord compositemutationobserver java at com hazelcast map impl recordstore abstractrecordstore putnewrecord abstractrecordstore java at com hazelcast map impl recordstore defaultrecordstore putinternal defaultrecordstore java at com hazelcast map impl recordstore defaultrecordstore put defaultrecordstore java at com hazelcast map impl operation putoperation runinternal putoperation java at com hazelcast map impl operation mapoperation run mapoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl run operationexecutorimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl runorexecute operationexecutorimpl java at com hazelcast spi impl operationservice impl invocation doinvokelocal invocation java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast client impl protocol task abstractpartitionmessagetask processinternal abstractpartitionmessagetask java at com hazelcast client impl protocol task 
abstractasyncmessagetask processmessage abstractasyncmessagetask java at com hazelcast client impl protocol task abstractmessagetask initializeandprocessmessage abstractmessagetask java at com hazelcast client impl protocol task abstractmessagetask run abstractmessagetask java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java
1
82,211
32,060,987,935
IssuesEvent
2023-09-24 16:57:48
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: selenium-server fails to launch intermittently
I-defect I-issue-template
### What happened? We are experiencing a rather strange behavior with selenium-server intermittently failing to start. Basically we have an application which launches selenium using the following cmd: ``` java -jar -Djcommander.debug=true -Dwebdriver.ie.driver=wIEDriverServer.exe -Dwebdriver.gecko.driver="geckodriver.exe" -Dwebdriver.edge.driver="msedgedriver.exe" -Dwebdriver.chrome.driver="chromedriver.exe" selenium-server-4.8.1.jar standalone --port 4444 --session-timeout 1500 --override-max-sessions true --max-sessions 100 ``` And 1 in 10 or so times, selenium fails to start with the following error. This seems to be highly system dependent and on some systems happens 1 in 10 times, on other 1 in 1000: ``` Was passed main parameter '--port' but no main parameter was defined in your arg class ``` After enabling `jcommander.debug` there seems to be a difference JCommander debug logs. `org.openqa.selenium.grid.server.BaseServerFlags@50cbc42f ` is missing in case of a failed launch. **Successful launch:** ``` [JCommander] Parsing "--port 4444 --session-timeout 1500 --override-max-sessions true --max-sessions 100" with:org.openqa.selenium.grid.server.HelpFlags@4d76f3f8 org.openqa.selenium.grid.config.ConfigFlags@2d8e6db6 org.openqa.selenium.grid.node.config.NodeFlags@5ddfd24d org.openqa.selenium.grid.server.BaseServerFlags@50cbc42f org.openqa.selenium.grid.node.docker.DockerFlags@15d0aca4 org.openqa.selenium.grid.router.httpd.RouterFlags@75412c2f org.openqa.selenium.grid.node.relay.RelayFlags@45538b14 org.openqa.selenium.grid.distributor.config.DistributorFlags@182d6294 org.openqa.selenium.grid.log.LoggingFlags@3b438666 org.openqa.selenium.grid.sessionqueue.config.NewSessionQueueFlags@282ba1e [JCommander] Adding description for -h [JCommander] Adding description for -help [JCommander] Adding description for --help [JCommander] Adding description for /? 
[JCommander] Adding description for --version [JCommander] Adding description for --config [JCommander] Adding description for --dump-config [JCommander] Adding description for --config-help [JCommander] Adding description for --max-sessions [JCommander] Adding description for --override-max-sessions [JCommander] Adding description for --session-timeout [JCommander] Adding description for --detect-drivers [JCommander] Adding description for -I [JCommander] Adding description for --driver-implementation [JCommander] Adding description for --driver-factory [JCommander] Adding description for --grid-url [JCommander] Adding description for --hub [JCommander] Adding description for --driver-configuration [JCommander] Adding description for --register-cycle [JCommander] Adding description for --register-period [JCommander] Adding description for --heartbeat-period [JCommander] Adding description for --vnc-env-var [JCommander] Adding description for --no-vnc-port [JCommander] Adding description for --drain-after-session-count [JCommander] Adding description for --enable-cdp [JCommander] Adding description for --enable-bidi [JCommander] Adding description for --node-implementation [JCommander] Adding description for --downloads-path [JCommander] Adding description for --host [JCommander] Adding description for --bind-host [JCommander] Adding description for -p [JCommander] Adding description for --port [JCommander] Adding description for --max-threads [JCommander] Adding description for --allow-cors [JCommander] Adding description for --https-private-key [JCommander] Adding description for --https-certificate [JCommander] Adding description for --registration-secret [JCommander] Adding description for --self-signed-https [JCommander] Adding description for --docker-url [JCommander] Adding description for --docker-host [JCommander] Adding description for --docker-port [JCommander] Adding description for --docker [JCommander] Adding description for -D [JCommander] Adding 
description for --docker-devices [JCommander] Adding description for --docker-video-image [JCommander] Adding description for --docker-assets-path [JCommander] Adding description for --relax-checks [JCommander] Adding description for --username [JCommander] Adding description for --password [JCommander] Adding description for --sub-path [JCommander] Adding description for --service-configuration [JCommander] Adding description for --service-url [JCommander] Adding description for --service-host [JCommander] Adding description for --service-port [JCommander] Adding description for --service-status-endpoint [JCommander] Adding description for -d [JCommander] Adding description for --distributor [JCommander] Adding description for --distributor-port [JCommander] Adding description for --distributor-host [JCommander] Adding description for --distributor-implementation [JCommander] Adding description for --slot-matcher [JCommander] Adding description for --slot-selector [JCommander] Adding description for --healthcheck-interval [JCommander] Adding description for --reject-unsupported-caps [JCommander] Adding description for --newsession-threadpool-size [JCommander] Adding description for --configure-logging [JCommander] Adding description for --structured-logs [JCommander] Adding description for --plain-logs [JCommander] Adding description for --tracing [JCommander] Adding description for --http-logs [JCommander] Adding description for --log [JCommander] Adding description for --log-encoding [JCommander] Adding description for --log-level [JCommander] Adding description for --log-timestamp-format [JCommander] Adding description for --sq [JCommander] Adding description for --sessionqueue [JCommander] Adding description for --sessionqueue-port [JCommander] Adding description for --sessionqueue-host [JCommander] Adding description for --session-request-timeout [JCommander] Adding description for --session-request-timeout-period [JCommander] Adding description for 
--session-retry-interval [JCommander] Adding description for --sessionqueue-batch-size [JCommander] Parsing arg: --port [ParameterDescription] Adding value:4444 to parameter:port [JCommander] Parsing arg: --session-timeout [ParameterDescription] Adding value:1500 to parameter:sessionTimeout [JCommander] Parsing arg: --override-max-sessions [ParameterDescription] Adding value:true to parameter:overrideMaxSessions [JCommander] Parsing arg: --max-sessions [ParameterDescription] Adding value:100 to parameter:maxSessions 01:47:26.972 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding ``` **Failed launch:** ``` [JCommander] Parsing "--port 4444 --session-timeout 1500 --override-max-sessions true --max-sessions 100" with:org.openqa.selenium.grid.server.HelpFlags@4d76f3f8 org.openqa.selenium.grid.config.ConfigFlags@2d8e6db6 org.openqa.selenium.grid.node.config.NodeFlags@5ddfd24d org.openqa.selenium.grid.node.docker.DockerFlags@15d0aca4 org.openqa.selenium.grid.router.httpd.RouterFlags@75412c2f org.openqa.selenium.grid.node.relay.RelayFlags@45538b14 org.openqa.selenium.grid.distributor.config.DistributorFlags@182d6294 org.openqa.selenium.grid.sessionqueue.config.NewSessionQueueFlags@282ba1e org.openqa.selenium.grid.log.LoggingFlags@3b438666 [JCommander] Adding description for -h [JCommander] Adding description for -help [JCommander] Adding description for --help [JCommander] Adding description for /? 
[JCommander] Adding description for --version [JCommander] Adding description for --config [JCommander] Adding description for --dump-config [JCommander] Adding description for --config-help [JCommander] Adding description for --max-sessions [JCommander] Adding description for --override-max-sessions [JCommander] Adding description for --session-timeout [JCommander] Adding description for --detect-drivers [JCommander] Adding description for -I [JCommander] Adding description for --driver-implementation [JCommander] Adding description for --driver-factory [JCommander] Adding description for --grid-url [JCommander] Adding description for --hub [JCommander] Adding description for --driver-configuration [JCommander] Adding description for --register-cycle [JCommander] Adding description for --register-period [JCommander] Adding description for --heartbeat-period [JCommander] Adding description for --vnc-env-var [JCommander] Adding description for --no-vnc-port [JCommander] Adding description for --drain-after-session-count [JCommander] Adding description for --enable-cdp [JCommander] Adding description for --enable-bidi [JCommander] Adding description for --node-implementation [JCommander] Adding description for --downloads-path [JCommander] Adding description for --docker-url [JCommander] Adding description for --docker-host [JCommander] Adding description for --docker-port [JCommander] Adding description for --docker [JCommander] Adding description for -D [JCommander] Adding description for --docker-devices [JCommander] Adding description for --docker-video-image [JCommander] Adding description for --docker-assets-path [JCommander] Adding description for --relax-checks [JCommander] Adding description for --username [JCommander] Adding description for --password [JCommander] Adding description for --sub-path [JCommander] Adding description for --service-configuration [JCommander] Adding description for --service-url [JCommander] Adding description for --service-host 
[JCommander] Adding description for --service-port [JCommander] Adding description for --service-status-endpoint [JCommander] Adding description for -d [JCommander] Adding description for --distributor [JCommander] Adding description for --distributor-port [JCommander] Adding description for --distributor-host [JCommander] Adding description for --distributor-implementation [JCommander] Adding description for --slot-matcher [JCommander] Adding description for --slot-selector [JCommander] Adding description for --healthcheck-interval [JCommander] Adding description for --reject-unsupported-caps [JCommander] Adding description for --newsession-threadpool-size [JCommander] Adding description for --sq [JCommander] Adding description for --sessionqueue [JCommander] Adding description for --sessionqueue-port [JCommander] Adding description for --sessionqueue-host [JCommander] Adding description for --session-request-timeout [JCommander] Adding description for --session-request-timeout-period [JCommander] Adding description for --session-retry-interval [JCommander] Adding description for --sessionqueue-batch-size [JCommander] Adding description for --configure-logging [JCommander] Adding description for --structured-logs [JCommander] Adding description for --plain-logs [JCommander] Adding description for --tracing [JCommander] Adding description for --http-logs [JCommander] Adding description for --log [JCommander] Adding description for --log-encoding [JCommander] Adding description for --log-level [JCommander] Adding description for --log-timestamp-format [JCommander] Parsing arg: --port Was passed main parameter '--port' but no main parameter was defined in your arg class Usage: standalone [options] Options: ``` ### How can we reproduce the issue? 
Execute the following batch:

```shell
for /l %%x in (1, 1, 500) do (
  start "" /B java -jar -Djcommander.debug=true -Dwebdriver.chrome.driver="chromedriver.exe" selenium-server-4.8.0.jar standalone --port 4444 --session-timeout 1500 --override-max-sessions true --max-sessions 100
  timeout 5 > nul
  taskkill /im java.exe /f
)
```

### Relevant log output

```shell
Logs are included in the message body.
```

### Operating System

Windows 10

### Selenium version

4.8.1

### What are the browser(s) and version(s) where you see this issue?

n/a

### What are the browser driver(s) and version(s) where you see this issue?

n/a

### Are you using Selenium Grid?

4.8.1
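The failed-launch log lists one fewer flag object on its `Parsing ... with:` line than the successful one. A quick set diff pinpoints it (class names are hand-transcribed from the logs above; the sets are a log-reading aid, not Selenium code):

```python
# Simple class names copied from the "Parsing ... with:" lines of the two logs.
successful = {
    "HelpFlags", "ConfigFlags", "NodeFlags", "BaseServerFlags",
    "DockerFlags", "RouterFlags", "RelayFlags", "DistributorFlags",
    "LoggingFlags", "NewSessionQueueFlags",
}
failed = {
    "HelpFlags", "ConfigFlags", "NodeFlags", "DockerFlags",
    "RouterFlags", "RelayFlags", "DistributorFlags",
    "NewSessionQueueFlags", "LoggingFlags",
}
print(successful - failed)  # {'BaseServerFlags'}
```

This is consistent with the symptom: the successful log registers descriptions for `--host`, `--bind-host`, `-p`, and `--port` (which the failed log never does), so when the `BaseServerFlags` object is absent, JCommander has no `--port` option defined and falls back to treating it as an undefined main parameter.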
1.0
defect
1
37,491
8,406,272,167
IssuesEvent
2018-10-11 17:28:12
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
closed
Surface out of bounds issues can occur with internal mass surfaces under some situations - possibly due to thin metal layer in the construction. (CR #8933)
Defect EnergyPlus PriorityLow SeverityMedium WontFix
#### Temperature stability problems - internal mass - possibly due to thin metal layer

###### Added on 2012-08-17 10:18 by @mjwitte

#### Description

From ticket: 2) When I run the model with 2010 weather, however, it crashes with the error message below. The crash occurs when the heating system is OFF and there is no cooling system for that zone

*\* Severe *\* Temperature (low) out of bounds [-355.06] for zone="EASTENDRMS", for surface="INTERNALMASS_EASTENDRMS" *\* ~~~ *\* Environment=EASTLANSINGMI, at Simulation time=08/05 18:30 - 18:31 *\* ~~~ *\* Zone="EASTENDRMS", Diagnostic Details: *\* ~~~ *\* ...Internal Heat Gain [11.000] W/m2 *\* ~~~ *\* ...Infiltration/Ventilation [7.623E-002] m3/s *\* ~~~ *\* ...Mixing/Cross Mixing [0.000] m3/s *\* ~~~ *\* ...Zone is part of HVAC controlled system.

LKL 02 Jul 2012 (ticket reply) Well, I know what the problem is but I have no workaround. The construction for this internal mass starts cycling between odd temperatures (+200, -400). I tried adding multiple internal mass objects of lesser area; that did not help. I tried a different material in the construction (I think it was the same as construction 15 -- it behaved better). As in, it converged on a reasonable temperature.

User reply 03 Jul 2012 Got the file to run:
1. I revised the definition of the internal mass construction so that it had two materials
2. Reversed the materials in the adjacent "previous reversed" material
3. Revised the thickness of both materials
4. IDF attached

After thinking about it, I realized that the thickness change affected more walls than I intended, but since it's not a "real" building at this point (it's an example for a paper), I left it as is.
MJW 17 Aug 2012 The user's revised file still has temp out of bounds, but it does complete:

************\* *\* Severe *\* Temperature (low) out of bounds for zone=EASTENDRMS for surface=INTERNALMASS_EASTENDRMS ************\* *\* ~~~ *\* This error occurred 219 total times; ************\* *\* ~~~ *\* during Warmup 0 times; ************\* *\* ~~~ *\* during Sizing 0 times. ************\* *\* ~~~ *\* Max=-127.855641 C Min=-165.674808 C

Assigning to RKS per conf call discussion on 8/15.

Inputs: 8933-* Weather: 8933-USA_MI_East_Lansing_589331_2010_amy.epw

## External Ref: Ticket 6125

Last build tested: `12.07.18 V7.2.0.001`
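The cycling between extreme temperatures that LKL describes (+200, -400) is the classic signature of a conduction update violating a stability limit. EnergyPlus uses conduction transfer functions rather than the explicit finite-difference scheme sketched below, but the same Δx² sensitivity gives a rough intuition for why a very thin, highly conductive layer misbehaves while an ordinary layer does not (material properties are generic textbook values, not taken from the user's IDF):

```python
# Rough illustration only -- NOT EnergyPlus's algorithm. An explicit FTCS
# conduction step is stable only when the Fourier number Fo = alpha*dt/dx^2
# stays at or below 0.5; above that, node temperatures oscillate and diverge.
def fourier_number(k, rho, cp, dx, dt):
    alpha = k / (rho * cp)      # thermal diffusivity, m^2/s
    return alpha * dt / dx**2

dt = 60.0                       # one-minute timestep, s
# 1 mm steel sheet: high conductivity, tiny thickness
metal = fourier_number(k=45.0, rho=7800.0, cp=500.0, dx=0.001, dt=dt)
# 10 cm brick layer for comparison
brick = fourier_number(k=0.7, rho=1900.0, cp=840.0, dx=0.10, dt=dt)
print(f"metal Fo = {metal:.0f} (unstable), brick Fo = {brick:.4f} (stable)")
```

The thin metal layer overshoots the limit by three orders of magnitude, which lines up with the observation that thickening the layers (or swapping the material) made the surface converge to a reasonable temperature.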
1.0
defect
1
43,747
11,826,032,309
IssuesEvent
2020-03-21 15:54:10
DependencyTrack/dependency-track
https://api.github.com/repos/DependencyTrack/dependency-track
closed
Project Findings Audited %: Incorrect values (revisited)
defect p3 pending release
### Current Behavior: Project Overview pages display a "Findings Audited %" metric. This metric seems to be calculated incorrectly. I have not tested whether the same metric is correct at the portfolio level. #### Example: * Audit tab displays 13 vulnerabilities, of which 3 are audited. * Overview reports count of 10 vulnerabilities (correct) * Overview reports count of 3 for "Findings Audited" (correct) * Overview reports that "Findings Audited %" = 30% (incorrect). The actual percentage is 23.07% ( ((13 - 3)/13)) * 100) This was previously reported as item 3 in #377 ### Steps to Reproduce: * Inspect a project that has a number of vulnerabilities. Audit and supress 1 vulnerability and check that audit % is displayed correctly. * Repeat for audit of 2 vulnerabilities, etc. ### Expected Behavior: Findings Audited % should display correct metric. ### Environment: - Dependency-Track Version: 3.7.1 - Distribution: [ Executable WAR ] - BOM Format & Version: CycloneDX 1.1 - Database Server: [ PostgreSQL ] - Browser: Firefox 71.0
1.0
Project Findings Audited %: Incorrect values (revisited) - ### Current Behavior: Project Overview pages display a "Findings Audited %" metric. This metric seems to be calculated incorrectly. I have not tested whether the same metric is correct at the portfolio level. #### Example: * Audit tab displays 13 vulnerabilities, of which 3 are audited. * Overview reports count of 10 vulnerabilities (correct) * Overview reports count of 3 for "Findings Audited" (correct) * Overview reports that "Findings Audited %" = 30% (incorrect). The actual percentage is 23.07% ( ((13 - 3)/13)) * 100) This was previously reported as item 3 in #377 ### Steps to Reproduce: * Inspect a project that has a number of vulnerabilities. Audit and supress 1 vulnerability and check that audit % is displayed correctly. * Repeat for audit of 2 vulnerabilities, etc. ### Expected Behavior: Findings Audited % should display correct metric. ### Environment: - Dependency-Track Version: 3.7.1 - Distribution: [ Executable WAR ] - BOM Format & Version: CycloneDX 1.1 - Database Server: [ PostgreSQL ] - Browser: Firefox 71.0
defect
project findings audited incorrect values revisited current behavior project overview pages display a findings audited metric this metric seems to be calculated incorrectly i have not tested whether the same metric is correct at the portfolio level example audit tab displays vulnerabilities of which are audited overview reports count of vulnerabilities correct overview reports count of for findings audited correct overview reports that findings audited incorrect the actual percentage is this was previously reported as item in steps to reproduce inspect a project that has a number of vulnerabilities audit and supress vulnerability and check that audit is displayed correctly repeat for audit of vulnerabilities etc expected behavior findings audited should display correct metric environment dependency track version distribution bom format version cyclonedx database server browser firefox
1
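The "Findings Audited %" arithmetic described in the Dependency-Track record above can be checked with a short sketch (the `audited_percent` helper is illustrative; the sample counts are the ones quoted in the report):

```python
# Findings-audited percentage, computed the way the report above expects:
# audited findings divided by total findings. With 13 findings and 3 audited,
# the correct metric is 3/13 ~= 23.08%; a total of 10 would instead yield
# the incorrect 30% shown on the Overview page.
def audited_percent(total_findings, audited_findings):
    if total_findings == 0:
        return 0.0
    return audited_findings / total_findings * 100.0

print(round(audited_percent(13, 3), 2))  # 23.08
print(round(audited_percent(10, 3), 2))  # 30.0
```

The 30% figure matching the second call suggests the metric was dividing by the Overview's count of 10 rather than the Audit tab's count of 13.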
69,860
22,700,234,045
IssuesEvent
2022-07-05 09:58:24
hpi-swa-teaching/SVGMorph
https://api.github.com/repos/hpi-swa-teaching/SVGMorph
opened
Bezier: Self-intersecting shapes result in holes
defect
![Screenshot from 2022-07-05 11-51-55](https://user-images.githubusercontent.com/45318774/177302074-e2796843-87ce-481d-b497-0687e74ae64d.png) In the image, the radius of the shape is smaller than half the stroke width, thus the stroke boundaries form a husk. The renderer uses evenodd fill rule. Hence, these husks appear as holes. One Solution would be to convert the shape to emulate the behavior of nonzero fill rule. This would also be necessary for similar issues with the shape itself, not only the outline as well as the `fill-rule` SVG attribute.
1.0
Bezier: Self-intersecting shapes result in holes - ![Screenshot from 2022-07-05 11-51-55](https://user-images.githubusercontent.com/45318774/177302074-e2796843-87ce-481d-b497-0687e74ae64d.png) In the image, the radius of the shape is smaller than half the stroke width, thus the stroke boundaries form a husk. The renderer uses evenodd fill rule. Hence, these husks appear as holes. One Solution would be to convert the shape to emulate the behavior of nonzero fill rule. This would also be necessary for similar issues with the shape itself, not only the outline as well as the `fill-rule` SVG attribute.
defect
bezier self intersecting shapes result in holes in the image the radius of the shape is smaller than half the stroke width thus the stroke boundaries form a husk the renderer uses evenodd fill rule hence these husks appear as holes one solution would be to convert the shape to emulate the behavior of nonzero fill rule this would also be necessary for similar issues with the shape itself not only the outline as well as the fill rule svg attribute
1
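The hole-forming behavior in the SVGMorph record above comes down to the difference between the even-odd and nonzero fill rules; a minimal sketch with hypothetical signed ray-crossing counts:

```python
# For a ray cast from a query point, each crossing of the outline contributes
# +1 or -1 depending on direction. Even-odd fills when the crossing count is
# odd; nonzero fills when the signed sum is nonzero.
def filled_evenodd(crossings):
    return len(crossings) % 2 == 1

def filled_nonzero(crossings):
    return sum(crossings) != 0

# A point inside a doubly-wound region of a self-intersecting stroke outline
# sees two crossings in the same direction:
doubly_wound = [+1, +1]
print(filled_evenodd(doubly_wound))  # False -> rendered as a hole
print(filled_nonzero(doubly_wound))  # True  -> rendered as filled
```

This is why converting the shape to emulate nonzero semantics, as the record suggests, would remove the husk-shaped holes under an even-odd renderer.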
57,032
15,598,113,301
IssuesEvent
2021-03-18 17:42:10
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
opened
Intermittent test failure: " Serialization failure: 1213 Deadlock found when trying to get lock;"
Defect Needs refining
**Describe the defect** We are seeing intermittent failures of our test suite on DEV/STAGING with: > Serialization failure: 1213 Deadlock found when trying to get lock; try restarting Here is the latest example: ``` @BeforeFeature @mock_va_gov_urls # CustomDrupal\ContentModelContextCustom::enableVaGovBackendHttpClient() ... SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting ┃ transaction: INSERT INTO {cache_config} (cid, expire, created, tags, checksum, data, serialized) VALUES ``` **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Desktop (please complete the following information if relevant, or delete):** - OS: [e.g. iOS] - Browser [e.g. chrome, safari] - Version [e.g. 22] **Additional context** Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days). ## Labels (You can delete this section once it's complete) - [x] Issue type (red) (defaults to "Defect") - [ ] CMS subsystem (green) - [ ] CMS practice area (blue) - [x] CMS objective (orange) (not needed for bug tickets) - [ ] CMS-supported product (black)
1.0
Intermittent test failure: " Serialization failure: 1213 Deadlock found when trying to get lock;" - **Describe the defect** We are seeing intermittent failures of our test suite on DEV/STAGING with: > Serialization failure: 1213 Deadlock found when trying to get lock; try restarting Here is the latest example: ``` @BeforeFeature @mock_va_gov_urls # CustomDrupal\ContentModelContextCustom::enableVaGovBackendHttpClient() ... SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting ┃ transaction: INSERT INTO {cache_config} (cid, expire, created, tags, checksum, data, serialized) VALUES ``` **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Desktop (please complete the following information if relevant, or delete):** - OS: [e.g. iOS] - Browser [e.g. chrome, safari] - Version [e.g. 22] **Additional context** Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days). ## Labels (You can delete this section once it's complete) - [x] Issue type (red) (defaults to "Defect") - [ ] CMS subsystem (green) - [ ] CMS practice area (blue) - [x] CMS objective (orange) (not needed for bug tickets) - [ ] CMS-supported product (black)
defect
intermittent test failure serialization failure deadlock found when trying to get lock describe the defect we are seeing intermittent failures of our test suite on dev staging with serialization failure deadlock found when trying to get lock try restarting here is the latest example beforefeature mock va gov urls customdrupal contentmodelcontextcustom enablevagovbackendhttpclient sqlstate serialization failure deadlock found when trying to get lock try restarting ┃ transaction insert into cache config cid expire created tags checksum data serialized values to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem desktop please complete the following information if relevant or delete os browser version additional context add any other context about the problem here reach out to the product managers to determine if it should be escalated as critical prevents users from accomplishing their work with no known workaround and needs to be addressed within business days labels you can delete this section once it s complete issue type red defaults to defect cms subsystem green cms practice area blue cms objective orange not needed for bug tickets cms supported product black
1
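Intermittent 1213 deadlocks like the one in the record above are commonly worked around with a retry wrapper around the conflicting statement; a minimal sketch (the function names and the stand-in `RuntimeError` are illustrative, not taken from the CMS codebase):

```python
import time

# Retry a callable when it fails with a MySQL/MariaDB 1213 deadlock,
# backing off between attempts; any other error is re-raised immediately.
def with_deadlock_retry(fn, attempts=3, backoff=0.05):
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for the driver's 1213 error
            if "1213" not in str(exc) or attempt == attempts - 1:
                raise
            time.sleep(backoff * (attempt + 1))

calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("SQLSTATE[40001]: 1213 Deadlock found")
    return "ok"

print(with_deadlock_retry(flaky_insert))  # ok
```

Retrying is what the server's own error text ("try restarting transaction") recommends, though it only masks the contention rather than removing it.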
19,199
3,756,361,354
IssuesEvent
2016-03-13 09:04:53
QubesOS/qubes-issues
https://api.github.com/repos/QubesOS/qubes-issues
reopened
Split-GPG is incompatible with Tor Birdy
C: other help wanted r3.0-dom0-testing r3.0-fc20-testing r3.0-fc21-testing r3.0-fc22-testing r3.0-fc23-testing r3.0-jessie-testing r3.0-wheezy-testing r3.1-dom0-stable r3.1-fc21-stable r3.1-fc22-stable r3.1-fc23-stable r3.1-jessie-testing r3.1-stretch-testing r3.1-wheezy-testing
Attempting to use Split-GPG and Tor Birdy at the same time results in no GPG notifications in Thunderbird at all when viewing signed or encrypted messages.
10.0
Split-GPG is incompatible with Tor Birdy - Attempting to use Split-GPG and Tor Birdy at the same time results in no GPG notifications in Thunderbird at all when viewing signed or encrypted messages.
non_defect
split gpg is incompatible with tor birdy attempting to use split gpg and tor birdy at the same time results in no gpg notifications in thunderbird at all when viewing signed or encrypted messages
0
69,776
22,666,853,032
IssuesEvent
2022-07-03 02:30:11
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Returning null from transactionCoroutine throws NoSuchElementException
T: Defect
### Expected behavior Should be able to return null from the transactional block, which propagates back out as the return value of transactionCoroutine. ### Actual behavior Throws: ``` java.util.NoSuchElementException: No value received via onNext for awaitFirst at ???(Coroutine boundary.?(?) at DatabaseTest$selectOne$1$1.invokeSuspend(DatabaseTest.kt:34) at DatabaseTest$selectOne$1.invokeSuspend(DatabaseTest.kt:33) Caused by: java.util.NoSuchElementException: No value received via onNext for awaitFirst at kotlinx.coroutines.reactive.AwaitKt$awaitOne$2$1.onComplete(Await.kt:270) ``` This is caused by the fact that the mono function from kotlinx-coroutines-reactor has the functionality "If the result of block is null, MonoSink.success is invoked without a value." and transactionCoroutine uses .awaitFirst() which always expects a value. ### Steps to reproduce the problem ```kotlin val dsl = DSL.using(connFac, SQLDialect.POSTGRES) runBlocking { val none = dsl.transactionCoroutine { null } println(none) } ``` Should print "null", throws instead ### Versions - jOOQ: 3.17.1 - Java: Openjdk 17 - Kotlin: 1.7.0
1.0
Returning null from transactionCoroutine throws NoSuchElementException - ### Expected behavior Should be able to return null from the transactional block, which propagates back out as the return value of transactionCoroutine. ### Actual behavior Throws: ``` java.util.NoSuchElementException: No value received via onNext for awaitFirst at ???(Coroutine boundary.?(?) at DatabaseTest$selectOne$1$1.invokeSuspend(DatabaseTest.kt:34) at DatabaseTest$selectOne$1.invokeSuspend(DatabaseTest.kt:33) Caused by: java.util.NoSuchElementException: No value received via onNext for awaitFirst at kotlinx.coroutines.reactive.AwaitKt$awaitOne$2$1.onComplete(Await.kt:270) ``` This is caused by the fact that the mono function from kotlinx-coroutines-reactor has the functionality "If the result of block is null, MonoSink.success is invoked without a value." and transactionCoroutine uses .awaitFirst() which always expects a value. ### Steps to reproduce the problem ```kotlin val dsl = DSL.using(connFac, SQLDialect.POSTGRES) runBlocking { val none = dsl.transactionCoroutine { null } println(none) } ``` Should print "null", throws instead ### Versions - jOOQ: 3.17.1 - Java: Openjdk 17 - Kotlin: 1.7.0
defect
returning null from transactioncoroutine throws nosuchelementexception expected behavior should be able to return null from the transactional block which propagates back out as the return value of transactioncoroutine actual behavior throws java util nosuchelementexception no value received via onnext for awaitfirst at coroutine boundary at databasetest selectone invokesuspend databasetest kt at databasetest selectone invokesuspend databasetest kt caused by java util nosuchelementexception no value received via onnext for awaitfirst at kotlinx coroutines reactive awaitkt awaitone oncomplete await kt this is caused by the fact that the mono function from kotlinx coroutines reactor has the functionality if the result of block is null monosink success is invoked without a value and transactioncoroutine uses awaitfirst which always expects a value steps to reproduce the problem kotlin val dsl dsl using connfac sqldialect postgres runblocking val none dsl transactioncoroutine null println none should print null throws instead versions jooq java openjdk kotlin
1
14,225
2,794,131,123
IssuesEvent
2015-05-11 15:09:02
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
Sparse matrix slice segmentation fault
defect scipy.sparse
I have a large csr_matrix with the following parameters: In [5]: a.shape Out[5]: (360417, 360417) In [6]: a.nnz Out[6]: 15464020 I generated it the following way: a = sp.sparse.csr_matrix((values, (i_indices, j_indices)), shape=shape, dtype=np.float32) where values is a list holding the values, i_indices and j_indeces are also lists holding the corresponding indices. Everything works fine, I can access all values by e.g., a[0,0] or a[0,3] and so on. I can also do all operations like e.g., a.floor(). However, as soon as I slice the matrix, I get a segmentation fault e.g., causes by a[0,:2]. Any idea?
1.0
Sparse matrix slice segmentation fault - I have a large csr_matrix with the following parameters: In [5]: a.shape Out[5]: (360417, 360417) In [6]: a.nnz Out[6]: 15464020 I generated it the following way: a = sp.sparse.csr_matrix((values, (i_indices, j_indices)), shape=shape, dtype=np.float32) where values is a list holding the values, i_indices and j_indeces are also lists holding the corresponding indices. Everything works fine, I can access all values by e.g., a[0,0] or a[0,3] and so on. I can also do all operations like e.g., a.floor(). However, as soon as I slice the matrix, I get a segmentation fault e.g., causes by a[0,:2]. Any idea?
defect
sparse matrix slice segmentation fault i have a large csr matrix with the following parameters in a shape out in a nnz out i generated it the following way a sp sparse csr matrix values i indices j indices shape shape dtype np where values is a list holding the values i indices and j indeces are also lists holding the corresponding indices everything works fine i can access all values by e g a or a and so on i can also do all operations like e g a floor however as soon as i slice the matrix i get a segmentation fault e g causes by a any idea
1
73,162
24,480,166,330
IssuesEvent
2022-10-08 18:19:09
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
`test_powm1` failing in CI on Windows
defect scipy.special
The weekly wheel builds currently show a single failure on Windows, and that failure is consistent for all supported Python versions. Latest build log of weekly scheduled builds: https://github.com/scipy/scipy/actions/runs/3209882330 ``` _______________ test_powm1[-1.25-751.0--6.017550852453444e+72] ________________ Users\runneradmin\AppData\Local\Temp\cibw-run-owgacz5n\cp310-win_amd64\venv-test\lib\site-packages\scipy\special\tests\test_powm1.py:27: in test_powm1 assert_allclose(p, expected, rtol=1e-15) E AssertionError: E Not equal to tolerance rtol=1e-15, atol=0 E E Mismatched elements: 1 / 1 (100%) E Max absolute difference: 9.4156526e+57 E Max relative difference: 1.56469847e-15 E x: array(-6.017551e+72) E y: array(-6.017551e+72) expected = -6.017550852453444e+72 p = -6.017550852453454e+72 x = -1.25 y = 751.0 ``` @WarrenWeckesser I assume that this is due to gh-16017. Would you be able to have a look?
1.0
`test_powm1` failing in CI on Windows - The weekly wheel builds currently show a single failure on Windows, and that failure is consistent for all supported Python versions. Latest build log of weekly scheduled builds: https://github.com/scipy/scipy/actions/runs/3209882330 ``` _______________ test_powm1[-1.25-751.0--6.017550852453444e+72] ________________ Users\runneradmin\AppData\Local\Temp\cibw-run-owgacz5n\cp310-win_amd64\venv-test\lib\site-packages\scipy\special\tests\test_powm1.py:27: in test_powm1 assert_allclose(p, expected, rtol=1e-15) E AssertionError: E Not equal to tolerance rtol=1e-15, atol=0 E E Mismatched elements: 1 / 1 (100%) E Max absolute difference: 9.4156526e+57 E Max relative difference: 1.56469847e-15 E x: array(-6.017551e+72) E y: array(-6.017551e+72) expected = -6.017550852453444e+72 p = -6.017550852453454e+72 x = -1.25 y = 751.0 ``` @WarrenWeckesser I assume that this is due to gh-16017. Would you be able to have a look?
defect
test failing in ci on windows the weekly wheel builds currently show a single failure on windows and that failure is consistent for all supported python versions latest build log of weekly scheduled builds test users runneradmin appdata local temp cibw run win venv test lib site packages scipy special tests test py in test assert allclose p expected rtol e assertionerror e not equal to tolerance rtol atol e e mismatched elements e max absolute difference e max relative difference e x array e y array expected p x y warrenweckesser i assume that this is due to gh would you be able to have a look
1
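The `assert_allclose` failure quoted in the record above is a pure tolerance question; a minimal sketch of the relative comparison, using the two values from the log:

```python
# assert_allclose(p, expected, rtol=1e-15) boils down to this comparison.
def allclose_rel(actual, expected, rtol):
    return abs(actual - expected) <= rtol * abs(expected)

p = -6.017550852453454e+72         # value computed on Windows
expected = -6.017550852453444e+72  # reference value in the test

print(allclose_rel(p, expected, rtol=1e-15))  # False: ~1.56e-15 relative error
print(allclose_rel(p, expected, rtol=1e-14))  # True: slightly looser rtol passes
```

The relative error of about 1.56e-15 sits just above the test's rtol of 1e-15, which is why the failure is platform-specific rather than a gross numerical bug.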
40,934
10,232,199,951
IssuesEvent
2019-08-18 15:38:54
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
Bug in scipy.special.hyp1f1
defect scipy.special
I've been checking values in Scipy's implementation of hyp1f1 against other implementations and I noticed some discrepancies where Scipy's solution seems to be off. scipy.special.hyp1f1(a,b,c) vs M(a,b,c) from (http://keisan.casio.com/exec/system/1349143651). In particular, take small values of a, b, and c, say a = 1, b = 5, and c = 0.01. Both equal: 1.002003338101197 (with differences in the remaining digits) However change a to 50 and b to 100. a = 50, b = 100, and c = 0.01. scipy.special.hyp1f1(a,b,c) = 1774259915037.1118 M(a,b,c) = 1.005012645242146341004 I've double checked on Matlab and the latter is correct. In general, higher values of a and b give the scipy implementation greater trouble. This is using scipy version 0.13.3. There was an earlier report similar in nature (https://github.com/scipy/scipy/issues/946) which has since been closed.
1.0
Bug in scipy.special.hyp1f1 - I've been checking values in Scipy's implementation of hyp1f1 against other implementations and I noticed some discrepancies where Scipy's solution seems to be off. scipy.special.hyp1f1(a,b,c) vs M(a,b,c) from (http://keisan.casio.com/exec/system/1349143651). In particular, take small values of a, b, and c, say a = 1, b = 5, and c = 0.01. Both equal: 1.002003338101197 (with differences in the remaining digits) However change a to 50 and b to 100. a = 50, b = 100, and c = 0.01. scipy.special.hyp1f1(a,b,c) = 1774259915037.1118 M(a,b,c) = 1.005012645242146341004 I've double checked on Matlab and the latter is correct. In general, higher values of a and b give the scipy implementation greater trouble. This is using scipy version 0.13.3. There was an earlier report similar in nature (https://github.com/scipy/scipy/issues/946) which has since been closed.
defect
bug in scipy special i ve been checking values in scipy s implementation of against other implementations and i noticed some discrepancies where scipy s solution seems to be off scipy special a b c vs m a b c from in particular take small values of a b and c say a b and c both equal with differences in the remaining digits however change a to and b to a b and c scipy special a b c m a b c i ve double checked on matlab and the latter is correct in general higher values of a and b give the scipy implementation greater trouble this is using scipy version there was an earlier report similar in nature which has since been closed
1
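The reference values quoted in the hyp1f1 record above are easy to sanity-check with a direct Taylor series for M(a, b, z), which converges quickly for these small z (a naive sketch for checking values only, not a robust implementation):

```python
# M(a, b, z) = sum over n >= 0 of (a)_n / (b)_n * z^n / n!, summed term by term
# via the ratio term_{n+1} / term_n = (a + n) / (b + n) * z / (n + 1).
def hyp1f1_series(a, b, z, terms=200):
    total = 1.0
    term = 1.0
    for n in range(terms):
        term *= (a + n) / (b + n) * z / (n + 1)
        total += term
    return total

print(hyp1f1_series(1, 5, 0.01))     # ~1.002003338101...
print(hyp1f1_series(50, 100, 0.01))  # ~1.005012645242..., nowhere near 1.77e12
```

Both sums agree with the Keisan/Matlab values quoted in the report, confirming that the 1774259915037.1118 result was the outlier.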
108,677
13,646,013,664
IssuesEvent
2020-09-25 22:05:04
near/near-explorer
https://api.github.com/repos/near/near-explorer
closed
Show avg block time
Item Design New Feature Priority 2
Block time is very important property for blockchain that displays how fast tx will be processed and finalized. Would be great to show what has been average block time over last 10 minutes for example.
1.0
Show avg block time - Block time is very important property for blockchain that displays how fast tx will be processed and finalized. Would be great to show what has been average block time over last 10 minutes for example.
non_defect
show avg block time block time is very important property for blockchain that displays how fast tx will be processed and finalized would be great to show what has been average block time over last minutes for example
0
30,798
6,288,107,882
IssuesEvent
2017-07-19 16:14:38
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
SelectOneMenu not working after client side disable/enable
defect
## 1) Environment PrimeFaces version: 6.1.2 ## 2) Expected behavior After disabling and enabling a SelectOneMenu on client side via JavaScript it should work as before. ## 3) Actual behavior When disabling a SelectOneMenu via JavaScript and enabling it again, mouse events (mouse over, click) on the menu's item are not working anymore. ## 4) Sample XHTML ... <p:selectOneMenu widgetVar="selectOneMenu" ...> ... JavaScript: PF('selectOneMenu').disable(); ... PF('selectOneMenu').enable(); ## 5) Analyses Binding of item's events has been moved out of the bindEvents method to a separate method bindItemEvents, but this isn't called in the enable method.
1.0
SelectOneMenu not working after client side disable/enable - ## 1) Environment PrimeFaces version: 6.1.2 ## 2) Expected behavior After disabling and enabling a SelectOneMenu on client side via JavaScript it should work as before. ## 3) Actual behavior When disabling a SelectOneMenu via JavaScript and enabling it again, mouse events (mouse over, click) on the menu's item are not working anymore. ## 4) Sample XHTML ... <p:selectOneMenu widgetVar="selectOneMenu" ...> ... JavaScript: PF('selectOneMenu').disable(); ... PF('selectOneMenu').enable(); ## 5) Analyses Binding of item's events has been moved out of the bindEvents method to a separate method bindItemEvents, but this isn't called in the enable method.
defect
selectonemenu not working after client side disable enable environment primefaces version expected behavior after disabling and enabling a selectonemenu on client side via javascript it should work as before actual behavior when disabling a selectonemenu via javascript and enabling it again mouse events mouse over click on the menu s item are not working anymore sample xhtml javascript pf selectonemenu disable pf selectonemenu enable analyses binding of item s events has been moved out of the bindevents method to a separate method binditemevents but this isn t called in the enable method
1
190,835
14,580,108,357
IssuesEvent
2020-12-18 08:36:31
rollbar/terraform-provider-rollbar
https://api.github.com/repos/rollbar/terraform-provider-rollbar
closed
Explore `go-vcr` for testing
question testing
Potentially [`go-vcr`](https://github.com/dnaeon/go-vcr) could be used to make a set of blazing fast pseudo-acceptance tests. Record the acceptance test suite run against the real API. Then the acceptance tests can be run against the playback.
1.0
Explore `go-vcr` for testing - Potentially [`go-vcr`](https://github.com/dnaeon/go-vcr) could be used to make a set of blazing fast pseudo-acceptance tests. Record the acceptance test suite run against the real API. Then the acceptance tests can be run against the playback.
non_defect
explore go vcr for testing potentially could be used to make a set of blazing fast pseudo acceptance tests record the acceptance test suite run against the real api then the acceptance tests can be run against the playback
0
2,135
2,603,976,915
IssuesEvent
2015-02-24 19:01:44
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
沈阳人乳头状病毒感染
auto-migrated Priority-Medium Type-Defect
``` 沈阳人乳头状病毒感染〓沈陽軍區政治部醫院性病〓TEL:024-3 1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。� ��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:24
1.0
沈阳人乳头状病毒感染 - ``` 沈阳人乳头状病毒感染〓沈陽軍區政治部醫院性病〓TEL:024-3 1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。� ��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:24
defect
沈阳人乳头状病毒感染 沈阳人乳头状病毒感染〓沈陽軍區政治部醫院性病〓tel: 〓 , 。� �� 。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 original issue reported on code google com by gmail com on jun at
1
120,618
17,644,243,825
IssuesEvent
2021-08-20 02:02:25
fbennets/HCLC-GDPR-Bot
https://api.github.com/repos/fbennets/HCLC-GDPR-Bot
opened
CVE-2021-29558 (High) detected in tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl
security vulnerability
## CVE-2021-29558 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: HCLC-GDPR-Bot/requirements.txt</p> <p>Path to vulnerable library: HCLC-GDPR-Bot/requirements.txt</p> <p> Dependency Hierarchy: - tensorflow_addons-0.7.1-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library) - :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. An attacker can cause a heap buffer overflow in `tf.raw_ops.SparseSplit`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/699bff5d961f0abfde8fa3f876e6d241681fbef8/tensorflow/core/util/sparse/sparse_tensor.h#L528-L530) accesses an array element based on a user controlled offset. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. 
<p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29558>CVE-2021-29558</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-mqh2-9wrp-vx84">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-mqh2-9wrp-vx84</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-29558 (High) detected in tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-29558 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: HCLC-GDPR-Bot/requirements.txt</p> <p>Path to vulnerable library: HCLC-GDPR-Bot/requirements.txt</p> <p> Dependency Hierarchy: - tensorflow_addons-0.7.1-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library) - :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. An attacker can cause a heap buffer overflow in `tf.raw_ops.SparseSplit`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/699bff5d961f0abfde8fa3f876e6d241681fbef8/tensorflow/core/util/sparse/sparse_tensor.h#L528-L530) accesses an array element based on a user controlled offset. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. 
<p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29558>CVE-2021-29558</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-mqh2-9wrp-vx84">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-mqh2-9wrp-vx84</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file hclc gdpr bot requirements txt path to vulnerable library hclc gdpr bot requirements txt dependency hierarchy tensorflow addons whl root library x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning an attacker can cause a heap buffer overflow in tf raw ops sparsesplit this is because the implementation accesses an array element based on a user controlled offset the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
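The CVE-2021-29558 records above describe a heap buffer overflow in `tf.raw_ops.SparseSplit`: the implementation "accesses an array element based on a user controlled offset". The general shape of the fix for this vulnerability class is to validate the offset before the access. A minimal Python sketch of that pattern (not TensorFlow's actual patch, whose details are in the linked GHSA advisory):

```python
def safe_element(values, offset):
    """Read values[offset], rejecting out-of-range, user-controlled offsets
    instead of reading past the end of the buffer."""
    if not 0 <= offset < len(values):
        raise IndexError(f"offset {offset} out of range for length {len(values)}")
    return values[offset]

print(safe_element([3, 1, 4], 2))  # → 4
```

In C++ the same check would happen before the raw pointer arithmetic; the point is that any index derived from user input needs a bounds check before it reaches memory.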
756,350
26,467,679,010
IssuesEvent
2023-01-17 02:33:24
jbx-protocol/juice-interface
https://api.github.com/repos/jbx-protocol/juice-interface
closed
Font size on v1 activity feed for primary number is inconsistent
V1 priority:3 ux:general
## Summary Distributed Funds number is smaller than Distributed Reserved Tokens number. ## Relevant logs and/or screenshots ![image](https://user-images.githubusercontent.com/18723426/212687313-4511de8d-90ba-409e-ace4-037e7dd2fca2.png) ## Discord link https://discord.com/channels/775859454780244028/1064375393026588713/1064375397287985182
1.0
Font size on v1 activity feed for primary number is inconsistent - ## Summary Distributed Funds number is smaller than Distributed Reserved Tokens number. ## Relevant logs and/or screenshots ![image](https://user-images.githubusercontent.com/18723426/212687313-4511de8d-90ba-409e-ace4-037e7dd2fca2.png) ## Discord link https://discord.com/channels/775859454780244028/1064375393026588713/1064375397287985182
non_defect
font size on activity feed for primary number is inconsistent summary distributed funds number is smaller than distributed reserved tokens number relevant logs and or screenshots discord link
0
47,348
13,056,133,993
IssuesEvent
2020-07-30 03:45:41
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range (Trac #390)
Migrated from Trac defect glshovel
GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit. If for example a hitseries has two hits, 1 at 10 microseconds and one at 100 microseconds, and we put the t_min to 50 and t_max to 200, no hit will be rendered for that DOM. The same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track. Migrated from https://code.icecube.wisc.edu/ticket/390 ```json { "status": "closed", "changetime": "2013-11-21T22:34:18", "description": "GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit.\n\nIf for example a hitseries has two hits, 1 at 10 microseconds and one\nat 100 microseconds, and we put the t_min to 50 and t_max to 200,\nno hit will be rendered for that DOM.\n\n\nThe same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track.\n\n", "reporter": "gluesenkamp", "cc": "", "resolution": "fixed", "_ts": "1385073258000000", "component": "glshovel", "summary": "GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range", "priority": "normal", "keywords": "", "time": "2012-04-26T15:11:55", "milestone": "", "owner": "olivas", "type": "defect" } ```
1.0
GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range (Trac #390) - GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit. If for example a hitseries has two hits, 1 at 10 microseconds and one at 100 microseconds, and we put the t_min to 50 and t_max to 200, no hit will be rendered for that DOM. The same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track. Migrated from https://code.icecube.wisc.edu/ticket/390 ```json { "status": "closed", "changetime": "2013-11-21T22:34:18", "description": "GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit.\n\nIf for example a hitseries has two hits, 1 at 10 microseconds and one\nat 100 microseconds, and we put the t_min to 50 and t_max to 200,\nno hit will be rendered for that DOM.\n\n\nThe same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track.\n\n", "reporter": "gluesenkamp", "cc": "", "resolution": "fixed", "_ts": "1385073258000000", "component": "glshovel", "summary": "GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range", "priority": "normal", "keywords": "", "time": "2012-04-26T15:11:55", "milestone": "", "owner": "olivas", "type": "defect" } ```
defect
glshovel does not correctly show pulses launches if the viewing timewindow is not in range trac glshovel has problems with long hitseries on a any dom when the viewing time excludes the first hit if for example a hitseries has two hits at microseconds and one at microseconds and we put the t min to and t max to no hit will be rendered for that dom the same is true for domlaunches and for example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event you have to back in time often quite far to be able to see the track migrated from json status closed changetime description glshovel has problems with long hitseries on a any dom when the viewing time excludes the first hit n nif for example a hitseries has two hits at microseconds and one nat microseconds and we put the t min to and t max to nno hit will be rendered for that dom n n nthe same is true for domlaunches and for example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event you have to back in time often quite far to be able to see the track n n reporter gluesenkamp cc resolution fixed ts component glshovel summary glshovel does not correctly show pulses launches if the viewing timewindow is not in range priority normal keywords time milestone owner olivas type defect
1
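The GLShovel ticket above reduces to a small filtering bug: a per-DOM hit series should be clipped to the viewing window hit by hit, not dropped wholesale when its first hit falls outside the window. A speculative Python reconstruction of the reported behavior (the real renderer is C++; the hit times are the ticket's 10 µs / 100 µs example with a 50–200 µs window):

```python
def visible_hits(hits, t_min, t_max):
    """Correct behavior: keep every hit that falls inside the viewing window."""
    return [t for t in hits if t_min <= t <= t_max]

def buggy_visible_hits(hits, t_min, t_max):
    """Reported behavior: the whole DOM is skipped when its first hit
    lies outside the window, even if later hits are in range."""
    if not hits or not (t_min <= hits[0] <= t_max):
        return []
    return [t for t in hits if t_min <= t <= t_max]

hits = [10.0, 100.0]                           # microseconds
print(visible_hits(hits, 50.0, 200.0))         # → [100.0]
print(buggy_visible_hits(hits, 50.0, 200.0))   # → []
```

The second hit at 100 µs is inside the window, so the correct filter renders it; the buggy one hides the DOM entirely, which matches the ticket's description of tracks vanishing when the neutrino arrives long before the triggered event.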
13,127
8,804,900,445
IssuesEvent
2018-12-26 16:17:51
spring-cloud/spring-cloud-dataflow
https://api.github.com/repos/spring-cloud/spring-cloud-dataflow
closed
Provide example application that shows LDAP authentication in action
security-2.x-upgrade
Provide example application that shows LDAP authentication in action using the UAA. child of spring-cloud/spring-cloud-dataflow#2574
True
Provide example application that shows LDAP authentication in action - Provide example application that shows LDAP authentication in action using the UAA. child of spring-cloud/spring-cloud-dataflow#2574
non_defect
provide example application that shows ldap authentication in action provide example application that shows ldap authentication in action using the uaa child of spring cloud spring cloud dataflow
0
77,195
26,833,994,188
IssuesEvent
2023-02-02 17:59:51
idaholab/moose
https://api.github.com/repos/idaholab/moose
closed
Nearest node transfer either errors or gives bad results for FIRST monomials
C: Framework T: defect P: normal
## Bug Description The `MultiAppNearestNodeTransfer` does not behave as expected when transferring data to/from FIRST MONOMIAL variables. The behavior is slightly different for transfers to/from FIRST MONOMIAL - at the end of this issue, I've attached input files you can use to reproduce this behavior. I've considered the following transfers, and here's a summary of the behavior I see: |Source Variable | Target Variable | Behavior| |----------------|----------------|----------| | FIRST nodal | CONSTANT elemental | normal| | FIRST nodal | FIRST elemental | Fails due to bounding box| | FIRST elemental | FIRST nodal | Strange-looking transfer results| | CONSTANT elemental | FIRST nodal | normal| |CONSTANT elemental | FIRST elemental | Fails due to bounding box| |FIRST elemental | CONSTANT elemental | Strange-looking transfer results| |CONSTANT elemental | CONSTANT elemental | normal| |FIRST elemental | FIRST elemental | Fails due to bounding box| ### Sending data _to_ FIRST MONOMIAL For sending data to a FIRST monomial, regardless of whether the source variable is FIRST nodal, FIRST elemental, or CONSTANT elemental, you get a bounding box failure like this: ``` *** ERROR *** The following error occurred in the object "to_sub_nodal_to_elemental_first", of type "MultiAppNearestNodeTransfer". In to_sub_nodal_to_elemental_first: No candidate BoundingBoxes found for Elem 8, centroid = (x,y,z)=(0.114148, 0.399319, 0) ``` ### Sending data _from_ FIRST MONOMIAL For sending data from a FIRST monomial, regardless of whether the target variable is FIRST nodal or CONSTANT elemental, the transferred variable just doesn't look correct in the receiver app. 
In the example attached below, here is the FIRST monomial in the sending app: ![Screen Shot 2021-03-08 at 11 09 02 AM](https://user-images.githubusercontent.com/17039662/110355336-ba5cd980-7ffe-11eb-9e4e-4f73146460ab.png) And here are the received variables: **FIRST MONOMIAL -> FIRST NODAL** ![Screen Shot 2021-03-08 at 11 09 54 AM](https://user-images.githubusercontent.com/17039662/110355422-d3658a80-7ffe-11eb-9317-c99794746ed6.png) ** FIRST MONOMIAL-> CONSTANT MONOMIAL ** ![Screen Shot 2021-03-08 at 11 09 12 AM](https://user-images.githubusercontent.com/17039662/110355505-ee37ff00-7ffe-11eb-94bb-f63f7236b26e.png) ## Steps to Reproduce Run the attached input files. To see the bounding box failures, uncomment the three transfers to FIRST MONOMIAL in the `[Transfers]` block. ## Impact If I interpreted these results correctly, then there is incorrect transfer behavior for FIRST MONOMIALs - for sending from FIRST, the received values don't look right, while sending to FIRST fails. There are currently no tests for transfers with `MultiAppNearestNodeTransfer` that use FIRST MONOMIALs as either the target or source variable, so apps using this feature might have incorrect transfers.
1.0
Nearest node transfer either errors or gives bad results for FIRST monomials - ## Bug Description The `MultiAppNearestNodeTransfer` does not behave as expected when transferring data to/from FIRST MONOMIAL variables. The behavior is slightly different for transfers to/from FIRST MONOMIAL - at the end of this issue, I've attached input files you can use to reproduce this behavior. I've considered the following transfers, and here's a summary of the behavior I see: |Source Variable | Target Variable | Behavior| |----------------|----------------|----------| | FIRST nodal | CONSTANT elemental | normal| | FIRST nodal | FIRST elemental | Fails due to bounding box| | FIRST elemental | FIRST nodal | Strange-looking transfer results| | CONSTANT elemental | FIRST nodal | normal| |CONSTANT elemental | FIRST elemental | Fails due to bounding box| |FIRST elemental | CONSTANT elemental | Strange-looking transfer results| |CONSTANT elemental | CONSTANT elemental | normal| |FIRST elemental | FIRST elemental | Fails due to bounding box| ### Sending data _to_ FIRST MONOMIAL For sending data to a FIRST monomial, regardless of whether the source variable is FIRST nodal, FIRST elemental, or CONSTANT elemental, you get a bounding box failure like this: ``` *** ERROR *** The following error occurred in the object "to_sub_nodal_to_elemental_first", of type "MultiAppNearestNodeTransfer". In to_sub_nodal_to_elemental_first: No candidate BoundingBoxes found for Elem 8, centroid = (x,y,z)=(0.114148, 0.399319, 0) ``` ### Sending data _from_ FIRST MONOMIAL For sending data from a FIRST monomial, regardless of whether the target variable is FIRST nodal or CONSTANT elemental, the transferred variable just doesn't look correct in the receiver app. 
In the example attached below, here is the FIRST monomial in the sending app: ![Screen Shot 2021-03-08 at 11 09 02 AM](https://user-images.githubusercontent.com/17039662/110355336-ba5cd980-7ffe-11eb-9e4e-4f73146460ab.png) And here are the received variables: **FIRST MONOMIAL -> FIRST NODAL** ![Screen Shot 2021-03-08 at 11 09 54 AM](https://user-images.githubusercontent.com/17039662/110355422-d3658a80-7ffe-11eb-9317-c99794746ed6.png) ** FIRST MONOMIAL-> CONSTANT MONOMIAL ** ![Screen Shot 2021-03-08 at 11 09 12 AM](https://user-images.githubusercontent.com/17039662/110355505-ee37ff00-7ffe-11eb-94bb-f63f7236b26e.png) ## Steps to Reproduce Run the attached input files. To see the bounding box failures, uncomment the three transfers to FIRST MONOMIAL in the `[Transfers]` block. ## Impact If I interpreted these results correctly, then there is incorrect transfer behavior for FIRST MONOMIALs - for sending from FIRST, the received values don't look right, while sending to FIRST fails. There are currently no tests for transfers with `MultiAppNearestNodeTransfer` that use FIRST MONOMIALs as either the target or source variable, so apps using this feature might have incorrect transfers.
defect
nearest node transfer either errors or gives bad results for first monomials bug description the multiappnearestnodetransfer does not behave as expected when transferring data to from first monomial variables the behavior is slightly different for transfers to from first monomial at the end of this issue i ve attached input files you can use to reproduce this behavior i ve considered the following transfers and here s a summary of the behavior i see source variable target variable behavior first nodal constant elemental normal first nodal first elemental fails due to bounding box first elemental first nodal strange looking transfer results constant elemental first nodal normal constant elemental first elemental fails due to bounding box first elemental constant elemental strange looking transfer results constant elemental constant elemental normal first elemental first elemental fails due to bounding box sending data to first monomial for sending data to a first monomial regardless of whether the source variable is first nodal first elemental or constant elemental you get a bounding box failure like this error the following error occurred in the object to sub nodal to elemental first of type multiappnearestnodetransfer in to sub nodal to elemental first no candidate boundingboxes found for elem centroid x y z sending data from first monomial for sending data from a first monomial regardless of whether the target variable is first nodal or constant elemental the transferred variable just doesn t look correct in the receiver app in the example attached below here is the first monomial in the sending app and here are the received variables first monomial first nodal first monomial constant monomial steps to reproduce run the attached input files to see the bounding box failures uncomment the three transfers to first monomial in the block impact if i interpreted these results correctly then there is incorrect transfer behavior for first monomials for sending from first 
the received values don t look right while sending to first fails there are currently no tests for transfers with multiappnearestnodetransfer that use first monomials as either the target or source variable so apps using this feature might have incorrect transfers
1
248,812
21,075,091,230
IssuesEvent
2022-04-02 02:59:55
sourcegraph/sec-pr-audit-trail
https://api.github.com/repos/sourcegraph/sec-pr-audit-trail
opened
sourcegraph/sourcegraph#33324: "insights: add check for issue to be a code insights issue before adding to the project"
exception/test-plan sourcegraph/sourcegraph
https://github.com/sourcegraph/sourcegraph/pull/33324 "insights: add check for issue to be a code insights issue before adding to the project" **has no test plan**. Learn more about test plans in our [testing guidelines](https://docs.sourcegraph.com/dev/background-information/testing_principles#test-plans). @felixfbecker please comment in this issue with an explanation for this exception and close this issue.
1.0
sourcegraph/sourcegraph#33324: "insights: add check for issue to be a code insights issue before adding to the project" - https://github.com/sourcegraph/sourcegraph/pull/33324 "insights: add check for issue to be a code insights issue before adding to the project" **has no test plan**. Learn more about test plans in our [testing guidelines](https://docs.sourcegraph.com/dev/background-information/testing_principles#test-plans). @felixfbecker please comment in this issue with an explanation for this exception and close this issue.
non_defect
sourcegraph sourcegraph insights add check for issue to be a code insights issue before adding to the project insights add check for issue to be a code insights issue before adding to the project has no test plan learn more about test plans in our felixfbecker please comment in this issue with an explanation for this exception and close this issue
0
13,874
2,789,431,111
IssuesEvent
2015-05-08 19:21:27
orwant/google-visualization-issues
https://api.github.com/repos/orwant/google-visualization-issues
closed
Interactive time series chart displays inaccurately in Google Sites
Priority-Medium Type-Defect
Original [issue 40](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=40) created by orwant on 2009-09-01T13:51:57.000Z: <b>What steps will reproduce the problem? Please provide a link to a</b> <b>demonstration page if at all possible, or attach code.</b> I have created a simple spreadsheet to record sales v forecast data and used the Interactive time series chart gadget to display it. The chart renders correctly in the Spreadsheet but not when the chart in embedded into a Google Sites page <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> Interactive time series chart <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> No <b>What operating system and browser are you using?</b> IE 8, Firefox 2.0 <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
Interactive time series chart displays inaccurately in Google Sites - Original [issue 40](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=40) created by orwant on 2009-09-01T13:51:57.000Z: <b>What steps will reproduce the problem? Please provide a link to a</b> <b>demonstration page if at all possible, or attach code.</b> I have created a simple spreadsheet to record sales v forecast data and used the Interactive time series chart gadget to display it. The chart renders correctly in the Spreadsheet but not when the chart in embedded into a Google Sites page <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> Interactive time series chart <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> No <b>What operating system and browser are you using?</b> IE 8, Firefox 2.0 <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
defect
interactive time series chart displays inaccurately in google sites original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code i have created a simple spreadsheet to record sales v forecast data and used the interactive time series chart gadget to display it the chart renders correctly in the spreadsheet but not when the chart in embedded into a google sites page what component is this issue related to piechart linechart datatable query etc interactive time series chart are you using the test environment version if you are not sure answer no no what operating system and browser are you using ie firefox for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
1
74,840
25,351,386,924
IssuesEvent
2022-11-19 20:25:41
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
signature from "ArchZFS Bot <buildbot@archzfs.com>" is unknown trust
Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | ArchLinux Distribution Version | Kernel Version | 6.0.8-hardened1-1-hardened Architecture | x86 OpenZFS Version | zfs-2.1.6-1 <!-- Command to find OpenZFS version: Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing Trying to run any command with Arch's package manager pacman results in something along the lines of: [code] error: archzfs: signature from "ArchZFS Bot <buildbot@archzfs.com>" is unknown trust :: Synchronizing package databases... core is up to date extra is up to date community 7.2 MiB 3.16 MiB/s 00:02 [########################################] 100% archzfs 14.1 KiB 36.5 KiB/s 00:00 [########################################] 100% error: archzfs: signature from "ArchZFS Bot <buildbot@archzfs.com>" is unknown trust error: failed to synchronize all databases (invalid or corrupted database (PGP signature)) [/code] ### Describe how to reproduce the problem ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
1.0
signature from "ArchZFS Bot <buildbot@archzfs.com>" is unknown trust - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | ArchLinux Distribution Version | Kernel Version | 6.0.8-hardened1-1-hardened Architecture | x86 OpenZFS Version | zfs-2.1.6-1 <!-- Command to find OpenZFS version: Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing Trying to run any command with Arch's package manager pacman results in something along the lines of: [code] error: archzfs: signature from "ArchZFS Bot <buildbot@archzfs.com>" is unknown trust :: Synchronizing package databases... core is up to date extra is up to date community 7.2 MiB 3.16 MiB/s 00:02 [########################################] 100% archzfs 14.1 KiB 36.5 KiB/s 00:00 [########################################] 100% error: archzfs: signature from "ArchZFS Bot <buildbot@archzfs.com>" is unknown trust error: failed to synchronize all databases (invalid or corrupted database (PGP signature)) [/code] ### Describe how to reproduce the problem ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
defect
signature from archzfs bot is unknown trust thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name archlinux distribution version kernel version hardened architecture openzfs version zfs command to find openzfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing trying to run any command with arch s package manager pacman results in something along the lines of error archzfs signature from archzfs bot is unknown trust synchronizing package databases core is up to date extra is up to date community mib mib s archzfs kib kib s error archzfs signature from archzfs bot is unknown trust error failed to synchronize all databases invalid or corrupted database pgp signature describe how to reproduce the problem include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
1
2,425
25,286,354,051
IssuesEvent
2022-11-16 19:39:02
NVIDIA/spark-rapids
https://api.github.com/repos/NVIDIA/spark-rapids
closed
[BUG] 30TB query95 fails on the join with illegal memory access with 200 partitions
bug ? - Needs Triage reliability
As a follow on to https://github.com/NVIDIA/spark-rapids/issues/6983, we ran the q95 query at 30TB with the fix in this PR (https://github.com/rapidsai/cudf/pull/12079) and we ended up failing during a couple of the joins later, an inner join and a left semi. In both of those cases we are hitting instances of the overflowing strided loop issue in cuco's `static_multimap::pair_count` and `static_map::insert` (see compute-sanitizer output below). It looks like cuDF could work around this by using `int64_t` as the type in their `counting_transform_iterator` (like I did in this [proof-of-concept](https://github.com/abellina/cudf/commit/ea59ed13f2fe2ba8ec8492bc4d421189a18f1f1d)), but it is not clear if that is the right solution. This issue is for our tracking, but the fix will be in cuDF or cuCollections. The only current workaround is to increase our shuffle partitions (for example 400 partitions worked without issues). Inner join: ``` ========= Invalid __global__ read of size 4 bytes ========= at 0x500 in void cuco::detail::pair_count<(unsigned int)128, (unsigned int)2, (bool)0, thrust::transform_iterator<cudf::detail::make_pair_function<cudf::row_hasher<cudf::detail::default_hash, cudf::nullate::DYNAMIC>, int>, thrust::counting_iterator<int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>, cuda::__4::atomic<unsigned long, (cuda::std::__4::__detail::thread_scope)1>, cuco::static_multimap<unsigned int, int, (cuda::std::__4::__detail::thread_scope)1, rmm::mr::stream_allocator_adaptor<default_allocator<char>>, cuco::double_hashing<(unsigned int)2, cudf::detail::MurmurHash3_32<unsigned int>, cudf::detail::MurmurHash3_32<unsigned int>>>::device_view, cudf::detail::pair_equality<cudf::row_equality_comparator<cudf::nullate::DYNAMIC>>>(T4, T4, T5 *, T6, T7) ========= by thread (64,0,0) in block (14773391,0,0) ========= Address 0xbcd89fe80 is out of bounds ========= and is 1603745152 bytes before the nearest allocation 
at 0xc2d213400 of size 256 bytes ========= Saved host backtrace up to driver entry point at kernel launch time ========= Host Frame: [0x22da7a] ========= in /usr/lib/x86_64-linux-gnu/libcuda.so.1 ========= Host Frame: [0x3deb04b] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame: [0x3e28798] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:unsigned long cuco::static_multimap<unsigned int, int, (cuda::std::__4::__detail::thread_scope)1, rmm::mr::stream_allocator_adaptor<default_allocator<char> >, cuco::double_hashing<2u, cudf::detail::MurmurHash3_32<unsigned int>, cudf::detail::MurmurHash3_32<unsigned int> > >::pair_count<thrust::transform_iterator<cudf::detail::make_pair_function<cudf::row_hasher<cudf::detail::MurmurHash3_32, cudf::nullate::DYNAMIC>, int>, thrust::counting_iterator<int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>, cudf::detail::pair_equality<cudf::row_equality_comparator<cudf::nullate::DYNAMIC> > >(thrust::transform_iterator<cudf::detail::make_pair_function<cudf::row_hasher<cudf::detail::MurmurHash3_32, cudf::nullate::DYNAMIC>, int>, thrust::counting_iterator<int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>, thrust::transform_iterator<cudf::detail::make_pair_function<cudf::row_hasher<cudf::detail::MurmurHash3_32, cudf::nullate::DYNAMIC>, int>, thrust::counting_iterator<int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>, cudf::detail::pair_equality<cudf::row_equality_comparator<cudf::nullate::DYNAMIC> >, CUstream_st*) const [0x1e69a52] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:unsigned long cudf::detail::_GLOBAL__N__dbc92c90_12_hash_join_cu_cd66f71b::compute_join_output_size<cudf::detail::join_kind>(cudf::table_device_view, 
cudf::detail::_GLOBAL__N__dbc92c90_12_hash_join_cu_cd66f71b::compute_join_output_size<cudf::detail::join_kind>, cuco::static_multimap<unsigned int, int, cuda::std::__4::__detail::thread_scope, rmm::mr::stream_allocator_adaptor<default_allocator<char>>, cudf::table_device_view::double_hashing<unsigned int=2, cudf::detail::MurmurHash3_32<unsigned int>, cudf::detail::MurmurHash3_32>> const &, bool, cudf::null_equality, cuda::std::__4::__detail::thread_scope::cuda_stream_view) [0x1e69fcc] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:std::pair<std::unique_ptr<rmm::device_uvector<int>, std::default_delete<rmm::device_uvector>>, std::default_delete<rmm::device_uvector>> cudf::detail::_GLOBAL__N__dbc92c90_12_hash_join_cu_cd66f71b::probe_join_hash_table<cudf::detail::join_kind>(cudf::table_device_view, std::pair<std::unique_ptr<rmm::device_uvector<int>, std::default_delete<rmm::device_uvector>>, std::default_delete<rmm::device_uvector>>, cuco::static_multimap<unsigned int, int, cuda::std::__4::__detail::thread_scope, std::unique_ptr::mr::stream_allocator_adaptor<default_allocator<char>>, cudf::table_device_view::double_hashing<unsigned int=2, cudf::detail::MurmurHash3_32<unsigned int>, cudf::detail::MurmurHash3_32>> const &, bool, cudf::null_equality, std::optional<unsigned long>, std::unique_ptr::cuda_stream_view, cuda::std::__4::__detail::thread_scope::device_memory_resource*) [0x1e6f64f] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:std::pair<std::unique_ptr<rmm::device_uvector<int>, std::default_delete<rmm::device_uvector<int> > >, std::unique_ptr<rmm::device_uvector<int>, std::default_delete<rmm::device_uvector<int> > > > cudf::detail::hash_join<cudf::detail::MurmurHash3_32<unsigned int> >::probe_join_indices<(cudf::detail::join_kind)0>(cudf::table_view const&, std::optional<unsigned long>, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) const [0x1e6f7f2] ========= in /tmp/cudf1750694697214535636.so ========= Host 
Frame:std::pair<std::unique_ptr<rmm::device_uvector<int>, std::default_delete<rmm::device_uvector<int> > >, std::unique_ptr<rmm::device_uvector<int>, std::default_delete<rmm::device_uvector<int> > > > cudf::detail::hash_join<cudf::detail::MurmurHash3_32<unsigned int> >::compute_hash_join<(cudf::detail::join_kind)0>(cudf::table_view const&, std::optional<unsigned long>, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) const [0x1e6face] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:cudf::hash_join::inner_join(cudf::table_view const&, std::optional<unsigned long>, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) const [0x1e679e3] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:cudf::detail::inner_join(cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) [0x1e70633] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:cudf::inner_join(cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::mr::device_memory_resource*) [0x1e70c5c] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame:Java_ai_rapids_cudf_Table_innerJoinGatherMaps [0x14ec5e3] ========= in /tmp/cudf1750694697214535636.so ========= Host Frame: [0x254ac96a7] ========= in ========= ``` Leftsemi: ``` ========= Invalid __global__ read of size 4 bytes ========= at 0x440 in /spark-rapids-jni/thirdparty/cudf/cpp/include/cudf/column/column_device_view.cuh:431:T1 cudf::column_device_view::element<int, (void *)0>(int) const ========= by thread (0,0,0) in block (29517103,0,0) ========= Address 0xa4ba95700 is out of bounds ========= and is 3222998784 bytes before the nearest allocation at 0xb0bc46600 of size 256 bytes ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/include/cudf/table/row_operators.cuh:538:unsigned int cudf::element_hasher_with_seed<cudf::detail::default_hash, cudf::nullate::DYNAMIC>::operator ()<int, (void 
*)0>(cudf::column_device_view, int) const [0x3f0] ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/include/cudf/utilities/type_dispatcher.hpp:455:decltype(auto) cudf::type_dispatcher<cudf::dispatch_storage_type, cudf::element_hasher_with_seed<cudf::detail::default_hash, cudf::nullate::DYNAMIC>, const cudf::column_device_view &, int &>(cudf::data_type, T2, T3 &&...) [0x2e0] ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/include/cudf/table/row_operators.cuh:605:cudf::row_hasher<cudf::detail::default_hash, cudf::nullate::DYNAMIC>::operator ()(int) const [0x1c0] ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/src/search/contains_table.cu:71:auto cudf::detail::<unnamed>::strong_index_hasher_adapter<cudf::row_hasher<cudf::detail::default_hash, cudf::nullate::DYNAMIC>>::operator ()<cudf::experimental::row::lhs_index_type, (void *)0>(T1) const [0x1c0] ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/build/_deps/cuco-src/include/cuco/static_map.cuh:510:cuco::pair<cuda::__4::atomic<cudf::experimental::row::lhs_index_type, (cuda::std::__4::__detail::thread_scope)1>, cuda::__4::atomic<int, (cuda::std::__4::__detail::thread_scope)1>> * cuco::static_map<cudf::experimental::row::lhs_index_type, int, (cuda::std::__4::__detail::thread_scope)1, rmm::mr::stream_allocator_adaptor<default_allocator<char>>>::device_view_base::initial_slot<cooperative_groups::__v1::thread_block_tile<(unsigned int)4, cooperative_groups::__v1::thread_block>, cudf::experimental::row::lhs_index_type, cudf::detail::<unnamed>::strong_index_hasher_adapter<cudf::row_hasher<cudf::detail::default_hash, cudf::nullate::DYNAMIC>>>(const T1 &, const T2 &, T3) [0x1c0] ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/build/_deps/cuco-src/include/cuco/detail/static_map.inl:520:bool cuco::static_map<cudf::experimental::row::lhs_index_type, int, (cuda::std::__4::__detail::thread_scope)1, 
rmm::mr::stream_allocator_adaptor<default_allocator<char>>>::device_mutable_view::insert<cooperative_groups::__v1::thread_block_tile<(unsigned int)4, cooperative_groups::__v1::thread_block>, cudf::detail::<unnamed>::strong_index_hasher_adapter<cudf::row_hasher<cudf::detail::default_hash, cudf::nullate::DYNAMIC>>, cudf::detail::<unnamed>::strong_index_comparator_adapter<cudf::row_equality_comparator<cudf::nullate::DYNAMIC>>>(const T1 &, const cuco::pair<cudf::experimental::row::lhs_index_type, int> &, T2, T3) [0xc0] ========= Device Frame:/spark-rapids-jni/thirdparty/cudf/cpp/build/_deps/cuco-src/include/cuco/detail/static_map_kernels.cuh:154:void cuco::detail::insert<(unsigned long)128, (unsigned int)4, thrust::transform_iterator<_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::detail::<unnamed>::contains_without_lists_or_nans(const _INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::table_view &, const _INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::table_view &, _INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::null_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource *)::[lambda(T1) (instance 1)], thrust::counting_iterator<int, thrust::use_default, thrust::use_default, thrust::use_default>, thrust::use_default, thrust::use_default>, cuda::__4::atomic<unsigned long, (cuda::std::__4::__detail::thread_scope)1>, cuco::static_map<_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::experimental::row::lhs_index_type, int, (cuda::std::__4::__detail::thread_scope)1, rmm::mr::stream_allocator_adaptor<default_allocator<char>>>::device_mutable_view, _INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::detail::<unnamed>::strong_index_hasher_adapter<_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::row_hasher<_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::detail::default_hash, _INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::nullate::DYNAMIC>>, 
_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::detail::<unnamed>::strong_index_comparator_adapter<_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::row_equality_comparator<_INTERNAL_b2e14aee_17_contains_table_cu_f61ccc2b_310::cudf::nullate::DYNAMIC>>>(T3, T3, T4 *, T5, T6, T7) [0xc0] ========= Saved host backtrace up to driver entry point at kernel launch time ========= Host Frame: [0x22da7a] ========= in /usr/lib/x86_64-linux-gnu/libcuda.so.1 ========= Host Frame: [0x3decaab] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame: [0x3e2a1f8] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame:cuco::static_map<cudf::experimental::row::lhs_index_type(void, cudf::experimental::row::lhs_index_type, cudf::detail::_GLOBAL__N__b2e14aee_17_contains_table_cu_f61ccc2b_310::strong_index_comparator_adapter<cudf::row_equality_comparator<cudf::nullate>>, int, cuda::std::__4::__detail::thread_scope, CUstream_st*), int, cuda::std::__4::__detail::thread_scope, rmm::mr::stream_allocator_adaptor<default_allocator<char>>>::insert<thrust::transform_iterator<__nv_dl_wrapper_t<__nv_dl_tag<rmm::device_uvector<bool> (*) (cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource*), __operator_&__(cudf::detail::_GLOBAL__N__b2e14aee_17_contains_table_cu_f61ccc2b_310::contains_without_lists_or_nans(cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource*)), unsigned int=1>>, thrust::counting_iterator<int, thrust::use_default, thrust::counting_iterator, thrust::counting_iterator>, thrust::counting_iterator, thrust::counting_iterator>, cudf::detail::_GLOBAL__N__b2e14aee_17_contains_table_cu_f61ccc2b_310::strong_index_hasher_adapter<cudf::row_hasher<cudf::detail::MurmurHash3_32, cudf::nullate::DYNAMIC>>, 
cudf::detail::_GLOBAL__N__b2e14aee_17_contains_table_cu_f61ccc2b_310::strong_index_comparator_adapter<cudf::row_equality_comparator<cudf::nullate>>> [0x2a607ce] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame:cudf::detail::_GLOBAL__N__b2e14aee_17_contains_table_cu_f61ccc2b_310::contains_without_lists_or_nans(cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) [0x2a5ee26] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame:cudf::detail::contains(cudf::table_view const &, cudf::table_view const &, cudf::null_equality, cudf::nan_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) [0x2a5f222] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame:cudf::detail::left_semi_anti_join(cudf::detail::join_kind, cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::cuda_stream_view, rmm::mr::device_memory_resource*) [0x1e82d72] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame:cudf::left_semi_join(cudf::table_view const &, cudf::table_view const &, cudf::null_equality, rmm::mr::device_memory_resource*) [0x1e83a1c] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame:Java_ai_rapids_cudf_Table_leftSemiJoinGatherMap [0x14ebf33] ========= in /tmp/cudf1875700103146174489.so ========= Host Frame: [0x272ea7627] ========= in ```
True
[BUG] 30TB query95 fails on the join with illegal memory access with 200 partitions
non_defect
0
91,217
18,388,747,288
IssuesEvent
2021-10-12 00:43:05
JakePember/cymetrics
https://api.github.com/repos/JakePember/cymetrics
closed
Test Coverage: utils/find.js
code coverage
**Is your feature request related to a problem? Please describe.** Cymetrics is now published and open for others to use. It's important that the foundation code is stable and tested thoroughly to ensure consistent results. **Describe the solution you'd like** For utils/find.js - 100% code coverage is expected, any reason 100% can't be made must be documented. - Refactors are welcome, if it lowers the overall cognitive complexity. If the refactor results in the creation of a new function and isn't covered as part of this issue; A new issue is to be created to cover the new function. **Describe alternatives you've considered** NA **Additional context** NA
1.0
Test Coverage: utils/find.js - **Is your feature request related to a problem? Please describe.** Cymetrics is now published and open for others to use. It's important that the foundation code is stable and tested thoroughly to ensure consistent results. **Describe the solution you'd like** For utils/find.js - 100% code coverage is expected, any reason 100% can't be made must be documented. - Refactors are welcome, if it lowers the overall cognitive complexity. If the refactor results in the creation of a new function and isn't covered as part of this issue; A new issue is to be created to cover the new function. **Describe alternatives you've considered** NA **Additional context** NA
non_defect
test coverage utils find js is your feature request related to a problem please describe cymetrics is now published and open for others to use it s important that the foundation code is stable and tested thoroughly to ensure consistent results describe the solution you d like for utils find js code coverage is expected any reason can t be made must be documented refactors are welcome if it lowers the overall cognitive complexity if the refactor results in the creation of a new function and isn t covered as part of this issue a new issue is to be created to cover the new function describe alternatives you ve considered na additional context na
0
27,361
21,656,137,290
IssuesEvent
2022-05-06 14:17:07
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Microsoft.Windows.Compatibility should reference the latest System.Data.SqlClient
area-Infrastructure-libraries
System.Data.SqlClient got a servicing release last month to bump its latest version to `4.8.3`. However, Microsoft.Windows.Compatibility still references the old version `4.8.2`. We should update Microsoft.Windows.Compatibility to the latest version. That way customers who use the `6.0.0` compat pack will get the latest servicing fixes. While we are at it - we should ensure we are using the latest versions of all our dependencies in the compat pack. cc @safern @Anipik @ericstj
1.0
Microsoft.Windows.Compatibility should reference the latest System.Data.SqlClient - System.Data.SqlClient got a servicing release last month to bump its latest version to `4.8.3`. However, Microsoft.Windows.Compatibility still references the old version `4.8.2`. We should update Microsoft.Windows.Compatibility to the latest version. That way customers who use the `6.0.0` compat pack will get the latest servicing fixes. While we are at it - we should ensure we are using the latest versions of all our dependencies in the compat pack. cc @safern @Anipik @ericstj
non_defect
microsoft windows compatibility should reference the latest system data sqlclient system data sqlclient got a servicing release last month to bump its latest version to however microsoft windows compatibility still references the old version we should update microsoft windows compatibility to the latest version that way customers who use the compat pack will get the latest servicing fixes while we are at it we should ensure we are using the latest versions of all our dependencies in the compat pack cc safern anipik ericstj
0
113,989
11,834,616,269
IssuesEvent
2020-03-23 09:13:46
uStudioCompany/ustudio-ui
https://api.github.com/repos/uStudioCompany/ustudio-ui
closed
[BUG] (Select) Incorrect classNames list
bug documentation
**Describe the bug** Incorrect classNames list in Single Select **Screenshots** ![image](https://user-images.githubusercontent.com/23137619/77160267-af3c3100-6aaf-11ea-9f93-e638417cb4af.png) ![image](https://user-images.githubusercontent.com/23137619/77160300-bd8a4d00-6aaf-11ea-9c23-dcad4f5642d3.png)
1.0
[BUG] (Select) Incorrect classNames list - **Describe the bug** Incorrect classNames list in Single Select **Screenshots** ![image](https://user-images.githubusercontent.com/23137619/77160267-af3c3100-6aaf-11ea-9f93-e638417cb4af.png) ![image](https://user-images.githubusercontent.com/23137619/77160300-bd8a4d00-6aaf-11ea-9c23-dcad4f5642d3.png)
non_defect
select incorrect classnames list describe the bug incorrect classnames list in single select screenshots
0
360,596
25,298,239,018
IssuesEvent
2022-11-17 08:44:44
CFC-Servers/GLuaTest
https://api.github.com/repos/CFC-Servers/GLuaTest
opened
Document a developer guide
documentation enhancement
I want to create a document that outlines how I use GLuaTest to give other developers an idea of how they'd like to use it.
1.0
Document a developer guide - I want to create a document that outlines how I use GLuaTest to give other developers an idea of how they'd like to use it.
non_defect
document a developer guide i want to create a document that outlines how i use gluatest to give other developers an idea of how they d like to use it
0
10,314
13,157,376,296
IssuesEvent
2020-08-10 12:38:37
utopia-rise/godot-kotlin
https://api.github.com/repos/utopia-rise/godot-kotlin
closed
Drop redundant RPCMode enum
tools:annotation-processor tools:annotations wrapper:godot-library
Atm we have two different enums for RPCMode. This was from a time where we used the annotations as dependencies inside the annotation processor rather than the hardcoded fqname. With this issue we should drop the manually created enum and change the generated one to drop the RPC_ENUM prefix
1.0
Drop redundant RPCMode enum - Atm we have two different enums for RPCMode. This was from a time where we used the annotations as dependencies inside the annotation processor rather than the hardcoded fqname. With this issue we should drop the manually created enum and change the generated one to drop the RPC_ENUM prefix
non_defect
drop redundant rpcmode enum atm we have two different enums for rpcmode this was from a time where we used the annotations as dependencies inside the annotation processor rather than the hardcoded fqname with this issue we should drop the manually created enum and change the generated one to drop the rpc enum prefix
0
370
2,534,938,245
IssuesEvent
2015-01-25 14:22:00
radiowarwick/digiplay_legacy
https://api.github.com/repos/radiowarwick/digiplay_legacy
closed
Permissions Realms sometimes used incorrectly.
defect Website
Need to check through everywhere realms are used to ensure they are doing it correctly. Sometimes check is done moving up the tree, sometimes downwards: eg, AuthUtil::getDetailedUserrealmAccess(array(24,20,3)); vs. AuthUtil::getDetailedUserrealmAccess(array(3,20,24)); Also, Need to check realms are correct in database. For example, 'Studio' has row id of 22, but realmpath of 03.21, and 'Studio Audiowall' has id 35 and realmtree of 03.21.34, when this should be 03.22.35.
1.0
Permissions Realms sometimes used incorrectly. - Need to check through everywhere realms are used to ensure they are doing it correctly. Sometimes check is done moving up the tree, sometimes downwards: eg, AuthUtil::getDetailedUserrealmAccess(array(24,20,3)); vs. AuthUtil::getDetailedUserrealmAccess(array(3,20,24)); Also, Need to check realms are correct in database. For example, 'Studio' has row id of 22, but realmpath of 03.21, and 'Studio Audiowall' has id 35 and realmtree of 03.21.34, when this should be 03.22.35.
defect
permissions realms sometimes used incorrectly need to check through everywhere realms are used to ensure they are doing it correctly sometimes check is done moving up the tree sometimes downwards eg authutil getdetaileduserrealmaccess array vs authutil getdetaileduserrealmaccess array also need to check realms are correct in database for example studio has row id of but realmpath of and studio audiowall has id and realmtree of when this should be
1
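The digiplay record above describes a realm-path invariant: a realm's materialised path should end in its own row id (so id 35 should sit under 03.22.35, not 03.21.34). A minimal sketch of that consistency check; `validateRealmPath` and the sample rows are illustrative, not taken from the digiplay codebase:

```javascript
// Check the invariant cited in the issue: the last segment of a realm's
// materialised path must equal the realm's own row id.
function validateRealmPath(id, realmPath) {
  const segments = realmPath.split('.');
  return Number(segments[segments.length - 1]) === id;
}

// The two inconsistent rows quoted in the issue body:
const rows = [
  { name: 'Studio', id: 22, realmPath: '03.21' },              // should be 03.22
  { name: 'Studio Audiowall', id: 35, realmPath: '03.21.34' }, // should be 03.22.35
];

for (const row of rows) {
  console.log(row.name, validateRealmPath(row.id, row.realmPath) ? 'ok' : 'MISMATCH');
}
```

Running such a check over every row would surface exactly the kind of database inconsistency the issue asks to audit.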
109,339
9,378,742,291
IssuesEvent
2019-04-04 13:33:51
Flowminder/FlowKit
https://api.github.com/repos/Flowminder/FlowKit
closed
Diff tools used by ApprovalTests should be more easily configurable
docs tests
Currently the `opendiff` tool (which is Mac-only) is hard-coded in a few `conftest.py` files, so this is awkward to change. It should be more easily configurable, and the docs should mention how to configure this and how to update the `*.approved.txt` files after a code change that alters the reference output.
1.0
Diff tools used by ApprovalTests should be more easily configurable - Currently the `opendiff` tool (which is Mac-only) is hard-coded in a few `conftest.py` files, so this is awkward to change. It should be more easily configurable, and the docs should mention how to configure this and how to update the `*.approved.txt` files after a code change that alters the reference output.
non_defect
diff tools used by approvaltests should be more easily configurable currently the opendiff tool which is mac only is hard coded in a few conftest py files so this is awkward to change it should be more easily configurable and the docs should mention how to configure this and how to update the approved txt files after a code change that alters the reference output
0
22,099
6,229,301,076
IssuesEvent
2017-07-11 03:12:44
XceedBoucherS/TestImport5
https://api.github.com/repos/XceedBoucherS/TestImport5
closed
Exception using Memory Clear Button
CodePlex
<b>gusbigardi[CodePlex]</b> <br />When you try to calculate any expression that will result an error (like 8 divided by 0, that result ERROR in display text) and you hit the MC button or other button that try to parse the display value to Decimal (checked this looking in the source code), the application will throw an Unhandled Exception.
1.0
Exception using Memory Clear Button - <b>gusbigardi[CodePlex]</b> <br />When you try to calculate any expression that will result an error (like 8 divided by 0, that result ERROR in display text) and you hit the MC button or other button that try to parse the display value to Decimal (checked this looking in the source code), the application will throw an Unhandled Exception.
non_defect
exception using memory clear button gusbigardi when you try to calculate any expression that will result an error like divided by that result error in display text and you hit the mc button or other button that try to parse the display value to decimal checked this looking in the source code the application will throw an unhandled exception
0
227,266
17,379,193,732
IssuesEvent
2021-07-31 10:28:32
HoonHaChoi/Coin
https://api.github.com/repos/HoonHaChoi/Coin
closed
Journal screen UI demo design
documentation
<img width="336" alt="Screenshot 2021-07-31 5 57 57 AM" src="https://user-images.githubusercontent.com/33626693/127710854-dcfea635-e972-4a94-9324-183f5c454fa9.png"> Key points - Considered showing all the information at a glance - The left side lacked impact, so a band was added to mark rises and falls (icons did not fit) To add - Allow changing the date sort order between ascending and descending per user choice
1.0
Journal screen UI demo design - <img width="336" alt="Screenshot 2021-07-31 5 57 57 AM" src="https://user-images.githubusercontent.com/33626693/127710854-dcfea635-e972-4a94-9324-183f5c454fa9.png"> Key points - Considered showing all the information at a glance - The left side lacked impact, so a band was added to mark rises and falls (icons did not fit) To add - Allow changing the date sort order between ascending and descending per user choice
non_defect
journal screen ui demo design img width alt screenshot am src key points considered showing all the information at a glance a band was added to mark rises and falls since the left side lacked impact icons did not fit to add allow changing the date sort order between ascending and descending per user choice
0
45,637
12,965,183,497
IssuesEvent
2020-07-20 21:50:57
googlefonts/noto-fonts
https://api.github.com/repos/googlefonts/noto-fonts
closed
NotoSansDisplay-ItalicMM.glyphs has incompatible masters
Type-Defect
These are the warnings from fontmake (they should be errors, since we cannot make a reasonable font): WARNING:fontTools.varLib:glyph uni1D05 has incompatible masters; skipping WARNING:fontTools.varLib:glyph uni1D06 has incompatible masters; skipping WARNING:fontTools.varLib:glyph uni1D1A has incompatible masters; skipping Please fix NotoSansDisplay-ItalicMM.glyphs for uni1D05, uni1D06 and uni1D1A to have compatible masters
1.0
NotoSansDisplay-ItalicMM.glyphs has incompatible masters - These are the warnings from fontmake (they should be errors, since we cannot make a reasonable font): WARNING:fontTools.varLib:glyph uni1D05 has incompatible masters; skipping WARNING:fontTools.varLib:glyph uni1D06 has incompatible masters; skipping WARNING:fontTools.varLib:glyph uni1D1A has incompatible masters; skipping Please fix NotoSansDisplay-ItalicMM.glyphs for uni1D05, uni1D06 and uni1D1A to have compatible masters
defect
notosansdisplay italicmm glyphs has incompatible masters these are the warnings from fontmake they should be errors since we cannot make a reasonable font warning fonttools varlib glyph has incompatible masters skipping warning fonttools varlib glyph has incompatible masters skipping warning fonttools varlib glyph has incompatible masters skipping please fix notosansdisplay italicmm glyphs for and to have compatible masters
1
56,449
23,784,092,509
IssuesEvent
2022-09-02 08:27:28
PreMiD/Presences
https://api.github.com/repos/PreMiD/Presences
opened
YNOproject
service request
### Website name YNOproject ### Website URL https://ynoproject.net/yume/ ### Website logo https://i.imgur.com/LlFNjpI.png ### Prerequisites - [ ] It is a paid service - [ ] It displays NSFW content - [ ] It is region restricted ### Description Display of what game the user is playing, and how much time has elapsed during that session
1.0
YNOproject - ### Website name YNOproject ### Website URL https://ynoproject.net/yume/ ### Website logo https://i.imgur.com/LlFNjpI.png ### Prerequisites - [ ] It is a paid service - [ ] It displays NSFW content - [ ] It is region restricted ### Description Display of what game the user is playing, and how much time has elapsed during that session
non_defect
ynoproject website name ynoproject website url website logo prerequisites it is a paid service it displays nsfw content it is region restricted description display of what game the user is playing and how much time has elapsed during that session
0
36,277
7,875,985,102
IssuesEvent
2018-06-25 22:29:53
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
External method decorated with [ExpandParams] attribute
defect
Incorrect JavaScript produced when an external method decorated with `[ExpandParams]` attribute has a `params` argument and at least one plain argument. The logic emitting `.apply()` call seems being executed twice. ### Steps To Reproduce https://deck.net/a69ee70c1dd126f220f0f65f91b70ea7 ```csharp public class Program { [Init(InitPosition.Top)] public static void Init() { /*@ var Logger = (function () { function Logger() { } Logger.prototype.Log = function (s) { var args = [].slice.call(arguments, 1); var msg = args.join(", "); System.Console.WriteLine(arguments[0] + ": " + msg); }; return Logger; }()); */ } public static void Main() { var arr = new[] { "one", "two", "three" }; var logger = new Logger(); logger.Log("Info", arr); } } [External] [Namespace(false)] public class Logger { [ExpandParams] public extern void Log(string level, params string[] msgs); } ``` ### Expected Result ```js Bridge.assembly("Demo", function ($asm, globals) { "use strict"; Bridge.define("Demo.Program", { main: function Main () { var arr = System.Array.init(["one", "two", "three"], System.String); var logger = new Logger(); logger.Log.apply(logger, ["Info"].concat(arr)); } }); }); ``` ### Actual Result ```js Bridge.assembly("Demo", function ($asm, globals) { "use strict"; Bridge.define("Demo.Program", { main: function Main () { var arr = System.Array.init(["one", "two", "three"], System.String); var logger = new Logger(); logger.Log.apply.apply(logger, logger, ["Info"].concat(arr)); // ERROR } }); }); ```
1.0
External method decorated with [ExpandParams] attribute - Incorrect JavaScript produced when an external method decorated with `[ExpandParams]` attribute has a `params` argument and at least one plain argument. The logic emitting `.apply()` call seems being executed twice. ### Steps To Reproduce https://deck.net/a69ee70c1dd126f220f0f65f91b70ea7 ```csharp public class Program { [Init(InitPosition.Top)] public static void Init() { /*@ var Logger = (function () { function Logger() { } Logger.prototype.Log = function (s) { var args = [].slice.call(arguments, 1); var msg = args.join(", "); System.Console.WriteLine(arguments[0] + ": " + msg); }; return Logger; }()); */ } public static void Main() { var arr = new[] { "one", "two", "three" }; var logger = new Logger(); logger.Log("Info", arr); } } [External] [Namespace(false)] public class Logger { [ExpandParams] public extern void Log(string level, params string[] msgs); } ``` ### Expected Result ```js Bridge.assembly("Demo", function ($asm, globals) { "use strict"; Bridge.define("Demo.Program", { main: function Main () { var arr = System.Array.init(["one", "two", "three"], System.String); var logger = new Logger(); logger.Log.apply(logger, ["Info"].concat(arr)); } }); }); ``` ### Actual Result ```js Bridge.assembly("Demo", function ($asm, globals) { "use strict"; Bridge.define("Demo.Program", { main: function Main () { var arr = System.Array.init(["one", "two", "three"], System.String); var logger = new Logger(); logger.Log.apply.apply(logger, logger, ["Info"].concat(arr)); // ERROR } }); }); ```
defect
external method decorated with attribute incorrect javascript produced when an external method decorated with attribute has a params argument and at least one plain argument the logic emitting apply call seems being executed twice steps to reproduce csharp public class program public static void init var logger function function logger logger prototype log function s var args slice call arguments var msg args join system console writeline arguments msg return logger public static void main var arr new one two three var logger new logger logger log info arr public class logger public extern void log string level params string msgs expected result js bridge assembly demo function asm globals use strict bridge define demo program main function main var arr system array init system string var logger new logger logger log apply logger concat arr actual result js bridge assembly demo function asm globals use strict bridge define demo program main function main var arr system array init system string var logger new logger logger log apply apply logger logger concat arr error
1
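The Bridge record above shows the emitter producing a doubled `.apply.apply(...)` call instead of a single `.apply(...)`. A stand-alone reproduction in plain JavaScript (no Bridge runtime; the `logger` object here is a stand-in for the external class in the issue) of why the doubled form throws:

```javascript
// Stand-in for the external Logger: takes a level plus expanded params.
const logger = {
  log(level, ...msgs) { return level + ': ' + msgs.join(', '); },
};
const arr = ['one', 'two', 'three'];

// Correct emit: a single .apply expands the params array onto the call.
const ok = logger.log.apply(logger, ['Info'].concat(arr));

// Broken emit: .apply.apply(...) makes the outer apply invoke
// Function.prototype.apply with `this` bound to the plain logger object,
// which is not callable, so a TypeError is thrown.
let failed = false;
try {
  logger.log.apply.apply(logger, logger, ['Info'].concat(arr));
} catch (e) {
  failed = e instanceof TypeError;
}
```

This matches the issue's diagnosis that the logic emitting the `.apply()` call runs twice.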
103,635
8,924,461,290
IssuesEvent
2019-01-21 18:47:02
operator-framework/operator-sdk
https://api.github.com/repos/operator-framework/operator-sdk
closed
Ansible Molecule Test Fails CI on Version Bump
ansible-operator bug testing
The Ansible Molecule E2E tests currently depend on the image for the base ansible operator existing with the current image tag in quay. This works fine for master builds, as that uses the last version release, but when a new release is made, the tests run before the deploy stage, so the Ansible Molecule test fails. The current workaround is running the deploy stage for the Ansible operator base image before the tests. One potential solution for future releases would be to make molecule run the version that is built in the test (which would have the `:dev` tag). Here is an example of the Molecule test failure output: ``` TASK [Build Operator Image] **************************************************** fatal: [kind-test-local]: FAILED! => {"changed": true, "cmd": ["docker", "build", "-f", "/build/build/Dockerfile", "-t", "ansible.example.com/memcached-operator:testing", "/build"], "delta": "0:00:01.120205", "end": "2019-01-18 20:50:29.886104", "msg": "non-zero return code", "rc": 1, "start": "2019-01-18 20:50:28.765899", "stderr": "manifest for quay.io/operator-framework/ansible-operator:v0.4.0 not found", "stderr_lines": ["manifest for quay.io/operator-framework/ansible-operator:v0.4.0 not found"], "stdout": "Sending build context to Docker daemon 161.3kB\r\r\nStep 1/4 : FROM quay.io/operator-framework/ansible-operator:v0.4.0", "stdout_lines": ["Sending build context to Docker daemon 161.3kB", "", "Step 1/4 : FROM quay.io/operator-framework/ansible-operator:v0.4.0"]} ```
1.0
Ansible Molecule Test Fails CI on Version Bump - The Ansible Molecule E2E tests currently depend on the image for the base ansible operator existing with the current image tag in quay. This works fine for master builds, as that uses the last version release, but when a new release is made, the tests run before the deploy stage, so the Ansible Molecule test fails. The current workaround is running the deploy stage for the Ansible operator base image before the tests. One potential solution for future releases would be to make molecule run the version that is built in the test (which would have the `:dev` tag). Here is an example of the Molecule test failure output: ``` TASK [Build Operator Image] **************************************************** fatal: [kind-test-local]: FAILED! => {"changed": true, "cmd": ["docker", "build", "-f", "/build/build/Dockerfile", "-t", "ansible.example.com/memcached-operator:testing", "/build"], "delta": "0:00:01.120205", "end": "2019-01-18 20:50:29.886104", "msg": "non-zero return code", "rc": 1, "start": "2019-01-18 20:50:28.765899", "stderr": "manifest for quay.io/operator-framework/ansible-operator:v0.4.0 not found", "stderr_lines": ["manifest for quay.io/operator-framework/ansible-operator:v0.4.0 not found"], "stdout": "Sending build context to Docker daemon 161.3kB\r\r\nStep 1/4 : FROM quay.io/operator-framework/ansible-operator:v0.4.0", "stdout_lines": ["Sending build context to Docker daemon 161.3kB", "", "Step 1/4 : FROM quay.io/operator-framework/ansible-operator:v0.4.0"]} ```
non_defect
ansible molecule test fails ci on version bump the ansible molecule tests currently depend on the image for the base ansible operator existing with the current image tag in quay this works fine for master builds as that uses the last version release but when a new release is made the tests run before the deploy stage so the ansible molecule test fails the current workaround is running the deploy stage for the ansible operator base image before the tests one potential solution for future releases would be to make molecule run the version that is built in the test which would have the dev tag here is an example of the molecule test failure output task fatal failed changed true cmd delta end msg non zero return code rc start stderr manifest for quay io operator framework ansible operator not found stderr lines stdout sending build context to docker daemon r r nstep from quay io operator framework ansible operator stdout lines
0
75,764
26,036,161,237
IssuesEvent
2022-12-22 05:19:17
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: Type ahead help suggesting incompatible methods for .window()
C-nodejs I-defect
### What happened? The type ahead help (seen in Visual Studios Code 1.73.0) is misleading when trying to get or set a screen size with .window(). The .getSize and .setSize are suggested even though they are only available for WebElements. When a user uses .window(), it is my understanding that the type ahead should suggest .getRect or .setRect instead. (Failing) To get the screen size : `await this.driver.manage().window().getSize()` (Failing) To set screen : ``` let width = 700 let height = 500 await this.driver.manage().window().setSize({x: 0, y: 0, width: width, height: height}) ``` The two examples above kept leading to an error “.getSize is not a function” even though the type ahead help suggests otherwise. I was able to resolve the issue and have my test pass by changing my code with the solutions below. (Passing) To get the screen size : `await this.driver.manage().window().getRect()` (Passing) To set screen : ``` let width = 700 let height = 500 await this.driver.manage().window().setRect({x: 0, y: 0, width: width, height: height}) ``` ### How can we reproduce the issue? ```shell await this.driver.manage().window().getSize(); let width = 700; let height = 500; await this.driver.manage().window().setSize({x: 0, y: 0, width: width, height: height}); ``` ### Relevant log output ```shell .getSize is not a function ``` ### Operating System MacOS 12.1 ### Selenium version 4.7.0 ### What are the browser(s) and version(s) where you see this issue? Version 108.0.5359.124 (Official Build) (x86_64) ### What are the browser driver(s) and version(s) where you see this issue? ChromeDriver 108.0.0 ### Are you using Selenium Grid? No
1.0
[🐛 Bug]: Type ahead help suggesting incompatible methods for .window() - ### What happened? The type ahead help (seen in Visual Studios Code 1.73.0) is misleading when trying to get or set a screen size with .window(). The .getSize and .setSize are suggested even though they are only available for WebElements. When a user uses .window(), it is my understanding that the type ahead should suggest .getRect or .setRect instead. (Failing) To get the screen size : `await this.driver.manage().window().getSize()` (Failing) To set screen : ``` let width = 700 let height = 500 await this.driver.manage().window().setSize({x: 0, y: 0, width: width, height: height}) ``` The two examples above kept leading to an error “.getSize is not a function” even though the type ahead help suggests otherwise. I was able to resolve the issue and have my test pass by changing my code with the solutions below. (Passing) To get the screen size : `await this.driver.manage().window().getRect()` (Passing) To set screen : ``` let width = 700 let height = 500 await this.driver.manage().window().setRect({x: 0, y: 0, width: width, height: height}) ``` ### How can we reproduce the issue? ```shell await this.driver.manage().window().getSize(); let width = 700; let height = 500; await this.driver.manage().window().setSize({x: 0, y: 0, width: width, height: height}); ``` ### Relevant log output ```shell .getSize is not a function ``` ### Operating System MacOS 12.1 ### Selenium version 4.7.0 ### What are the browser(s) and version(s) where you see this issue? Version 108.0.5359.124 (Official Build) (x86_64) ### What are the browser driver(s) and version(s) where you see this issue? ChromeDriver 108.0.0 ### Are you using Selenium Grid? No
defect
type ahead help suggesting incompatible methods for window what happened the type ahead help seen in visual studios code is misleading when trying to get or set a screen size with window the getsize and setsize are suggested even though they are only available for webelements when a user uses window it is my understanding that the type ahead should suggest getrect or setrect instead failing to get the screen size await this driver manage window getsize failing to set screen let width let height await this driver manage window setsize x y width width height height the two examples above kept leading to an error “ getsize is not a function” even though the type ahead help suggests otherwise i was able to resolve the issue and have my test pass by changing my code with the solutions below passing to get the screen size await this driver manage window getrect passing to set screen let width let height await this driver manage window setrect x y width width height height how can we reproduce the issue shell await this driver manage window getsize let width let height await this driver manage window setsize x y width width height height relevant log output shell getsize is not a function operating system macos selenium version what are the browser s and version s where you see this issue version official build what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no
1
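The Selenium record above resolves the failure by switching to `getRect()`/`setRect()`, which exchange a `{x, y, width, height}` object. A sketch of that shape using the values from the report; this `Window` class is a demonstration stand-in, not the selenium-webdriver implementation:

```javascript
// Stand-in window manager illustrating the rect object shape the working
// getRect()/setRect() calls exchange.
class Window {
  constructor() { this.rect = { x: 0, y: 0, width: 800, height: 600 }; }
  async getRect() { return { ...this.rect }; }
  async setRect({ x, y, width, height }) {
    this.rect = { x, y, width, height };
    return this.getRect();
  }
}

async function demo() {
  const win = new Window();
  // Same dimensions as the passing snippet in the report.
  await win.setRect({ x: 0, y: 0, width: 700, height: 500 });
  return win.getRect();
}
```

Against a real driver the equivalent calls are `driver.manage().window().getRect()` and `.setRect(...)`, as the report's passing snippets show.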
68,854
21,928,539,007
IssuesEvent
2022-05-23 07:39:49
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
closed
Sending multiple invites to a room only works partially
T-Defect
### Steps to reproduce 1. Create a room or get into a room where you can invite people. 2. Invite several people (at least 3 or 4). ### Outcome #### What did you expect? Everyone should be invited to the room. #### What happened instead? Only the first 1 or 2 people actually get invited. ### Your phone model Emulator ### Operating system version Android 12 ### Application version and app store 1.4.18 ### Homeserver matrix.org ### Will you send logs? No
1.0
Sending multiple invites to a room only works partially - ### Steps to reproduce 1. Create a room or get into a room where you can invite people. 2. Invite several people (at least 3 or 4). ### Outcome #### What did you expect? Everyone should be invited to the room. #### What happened instead? Only the first 1 or 2 people actually get invited. ### Your phone model Emulator ### Operating system version Android 12 ### Application version and app store 1.4.18 ### Homeserver matrix.org ### Will you send logs? No
defect
sending multiple invites to a room only works partially steps to reproduce create a room or get into a room where you can invite people invite several people at least or outcome what did you expect everyone should be invited to the room what happened instead only the first or people actually get invited your phone model emulator operating system version android application version and app store homeserver matrix org will you send logs no
1
80,413
15,586,284,095
IssuesEvent
2021-03-18 01:35:25
attesch/myretail
https://api.github.com/repos/attesch/myretail
opened
CVE-2020-36181 (High) detected in jackson-databind-2.9.4.jar
security vulnerability
## CVE-2020-36181 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /myretail/build.gradle</p> <p>Path to vulnerable library: myretail/build.gradle</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.0.0.RELEASE.jar (Root Library) - spring-boot-starter-json-2.0.0.RELEASE.jar - :x: **jackson-databind-2.9.4.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-36181 (High) detected in jackson-databind-2.9.4.jar - ## CVE-2020-36181 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /myretail/build.gradle</p> <p>Path to vulnerable library: myretail/build.gradle</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.0.0.RELEASE.jar (Root Library) - spring-boot-starter-json-2.0.0.RELEASE.jar - :x: **jackson-databind-2.9.4.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file myretail build gradle path to vulnerable library myretail build gradle dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
367,069
10,833,625,377
IssuesEvent
2019-11-11 13:22:07
AY1920S1-CS2113T-F09-4/main
https://api.github.com/repos/AY1920S1-CS2113T-F09-4/main
closed
Add command tempo bug
priority.High severity.Medium status.Ongoing type.Bug
the tempo should not accept zero or negative integers as a valid input. ![image.png](https://raw.githubusercontent.com/OungKennedy/ped/master/files/4771ecf9-e7d1-4d26-9287-9743096821b2.png) <hr><sub>[original: OungKennedy/ped#5]<br/> </sub>
1.0
Add command tempo bug - the tempo should not accept zero or negative integers as a valid input. ![image.png](https://raw.githubusercontent.com/OungKennedy/ped/master/files/4771ecf9-e7d1-4d26-9287-9743096821b2.png) <hr><sub>[original: OungKennedy/ped#5]<br/> </sub>
non_defect
add command tempo bug the tempo should not accept zero or negative integers as a valid input
0
223,626
7,459,248,574
IssuesEvent
2018-03-30 14:32:05
metasfresh/metasfresh-webui-frontend
https://api.github.com/repos/metasfresh/metasfresh-webui-frontend
closed
Console error when grid view updates
priority:high type:bug
### Is this a bug or feature request? bug ### What is the current behavior? Console error when grid view updates TypeError: Cannot read property 'ID' of undefined #### Which are the steps to reproduce? 1. open settings: https://w101.metasfresh.com:8443/window/53100/2188223 2. new tab: open user window in grid view (this is page no. 2 because there was my user from settings) https://w101.metasfresh.com:8443/window/108?page=2&viewId=108-934e04bf3a09430ba73fe8bd159d302e 3. go back to settings tab and change email address 4. go back to user tab, see console => ViewActions.js:153 Uncaught (in promise) TypeError: Cannot read property 'ID' of undefined at Object.values.map.viewRowField (ViewActions.js:153) at Array.map (<anonymous>) at mergeColumnInfosIntoViewRow (ViewActions.js:153) at toRows.map.row (ViewActions.js:196) at Array.map (<anonymous>) at mergeRows (ViewActions.js:196) at then.response (DocumentList.js:229) at <anonymous> => also you need to refresh to see the change NOK ### What is the expected or desired behavior? no errors, and the change shall appear without refreshing
1.0
Console error when grid view updates - ### Is this a bug or feature request? bug ### What is the current behavior? Console error when grid view updates TypeError: Cannot read property 'ID' of undefined #### Which are the steps to reproduce? 1. open settings: https://w101.metasfresh.com:8443/window/53100/2188223 2. new tab: open user window in grid view (this is page no. 2 because there was my user from settings) https://w101.metasfresh.com:8443/window/108?page=2&viewId=108-934e04bf3a09430ba73fe8bd159d302e 3. go back to settings tab and change email address 4. go back to user tab, see console => ViewActions.js:153 Uncaught (in promise) TypeError: Cannot read property 'ID' of undefined at Object.values.map.viewRowField (ViewActions.js:153) at Array.map (<anonymous>) at mergeColumnInfosIntoViewRow (ViewActions.js:153) at toRows.map.row (ViewActions.js:196) at Array.map (<anonymous>) at mergeRows (ViewActions.js:196) at then.response (DocumentList.js:229) at <anonymous> => also you need to refresh to see the change NOK ### What is the expected or desired behavior? no errors, and the change shall appear without refreshing
non_defect
console error when grid view updates is this a bug or feature request bug what is the current behavior console error when grid view updates typeerror cannot read property id of undefined which are the steps to reproduce open settings new tab open user window in grid view this is page no because there was my user from settings go back to settings tab and change email address go back to user tab see console viewactions js uncaught in promise typeerror cannot read property id of undefined at object values map viewrowfield viewactions js at array map at mergecolumninfosintoviewrow viewactions js at torows map row viewactions js at array map at mergerows viewactions js at then response documentlist js at also you need to refresh to see the change nok what is the expected or desired behavior no errors and the change shall appear without refreshing
0
136,918
18,751,508,488
IssuesEvent
2021-11-05 03:00:22
Dima2022/Resiliency-Studio
https://api.github.com/repos/Dima2022/Resiliency-Studio
closed
CVE-2020-11112 (High) detected in jackson-databind-2.8.6.jar - autoclosed
security vulnerability
## CVE-2020-11112 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: Resiliency-Studio/resiliency-studio-agent/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar</p> <p> Dependency Hierarchy: - sdk-java-rest-6.2.0.4-oss.jar (Root Library) - :x: **jackson-databind-2.8.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy). 
<p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112>CVE-2020-11112</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.6","packageFilePaths":["/resiliency-studio-agent/pom.xml","/resiliency-studio-security/pom.xml","/resiliency-studio-service/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.att.ajsc:sdk-java-rest:6.2.0.4-oss;com.fasterxml.jackson.core:jackson-databind:2.8.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-11112","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets 
and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-11112 (High) detected in jackson-databind-2.8.6.jar - autoclosed - ## CVE-2020-11112 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: Resiliency-Studio/resiliency-studio-agent/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar</p> <p> Dependency Hierarchy: - sdk-java-rest-6.2.0.4-oss.jar (Root Library) - :x: **jackson-databind-2.8.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy). 
<p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112>CVE-2020-11112</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11112</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.6","packageFilePaths":["/resiliency-studio-agent/pom.xml","/resiliency-studio-security/pom.xml","/resiliency-studio-service/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.att.ajsc:sdk-java-rest:6.2.0.4-oss;com.fasterxml.jackson.core:jackson-databind:2.8.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-11112","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets 
and typing, related to org.apache.commons.proxy.provider.remoting.RmiProvider (aka apache/commons-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11112","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_defect
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file resiliency studio resiliency studio agent pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy sdk java rest oss jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons proxy provider remoting rmiprovider aka apache commons proxy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com att ajsc sdk java rest oss com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons proxy provider remoting rmiprovider aka apache commons proxy vulnerabilityurl
0
48,897
13,184,769,692
IssuesEvent
2020-08-12 20:03:39
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
I3Db services should always check the status of mysql connection (Trac #402)
I3Db Incomplete Migration Migrated from Trac defect
<details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/402 , reported by blaufuss and owned by kohnen_</summary> <p> ```json { "status": "closed", "changetime": "2012-07-05T10:56:21", "description": "I3Db services that use connections to a mysql server shoudl check that\nremore server is still attached. We've had several run_summary queries from OFU/GFU clients that fail:\n\n\n2012-05-23 23:19:50 [GMT] FATAL I3DbDetectorStatusService : /scratch/blaufuss/pnf/V12-05-00/src/I3Db/private/I3Db/I3DbDetectorStatusService.cxx:884 I3DbDetectorStatus:GetRunSummary-RunId=000000;Error=-1;DbfErrNo=0000;DbfStatus=;Reason=Run Number No Found !!!;\n\n\nBuuut, this is clearly in the DB, as PnF happily processed the run. The suspicion is the connection timed out:\n\nhttp://lists.icecube.wisc.edu/pipermail/logbook/2012-May/002260.html\n\nwas made to help with this, and it's not clear if this solved it, but the code\nshould be robust against DB servers going down, etc.\n\n\n\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1341485781000000", "component": "I3Db", "summary": "I3Db services should always check the status of mysql connection", "priority": "normal", "keywords": "", "time": "2012-05-25T13:22:32", "milestone": "", "owner": "kohnen", "type": "defect" } ``` </p> </details>
1.0
I3Db services should always check the status of mysql connection (Trac #402) - <details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/402 , reported by blaufuss and owned by kohnen_</summary> <p> ```json { "status": "closed", "changetime": "2012-07-05T10:56:21", "description": "I3Db services that use connections to a mysql server shoudl check that\nremore server is still attached. We've had several run_summary queries from OFU/GFU clients that fail:\n\n\n2012-05-23 23:19:50 [GMT] FATAL I3DbDetectorStatusService : /scratch/blaufuss/pnf/V12-05-00/src/I3Db/private/I3Db/I3DbDetectorStatusService.cxx:884 I3DbDetectorStatus:GetRunSummary-RunId=000000;Error=-1;DbfErrNo=0000;DbfStatus=;Reason=Run Number No Found !!!;\n\n\nBuuut, this is clearly in the DB, as PnF happily processed the run. The suspicion is the connection timed out:\n\nhttp://lists.icecube.wisc.edu/pipermail/logbook/2012-May/002260.html\n\nwas made to help with this, and it's not clear if this solved it, but the code\nshould be robust against DB servers going down, etc.\n\n\n\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1341485781000000", "component": "I3Db", "summary": "I3Db services should always check the status of mysql connection", "priority": "normal", "keywords": "", "time": "2012-05-25T13:22:32", "milestone": "", "owner": "kohnen", "type": "defect" } ``` </p> </details>
defect
services should always check the status of mysql connection trac migrated from reported by blaufuss and owned by kohnen json status closed changetime description services that use connections to a mysql server shoudl check that nremore server is still attached we ve had several run summary queries from ofu gfu clients that fail n n fatal scratch blaufuss pnf src private cxx getrunsummary runid error dbferrno dbfstatus reason run number no found n n nbuuut this is clearly in the db as pnf happily processed the run the suspicion is the connection timed out n n made to help with this and it s not clear if this solved it but the code nshould be robust against db servers going down etc n n n n reporter blaufuss cc resolution fixed ts component summary services should always check the status of mysql connection priority normal keywords time milestone owner kohnen type defect
1
49,395
26,137,998,750
IssuesEvent
2022-12-29 14:33:08
Bartalog/cool-maze
https://api.github.com/repos/Bartalog/cool-maze
closed
Stat multiple share: tte (Android)
performance Android Backend
Time-to-Encrypt each resource. The throughput for Multiple Share doesn't have the same meaning as Single Share, because many things are happening at the same time and interfering with each other's performance numbers. The numbers are still useful to gather and analyze though.
True
Stat multiple share: tte (Android) - Time-to-Encrypt each resource. The throughput for Multiple Share doesn't have the same meaning as Single Share, because many things are happening at the same time and interfering with each other's performance numbers. The numbers are still useful to gather and analyze though.
non_defect
stat multiple share tte android time to encrypt each resource the throughput for multiple share doesn t have the same meaning as single share because many things are happening at the same time and interfering with each other s performance numbers the numbers are still useful to gather and analyze though
0
31,744
6,611,981,332
IssuesEvent
2017-09-20 00:40:07
extnet/Ext.NET
https://api.github.com/repos/extnet/Ext.NET
closed
CanActivate option should work with not Ext.menu.Item
2.x 3.x 4.x defect review-after-extjs-upgrade sencha sencha-disclaim
http://forums.ext.net/showthread.php?23262 http://www.sencha.com/forum/showthread.php?255184 Dup: http://forums.ext.net/showthread.php?26213 Corrected the Toolbar/Menu/Controls_in_Menu example. Revert back after Sencha fix. **Update:** the issue is not actual anymore for ExtJS 6, i.e. for Ext.NET 4. `CanActivate` has been removed - #1243.
1.0
CanActivate option should work with not Ext.menu.Item - http://forums.ext.net/showthread.php?23262 http://www.sencha.com/forum/showthread.php?255184 Dup: http://forums.ext.net/showthread.php?26213 Corrected the Toolbar/Menu/Controls_in_Menu example. Revert back after Sencha fix. **Update:** the issue is not actual anymore for ExtJS 6, i.e. for Ext.NET 4. `CanActivate` has been removed - #1243.
defect
canactivate option should work with not ext menu item dup corrected the toolbar menu controls in menu example revert back after sencha fix update the issue is not actual anymore for extjs i e for ext net canactivate has been removed
1
67,698
14,886,595,206
IssuesEvent
2021-01-20 17:08:53
anyulled/mws-restaurant-stage-1
https://api.github.com/repos/anyulled/mws-restaurant-stage-1
opened
CVE-2020-28481 (Medium) detected in socket.io-2.1.1.tgz
security vulnerability
## CVE-2020-28481 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>socket.io-2.1.1.tgz</b></p></summary> <p>node.js realtime framework server</p> <p>Library home page: <a href="https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz">https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz</a></p> <p>Path to dependency file: mws-restaurant-stage-1/package.json</p> <p>Path to vulnerable library: mws-restaurant-stage-1/node_modules/socket.io/package.json</p> <p> Dependency Hierarchy: - browser-sync-2.26.13.tgz (Root Library) - :x: **socket.io-2.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/anyulled/mws-restaurant-stage-1/commit/1e0ac892821eb44b19483d38ea27a9c11e55eefa">1e0ac892821eb44b19483d38ea27a9c11e55eefa</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package socket.io before 2.4.0 are vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28481>CVE-2020-28481</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: 2.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28481 (Medium) detected in socket.io-2.1.1.tgz - ## CVE-2020-28481 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>socket.io-2.1.1.tgz</b></p></summary> <p>node.js realtime framework server</p> <p>Library home page: <a href="https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz">https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz</a></p> <p>Path to dependency file: mws-restaurant-stage-1/package.json</p> <p>Path to vulnerable library: mws-restaurant-stage-1/node_modules/socket.io/package.json</p> <p> Dependency Hierarchy: - browser-sync-2.26.13.tgz (Root Library) - :x: **socket.io-2.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/anyulled/mws-restaurant-stage-1/commit/1e0ac892821eb44b19483d38ea27a9c11e55eefa">1e0ac892821eb44b19483d38ea27a9c11e55eefa</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package socket.io before 2.4.0 are vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default. 
<p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28481>CVE-2020-28481</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: 2.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in socket io tgz cve medium severity vulnerability vulnerable library socket io tgz node js realtime framework server library home page a href path to dependency file mws restaurant stage package json path to vulnerable library mws restaurant stage node modules socket io package json dependency hierarchy browser sync tgz root library x socket io tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package socket io before are vulnerable to insecure defaults due to cors misconfiguration all domains are whitelisted by default publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
1,131
2,597,107,632
IssuesEvent
2015-02-21 02:56:07
STEllAR-GROUP/hpx
https://api.github.com/repos/STEllAR-GROUP/hpx
closed
Crashing in hpx::parcelset::policies::mpi::connection_handler::handle_messages() on SuperMIC
category: parcel transport type: defect
I am getting crashes when I run my fmmx code on SuperMIC on more than one node. This is using only host processors. The last HPX call on the stack (that I can tell) is hpx::parcelset::policies::mpi::connection_handler::handle_messages(). The stack trace is here: https://gist.github.com/dmarce1/fc69d303159b2744115a The code I am running is here: https://github.com/dmarce1/xtree
1.0
Crashing in hpx::parcelset::policies::mpi::connection_handler::handle_messages() on SuperMIC - I am getting crashes when I run my fmmx code on SuperMIC on more than one node. This is using only host processors. The last HPX call on the stack (that I can tell) is hpx::parcelset::policies::mpi::connection_handler::handle_messages(). The stack trace is here: https://gist.github.com/dmarce1/fc69d303159b2744115a The code I am running is here: https://github.com/dmarce1/xtree
defect
crashing in hpx parcelset policies mpi connection handler handle messages on supermic i am getting crashes when i run my fmmx code on supermic on more than one node this is using only host processors the last hpx call on the stack that i can tell is hpx parcelset policies mpi connection handler handle messages the stack trace is here the code i am running is here
1
15,765
2,869,062,522
IssuesEvent
2015-06-05 23:01:43
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Null exception thrown in changeNotifier after it creates the object which should not make it null
Area-Pkg Pkg-Observe Priority-Unassigned Triaged Type-Defect
*This issue was originally filed by ir...&#064;google.com* _____ **What steps will reproduce the problem?** 1. Execute code that invokes ChangeNotifier.changes 2. In certain cases, the following exception is produced: The null object does not have a getter 'stream'. NoSuchMethodError: method not found: 'stream' Receiver: null Arguments: [] STACKTRACE: #­0 Object.noSuchMethod (dart:core-patch/object_patch.dart:45) #­1 ChangeNotifier.changes (package:observe/src/change_notifier.dart:29:21) For reference, line 24-30 from change_notifier.dart: &nbsp;&nbsp;Stream&lt;List&lt;ChangeRecord&gt;&gt; get changes { &nbsp;&nbsp;&nbsp;&nbsp;if (_changes == null) { &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\_changes = new StreamController.broadcast(sync: true, &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;onListen: observed, onCancel: unobserved); &nbsp;&nbsp;&nbsp;&nbsp;} &nbsp;&nbsp;&nbsp;&nbsp;return \_changes.stream; &nbsp;&nbsp;} **What is the expected output? What do you see instead?** \_changes should never be null, so no exception should be thrown. **What version of the product are you using?** Dartium Version 37.0.2062.0 (287872) (64-bit) **On what operating system?** Linux **What browser (if applicable)?** Dartium Version 37.0.2062.0 (287872) (64-bit) **Please provide any additional information below.** C1
1.0
Null exception thrown in changeNotifier after it creates the object which should not make it null - *This issue was originally filed by ir...&#064;google.com* _____ **What steps will reproduce the problem?** 1. Execute code that invokes ChangeNotifier.changes 2. In certain cases, the following exception is produced: The null object does not have a getter 'stream'. NoSuchMethodError: method not found: 'stream' Receiver: null Arguments: [] STACKTRACE: #­0 Object.noSuchMethod (dart:core-patch/object_patch.dart:45) #­1 ChangeNotifier.changes (package:observe/src/change_notifier.dart:29:21) For reference, line 24-30 from change_notifier.dart: &nbsp;&nbsp;Stream&lt;List&lt;ChangeRecord&gt;&gt; get changes { &nbsp;&nbsp;&nbsp;&nbsp;if (_changes == null) { &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\_changes = new StreamController.broadcast(sync: true, &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;onListen: observed, onCancel: unobserved); &nbsp;&nbsp;&nbsp;&nbsp;} &nbsp;&nbsp;&nbsp;&nbsp;return \_changes.stream; &nbsp;&nbsp;} **What is the expected output? What do you see instead?** \_changes should never be null, so no exception should be thrown. **What version of the product are you using?** Dartium Version 37.0.2062.0 (287872) (64-bit) **On what operating system?** Linux **What browser (if applicable)?** Dartium Version 37.0.2062.0 (287872) (64-bit) **Please provide any additional information below.** C1
defect
null exception thrown in changenotifier after it creates the object which should not make it null this issue was originally filed by ir google com what steps will reproduce the problem execute code that invokes changenotifier changes in certain cases the following exception is produced the null object does not have a getter stream nosuchmethoderror method not found stream receiver null arguments stacktrace ­ object nosuchmethod dart core patch object patch dart ­ changenotifier changes package observe src change notifier dart for reference line from change notifier dart nbsp nbsp stream lt list lt changerecord gt gt get changes nbsp nbsp nbsp nbsp if changes null nbsp nbsp nbsp nbsp nbsp nbsp changes new streamcontroller broadcast sync true nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp onlisten observed oncancel unobserved nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp return changes stream nbsp nbsp what is the expected output what do you see instead changes should never be null so no exception should be thrown what version of the product are you using dartium version bit on what operating system linux what browser if applicable dartium version bit please provide any additional information below
1
305,044
9,358,730,796
IssuesEvent
2019-04-02 03:43:00
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Refactor kube-apiserver flags into constants
kind/cleanup kind/feature priority/awaiting-more-evidence sig/api-machinery
**What would you like to be added**: Refactor all flags in kube-apiserver package into constants example: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go#L128 The issue is a continue of work related to those issues: https://github.com/kubernetes/kubeadm/issues/1336 https://github.com/kubernetes/kubeadm/issues/1333 Open questions: We have few Todo's in this package, like https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go#L132 or https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go#L175 can we resolve them? We have a lot of hardcodes here https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/aggregator.go#L242 should we do something to them? /kind cleanup /priority important-longterm /help-wanted
1.0
Refactor kube-apiserver flags into constants - **What would you like to be added**: Refactor all flags in kube-apiserver package into constants example: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go#L128 The issue is a continue of work related to those issues: https://github.com/kubernetes/kubeadm/issues/1336 https://github.com/kubernetes/kubeadm/issues/1333 Open questions: We have few Todo's in this package, like https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go#L132 or https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go#L175 can we resolve them? We have a lot of hardcodes here https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/aggregator.go#L242 should we do something to them? /kind cleanup /priority important-longterm /help-wanted
non_defect
refactor kube apiserver flags into constants what would you like to be added refactor all flags in kube apiserver package into constants example the issue is a continue of work related to those issues open questions we have few todo s in this package like or can we resolve them we have a lot of hardcodes here should we do something to them kind cleanup priority important longterm help wanted
0
78,238
27,387,593,562
IssuesEvent
2023-02-28 14:19:50
phake/phake
https://api.github.com/repos/phake/phake
closed
Cannot pass values by reference when using Phake::makeVisible()
Defect
``` php class Foo { protected function bar(array &$errors) { // do stuff } } ``` ``` php $errors = []; $foo = Phake::partialMock('Foo'); Phake::makeVisible($foo)->bar($error); // asserts ``` The above code fails complaining about argument 1 is not a reference and instead is a value. Making `bar()` public and not using `Phake::makeVisible()` works as intended.
1.0
Cannot pass values by reference when using Phake::makeVisible() - ``` php class Foo { protected function bar(array &$errors) { // do stuff } } ``` ``` php $errors = []; $foo = Phake::partialMock('Foo'); Phake::makeVisible($foo)->bar($error); // asserts ``` The above code fails complaining about argument 1 is not a reference and instead is a value. Making `bar()` public and not using `Phake::makeVisible()` works as intended.
defect
cannot pass values by reference when using phake makevisible php class foo protected function bar array errors do stuff php errors foo phake partialmock foo phake makevisible foo bar error asserts the above code fails complaining about argument is not a reference and instead is a value making bar public and not using phake makevisible works as intended
1
16,261
2,886,378,849
IssuesEvent
2015-06-12 07:50:42
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
reopened
Add os_version to the testing script to enable suppressions on individual os/browser combinations
Accepted Area-Infrastructure Priority-High Type-Defect
It is currently not possible to specify something like windows10 ie10 in the status file, which makes us mark stuff as flaky that is not really
1.0
Add os_version to the testing script to enable suppressions on individual os/browser combinations - It is currently not possible to specify something like windows10 ie10 in the status file, which makes us mark stuff as flaky that is not really
defect
add os version to the testing script to enable suppressions on individual os browser combinations it is currently not possible to specify something like in the status file which makes us mark stuff as flaky that is not really
1
4,526
4,415,296,665
IssuesEvent
2016-08-14 00:12:03
orientechnologies/orientdb
https://api.github.com/repos/orientechnologies/orientdb
closed
Profiler: redesign the profiler to be less expensive in terms of CPU
performance
The profiler is an old component of OrientDB. The goal was to store all the data to be provided to the user. The problem is that it takes a lot of CPU because it has been designed with multiple centralized concurrent maps = high contention rate. We should redesign it by keep statistics locally to the component: - create the OProfilerAgent interface - create the OProfilerAgentSynchronized as thread safe and - create the OProfilerAgentNotSynchronized as lock-free implementation - install on each component to monitor one of these 2 agent classes, based on the fact that the monitored component needs it synchronized or not - create an interface OProfilable with only the method: `getProfilerAgent()` that returns the agent - some components, like OStorage, should implement it The OProfilerAgent interface could be like this: ```java public class OProfilerAgent{ stopChrono(long startTime, String metricName); getChronos(Map<String,Long> statsToUpdate); } ``` In this way every time the metrics are requested, the EE OProfiler will call on all the registered agents the method `getChronos()` by passing global map. Each component simply add the metrics to the map. At the end, the map is populated with all the values from all the components. The final map is not synchronized and is populated agent by agent. Doesn't matter if collecting data takes few ns more, the important is to avoid centralized locking.
True
Profiler: redesign the profiler to be less expensive in terms of CPU - The profiler is an old component of OrientDB. The goal was to store all the data to be provided to the user. The problem is that it takes a lot of CPU because it has been designed with multiple centralized concurrent maps = high contention rate. We should redesign it by keep statistics locally to the component: - create the OProfilerAgent interface - create the OProfilerAgentSynchronized as thread safe and - create the OProfilerAgentNotSynchronized as lock-free implementation - install on each component to monitor one of these 2 agent classes, based on the fact that the monitored component needs it synchronized or not - create an interface OProfilable with only the method: `getProfilerAgent()` that returns the agent - some components, like OStorage, should implement it The OProfilerAgent interface could be like this: ```java public class OProfilerAgent{ stopChrono(long startTime, String metricName); getChronos(Map<String,Long> statsToUpdate); } ``` In this way every time the metrics are requested, the EE OProfiler will call on all the registered agents the method `getChronos()` by passing global map. Each component simply add the metrics to the map. At the end, the map is populated with all the values from all the components. The final map is not synchronized and is populated agent by agent. Doesn't matter if collecting data takes few ns more, the important is to avoid centralized locking.
non_defect
profiler redesign the profiler to be less expensive in terms of cpu the profiler is an old component of orientdb the goal was to store all the data to be provided to the user the problem is that it takes a lot of cpu because it has been designed with multiple centralized concurrent maps high contention rate we should redesign it by keep statistics locally to the component create the oprofileragent interface create the oprofileragentsynchronized as thread safe and create the oprofileragentnotsynchronized as lock free implementation install on each component to monitor one of these agent classes based on the fact that the monitored component needs it synchronized or not create an interface oprofilable with only the method getprofileragent that returns the agent some components like ostorage should implement it the oprofileragent interface could be like this java public class oprofileragent stopchrono long starttime string metricname getchronos map statstoupdate in this way every time the metrics are requested the ee oprofiler will call on all the registered agents the method getchronos by passing global map each component simply add the metrics to the map at the end the map is populated with all the values from all the components the final map is not synchronized and is populated agent by agent doesn t matter if collecting data takes few ns more the important is to avoid centralized locking
0
7,893
2,611,058,236
IssuesEvent
2015-02-27 00:26:49
alistairreilly/andors-trail
https://api.github.com/repos/alistairreilly/andors-trail
closed
Cant talk to Umar
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1.done all missions (i think) 2.talked to Buccus 3.try to talk to Umar What is the expected output? What do you see instead? i am suppose tot alk to umar, but cant enter the hatch. I always get the message "this part of te map is not done yet, please come back in a later version of the game" What version of the product are you using? On what operating system? 6.7 on an Xperia X10 with with 2.1 Update Please provide any additional information below. i had an older version, and finally got to download the 6.7 here from your site, hoping i could finally continue, but i still cant enter the hatch. So as i have done every other mission, unless there is one with the snake pit, im stuck, because i cant fulfill nocmars task to kill undead either. if you need a safegame, please inform me how to send it to you. ``` Original issue reported on code.google.com by `ivod...@googlemail.com` on 26 Jan 2011 at 11:51
1.0
Cant talk to Umar - ``` What steps will reproduce the problem? 1.done all missions (i think) 2.talked to Buccus 3.try to talk to Umar What is the expected output? What do you see instead? i am suppose tot alk to umar, but cant enter the hatch. I always get the message "this part of te map is not done yet, please come back in a later version of the game" What version of the product are you using? On what operating system? 6.7 on an Xperia X10 with with 2.1 Update Please provide any additional information below. i had an older version, and finally got to download the 6.7 here from your site, hoping i could finally continue, but i still cant enter the hatch. So as i have done every other mission, unless there is one with the snake pit, im stuck, because i cant fulfill nocmars task to kill undead either. if you need a safegame, please inform me how to send it to you. ``` Original issue reported on code.google.com by `ivod...@googlemail.com` on 26 Jan 2011 at 11:51
defect
cant talk to umar what steps will reproduce the problem done all missions i think talked to buccus try to talk to umar what is the expected output what do you see instead i am suppose tot alk to umar but cant enter the hatch i always get the message this part of te map is not done yet please come back in a later version of the game what version of the product are you using on what operating system on an xperia with with update please provide any additional information below i had an older version and finally got to download the here from your site hoping i could finally continue but i still cant enter the hatch so as i have done every other mission unless there is one with the snake pit im stuck because i cant fulfill nocmars task to kill undead either if you need a safegame please inform me how to send it to you original issue reported on code google com by ivod googlemail com on jan at
1
300,158
9,206,202,191
IssuesEvent
2019-03-08 13:02:59
forpdi/forpdi
https://api.github.com/repos/forpdi/forpdi
opened
Erro na edição de unidades
ForRisco bug mediumpriority
Quando acesso para editar uma unidade não tem o campo para poder editar o nome da unidade criada ![sem titulo](https://user-images.githubusercontent.com/28953578/54030098-45ddce80-4189-11e9-9ea7-642d5cd539ef.png)
1.0
Erro na edição de unidades - Quando acesso para editar uma unidade não tem o campo para poder editar o nome da unidade criada ![sem titulo](https://user-images.githubusercontent.com/28953578/54030098-45ddce80-4189-11e9-9ea7-642d5cd539ef.png)
non_defect
erro na edição de unidades quando acesso para editar uma unidade não tem o campo para poder editar o nome da unidade criada
0
81,974
31,837,636,921
IssuesEvent
2023-09-14 14:24:08
vector-im/element-desktop
https://api.github.com/repos/vector-im/element-desktop
closed
segfault on wayland
T-Defect Z-Wayland
### Steps to reproduce No Nightly as in https://github.com/vector-im/element-desktop/issues/1026 No mixed-scaled dual monitor as in https://github.com/vector-im/element-desktop/issues/873 I dist-upgarded debian sid about a week ago and element segfaults on start now. I tried to investigate this issue with strace and packages downgrade, but without success ): ### Outcome ``` % element-desktop --enable-features=UseOzonePlatform --ozone-platform=wayland /home/sergio/.config/Element exists: yes /home/sergio/.config/Riot exists: no Starting auto update with base URL: https://packages.element.io/desktop/update/ Auto update not supported on this platform Fetching translation json for locale: en_EN Changing application language to en Fetching translation json for locale: en Resetting the UI components after locale change Resetting the UI components after locale change Changing application language to en Fetching translation json for locale: en Resetting the UI components after locale change zsh: segmentation fault element-desktop --enable-features=UseOzonePlatform --ozone-platform=wayland ``` ### Operating system debian sid ### Application version 1.11.34 ### How did you install the app? https://packages.element.io/debian ### Homeserver _No response_ ### Will you send logs? Yes
1.0
segfault on wayland - ### Steps to reproduce No Nightly as in https://github.com/vector-im/element-desktop/issues/1026 No mixed-scaled dual monitor as in https://github.com/vector-im/element-desktop/issues/873 I dist-upgarded debian sid about a week ago and element segfaults on start now. I tried to investigate this issue with strace and packages downgrade, but without success ): ### Outcome ``` % element-desktop --enable-features=UseOzonePlatform --ozone-platform=wayland /home/sergio/.config/Element exists: yes /home/sergio/.config/Riot exists: no Starting auto update with base URL: https://packages.element.io/desktop/update/ Auto update not supported on this platform Fetching translation json for locale: en_EN Changing application language to en Fetching translation json for locale: en Resetting the UI components after locale change Resetting the UI components after locale change Changing application language to en Fetching translation json for locale: en Resetting the UI components after locale change zsh: segmentation fault element-desktop --enable-features=UseOzonePlatform --ozone-platform=wayland ``` ### Operating system debian sid ### Application version 1.11.34 ### How did you install the app? https://packages.element.io/debian ### Homeserver _No response_ ### Will you send logs? Yes
defect
segfault on wayland steps to reproduce no nightly as in no mixed scaled dual monitor as in i dist upgarded debian sid about a week ago and element segfaults on start now i tried to investigate this issue with strace and packages downgrade but without success outcome element desktop enable features useozoneplatform ozone platform wayland home sergio config element exists yes home sergio config riot exists no starting auto update with base url auto update not supported on this platform fetching translation json for locale en en changing application language to en fetching translation json for locale en resetting the ui components after locale change resetting the ui components after locale change changing application language to en fetching translation json for locale en resetting the ui components after locale change zsh segmentation fault element desktop enable features useozoneplatform ozone platform wayland operating system debian sid application version how did you install the app homeserver no response will you send logs yes
1
34,890
7,467,609,885
IssuesEvent
2018-04-02 15:56:07
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
Scipy mmwrite incorrectly writes the zeros for skew-symmetric, array matrices
defect scipy.io
When you have a dense, skew-symmetric matrix, and you write it to file using scipy.io.mmwrite and specify that it is a skew-symmetric matrix, the resulting file incorrectly lists the zeros on the diagonal. According to the [matrix market format documentation](http://math.nist.gov/MatrixMarket/formats.html#MMformat), page 9: > Only entries below the main diagonal > are stored in the file. The entries on the main diagonal are zero and > those above the main diagonal are known by symmetry. ### Reproducing code example: ``` from scipy.io import mmwrite matrix = [[0, 1], [-1, 0]] mmwrite("test.txt", matrix, symmetry="skew-symmetric") ``` ### Error message: The resulting matrix is: ``` %%MatrixMarket matrix array integer skew-symmetric % 2 2 0 -1 0 ``` ### Scipy/Numpy/Python version information: ``` 0.19.1 1.13.1 sys.version_info(major=3, minor=6, micro=2, releaselevel='final', serial=0) ```
1.0
Scipy mmwrite incorrectly writes the zeros for skew-symmetric, array matrices - When you have a dense, skew-symmetric matrix, and you write it to file using scipy.io.mmwrite and specify that it is a skew-symmetric matrix, the resulting file incorrectly lists the zeros on the diagonal. According to the [matrix market format documentation](http://math.nist.gov/MatrixMarket/formats.html#MMformat), page 9: > Only entries below the main diagonal > are stored in the file. The entries on the main diagonal are zero and > those above the main diagonal are known by symmetry. ### Reproducing code example: ``` from scipy.io import mmwrite matrix = [[0, 1], [-1, 0]] mmwrite("test.txt", matrix, symmetry="skew-symmetric") ``` ### Error message: The resulting matrix is: ``` %%MatrixMarket matrix array integer skew-symmetric % 2 2 0 -1 0 ``` ### Scipy/Numpy/Python version information: ``` 0.19.1 1.13.1 sys.version_info(major=3, minor=6, micro=2, releaselevel='final', serial=0) ```
defect
scipy mmwrite incorrectly writes the zeros for skew symmetric array matrices when you have a dense skew symmetric matrix and you write it to file using scipy io mmwrite and specify that it is a skew symmetric matrix the resulting file incorrectly lists the zeros on the diagonal according to the page only entries below the main diagonal are stored in the file the entries on the main diagonal are zero and those above the main diagonal are known by symmetry reproducing code example from scipy io import mmwrite matrix mmwrite test txt matrix symmetry skew symmetric error message the resulting matrix is matrixmarket matrix array integer skew symmetric scipy numpy python version information sys version info major minor micro releaselevel final serial
1
55,402
14,439,685,304
IssuesEvent
2020-12-07 14:40:58
dkfans/keeperfx
https://api.github.com/repos/dkfans/keeperfx
closed
Game crash on alt tabbing
Priority-Critical Status-New Type-Defect
On the discord it was reported that Alt+Tabbing could crash the game. This user traced it back himself towards the recently rewritten draw_stripey_line function. He has a consistent reproduction path of the crash: 1) align the view with middle mouse button 2) hover mouse over some wall as if you want to select it for digging 3) press alt tab (played in isometric mode, with high walls) I cannot reproduce it myself. He can, and can also confirm it's not present for him in the released build, only on the recent alpha's. When he disables the draw_stripey_line and returns to _DK_draw_stripey_line the crash is fixed. His investigation of the code: > stripey line fails here: remainder = start_b_dist_from_window * distance_a % distance_b; > distance_b == 0 > also this fails only if view is aligned, press middle mouse button > the problem is that anything % 0 is undefined behavior
1.0
Game crash on alt tabbing - On the discord it was reported that Alt+Tabbing could crash the game. This user traced it back himself towards the recently rewritten draw_stripey_line function. He has a consistent reproduction path of the crash: 1) align the view with middle mouse button 2) hover mouse over some wall as if you want to select it for digging 3) press alt tab (played in isometric mode, with high walls) I cannot reproduce it myself. He can, and can also confirm it's not present for him in the released build, only on the recent alpha's. When he disables the draw_stripey_line and returns to _DK_draw_stripey_line the crash is fixed. His investigation of the code: > stripey line fails here: remainder = start_b_dist_from_window * distance_a % distance_b; > distance_b == 0 > also this fails only if view is aligned, press middle mouse button > the problem is that anything % 0 is undefined behavior
defect
game crash on alt tabbing on the discord it was reported that alt tabbing could crash the game this user traced it back himself towards the recently rewritten draw stripey line function he has a consistent reproduction path of the crash align the view with middle mouse button hover mouse over some wall as if you want to select it for digging press alt tab played in isometric mode with high walls i cannot reproduce it myself he can and can also confirm it s not present for him in the released build only on the recent alpha s when he disables the draw stripey line and returns to dk draw stripey line the crash is fixed his investigation of the code stripey line fails here remainder start b dist from window distance a distance b distance b also this fails only if view is aligned press middle mouse button the problem is that anything is undefined behavior
1
285,228
24,652,854,503
IssuesEvent
2022-10-17 20:14:19
scikit-hep/vector
https://api.github.com/repos/scikit-hep/vector
closed
Test notebooks
help wanted tests hacktoberfest
It would be nice to test the example notebooks on each CI run to ensure they are not outdated.
1.0
Test notebooks - It would be nice to test the example notebooks on each CI run to ensure they are not outdated.
non_defect
test notebooks it would be nice to test the example notebooks on each ci run to ensure they are not outdated
0
71,328
23,542,299,565
IssuesEvent
2022-08-20 15:39:23
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Answering a video call (from Android!) does not send an SDP answer when there is no camera
T-Defect
### Steps to reproduce 1. Start a video call from Element Android to a user using Element Web 2. Answer the call on Element Web ### Outcome #### What did you expect? * To see the caller's video feed * *this is what happens when you initiate a call from Element Web*(!) #### What happened instead? * The call remains forever in the 'Connecting' state until it eventually times out (the caller gets a message saying the callee didn't answer) * Firefox's `about:webrtc` shows that no Local SDP Answer is being sent * Firefox's console is showing lots of activity. It appears to the naïve eye that Element is stuck in a loop trying to request permission to use the camera, but it's automatically failing. ### Operating system Linux ### Application version Element version: 1.11.2 Olm version: 3.2.12 ### How did you install the app? Ubuntu PPA ### Homeserver librepush.net & other internal use homeserver ### Will you send logs? Yes
1.0
Answering a video call (from Android!) does not send an SDP answer when there is no camera - ### Steps to reproduce 1. Start a video call from Element Android to a user using Element Web 2. Answer the call on Element Web ### Outcome #### What did you expect? * To see the caller's video feed * *this is what happens when you initiate a call from Element Web*(!) #### What happened instead? * The call remains forever in the 'Connecting' state until it eventually times out (the caller gets a message saying the callee didn't answer) * Firefox's `about:webrtc` shows that no Local SDP Answer is being sent * Firefox's console is showing lots of activity. It appears to the naïve eye that Element is stuck in a loop trying to request permission to use the camera, but it's automatically failing. ### Operating system Linux ### Application version Element version: 1.11.2 Olm version: 3.2.12 ### How did you install the app? Ubuntu PPA ### Homeserver librepush.net & other internal use homeserver ### Will you send logs? Yes
defect
answering a video call from android does not send an sdp answer when there is no camera steps to reproduce start a video call from element android to a user using element web answer the call on element web outcome what did you expect to see the caller s video feed this is what happens when you initiate a call from element web what happened instead the call remains forever in the connecting state until it eventually times out the caller gets a message saying the callee didn t answer firefox s about webrtc shows that no local sdp answer is being sent firefox s console is showing lots of activity it appears to the naïve eye that element is stuck in a loop trying to request permission to use the camera but it s automatically failing operating system linux application version element version olm version how did you install the app ubuntu ppa homeserver librepush net other internal use homeserver will you send logs yes
1
79,232
28,053,865,407
IssuesEvent
2023-03-29 08:05:00
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
Regression when using INSERT .. RETURNING pre MariaDB 10.5
T: Defect P: Urgent C: DB: MariaDB E: Professional Edition E: Enterprise Edition
### Expected behavior for a simple table like this ```sql CREATE TABLE avro_schema ( id INTEGER NOT NULL AUTO_INCREMENT, md5 VARCHAR(32) NOT NULL, schema_json TEXT NOT NULL ); ``` and jooq configured w/ spring.jooq.sql-dialect=MARIADB_10_3 using the following code ```kotlin val r = context().newRecord(AVRO_SCHEMA).setMd5("").setSchemaJson("") r.insert() ``` inserts a new record and the record's id is available from r.getId() (works as expected in jooq 3.17.8) ### Actual behavior > bad SQL grammar [insert into `avro_schema` (`md5`, `schema_json`) values (?, ?) returning `id`]; nested exception is java.sql.SQLSyntaxErrorException: (conn=45420) You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'returning `id`' at line 1 According to https://mariadb.com/kb/en/insertreturning/ , insert...returning is only available in mariadb 10.5 and later ### Steps to reproduce the problem use mariadb v10.4, generated code for table with auto generated key, configure jooq with mariadb dialect before 10.5, and try to insert a new record. ### jOOQ Version jOOQ Prof. 3.18.0, 3.18.1 ### Database product and version 10.4.28-MariaDB-1:10.4.28+maria~ubu2004 ### Java Version _No response_ ### OS Version _No response_ ### JDBC driver name and version (include name if unofficial driver) _No response_
1.0
Regression when using INSERT .. RETURNING pre MariaDB 10.5 - ### Expected behavior for a simple table like this ```sql CREATE TABLE avro_schema ( id INTEGER NOT NULL AUTO_INCREMENT, md5 VARCHAR(32) NOT NULL, schema_json TEXT NOT NULL ); ``` and jooq configured w/ spring.jooq.sql-dialect=MARIADB_10_3 using the following code ```kotlin val r = context().newRecord(AVRO_SCHEMA).setMd5("").setSchemaJson("") r.insert() ``` inserts a new record and the record's id is available from r.getId() (works as expected in jooq 3.17.8) ### Actual behavior > bad SQL grammar [insert into `avro_schema` (`md5`, `schema_json`) values (?, ?) returning `id`]; nested exception is java.sql.SQLSyntaxErrorException: (conn=45420) You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'returning `id`' at line 1 According to https://mariadb.com/kb/en/insertreturning/ , insert...returning is only available in mariadb 10.5 and later ### Steps to reproduce the problem use mariadb v10.4, generated code for table with auto generated key, configure jooq with mariadb dialect before 10.5, and try to insert a new record. ### jOOQ Version jOOQ Prof. 3.18.0, 3.18.1 ### Database product and version 10.4.28-MariaDB-1:10.4.28+maria~ubu2004 ### Java Version _No response_ ### OS Version _No response_ ### JDBC driver name and version (include name if unofficial driver) _No response_
defect
regression when using insert returning pre mariadb expected behavior for a simple table like this sql create table avro schema id integer not null auto increment varchar not null schema json text not null and jooq configured w spring jooq sql dialect mariadb using the following code kotlin val r context newrecord avro schema setschemajson r insert inserts a new record and the record s id is available from r getid works as expected in jooq actual behavior bad sql grammar nested exception is java sql sqlsyntaxerrorexception conn you have an error in your sql syntax check the manual that corresponds to your mariadb server version for the right syntax to use near returning id at line according to insert returning is only available in mariadb and later steps to reproduce the problem use mariadb generated code for table with auto generated key configure jooq with mariadb dialect before and try to insert a new record jooq version jooq prof database product and version mariadb maria java version no response os version no response jdbc driver name and version include name if unofficial driver no response
1
143,987
13,092,329,851
IssuesEvent
2020-08-03 08:23:58
Cactusphere/Cactusphere-100
https://api.github.com/repos/Cactusphere/Cactusphere-100
closed
IoT Central テナントCA証明書検証手順に関して
documentation
Azure IoT Centralの下記のアップデートにより、テナントCA証明書検証手順が変更となりました。 https://azure.microsoft.com/ja-jp/updates/azure-iot-central-feature-updates-june-2020/ そのため、Cactusphere 100シリーズ ソフトウェアマニュアル及び [日本語版Azure IoT Centralドキュメント](https://docs.microsoft.com/ja-jp/azure/iot-central/core/concepts-get-connected#add-and-verify-a-root-or-intermediate-certificate)に記載されている手順では証明書の管理画面が表示できなくなっております。 Cactusphere 100シリーズ ソフトウェアマニュアルの記載については今後のアップデートで修正予定です。 それまではお手数ですが、[英語版Azure IoT Centralドキュメント](https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-get-connected#connect-devices-using-x509-certificates)を参照していただくようお願いいたします。
1.0
IoT Central テナントCA証明書検証手順に関して - Azure IoT Centralの下記のアップデートにより、テナントCA証明書検証手順が変更となりました。 https://azure.microsoft.com/ja-jp/updates/azure-iot-central-feature-updates-june-2020/ そのため、Cactusphere 100シリーズ ソフトウェアマニュアル及び [日本語版Azure IoT Centralドキュメント](https://docs.microsoft.com/ja-jp/azure/iot-central/core/concepts-get-connected#add-and-verify-a-root-or-intermediate-certificate)に記載されている手順では証明書の管理画面が表示できなくなっております。 Cactusphere 100シリーズ ソフトウェアマニュアルの記載については今後のアップデートで修正予定です。 それまではお手数ですが、[英語版Azure IoT Centralドキュメント](https://docs.microsoft.com/en-us/azure/iot-central/core/concepts-get-connected#connect-devices-using-x509-certificates)を参照していただくようお願いいたします。
non_defect
iot central テナントca証明書検証手順に関して azure iot centralの下記のアップデートにより、テナントca証明書検証手順が変更となりました。 そのため、cactusphere ソフトウェアマニュアル及び cactusphere ソフトウェアマニュアルの記載については今後のアップデートで修正予定です。 それまではお手数ですが、
0
772,550
27,126,697,965
IssuesEvent
2023-02-16 06:11:40
space-wizards/space-station-14
https://api.github.com/repos/space-wizards/space-station-14
closed
Mobs start spinning randomly
Issue: Bug Priority: 2-Before Release Difficulty: 2-Medium Bug: Needs Replicating
## Description <!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. --> If you have joined a game lately, you will see how sometimes a mob is just spinning for seemingly no reason, at a different speed. **Reproduction** <!-- Include the steps to reproduce if applicable. --> I got no hecking clue **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> https://user-images.githubusercontent.com/43253663/181996563-90752f9f-aa16-4b6b-a7b4-1ca9d9021b1e.mp4 **Additional context** <!-- Add any other context about the problem here. Anything you think is related to the issue. -->
1.0
Mobs start spinning randomly - ## Description <!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. --> If you have joined a game lately, you will see how sometimes a mob is just spinning for seemingly no reason, at a different speed. **Reproduction** <!-- Include the steps to reproduce if applicable. --> I got no hecking clue **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> https://user-images.githubusercontent.com/43253663/181996563-90752f9f-aa16-4b6b-a7b4-1ca9d9021b1e.mp4 **Additional context** <!-- Add any other context about the problem here. Anything you think is related to the issue. -->
non_defect
mobs start spinning randomly description if you have joined a game lately you will see how sometimes a mob is just spinning for seemingly no reason at a different speed reproduction i got no hecking clue screenshots additional context
0
61,641
17,023,746,334
IssuesEvent
2021-07-03 03:37:20
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
[water] Incomplete text rendering
Component: mapnik Priority: minor Resolution: worksforme Type: defect
**[Submitted to the original trac issue database at 1.00pm, Saturday, 17th September 2011]** There is an error in the text rendering of a river for quite some time. http://osm.org/go/0JNmMtjsA-- The text "Schwarzer Regen" has a gap and it looks like it's at the corner of a map tile.
1.0
[water] Incomplete text rendering - **[Submitted to the original trac issue database at 1.00pm, Saturday, 17th September 2011]** There is an error in the text rendering of a river for quite some time. http://osm.org/go/0JNmMtjsA-- The text "Schwarzer Regen" has a gap and it looks like it's at the corner of a map tile.
defect
incomplete text rendering there is an error in the text rendering of a river for quite some time the text schwarzer regen has a gap and it looks like it s at the corner of a map tile
1
204,990
15,963,254,258
IssuesEvent
2021-04-16 03:25:05
kubernetes-sigs/kustomize
https://api.github.com/repos/kubernetes-sigs/kustomize
closed
Doc: Define stabilization (v1) of kustomize build module
kind/documentation lifecycle/rotten priority/important-soon
Mainly what to deprecate, cleanup or move to `internal`. No new features - this is a stabilization.
1.0
Doc: Define stabilization (v1) of kustomize build module - Mainly what to deprecate, cleanup or move to `internal`. No new features - this is a stabilization.
non_defect
doc define stabilization of kustomize build module mainly what to deprecate cleanup or move to internal no new features this is a stabilization
0
34,441
7,451,536,382
IssuesEvent
2018-03-29 03:33:58
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
AISi andmemahu vähendamisel lähevad kaduma mõned andmed
P: high R: duplicate T: defect
**Reported by sven syld on 8 Apr 2013 14:21 UTC** '''Object''' AIS importer '''Description''' Importeri käimise kiiruse huvides on andmete hulk tõmmatud 90% väiksemaks. Alamobjektide (nt KÜ pealkirjad) impordil kontrollitakse, et KÜ oleks baasis olemas. Import ise toimub 1'000'000 portsude kaupa. Kui importer näeb, et portsus oli vähem kui 1M kirjet, arvab ta, et rohkem kirjeid ei olegi. Aga väiksem kirjete arv võis olla tingitud hoopis eelpoolmainitud põhjusel. '''Todo''' Parandada asi nii, et tõmmataks sisse kõik kirjed. Enne importi lugeda kokku kõikide kirjete arv (nt 8M), seejärel teha 1Mx8=8M iteratsiooni.
1.0
AISi andmemahu vähendamisel lähevad kaduma mõned andmed - **Reported by sven syld on 8 Apr 2013 14:21 UTC** '''Object''' AIS importer '''Description''' Importeri käimise kiiruse huvides on andmete hulk tõmmatud 90% väiksemaks. Alamobjektide (nt KÜ pealkirjad) impordil kontrollitakse, et KÜ oleks baasis olemas. Import ise toimub 1'000'000 portsude kaupa. Kui importer näeb, et portsus oli vähem kui 1M kirjet, arvab ta, et rohkem kirjeid ei olegi. Aga väiksem kirjete arv võis olla tingitud hoopis eelpoolmainitud põhjusel. '''Todo''' Parandada asi nii, et tõmmataks sisse kõik kirjed. Enne importi lugeda kokku kõikide kirjete arv (nt 8M), seejärel teha 1Mx8=8M iteratsiooni.
defect
aisi andmemahu vähendamisel lähevad kaduma mõned andmed reported by sven syld on apr utc object ais importer description importeri käimise kiiruse huvides on andmete hulk tõmmatud väiksemaks alamobjektide nt kü pealkirjad impordil kontrollitakse et kü oleks baasis olemas import ise toimub portsude kaupa kui importer näeb et portsus oli vähem kui kirjet arvab ta et rohkem kirjeid ei olegi aga väiksem kirjete arv võis olla tingitud hoopis eelpoolmainitud põhjusel todo parandada asi nii et tõmmataks sisse kõik kirjed enne importi lugeda kokku kõikide kirjete arv nt seejärel teha iteratsiooni
1
16,412
2,892,246,966
IssuesEvent
2015-06-15 11:48:43
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
closed
relative imports and nosetests
auto-migrated defect maintainability Priority-Low testing
``` Currently have to do: nosetests-2.7 test_analysis.py:Test_Helanal and not: nosetests-2.7 test_analysis:Test_Helanal because of a relative import problem. Since the wiki docs advertise the version without the .py, it is perhaps suitable to have a consistent policy on whether we will allow Python relative imports in unit test modules. It seems that we can mostly work around relative imports, except for things that were placed in the testing __init__.py ``` Original issue reported on code.google.com by `tyler.je.reddy@gmail.com` on 8 Jul 2014 at 7:39
1.0
relative imports and nosetests - ``` Currently have to do: nosetests-2.7 test_analysis.py:Test_Helanal and not: nosetests-2.7 test_analysis:Test_Helanal because of a relative import problem. Since the wiki docs advertise the version without the .py, it is perhaps suitable to have a consistent policy on whether we will allow Python relative imports in unit test modules. It seems that we can mostly work around relative imports, except for things that were placed in the testing __init__.py ``` Original issue reported on code.google.com by `tyler.je.reddy@gmail.com` on 8 Jul 2014 at 7:39
defect
relative imports and nosetests currently have to do nosetests test analysis py test helanal and not nosetests test analysis test helanal because of a relative import problem since the wiki docs advertise the version without the py it is perhaps suitable to have a consistent policy on whether we will allow python relative imports in unit test modules it seems that we can mostly work around relative imports except for things that were placed in the testing init py original issue reported on code google com by tyler je reddy gmail com on jul at
1
54,146
13,440,675,708
IssuesEvent
2020-09-08 01:43:44
STEllAR-GROUP/phylanx
https://api.github.com/repos/STEllAR-GROUP/phylanx
closed
Can't print array
category: primitives type: defect
The following simple phylanx program (built in release mode) seems to not terminate: ``` im = 31 jm = 284 arr = [[random() for i in range(im)] for j in range(jm)] from phylanx import Phylanx @Phylanx def pr(a): print(a) pr(arr) ```
1.0
Can't print array - The following simple phylanx program (built in release mode) seems to not terminate: ``` im = 31 jm = 284 arr = [[random() for i in range(im)] for j in range(jm)] from phylanx import Phylanx @Phylanx def pr(a): print(a) pr(arr) ```
defect
can t print array the following simple phylanx program built in release mode seems to not terminate im jm arr for j in range jm from phylanx import phylanx phylanx def pr a print a pr arr
1
77,549
27,047,855,856
IssuesEvent
2023-02-13 11:04:08
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
ERROR: syntax error at or near "merge"
T: Defect C: Functionality C: DB: PostgreSQL P: Medium R: Duplicate E: All Editions
### Expected behavior Command mergeInto should emulate merge for PostgreSQL 14 ### Actual behavior ERROR: syntax error at or near "merge" ### Steps to reproduce the problem Code ``` String url = "jdbc:postgresql://db-01:5432/Finanz_V02"; Connection conn = DriverManager.getConnection(url, userName, password); DSLContext ctx = DSL.using(conn, SQLDialect.POSTGRES); logger_.debug ("Dialect: {}", ctx.dialect()); logger_.debug ("Version: {}", ctx.resultQuery("select version()").fetchInto(String.class)); ctx.mergeInto(Z_TEST) .using(ctx.selectOne()) .on(Z_TEST.UNIQUE.equal(0)) .whenMatchedThenUpdate() .set(Z_TEST.VALUE, "v5") .whenNotMatchedThenInsert(Z_TEST.UNIQUE, Z_TEST.VALUE) .values(0, "v6") .execute(); ``` Debug Log ``` 2023-02-11 08:30:49,306 DEBUG [main] [org.mb.apps.AppDb1] Dialect: POSTGRES 2023-02-11 08:30:49,348 INFO [main] [org.jooq.Constants] ... @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Thank you for using jOOQ 3.17.7 2023-02-11 08:30:49,388 DEBUG [main] [org.jooq.tools.LoggerListener] Executing query : select version() 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] Fetched result : +--------------------------------------------------+ 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : |version | 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : +--------------------------------------------------+ 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : |PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) on x8...| 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : +--------------------------------------------------+ 2023-02-11 08:30:49,473 DEBUG [main] [org.jooq.tools.LoggerListener] Fetched row(s) : 1 2023-02-11 08:30:49,474 DEBUG [main] [org.mb.apps.AppDb1] Version: [PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit] 2023-02-11 08:30:49,958 DEBUG [main] [org.jooq.tools.LoggerListener] Executing query 
: merge into "data"."z_test" using (select 1 as "one") on "data"."z_test"."unique" = ? when matched then update set "value" = ? when not matched then insert ("unique", "value") values (?, ?) 2023-02-11 08:30:49,959 DEBUG [main] [org.jooq.tools.LoggerListener] -> with bind values : merge into "data"."z_test" using (select 1 as "one") on "data"."z_test"."unique" = 0 when matched then update set "value" = 'v5' when not matched then insert ("unique", "value") values (0, 'v6') 2023-02-11 08:30:49,973 DEBUG [main] [org.jooq.tools.LoggerListener] Exception org.jooq.exception.DataAccessException: SQL [merge into "data"."z_test" using (select 1 as "one") on "data"."z_test"."unique" = ? when matched then update set "value" = ? when not matched then insert ("unique", "value") values (?, ?)]; ERROR: syntax error at or near "merge" ``` ### jOOQ Version 3.17.7 ### Database product and version PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) ### Java Version V17 ### OS Version Windows 11 ### JDBC driver name and version (include name if unofficial driver) org.postgresql:postgresql:42.5.3
1.0
ERROR: syntax error at or near "merge" - ### Expected behavior Command mergeInto should emulate merge for PostgreSQL 14 ### Actual behavior ERROR: syntax error at or near "merge" ### Steps to reproduce the problem Code ``` String url = "jdbc:postgresql://db-01:5432/Finanz_V02"; Connection conn = DriverManager.getConnection(url, userName, password); DSLContext ctx = DSL.using(conn, SQLDialect.POSTGRES); logger_.debug ("Dialect: {}", ctx.dialect()); logger_.debug ("Version: {}", ctx.resultQuery("select version()").fetchInto(String.class)); ctx.mergeInto(Z_TEST) .using(ctx.selectOne()) .on(Z_TEST.UNIQUE.equal(0)) .whenMatchedThenUpdate() .set(Z_TEST.VALUE, "v5") .whenNotMatchedThenInsert(Z_TEST.UNIQUE, Z_TEST.VALUE) .values(0, "v6") .execute(); ``` Debug Log ``` 2023-02-11 08:30:49,306 DEBUG [main] [org.mb.apps.AppDb1] Dialect: POSTGRES 2023-02-11 08:30:49,348 INFO [main] [org.jooq.Constants] ... @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Thank you for using jOOQ 3.17.7 2023-02-11 08:30:49,388 DEBUG [main] [org.jooq.tools.LoggerListener] Executing query : select version() 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] Fetched result : +--------------------------------------------------+ 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : |version | 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : +--------------------------------------------------+ 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : |PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) on x8...| 2023-02-11 08:30:49,470 DEBUG [main] [org.jooq.tools.LoggerListener] : +--------------------------------------------------+ 2023-02-11 08:30:49,473 DEBUG [main] [org.jooq.tools.LoggerListener] Fetched row(s) : 1 2023-02-11 08:30:49,474 DEBUG [main] [org.mb.apps.AppDb1] Version: [PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit] 2023-02-11 08:30:49,958 DEBUG [main] 
[org.jooq.tools.LoggerListener] Executing query : merge into "data"."z_test" using (select 1 as "one") on "data"."z_test"."unique" = ? when matched then update set "value" = ? when not matched then insert ("unique", "value") values (?, ?) 2023-02-11 08:30:49,959 DEBUG [main] [org.jooq.tools.LoggerListener] -> with bind values : merge into "data"."z_test" using (select 1 as "one") on "data"."z_test"."unique" = 0 when matched then update set "value" = 'v5' when not matched then insert ("unique", "value") values (0, 'v6') 2023-02-11 08:30:49,973 DEBUG [main] [org.jooq.tools.LoggerListener] Exception org.jooq.exception.DataAccessException: SQL [merge into "data"."z_test" using (select 1 as "one") on "data"."z_test"."unique" = ? when matched then update set "value" = ? when not matched then insert ("unique", "value") values (?, ?)]; ERROR: syntax error at or near "merge" ``` ### jOOQ Version 3.17.7 ### Database product and version PostgreSQL 14.6 (Debian 14.6-1.pgdg110+1) ### Java Version V17 ### OS Version Windows 11 ### JDBC driver name and version (include name if unofficial driver) org.postgresql:postgresql:42.5.3
defect
error syntax error at or near merge expected behavior command mergeinto should emulate merge for postgresql actual behavior error syntax error at or near merge steps to reproduce the problem code string url jdbc postgresql db finanz connection conn drivermanager getconnection url username password dslcontext ctx dsl using conn sqldialect postgres logger debug dialect ctx dialect logger debug version ctx resultquery select version fetchinto string class ctx mergeinto z test using ctx selectone on z test unique equal whenmatchedthenupdate set z test value whennotmatchedtheninsert z test unique z test value values execute debug log debug dialect postgres info thank you for using jooq debug executing query select version debug fetched result debug version debug debug postgresql debian on debug debug fetched row s debug version debug executing query merge into data z test using select as one on data z test unique when matched then update set value when not matched then insert unique value values debug with bind values merge into data z test using select as one on data z test unique when matched then update set value when not matched then insert unique value values debug exception org jooq exception dataaccessexception sql error syntax error at or near merge jooq version database product and version postgresql debian java version os version windows jdbc driver name and version include name if unofficial driver org postgresql postgresql
1
725,366
24,960,298,980
IssuesEvent
2022-11-01 15:02:17
Qiskit/qiskit-machine-learning
https://api.github.com/repos/Qiskit/qiskit-machine-learning
opened
Add unit tests that run on either backend-based primitives or Aer primitives
priority: medium type: enhancement
### What is the expected enhancement? New QML classes that leverage Qiskit primitives are tested only on the reference implementation available in Terra, e.g. `Sampler` and `Estimator`. This issue is created to discuss and implement new unit tests for the classes that are built on top of primitives, e.g. `SamplerQNN`, `EstimatorQNN`, quantum kernel classes.
1.0
Add unit tests that run on either backend-based primitives or Aer primitives - ### What is the expected enhancement? New QML classes that leverage Qiskit primitives are tested only on the reference implementation available in Terra, e.g. `Sampler` and `Estimator`. This issue is created to discuss and implement new unit tests for the classes that are built on top of primitives, e.g. `SamplerQNN`, `EstimatorQNN`, quantum kernel classes.
non_defect
add unit tests that run on either backend based primitives or aer primitives what is the expected enhancement new qml classes that leverage qiskit primitives are tested only on the reference implementation available in terra e g sampler and estimator this issue is created to discuss and implement new unit tests for the classes that are built on top of primitives e g samplerqnn estimatorqnn quantum kernel classes
0
508,607
14,703,600,483
IssuesEvent
2021-01-04 15:16:41
mozilla/addons-server
https://api.github.com/repos/mozilla/addons-server
closed
New listed version submissions do not trigger an add-on sync with Salesforce
component: devhub priority: p3
**Prerequisites:** AMO developer account exists in Salesforce ### Describe the problem and steps to reproduce it: 1. Log into AMO -dev with a synced developer account (see Prerequisites) 2. Upload a new listed add-on 3. Approve the add-on with a reviewer account 4. Make sure the new add-on is listed under the developer's account info in Salesforce - make a note of the 'AMO Current Version' in the add-on details 5. Upload a new listed version for this add-on on AMO 6. Once the version is auto-approved, verify if the new version string is reflected in Salesforce 'AMO Current Version' field ### What happened? Salesforce is still showing the previous version number ### What did you expect to happen? Salesforce should be displaying the current listed version number ### Anything else we should know? - the issue does not apply to unlisted add-ons/unlisted versions - the new listed version number is updated only when certain add-on fields (name, summary etc) are manually updated on AMO Example: https://addons-dev.allizom.org/en-US/firefox/addon/test-for-listed-sf/ 1. Add-on details on AMO: ![image](https://user-images.githubusercontent.com/31961530/103010112-6e8eae00-4540-11eb-8724-d732b1fa53e1.png) 2. Add-on details in Salesforce: ![image](https://user-images.githubusercontent.com/31961530/103010150-7e0df700-4540-11eb-93c8-042aaa982263.png)
1.0
New listed version submissions do not trigger an add-on sync with Salesforce - **Prerequisites:** AMO developer account exists in Salesforce ### Describe the problem and steps to reproduce it: 1. Log into AMO -dev with a synced developer account (see Prerequisites) 2. Upload a new listed add-on 3. Approve the add-on with a reviewer account 4. Make sure the new add-on is listed under the developer's account info in Salesforce - make a note of the 'AMO Current Version' in the add-on details 5. Upload a new listed version for this add-on on AMO 6. Once the version is auto-approved, verify if the new version string is reflected in Salesforce 'AMO Current Version' field ### What happened? Salesforce is still showing the previous version number ### What did you expect to happen? Salesforce should be displaying the current listed version number ### Anything else we should know? - the issue does not apply to unlisted add-ons/unlisted versions - the new listed version number is updated only when certain add-on fields (name, summary etc) are manually updated on AMO Example: https://addons-dev.allizom.org/en-US/firefox/addon/test-for-listed-sf/ 1. Add-on details on AMO: ![image](https://user-images.githubusercontent.com/31961530/103010112-6e8eae00-4540-11eb-8724-d732b1fa53e1.png) 2. Add-on details in Salesforce: ![image](https://user-images.githubusercontent.com/31961530/103010150-7e0df700-4540-11eb-93c8-042aaa982263.png)
non_defect
new listed version submissions do not trigger an add on sync with salesforce prerequisites amo developer account exists in salesforce describe the problem and steps to reproduce it log into amo dev with a synced developer account see prerequisites upload a new listed add on approve the add on with a reviewer account make sure the new add on is listed under the developer s account info in salesforce make a note of the amo current version in the add on details upload a new listed version for this add on on amo once the version is auto approved verify if the new version string is reflected in salesforce amo current version field what happened salesforce is still showing the previous version number what did you expect to happen salesforce should be displaying the current listed version number anything else we should know the issue does not apply to unlisted add ons unlisted versions the new listed version number is updated only when certain add on fields name summary etc are manually updated on amo example add on details on amo add on details in salesforce
0
96,193
16,113,282,527
IssuesEvent
2021-04-28 01:59:39
jgeraigery/kibana
https://api.github.com/repos/jgeraigery/kibana
opened
CVE-2021-31597 (Medium) detected in xmlhttprequest-ssl-1.5.5.tgz
security vulnerability
## CVE-2021-31597 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary> <p>XMLHttpRequest for Node</p> <p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p> <p> Dependency Hierarchy: - karma-5.0.2.tgz (Root Library) - socket.io-2.1.1.tgz - socket.io-client-2.1.1.tgz - engine.io-client-3.2.1.tgz - :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected. <p>Publish Date: 2021-04-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p> <p>Release Date: 2021-04-23</p> <p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"xmlhttprequest-ssl","packageVersion":"1.5.5","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"karma:5.0.2;socket.io:2.1.1;socket.io-client:2.1.1;engine.io-client:3.2.1;xmlhttprequest-ssl:1.5.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"xmlhttprequest-ssl - 1.6.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-31597","vulnerabilityDetails":"The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-31597 (Medium) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2021-31597 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary> <p>XMLHttpRequest for Node</p> <p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p> <p> Dependency Hierarchy: - karma-5.0.2.tgz (Root Library) - socket.io-2.1.1.tgz - socket.io-client-2.1.1.tgz - engine.io-client-3.2.1.tgz - :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected. <p>Publish Date: 2021-04-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p> <p>Release Date: 2021-04-23</p> <p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"xmlhttprequest-ssl","packageVersion":"1.5.5","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"karma:5.0.2;socket.io:2.1.1;socket.io-client:2.1.1;engine.io-client:3.2.1;xmlhttprequest-ssl:1.5.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"xmlhttprequest-ssl - 1.6.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-31597","vulnerabilityDetails":"The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_defect
cve medium detected in xmlhttprequest ssl tgz cve medium severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in base branch master vulnerability details the xmlhttprequest ssl package before for node js disables ssl certificate validation by default because rejectunauthorized when the property exists but is undefined is considered to be false within the https request function of node js in other words no certificate is ever rejected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest ssl isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree karma socket io socket io client engine io client xmlhttprequest ssl isminimumfixversionavailable true minimumfixversion xmlhttprequest ssl basebranches vulnerabilityidentifier cve vulnerabilitydetails the xmlhttprequest ssl package before for node js disables ssl certificate validation by default because rejectunauthorized when the property exists but is undefined is considered to be false within the https request function of node js in other words no certificate is ever rejected vulnerabilityurl
0
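The CVE record above describes a Node.js pitfall: in affected versions, passing a `rejectUnauthorized` key whose value is `undefined` to `https.request` disabled certificate validation, because the presence of the key shadowed the secure default. A minimal sketch of the safe forwarding pattern is below; `buildRequestOptions` and its option names are illustrative assumptions, not the actual xmlhttprequest-ssl API.

```javascript
// Hypothetical illustration of the CVE-2021-31597 pitfall described above.
// The unsafe pattern copies the rejectUnauthorized key even when its value
// is undefined; in affected Node.js versions that disabled certificate
// checks. The safer pattern only forwards the flag when the caller set an
// explicit boolean, so the secure default (reject bad certificates) applies
// whenever the key is absent.
function buildRequestOptions(userOpts) {
  const options = { host: 'example.com', port: 443 };

  // Unsafe (what the vulnerable code effectively did):
  // options.rejectUnauthorized = userOpts.rejectUnauthorized;

  // Safer: forward the flag only when it was explicitly provided.
  if (typeof userOpts.rejectUnauthorized === 'boolean') {
    options.rejectUnauthorized = userOpts.rejectUnauthorized;
  }
  return options;
}

const defaults = buildRequestOptions({});
const optOut = buildRequestOptions({ rejectUnauthorized: false });
console.log('rejectUnauthorized' in defaults); // false — key absent, secure default applies
console.log(optOut.rejectUnauthorized);        // false — explicit opt-out preserved
```

The fix shipped in xmlhttprequest-ssl 1.6.1, per the remediation metadata in the record.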
8,658
2,611,534,699
IssuesEvent
2015-02-27 06:05:04
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
opened
Barrels while still falling vertically rather than freely do not bounce off rubber
auto-migrated Priority-Low Type-Defect
``` What steps will reproduce the problem? 1. Place a rubber horizontally. 2. Place a girder horizonally, close to the left of the rubber. It should be in the same height. 3. Place a girder above the middle of both. 4. Place a hedgehog on the top girder. 5. Drop a mine on the bottom girder from the top girder. 6. Drop a mine on the rubber while standing on the top girder. Use the same bouncyness as for the first mine. What is the expected output? What do you see instead? I expect that the mine dropped on the rubber bounces off stronger than the mine dropped on the ordinary girder. Instead both mines bounce off with the same power. What version of the product are you using? On what operating system? Hedgewars 0.9-20-r9780 on GNU/Linux. Please provide any additional information below. Unless for issue 735, here I am sure that it happens, since it can be clearly observed. I also tested it in a similar way with barrels. Barrels seem to not bounce off as well and clearly take fall damage. ``` Original issue reported on code.google.com by `almikes@aol.com` on 23 Dec 2013 at 3:02
1.0
Barrels while still falling vertically rather than freely do not bounce off rubber - ``` What steps will reproduce the problem? 1. Place a rubber horizontally. 2. Place a girder horizonally, close to the left of the rubber. It should be in the same height. 3. Place a girder above the middle of both. 4. Place a hedgehog on the top girder. 5. Drop a mine on the bottom girder from the top girder. 6. Drop a mine on the rubber while standing on the top girder. Use the same bouncyness as for the first mine. What is the expected output? What do you see instead? I expect that the mine dropped on the rubber bounces off stronger than the mine dropped on the ordinary girder. Instead both mines bounce off with the same power. What version of the product are you using? On what operating system? Hedgewars 0.9-20-r9780 on GNU/Linux. Please provide any additional information below. Unless for issue 735, here I am sure that it happens, since it can be clearly observed. I also tested it in a similar way with barrels. Barrels seem to not bounce off as well and clearly take fall damage. ``` Original issue reported on code.google.com by `almikes@aol.com` on 23 Dec 2013 at 3:02
defect
barrels while still falling vertically rather than freely do not bounce off rubber what steps will reproduce the problem place a rubber horizontally place a girder horizonally close to the left of the rubber it should be in the same height place a girder above the middle of both place a hedgehog on the top girder drop a mine on the bottom girder from the top girder drop a mine on the rubber while standing on the top girder use the same bouncyness as for the first mine what is the expected output what do you see instead i expect that the mine dropped on the rubber bounces off stronger than the mine dropped on the ordinary girder instead both mines bounce off with the same power what version of the product are you using on what operating system hedgewars on gnu linux please provide any additional information below unless for issue here i am sure that it happens since it can be clearly observed i also tested it in a similar way with barrels barrels seem to not bounce off as well and clearly take fall damage original issue reported on code google com by almikes aol com on dec at
1
51,240
13,207,400,498
IssuesEvent
2020-08-14 22:57:44
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
I3DOMLaunch serialzation error (Trac #85)
Incomplete Migration Migrated from Trac defect offline-software
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/85">https://code.icecube.wisc.edu/projects/icecube/ticket/85</a>, reported by blaufussand owned by blaufuss</em></summary> <p> ```json { "status": "closed", "changetime": "2007-11-11T03:51:18", "_ts": "1194753078000000", "description": "from Kevin;\nThere is a Bug in I3DOMLaunch serialization that sometimes causes the last bin of an ATWD readout to be zero. (I haven't checked if it happens with the fADC.)\n\nHe has added tests to dataclasses/trunk that highlight the problem. (kj++)", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "time": "2007-08-08T18:53:06", "component": "offline-software", "summary": "I3DOMLaunch serialzation error", "priority": "major", "keywords": "", "milestone": "", "owner": "blaufuss", "type": "defect" } ``` </p> </details>
1.0
I3DOMLaunch serialzation error (Trac #85) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/85">https://code.icecube.wisc.edu/projects/icecube/ticket/85</a>, reported by blaufussand owned by blaufuss</em></summary> <p> ```json { "status": "closed", "changetime": "2007-11-11T03:51:18", "_ts": "1194753078000000", "description": "from Kevin;\nThere is a Bug in I3DOMLaunch serialization that sometimes causes the last bin of an ATWD readout to be zero. (I haven't checked if it happens with the fADC.)\n\nHe has added tests to dataclasses/trunk that highlight the problem. (kj++)", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "time": "2007-08-08T18:53:06", "component": "offline-software", "summary": "I3DOMLaunch serialzation error", "priority": "major", "keywords": "", "milestone": "", "owner": "blaufuss", "type": "defect" } ``` </p> </details>
defect
serialzation error trac migrated from json status closed changetime ts description from kevin nthere is a bug in serialization that sometimes causes the last bin of an atwd readout to be zero i haven t checked if it happens with the fadc n nhe has added tests to dataclasses trunk that highlight the problem kj reporter blaufuss cc resolution fixed time component offline software summary serialzation error priority major keywords milestone owner blaufuss type defect
1
449,394
31,841,788,893
IssuesEvent
2023-09-14 16:51:05
department-of-veterans-affairs/va-mobile-app
https://api.github.com/repos/department-of-veterans-affairs/va-mobile-app
closed
Publish documentation for Calendar component
ux component-documentation mobile-platform
### Description Based on the approved component documentation template (see #6301), we need to update the documentation for the Calendar component. A draft has been started (see #3590). ### Steps - [ ] Update documentation in #3590 to match approved template - [ ] Update Figma design library - [ ] Implement [plan](https://app.zenhub.com/workspaces/va-mobile-product-view-610035bc5395bb000e62e529/issues/gh/department-of-veterans-affairs/va-mobile-app/5457) to simplify the Assets panel by removing frames/groups and nesting/hiding lower level components - [ ] Implement [component template](https://app.zenhub.com/workspaces/va-mobile-product-view-610035bc5395bb000e62e529/issues/gh/department-of-veterans-affairs/va-mobile-app/5457) - [ ] Update metadata with description, link and alternative names - [ ] Review with UX team - [ ] Review accessibility with Brea - [ ] Review content with Misty - [ ] Publish documentation to doc site - [ ] Add link to component in Figma design library - [ ] Close #3590 and update [component documentation spreadsheet](https://docs.google.com/spreadsheets/d/1_EAH2LWSzwF8Om7o4LAYJf6gT9UWENANYpF7SZy3j8w/edit#gid=0)
1.0
Publish documentation for Calendar component - ### Description Based on the approved component documentation template (see #6301), we need to update the documentation for the Calendar component. A draft has been started (see #3590). ### Steps - [ ] Update documentation in #3590 to match approved template - [ ] Update Figma design library - [ ] Implement [plan](https://app.zenhub.com/workspaces/va-mobile-product-view-610035bc5395bb000e62e529/issues/gh/department-of-veterans-affairs/va-mobile-app/5457) to simplify the Assets panel by removing frames/groups and nesting/hiding lower level components - [ ] Implement [component template](https://app.zenhub.com/workspaces/va-mobile-product-view-610035bc5395bb000e62e529/issues/gh/department-of-veterans-affairs/va-mobile-app/5457) - [ ] Update metadata with description, link and alternative names - [ ] Review with UX team - [ ] Review accessibility with Brea - [ ] Review content with Misty - [ ] Publish documentation to doc site - [ ] Add link to component in Figma design library - [ ] Close #3590 and update [component documentation spreadsheet](https://docs.google.com/spreadsheets/d/1_EAH2LWSzwF8Om7o4LAYJf6gT9UWENANYpF7SZy3j8w/edit#gid=0)
non_defect
publish documentation for calendar component description based on the approved component documentation template see we need to update the documentation for the calendar component a draft has been started see steps update documentation in to match approved template update figma design library implement to simplify the assets panel by removing frames groups and nesting hiding lower level components implement update metadata with description link and alternative names review with ux team review accessibility with brea review content with misty publish documentation to doc site add link to component in figma design library close and update
0
157,875
6,017,520,256
IssuesEvent
2017-06-07 09:51:32
appscode/voyager
https://api.github.com/repos/appscode/voyager
closed
Deleting LB deployment does not get recreated
kind/bug priority/P0
- Watch Service, Deployment, DaemonSet deletion - Undo delete, if the source ingress exists.
1.0
Deleting LB deployment does not get recreated - - Watch Service, Deployment, DaemonSet deletion - Undo delete, if the source ingress exists.
non_defect
deleting lb deployment does not get recreated watch service deployment daemonset deletion undo delete if the source ingress exists
0
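The voyager record's two-line spec (watch Service/Deployment/DaemonSet deletion; undo the delete if the source Ingress exists) can be sketched as a pure decision function. The function name, kind list, and return values here are illustrative assumptions, not Voyager's actual controller API.

```javascript
// Hypothetical sketch of the recreate-on-delete rule described in the issue:
// when a watched child resource owned by an Ingress is deleted, the operator
// should recreate it as long as the source Ingress still exists; otherwise
// the deletion is final and is left alone.
const WATCHED_KINDS = new Set(['Service', 'Deployment', 'DaemonSet']);

function reconcileOnDelete(deletedKind, sourceIngressExists) {
  if (!WATCHED_KINDS.has(deletedKind)) {
    return 'ignore'; // not a resource kind the operator manages
  }
  return sourceIngressExists ? 'recreate' : 'ignore';
}

console.log(reconcileOnDelete('Deployment', true));  // recreate
console.log(reconcileOnDelete('Deployment', false)); // ignore
console.log(reconcileOnDelete('ConfigMap', true));   // ignore
```

In a real operator this decision would be driven by watch events from the API server, with the Ingress looked up before recreating.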