Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
19,629 | 3,228,466,768 | IssuesEvent | 2015-10-12 02:34:09 | vickyg3/social-photos | https://api.github.com/repos/vickyg3/social-photos | closed | G+ -> FB: Worked fine, until I transferred 3 albums. Now G+ list is empty. | Priority-Medium Status-New Type-Defect | Originally reported on Google Code with ID 22
```
What steps will reproduce the problem?
1. Try transferring at least 3 albums in chronological order from Google Plus to Facebook
shortly after granting the app access to G+.
What is the expected output? What do you see instead?
Expected a list of ~8 G+ albums. Saw an empty list.
What operating system and browser are you using? On what operating system?
Firefox 32, Ubuntu 14.04
Please provide any additional information below.
Tried selecting "google plus" again. Tried signing out and back in to G+, refreshed
the app page. Tried using a proxy (I'm travelling overseas right now).
G+ is working fine for me right now, but it only worked through this app three times.
```
Reported by `ross9885` on 2014-09-23 20:22:48
| 1.0 | G+ -> FB: Worked fine, until I transferred 3 albums. Now G+ list is empty. - Originally reported on Google Code with ID 22
```
What steps will reproduce the problem?
1. Try transferring at least 3 albums in chronological order from Google Plus to Facebook
shortly after granting the app access to G+.
What is the expected output? What do you see instead?
Expected a list of ~8 G+ albums. Saw an empty list.
What operating system and browser are you using? On what operating system?
Firefox 32, Ubuntu 14.04
Please provide any additional information below.
Tried selecting "google plus" again. Tried signing out and back in to G+, refreshed
the app page. Tried using a proxy (I'm travelling overseas right now).
G+ is working fine for me right now, but it only worked through this app three times.
```
Reported by `ross9885` on 2014-09-23 20:22:48
| defect | g fb worked fine until i transferred albums now g list is empty originally reported on google code with id what steps will reproduce the problem try transferring at least albums in chronological order from google plus to facebook shortly after granting the app access to g what is the expected output what do you see instead expected a list of g albums saw an empty list what operating system and browser are you using on what operating system firefox ubuntu please provide any additional information below tried selecting google plus again tried signing out and back in to g refreshed the app page tried using a proxy i m travelling overseas right now g is working fine for me right now but it only worked through this app three times reported by on | 1 |
65,637 | 19,608,845,137 | IssuesEvent | 2022-01-06 13:03:42 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | selectOneMenu: Component is skipped by screenreader in browse mode | defect | **Describe the defect**
Screenreaders typically have a browse mode in which you can read the content of a page and a focus mode in which you can also interact with the page (e.g. clicking a button). In both modes each element should be readable by the screenreader. The defect here is that, when in browse mode, selectOneMenu components are skipped by the screenreader so a blind user would not know that they exist (the same problem may exist with other components such as selectCheckboxMenu but I did not try that).
**Environment:**
- PF Version: _10.0_
- tested with the screenreader NVDA
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://www.primefaces.org/diamond/input.xhtml
2. Scroll down to the "Listbox" panel
3. Use the browse mode of a screenreader (e.g. NVDA) to test if the Dropdown and CheckboxMenu are read by the screenreader.
4. See error: The headlines "Dropdown" and "CheckboxMenu" are read aloud but the select components are skipped
**Expected behavior**
For accessibility, select components like selectOneMenu or selectCheckboxMenu as well as their given options should be read by the screenreader, not skipped. | 1.0 | selectOneMenu: Component is skipped by screenreader in browse mode - **Describe the defect**
Screenreaders typically have a browse mode in which you can read the content of a page and a focus mode in which you can also interact with the page (e.g. clicking a button). In both modes each element should be readable by the screenreader. The defect here is that, when in browse mode, selectOneMenu components are skipped by the screenreader so a blind user would not know that they exist (the same problem may exist with other components such as selectCheckboxMenu but I did not try that).
**Environment:**
- PF Version: _10.0_
- tested with the screenreader NVDA
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://www.primefaces.org/diamond/input.xhtml
2. Scroll down to the "Listbox" panel
3. Use the browse mode of a screenreader (e.g. NVDA) to test if the Dropdown and CheckboxMenu are read by the screenreader.
4. See error: The headlines "Dropdown" and "CheckboxMenu" are read aloud but the select components are skipped
**Expected behavior**
For accessibility, select components like selectOneMenu or selectCheckboxMenu as well as their given options should be read by the screenreader, not skipped. | defect | selectonemenu component is skipped by screenreader in browse mode describe the defect screenreaders typically have a browse mode in which you can read the content of a page and a focus mode in which you can also interact with the page e g clicking a button in both modes each element should be readable by the screenreader the defect here is that when in browse mode selectonemenu components are skipped by the screenreader so a blind user would not know that they exist the same problem may exist with other components such as selectcheckboxmenu but i did not try that environment pf version tested with the screenreader nvda to reproduce steps to reproduce the behavior go to scroll down to the listbox panel use the browse mode of a screenreader e g nvda to test if the dropdown and checkboxmenu are read by the screenreader see error the headlines dropdown and checkboxmenu are read aloud but the select components are skipped expected behavior for accessibility select components like selectonemenu or selectcheckboxmenu as well as their given options should be read by the screenreader not skipped | 1 |
234,447 | 7,721,085,978 | IssuesEvent | 2018-05-24 03:03:42 | SANDRAProject/api | https://api.github.com/repos/SANDRAProject/api | opened | plan for v0.4 | Change: Minor Component: Project Component: REST Priority: Medium Status: 0-Discussion Type: Feature | ## Backend
- [ ] Config Validation
- [ ] Search for Messages, Subscriptions and Services
- [ ] Refactor to *Channels* to Improve Performance
- [ ] Billing System
## Frontend
- [ ] Refactor
## Feed
- [ ] YouTube
- [ ] Facebook
- [ ] Jike
- [ ] JSON Feed
- [ ] Atom Feed | 1.0 | plan for v0.4 - ## Backend
- [ ] Config Validation
- [ ] Search for Messages, Subscriptions and Services
- [ ] Refactor to *Channels* to Improve Performance
- [ ] Billing System
## Frontend
- [ ] Refactor
## Feed
- [ ] YouTube
- [ ] Facebook
- [ ] Jike
- [ ] JSON Feed
- [ ] Atom Feed | non_defect | plan for backend config validation search for messages subscriptions and services refactor to channels to improve performance billing system frontend refactor feed youtube facebook jike json feed atom feed | 0 |
10,214 | 2,618,942,221 | IssuesEvent | 2015-03-03 00:04:32 | chrsmith/open-ig | https://api.github.com/repos/chrsmith/open-ig | closed | Filling tanks up to the max | auto-migrated Component-UI Priority-Medium Type-Defect | ```
Hello,
I'm not really writing about a bug, this is more of a suggestion. In the
fleet's automatic refill ("+++"), which lets me auto-load the lasers,
cannons, rockets etc., please make it always fully load the tanks too,
with the strongest ones, i.e. always the maximum. E.g. I could fit 24 of
them, but the auto-fill only puts in 16 or 14. When I attack repeatedly,
I always forget that it doesn't fill up completely, and I only notice it
when only 14 of them get deployed on the planet. Could you add a setting
or something so that, if nothing else, one could configure the maximum
number of tanks the auto-fill should load? But it would also be fine if
it simply filled to max right away.
Everything is the latest version, see the screenshot, and Hungarian Win7.
Thanks in advance!
Regards
Bence
```
Original issue reported on code.google.com by `Harvesta...@gmail.com` on 23 Nov 2014 at 9:33 | 1.0 | Filling tanks up to the max - ```
Hello,
I'm not really writing about a bug, this is more of a suggestion. In the
fleet's automatic refill ("+++"), which lets me auto-load the lasers,
cannons, rockets etc., please make it always fully load the tanks too,
with the strongest ones, i.e. always the maximum. E.g. I could fit 24 of
them, but the auto-fill only puts in 16 or 14. When I attack repeatedly,
I always forget that it doesn't fill up completely, and I only notice it
when only 14 of them get deployed on the planet. Could you add a setting
or something so that, if nothing else, one could configure the maximum
number of tanks the auto-fill should load? But it would also be fine if
it simply filled to max right away.
Everything is the latest version, see the screenshot, and Hungarian Win7.
Thanks in advance!
Regards
Bence
```
Original issue reported on code.google.com by `Harvesta...@gmail.com` on 23 Nov 2014 at 9:33 | defect | filling tanks up to the max hello i m not really writing about a bug this is more of a suggestion in the fleet s automatic refill which lets me auto load the lasers cannons rockets etc please make it always fully load the tanks too with the strongest ones i e always the maximum e g i could fit of them but the auto fill only puts in or when i attack repeatedly i always forget that it doesn t fill up completely and i only notice it when only of them get deployed on the planet could you add a setting or something so that if nothing else one could configure the maximum number of tanks the auto fill should load but it would also be fine if it simply filled to max right away everything is the latest version see the screenshot and hungarian thanks in advance regards bence original issue reported on code google com by harvesta gmail com on nov at | 1 |
12,054 | 2,678,942,391 | IssuesEvent | 2015-03-26 14:17:02 | Hippocampome-Org/php | https://api.github.com/repos/Hippocampome-Org/php | closed | DWW: Properly display marker evidence where Fragment table contains multiple rows of same original_id with different linking info | defect Development Local Production Review | When new linking info was added to the URD_marker.xlsx, to provide multiple linking per ReferenceID (original_id), two or more rows were added containing the same ReferenceID. These multiple rows are propagated into the database table called Fragment. However, when property_page_markers.php sees this, something breaks and no evidence is shown. What should happen is one instance of that evidence should be displayed containing multiple sets of linking info. | 1.0 | DWW: Properly display marker evidence where Fragment table contains multiple rows of same original_id with different linking info - When new linking info was added to the URD_marker.xlsx, to provide multiple linking per ReferenceID (original_id), two or more rows were added containing the same ReferenceID. These multiple rows are propagated into the database table called Fragment. However, when property_page_markers.php sees this, something breaks and no evidence is shown. What should happen is one instance of that evidence should be displayed containing multiple sets of linking info. | defect | dww properly display marker evidence where fragment table contains multiple rows of same original id with different linking info when new linking info was added to the urd marker xlsx to provide multiple linking per referenceid original id two or more rows were added containing the same referenceid these multiple rows are propagated into the database table called fragment however when property page markers php sees this something breaks and no evidence is shown what should happen is one instance of that evidence should be displayed containing multiple sets of linking info | 1 |
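The intended behavior described in that report — one evidence entry per `original_id`, carrying every set of linking info — can be sketched in Python. The field names below (`original_id`, `linking_info`) are assumptions for illustration, not the actual Fragment table schema or the `property_page_markers.php` code:

```python
from collections import OrderedDict

def merge_fragment_rows(rows):
    """Collapse Fragment rows sharing an original_id into one entry.

    Each input row is a dict with (assumed) keys 'original_id' and
    'linking_info'; the output has one entry per original_id whose
    'linking_info' is the list of all linking-info values seen, in order.
    """
    merged = OrderedDict()  # preserves first-seen order of reference IDs
    for row in rows:
        ref = row["original_id"]
        if ref not in merged:
            merged[ref] = {"original_id": ref, "linking_info": []}
        merged[ref]["linking_info"].append(row["linking_info"])
    return list(merged.values())

# Hypothetical rows as they might come back from the Fragment table:
rows = [
    {"original_id": "Ref42", "linking_info": "link-A"},
    {"original_id": "Ref42", "linking_info": "link-B"},
    {"original_id": "Ref7", "linking_info": "link-C"},
]
merged = merge_fragment_rows(rows)
```

The display layer would then render one evidence block per merged entry, iterating over its `linking_info` list instead of breaking on the duplicate rows.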
1,221 | 2,601,760,491 | IssuesEvent | 2015-02-24 00:34:53 | chrsmith/bwapi | https://api.github.com/repos/chrsmith/bwapi | closed | Server /commands not working on battle.net? | auto-migrated Component-Logic Priority-High Type-Defect Usability | ```
The /commands that are not part of bwapi don't appear to work at all.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 12 Feb 2011 at 7:18 | 1.0 | Server /commands not working on battle.net? - ```
The /commands that are not part of bwapi don't appear to work at all.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 12 Feb 2011 at 7:18 | defect | server commands not working on battle net the commands that are not part of bwapi don t appear to work at all original issue reported on code google com by aheinerm on feb at | 1 |
53,563 | 6,733,718,768 | IssuesEvent | 2017-10-18 15:37:49 | blockstack/designs | https://api.github.com/repos/blockstack/designs | closed | Design legal disclaimer section on blockstack.com | design | Design legal disclaimer section on blockstack.com | 1.0 | Design legal disclaimer section on blockstack.com - Design legal disclaimer section on blockstack.com | non_defect | design legal disclaimer section on blockstack com design legal disclaimer section on blockstack com | 0 |
13,796 | 2,784,102,543 | IssuesEvent | 2015-05-07 07:14:32 | sylingd/phpsocks5 | https://api.github.com/repos/sylingd/phpsocks5 | closed | Suggestion regarding the MySQL database | auto-migrated Priority-Medium Type-Defect | ```
I have tried setting up MySQL on my own computer, and then successfully
deployed on PHP hosting that provides no database.
I suggest that this program bundle a portable build of MySQL and start it
automatically along with the program.
That way, any hosting that supports PHP could run this program.
I hope you will consider this suggestion.
```
Original issue reported on code.google.com by `sony.and...@gmail.com` on 6 Mar 2012 at 8:02 | 1.0 | Suggestion regarding the MySQL database - ```
I have tried setting up MySQL on my own computer, and then successfully
deployed on PHP hosting that provides no database.
I suggest that this program bundle a portable build of MySQL and start it
automatically along with the program.
That way, any hosting that supports PHP could run this program.
I hope you will consider this suggestion.
```
Original issue reported on code.google.com by `sony.and...@gmail.com` on 6 Mar 2012 at 8:02 | defect | suggestion regarding the mysql database i have tried setting up mysql on my own computer and then successfully deployed on php hosting that provides no database i suggest that this program bundle a portable build of mysql and start it automatically along with the program that way any hosting that supports php could run this program i hope you will consider this suggestion original issue reported on code google com by sony and gmail com on mar at | 1 |
186,568 | 14,398,993,325 | IssuesEvent | 2020-12-03 10:19:24 | ethereumjs/ethereumjs-vm | https://api.github.com/repos/ethereumjs/ethereumjs-vm | closed | Explore better ways to work with ethereumjs-testing | package: monorepo type: discussion / question type: tests | It is a somewhat heavy burden to work with the official tests files. As a consequence of its size, we've been treating it in a [very special way](https://github.com/ethereumjs/ethereumjs-vm/blob/master/packages/vm/package.json#L70). The package is kinda easily handled locally with local npm caches, but that's not the case with the CI.
With a total cache bundle surpassing 1.4GB, unpacking the cache can take even longer than installing dependencies directly. After some investigation, I found out that using `git submodule update` is generally [much faster (48s)](https://github.com/ethereumjs/ethereumjs-testing/runs/761421284?check_suite_focus=true#step:5:1) than [unpacking the overall bundle (3 to 7 minutes)](https://github.com/ethereumjs/ethereumjs-vm/runs/796991621?check_suite_focus=true#step:4:7).
This is _now_ a problem, because I wanted to have an unified cached `node_modules` to load in all tests. Some ideas I had:
**1) Link submodule from a deeper ethereum/tests directory**
❌ Can't be done, as we use all of the submodule's top-level directories.
**2) Let the VM repo manage the ethereum/tests submodule directly**
❌ That would be some kind of shortcut, but it defeats the purpose of ethereumjs-testing versioning and releases.
**3) Release ethereumjs-testing without the submodule**
- The overall cache size for this repo would be 83% smaller.
- Each consumer would lazily load the test files from github.
For that to work, we'd need to:
- [ ] setup script to run the `git submodule update` when needed
- [ ] teardown script to delete these files by the end of the tests, so they don't reach the job cache
**4) Load test files as a submodule on the vm**
- each package that uses it should have their own test loading logic
- git submodule checkout is way faster than cache load
- `node_modules` cache size will be alleviated
_2020-11-12 edit Added 4th item_ | 1.0 | Explore better ways to work with ethereumjs-testing - It is a somewhat heavy burden to work with the official tests files. As a consequence of its size, we've been treating it in a [very special way](https://github.com/ethereumjs/ethereumjs-vm/blob/master/packages/vm/package.json#L70). The package is kinda easily handled locally with local npm caches, but that's not the case with the CI.
With a total cache bundle surpassing 1.4GB, unpacking the cache can take even longer than installing dependencies directly. After some investigation, I found out that using `git submodule update` is generally [much faster (48s)](https://github.com/ethereumjs/ethereumjs-testing/runs/761421284?check_suite_focus=true#step:5:1) than [unpacking the overall bundle (3 to 7 minutes)](https://github.com/ethereumjs/ethereumjs-vm/runs/796991621?check_suite_focus=true#step:4:7).
This is _now_ a problem, because I wanted to have an unified cached `node_modules` to load in all tests. Some ideas I had:
**1) Link submodule from a deeper ethereum/tests directory**
❌ Can't be done, as we use all of the submodule's top-level directories.
**2) Let the VM repo manage the ethereum/tests submodule directly**
❌ That would be some kind of shortcut, but it defeats the purpose of ethereumjs-testing versioning and releases.
**3) Release ethereumjs-testing without the submodule**
- The overall cache size for this repo would be 83% smaller.
- Each consumer would lazily load the test files from github.
For that to work, we'd need to:
- [ ] setup script to run the `git submodule update` when needed
- [ ] teardown script to delete these files by the end of the tests, so they don't reach the job cache
**4) Load test files as a submodule on the vm**
- each package that uses it should have their own test loading logic
- git submodule checkout is way faster than cache load
- `node_modules` cache size will be alleviated
_2020-11-12 edit Added 4th item_ | non_defect | explore better ways to work with ethereumjs testing it is a somewhat heavy burden to work with the official tests files as a consequence of its size we ve been treating it in a the package is kinda easily handled locally with local npm caches but that s not the case with the ci with a total cache bundle surpassing unpacking the cache can take even longer than installing dependencies directly after some investigation i found out that using git submodule update is generally than this is now a problem because i wanted to have an unified cached node modules to load in all tests some ideas i had link submodule from a deeper ethereum tests directory ❌ can t be done as we use all of the submdule top level directories let the vm repo manage the ethereum tests submodule directly ❌ that would be some kind of shortcut but it defeats the purpose of ethereumjs testing versioning and releases release ethereumjs testing without the submodule the overall cache size for this repo would be smaller each consumer would lazily load the test files from github for that to work we d need to setup script to run the git submodule update when needed teardown script to delete these files by the end of the tests so they don t reach the job cache load test files as a submodule on the vm each package that uses it should have their own test loading logic git submodule checkout is way faster than cache load node modules cache size will be alleviated edit added item | 0 |
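Option 3's setup and teardown steps can be sketched as thin wrappers around two commands. The snippet below is a hypothetical Python illustration — the submodule path and flags are assumptions, not the actual ethereumjs-testing scripts — with command construction kept separate from execution so the logic can be exercised without a git checkout:

```python
import subprocess

def submodule_update_cmd(path="ethereum-tests", shallow=True):
    """Build the git command that lazily fetches the test files."""
    cmd = ["git", "submodule", "update", "--init"]
    if shallow:
        cmd += ["--depth", "1"]  # shallow checkout keeps the download small
    return cmd + [path]

def teardown_cmd(path="ethereum-tests"):
    """Build the command that deletes the fetched files by the end of the
    tests, so they never reach the CI job cache."""
    return ["rm", "-rf", path]

def run(cmd):
    # Execution is isolated here so the builders above stay pure and testable.
    subprocess.run(cmd, check=True)
```

In CI, the setup script would call `run(submodule_update_cmd())` before the suite and `run(teardown_cmd())` after it, keeping the cached `node_modules` free of the 1.4GB of fixtures.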
65,193 | 19,254,349,897 | IssuesEvent | 2021-12-09 09:41:34 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Add vertical spacing between buttons when they go over multiple lines | T-Defect S-Tolerable A-Room-Settings A-User-Settings O-Occasional good first issue | ### Steps to reproduce
1. Switch to a language with long text like French
2. Open the preferences dialog, go to "Security and Privacy"
3. Scroll down to the encryption section
### Outcome
#### What did you expect?

#### What happened instead?

Since buttons have long content, the line is broken, but the top and bottom margins are set on the parent element.
They should be set on each button instead.
This is true in this panel, but also within the dialog to verify your session with the security passphrase:

I suspect there are more.
### Operating system
Debian/unstable
### Browser information
Firefox 94.0
### URL for webapp
_No response_
### Application version
Element develop 9746517ef734eec94c31a07d73c7c1bc7d30e932
### Homeserver
matrix.org
### Will you send logs?
No | 1.0 | Add vertical spacing between buttons when they go over multiple lines - ### Steps to reproduce
1. Switch to a language with long text like French
2. Open the preferences dialog, go to "Security and Privacy"
3. Scroll down to the encryption section
### Outcome
#### What did you expect?

#### What happened instead?

Since buttons have long content, the line is broken, but the top and bottom margins are set on the parent element.
They should be set on each button instead.
This is true in this panel, but also within the dialog to verify your session with the security passphrase:

I suspect there are more.
### Operating system
Debian/unstable
### Browser information
Firefox 94.0
### URL for webapp
_No response_
### Application version
Element develop 9746517ef734eec94c31a07d73c7c1bc7d30e932
### Homeserver
matrix.org
### Will you send logs?
No | defect | add vertical spacing between buttons when they go over multiple lines steps to reproduce switch to a language with long text like french open the preferences dialog go to security and privacy scroll down to the encryption section outcome what did you expect what happened instead since buttons have long content the line is broken but the top and bottom margins are set on the parent element they should be set on each button instead this is true in this panel but also within the dialog to verify your session with the security passphrase i suspect there are more operating system debian unstable browser information firefox url for webapp no response application version element develop homeserver matrix org will you send logs no | 1 |
37,669 | 8,474,787,732 | IssuesEvent | 2018-10-24 17:05:45 | brainvisa/testbidon | https://api.github.com/repos/brainvisa/testbidon | closed | conflicting niftilib | Category: soma-io Component: Resolution Priority: Normal Status: Closed Tracker: Defect | ---
Author Name: **Riviere, Denis** (Riviere, Denis)
Original Redmine Issue: 11319, https://bioproj.extra.cea.fr/redmine/issues/11319
Original Date: 2014-10-22
---
somanifti contains its own builtin niftilib. But another niftilib may be loaded in memory. It happens in brainvisa on Ubuntu 14.04: the python nifti module gets loaded, which loads libniftiio2.
Then symbols are duplicated, and we are not calling the right version of the functions, and do not have the same state variables.
We cannot really rely on the system one because
* we have done some fixes in niftilib that we need to read some nifti files
* our nifti-2 implementation shares static variables / functions with nifti-1, so we need to compile both as one source (nifti2_io.c includes nifti1_io.c; the latter is not compiled separately).
So we need a means to separate symbols from our builtin niftilib (without changing sources, if possible)
| 1.0 | conflicting niftilib - ---
Author Name: **Riviere, Denis** (Riviere, Denis)
Original Redmine Issue: 11319, https://bioproj.extra.cea.fr/redmine/issues/11319
Original Date: 2014-10-22
---
somanifti contains its own builtin niftilib. But another niftilib may be loaded in memory. It happens in brainvisa on Ubuntu 14.04: the python nifti module gets loaded, which loads libniftiio2.
Then symbols are duplicated, and we are not calling the right version of the functions, and do not have the same state variables.
We cannot really rely on the system one because
* we have done some fixes in niftilib that we need to read some nifti files
* our nifti-2 implementation shares static variables / functions with nifti-1, so we need to compile both as one source (nifti2_io.c includes nifti1_io.c; the latter is not compiled separately).
So we need a means to separate symbols from our builtin niftilib (without changing sources, if possible)
| defect | conflicting niftilib author name riviere denis riviere denis original redmine issue original date somanifti contains its own builtin niftilib but another niftilib may be loaded in memory it happens in brainvisa on ubuntu the python nifti module gets loaded which loads then symbols are duplicated and we are not calling the right version of the functions and do not have the same state variables we cannot really rely on the system one because we have done some fixes in niftilib that we need to read some nifti files our nifti implementation shares static variables functions with nifti so need to compile both as one source io c includes io c the latter is not compiled separately so we need a means to separate symbols from our builtin niftilib without changing sources if possible | 1 |
11,538 | 2,657,942,886 | IssuesEvent | 2015-03-18 12:53:59 | Red5/red5-server | https://api.github.com/repos/Red5/red5-server | opened | Problem with ServerClientDetection, Error FMS to Red5 | auto-migrated Priority-Medium Type-Defect | _From @GoogleCodeExporter on March 15, 2015 17:1_
```
In org.red5.server.Client this:
ServerClientDetection detection = new ServerClientDetection();
detection.checkBandwidth(Red5.getConnectionLocal());
But it executes before the NetConnection.Connect.Success is received by FMS,
and the FMS closes the connection.
I think this call must be deferred and made after the
NetConnection.Connect.Success is sent.
What do you think?
```
Original issue reported on code.google.com by `sebastie...@gmail.com` on 13 Aug 2013 at 6:46
_Copied from original issue: mondain/red5#424_ | 1.0 | Problem with ServerClientDetection, Error FMS to Red5 - _From @GoogleCodeExporter on March 15, 2015 17:1_
```
In org.red5.server.Client this:
ServerClientDetection detection = new ServerClientDetection();
detection.checkBandwidth(Red5.getConnectionLocal());
But it executes before the NetConnection.Connect.Success is received by FMS,
and the FMS closes the connection.
I think this call must be deferred and made after the
NetConnection.Connect.Success is sent.
What do you think?
```
Original issue reported on code.google.com by `sebastie...@gmail.com` on 13 Aug 2013 at 6:46
_Copied from original issue: mondain/red5#424_ | defect | problem with serverclientdetection error fms to from googlecodeexporter on march in org server client this serverclientdetection detection new serverclientdetection detection checkbandwidth getconnectionlocal but it s execute before the netconnection connect success receive by fms and the fms close the connection i think this call must be pending and make after de netconnection connect success is sent what do you think original issue reported on code google com by sebastie gmail com on aug at copied from original issue mondain | 1 |
372,206 | 25,987,040,246 | IssuesEvent | 2022-12-20 01:50:39 | gradient-ai/Graphcore-Pytorch | https://api.github.com/repos/gradient-ai/Graphcore-Pytorch | closed | README found in getting-started needs updating | documentation | Readme is from the original BERT-L finetuning README, this needs to be updated to talk about the 2 notebooks in get started:
- BERT-L
- Schnet | 1.0 | README found in getting-started needs updating - Readme is from the original BERT-L finetuning README, this needs to be updated to talk about the 2 notebooks in get started:
- BERT-L
- Schnet | non_defect | readme found in getting started needs updating readme is from the original bert l finetuning readme this needs to be updated to talk about the notebooks in get started bert l schnet | 0 |
46,525 | 13,055,927,065 | IssuesEvent | 2020-07-30 03:08:37 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | multiple projects -- trigger BoostBug:4842 -- "pure virtual method called" (Trac #1366) | Incomplete Migration Migrated from Trac defect tools/ports | Migrated from https://code.icecube.wisc.edu/ticket/1366
```json
{
"status": "closed",
"changetime": "2015-09-23T01:48:12",
"description": "On Ubuntu 12.04 with SYSTEM_PACKAGES and boost 1.46.1, multiple projects trigger Boost:ticket:4842. This causes your executable exit with the error:\n\n{{{\npure virtual method called\nterminate called without an active exception\n}}}\nNote that this happens at program exit.\n\nSee [http://builds.icecube.wisc.edu/builders/Ubuntu%2012.04/builds/142/steps/test/logs/stdio this failed build] for more examples.\n\nThe easy answer is to upgrade your boost. On Ubuntu 12.04 this can be done by swapping the 1.46.1 packages for the 1.48 packages.\n\nRecompiling and relinking **may** stop this bug from being triggered. Just relinking your executable with `--as-needed`, **may** also prevent this bug from being triggered.\n",
"reporter": "nega",
"cc": "",
"resolution": "wontfix",
"_ts": "1442972892751935",
"component": "tools/ports",
"summary": "multiple projects -- trigger BoostBug:4842 -- \"pure virtual method called\"",
"priority": "normal",
"keywords": "boost bug",
"time": "2015-09-23T01:47:57",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
| 1.0 | multiple projects -- trigger BoostBug:4842 -- "pure virtual method called" (Trac #1366) - Migrated from https://code.icecube.wisc.edu/ticket/1366
```json
{
"status": "closed",
"changetime": "2015-09-23T01:48:12",
"description": "On Ubuntu 12.04 with SYSTEM_PACKAGES and boost 1.46.1, multiple projects trigger Boost:ticket:4842. This causes your executable exit with the error:\n\n{{{\npure virtual method called\nterminate called without an active exception\n}}}\nNote that this happens at program exit.\n\nSee [http://builds.icecube.wisc.edu/builders/Ubuntu%2012.04/builds/142/steps/test/logs/stdio this failed build] for more examples.\n\nThe easy answer is to upgrade your boost. On Ubuntu 12.04 this can be done by swapping the 1.46.1 packages for the 1.48 packages.\n\nRecompiling and relinking **may** stop this bug from being triggered. Just relinking your executable with `--as-needed`, **may** also prevent this bug from being triggered.\n",
"reporter": "nega",
"cc": "",
"resolution": "wontfix",
"_ts": "1442972892751935",
"component": "tools/ports",
"summary": "multiple projects -- trigger BoostBug:4842 -- \"pure virtual method called\"",
"priority": "normal",
"keywords": "boost bug",
"time": "2015-09-23T01:47:57",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
| defect | multiple projects trigger boostbug pure virtual method called trac migrated from json status closed changetime description on ubuntu with system packages and boost multiple projects trigger boost ticket this causes your executable exit with the error n n npure virtual method called nterminate called without an active exception n nnote that this happens at program exit n nsee for more examples n nthe easy answer is to upgrade your boost on ubuntu this can be done by swapping the packages for the packages n nrecompiling and relinking may stop this bug from being triggered just relinking your executable with as needed may also prevent this bug from being triggered n reporter nega cc resolution wontfix ts component tools ports summary multiple projects trigger boostbug pure virtual method called priority normal keywords boost bug time milestone owner nega type defect | 1 |
57,082 | 6,537,130,986 | IssuesEvent | 2017-08-31 21:00:41 | zmap/zlint | https://api.github.com/repos/zmap/zlint | closed | meta_lint.py should enforce naming conventions | enhancement test | Right now, `lints/meta_lint.py` enforces that the Lint Name matches the report field being updated if you `.replace("_", "").lower()`. It should further match that the name of the report field matches the field that would be generated by the protobuf compiler. | 1.0 | meta_lint.py should enforce naming conventions - Right now, `lints/meta_lint.py` enforces that the Lint Name matches the report field being updated if you `.replace("_", "").lower()`. It should further match that the name of the report field matches the field that would be generated by the protobuf compiler. | non_defect | meta lint py should enforce naming conventions right now lints meta lint py enforces that the lint name matches the report field being updated if you replace lower it should further match that the name of the report field matches the field that would be generated by the protobuf compiler | 0 |
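The matching rule described in the zlint record above — compare identifiers after `.replace("_", "").lower()` — can be sketched as a small standalone check. This is an illustrative sketch only; the function names are hypothetical and not zlint's actual API:

```python
def normalize(name):
    # The rule from the record: drop underscores, lowercase everything.
    return name.replace("_", "").lower()

def names_match(lint_name, report_field):
    # True when the two identifiers agree under the normalization rule.
    return normalize(lint_name) == normalize(report_field)
```

The stricter convention the record asks for would add a second comparison against the field name the protobuf compiler would generate, using the same normalization on both sides.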
19,745 | 2,622,165,444 | IssuesEvent | 2015-03-04 00:12:05 | byzhang/graphchi | https://api.github.com/repos/byzhang/graphchi | opened | pagerank.cpp: would be nice to set the personalization vector | auto-migrated Priority-Medium Type-Enhancement | ```
Looking at this code snippet from example_apps/pagerank.cpp
-- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8
#define RANDOMRESETPROB 0.15
// ...
struct PagerankProgram :
public GraphChiProgram<VertexDataType,
EdgeDataType> {
// ...
void update(graphchi_vertex<VertexDataType, EdgeDataType> &v,
graphchi_context &ginfo) {
float sum=0;
// ...
for(int i=0; i < v.num_inedges(); i++) {
float val = v.inedge(i)->get_data();
sum += val;
}
/* Compute my pagerank */
float pagerank = RANDOMRESETPROB + (1 - RANDOMRESETPROB) * sum;
// ...
}
}
-- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8
it looks like the current pagerank implementation doesn't
allow you to set the "personalisation vector" to anything different
than a uniform probability vector. I mean, if the pagerank equation
in matrix form is
p = (1-c) * A * p + c * V
where:
p is the pagerank vector, N components (N is the size of the web)
c is the probability of jumping to a random page no matter the outlinks
from current location
A is the transition matrix, N-by-N, if you see a random walk on the web
as a Markov chain
V is a N-vector, where V_i is the probability of random-jumping to page i
(side note: I am not normalizing by N, i.e. all probabilities sum up to N
and not to 1)
well, given all of this, in the current implementation of GraphChi pagerank
V is a uniform probability vector = [1, 1, 1, ..., 1].
A jump to every page is equally likely to happen, no matter the page.
After this wall of text I come to my point:
could a non trivial "personalisation vector" be implemented?
I'd like to be able to set V myself.
Is this in the priorities of the GraphChi team?
Cheers,
```
Original issue reported on code.google.com by `g.gherdo...@gmail.com` on 12 Feb 2013 at 10:20 | 1.0 | pagerank.cpp: would be nice to set the personalization vector - ```
Looking at this code snippet from example_apps/pagerank.cpp
-- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8
#define RANDOMRESETPROB 0.15
// ...
struct PagerankProgram :
public GraphChiProgram<VertexDataType,
EdgeDataType> {
// ...
void update(graphchi_vertex<VertexDataType, EdgeDataType> &v,
graphchi_context &ginfo) {
float sum=0;
// ...
for(int i=0; i < v.num_inedges(); i++) {
float val = v.inedge(i)->get_data();
sum += val;
}
/* Compute my pagerank */
float pagerank = RANDOMRESETPROB + (1 - RANDOMRESETPROB) * sum;
// ...
}
}
-- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8 -- -- >8
it looks like the current pagerank implementation doesn't
allow you to set the "personalisation vector" to anything different
than a uniform probability vector. I mean, if the pagerank equation
in matrix form is
p = (1-c) * A * p + c * V
where:
p is the pagerank vector, N components (N is the size of the web)
c is the probability of jumping to a random page no matter the outlinks
from current location
A is the transition matrix, N-by-N, if you see a random walk on the web
as a Markov chain
V is a N-vector, where V_i is the probability of random-jumping to page i
(side note: I am not normalizing by N, i.e. all probabilities sum up to N
and not to 1)
well, given all of this, in the current implementation of GraphChi pagerank
V is a uniform probability vector = [1, 1, 1, ..., 1].
A jump to every page is equally likely to happen, no matter the page.
After this wall of text I come to my point:
could a non trivial "personalisation vector" be implemented?
I'd like to be able to set V myself.
Is this in the priorities of the GraphChi team?
Cheers,
```
Original issue reported on code.google.com by `g.gherdo...@gmail.com` on 12 Feb 2013 at 10:20 | non_defect | pagerank cpp would be nice to set the personalization vector looking at this code snippet from example apps pagerank cpp define randomresetprob struct pagerankprogram public graphchiprogram vertexdatatype edgedatatype void update graphchi vertex v graphchi context ginfo float sum for int i i v num inedges i float val v inedge i get data sum val compute my pagerank float pagerank randomresetprob randomresetprob sum looks like that the current pagerank implementation doesn t allow you to set the personalisation vector to anything different than a uniform probability vector i mean if the pagerank equation in matrix form is p c a p c v where p is the pagerank vector n compontents n is the size of the web c is the probability of jumping to a random page no matter the outlinks from current location a is the transition matrix n by n if you see a random walk on the web as a markov chain v is a n vector where v i is the probability of random jumping to page i side note i am not normalizing by n i e all probabilities sum up to n and not to well given all of this in the current implementation of graphchi pagerank v is a uniform probability vectory a jump to every page is equally likely to happen no matter the page after this wall of text i come to my point could a non trivial personalisation vector be implemented i d like to be able to set v myself is this in the priorities of the graphchi team cheers original issue reported on code google com by g gherdo gmail com on feb at | 0 |
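The equation in the record above, p = (1-c) * A * p + c * V, amounts to a power iteration with a user-supplied personalization vector V. The sketch below is illustrative only — a plain dense-matrix iteration in Python under the reporter's convention that probabilities sum to N, not GraphChi's API:

```python
def personalized_pagerank(A, V, c=0.15, iters=100):
    # A: column-stochastic transition matrix (A[i][j] = prob of j -> i).
    # V: personalization vector; V[i] is the random-jump weight of page i.
    # Iterates p = (1 - c) * A @ p + c * V for a fixed number of steps.
    n = len(V)
    p = [1.0] * n  # reporter's convention: entries sum to N, not 1
    for _ in range(iters):
        p = [(1 - c) * sum(A[i][j] * p[j] for j in range(n)) + c * V[i]
             for i in range(n)]
    return p
```

With V = [1, 1, ..., 1] this reduces to the uniform behaviour of the existing example; any other V biases the random jumps toward the chosen pages, which is exactly the feature the reporter is requesting.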
676,973 | 23,144,953,762 | IssuesEvent | 2022-07-28 23:01:25 | glizondo/Ewallet-SENG210 | https://api.github.com/repos/glizondo/Ewallet-SENG210 | opened | Feature5 - Have a hint for username information | enhancement High priority | User story: As a user, I want to be able to use the correct password and have it checked so it is right
Story points: 1 | 1.0 | Feature5 - Have a hint for username information - User story: As a user, I want to be able to use the correct password and have it checked so it is right
Story points: 1 | non_defect | have a hint for username information user story as a user i want to be able to use the correct password and have it checked so it is right story points | 0 |
44,455 | 12,168,736,733 | IssuesEvent | 2020-04-27 13:09:57 | zealdocs/zeal | https://api.github.com/repos/zealdocs/zeal | closed | javascript docset can't open | resolution/duplicate scope/ui/webview type/defect | Hi,
I downloaded and installed the javascript docset, but when I click any document, it says it can't show correctly.
Kind regards
Fred | 1.0 | javascript docset can't open - Hi,
I downloaded and installed the javascript docset, but when I click any document, it says it can't show correctly.
Kind regards
Fred | defect | javascript docset can t open hi i download and install the javascript docset but when i click any document it said can t show correclty kind regards fred | 1 |
62,608 | 17,090,082,316 | IssuesEvent | 2021-07-08 16:16:07 | Questie/Questie | https://api.github.com/repos/Questie/Questie | closed | "[QuestieQuest] [Tooltips] Error happened while creating objectives 10607 No error" | Type - Defect | ## Bug description
The message box is spammed at every loading; the problem came with TBC Classic.
The message itself is translated (as I am playing in French); it must be something like this in English:
`Questie: [ERROR] [QuestieQuest]: vXXXX - An error occurred during the creation of objectives for *Name of the quest* 10607 X No error`
## Screenshots

## Questie version
v6.3.12 (but it has occurred since TBC Classic)
| 1.0 | "[QuestieQuest] [Tooltips] Error happened while creating objectives 10607 No error" - ## Bug description
The message box is spammed at every loading; the problem came with TBC Classic.
The message itself is translated (as I am playing in French); it must be something like this in English:
`Questie: [ERROR] [QuestieQuest]: vXXXX - An error occurred during the creation of objectives for *Name of the quest* 10607 X No error`
## Screenshots

## Questie version
v6.3.12 (but it has occurred since TBC Classic)
| defect | error happened while creating objectives no error bug description having the message box being spam at every loading the problem came with tbc classic the message itself is translated as i am playing in french must be something like this in english questie vxxxx an error occured during the creation of objectives for name of the quest x no error screenshots questie version but occur since tbc classic | 1 |
60,900 | 17,023,553,468 | IssuesEvent | 2021-07-03 02:37:00 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Crash when modifying style | Component: merkaartor Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 1.26pm, Tuesday, 16th February 2010]**
It looks like when you modify a style and create an expression that has an error, it crashes the whole of Merkaartor (made me lose more than an hour of work :( )
I used the expression "not [name]" (I forgot the "is _NULL_") and just when you click out the textbox. Boum. :) | 1.0 | Crash when modifying style - **[Submitted to the original trac issue database at 1.26pm, Tuesday, 16th February 2010]**
It looks like when you modify a style and create an expression that has an error, it crashes the whole of Merkaartor (made me lose more than an hour of work :( )
I used the expression "not [name]" (I forgot the "is _NULL_") and just when you click out the textbox. Boum. :) | defect | crash when modifying style it looks like that when you modify a style and you create an expression that have an error it crashes the whole merkaartor made me loose more of an hour of work i used the expression not i forgot the is null and just when you click out the textbox boum | 1 |
75,864 | 9,334,764,745 | IssuesEvent | 2019-03-28 17:00:54 | GSA/pra.gov | https://api.github.com/repos/GSA/pra.gov | opened | Change record keeping requirement example | content design enhancement | Swap current example with one that's easily understood by someone with no knowledge of the PRA. | 1.0 | Change record keeping requirement example - Swap current example with one that's easily understood by someone with no knowledge of the PRA. | non_defect | change record keeping requirement example swap current example with one that s easily understood by someone with no knowledge of the pra | 0 |
711,402 | 24,462,275,097 | IssuesEvent | 2022-10-07 12:16:23 | eth-cscs/DLA-Future | https://api.github.com/repos/eth-cscs/DLA-Future | closed | Potential problems with std::vector wrapper for unsigned types | Priority:Low | Since we work with unsigned integral types and there is discussion #212 going on, there is a need for a `std::vector` that accepts unsigned types.
At the moment it is implemented as a wrapper that inherits from the class from the `std` library, but as described [here](https://quuxplusone.github.io/blog/2018/12/11/dont-inherit-from-std-types/) and [here](https://stackoverflow.com/questions/4353203/thou-shalt-not-inherit-from-stdvector), it may lead to problems (thanks @teonnik for pointing it out and linking the resources!)
Fixing this also requires keeping in mind that we may also have to deal with `HPX` (i.e. I have a use case of `std::vector<hpx::future>`), and our wrapper must work seamlessly with it. | 1.0 | Potential problems with std::vector wrapper for unsigned types - Since we work with unsigned integral types and there is discussion #212 going on, there is a need for a `std::vector` that accepts unsigned types.
At the moment it is implemented as a wrapper that inherits from the class from the `std` library, but as described [here](https://quuxplusone.github.io/blog/2018/12/11/dont-inherit-from-std-types/) and [here](https://stackoverflow.com/questions/4353203/thou-shalt-not-inherit-from-stdvector), it may lead to problems (thanks @teonnik for pointing it out and linking the resources!)
Fixing this also requires keeping in mind that we may also have to deal with `HPX` (i.e. I have a use case of `std::vector<hpx::future>`), and our wrapper must work seamlessly with it. | non_defect | potential problems with std vector wrapper for unsigned types since we work with unsigned integral types and there is discussion going on there is the need of a std vector that accepts unsigned types at the moment it is implemented as a wrapper that inherits from the class from the std library but as described and it may lead to problems thanks teonnik for pointing it out and linking the resources fixing this also requires to keep in mind that we may have to deal also with hpx i e i have a use case of std vector and our wrapper must work seamlessly with it | 0 |
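The composition alternative recommended by the articles linked in the record above — hold the container as a member and delegate explicitly, rather than inherit from it — can be illustrated with a short sketch. Shown in Python for brevity; the same shape applies to a C++ class holding a `std::vector` member, and the class name here is hypothetical:

```python
class UnsignedVector:
    # Wraps the underlying list by composition instead of inheritance,
    # so the added invariant (no negative values) cannot be bypassed
    # through a base-class reference.
    def __init__(self):
        self._data = []

    def push_back(self, value):
        if value < 0:
            raise ValueError("unsigned container rejects negative values")
        self._data.append(value)

    def __len__(self):
        return len(self._data)

    def __getitem__(self, index):
        return self._data[index]
```

Only the operations the wrapper chooses to forward are exposed, which is precisely why composition avoids the pitfalls of subclassing a container that was not designed as a base class.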
22,803 | 11,787,936,344 | IssuesEvent | 2020-03-17 14:48:29 | chef/automate | https://api.github.com/repos/chef/automate | opened | add search suggestions to profile search | content-delivery-service | ## Overview
- add search suggestions to profile search
- ability to search on any of the metadata pieces
| 1.0 | add search suggestions to profile search - ## Overview
- add search suggestions to profile search
- ability to search on any of the metadata pieces
| non_defect | add search suggestions to profile search overview add search suggestions to profile search ability to search on any of the metadata pieces | 0 |
16,994 | 2,964,901,710 | IssuesEvent | 2015-07-10 19:21:06 | smartystreets/goconvey | https://api.github.com/repos/smartystreets/goconvey | closed | "???" in web UI output upon certain errors/failures | 1-confirmed 1-defect 3-web-server 4-difficult internal-queued | Encountered this bug (which other users had reported to me) at this commit for our metrics project:
https://github.com/smartystreets/metrics/commit/8f0d4239834ee0acdf9a4c1a971201884682f6f5
The "???" is caused by the UI because it's received JSON that it doesn't know what to do with. So this is definitely a server side problem.
- [ ] Fix the bug
- [ ] Notify any affected users (search for '???' in gmail) | 1.0 | "???" in web UI output upon certain errors/failures - Encountered this bug (which other users had reported to me) at this commit for our metrics project:
https://github.com/smartystreets/metrics/commit/8f0d4239834ee0acdf9a4c1a971201884682f6f5
The "???" is caused by the UI because it's received JSON that it doesn't know what to do with. So this is definitely a server side problem.
- [ ] Fix the bug
- [ ] Notify any affected users (search for '???' in gmail) | defect | in web ui output upon certain errors failures encountered this bug which other users had reported to me at this commit for our metrics project the is caused by the ui because it s received json that it doesn t know what to do with so this is definitely a server side problem fix the bug notify any effected users search for in gmail | 1 |
7,573 | 2,610,405,971 | IssuesEvent | 2015-02-26 20:11:56 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | opened | Ithor Land Texture | auto-migrated Priority-Medium Type-Defect | ```
Ithor's land texture needs to be shown at an increased range from the camera as
the camera distance from the ground has increased.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 31 Jul 2011 at 6:37 | 1.0 | Ithor Land Texture - ```
Ithor's land texture needs to be shown at an increased range from the camera as
the camera distance from the ground has increased.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 31 Jul 2011 at 6:37 | defect | ithor land texture ithor s land texture needs to be shown at an increased range from the camera as the camera distance from the ground has increased original issue reported on code google com by killerhurdz netscape net on jul at | 1 |
11,885 | 2,668,361,164 | IssuesEvent | 2015-03-23 08:01:52 | contao/test | https://api.github.com/repos/contao/test | closed | 4.0.0-alpha2: Fatal error: Class 'Contao\BackendInstall' not found in /htdocs/contao4/web/contao/install.php on line 27 | defect | > <a href="https://github.com/BugBuster1701"><img src="https://avatars.githubusercontent.com/u/1218809?v=3" align="left" width="42" height="42" hspace="10"></img></a> Issue by [BugBuster1701](https://github.com/BugBuster1701) (imported from https://github.com/contao/contao/issues/10)
Saturday Jun 21, 2014 at 20:18 GMT
Calling install produces this message.
The alpha2 was installed using the release ZIP (green button).
The directory ```vendor/contao/module-core``` is **missing**.
```
Warning: include(/htdocs/contao4/vendor/contao/module-core/src/controllers/BackendInstall.php): failed to open stream: No such file or directory in vendor/composer/ClassLoader.php on line 377
#0 vendor/composer/ClassLoader.php(377): __error(2, 'include(/htdocs/contao4/...', '/htdocs/contao4/...', 377, Array)
#1 vendor/composer/ClassLoader.php(377): Composer\Autoload\includeFile()
#2 vendor/composer/ClassLoader.php(269): Composer\Autoload\includeFile('/htdocs/contao4/...')
#3 [internal function]: Composer\Autoload\ClassLoader->loadClass('Contao\\BackendI...')
#4 web/contao/install.php(27): spl_autoload_call('Contao\\BackendI...')
#5 {main}
Warning: include(): Failed opening '/htdocs/contao4/vendor/contao/module-core/src/controllers/BackendInstall.php' for inclusion (include_path='.:/usr/local/php5.4.10-cgi/lib/php') in vendor/composer/ClassLoader.php on line 377
#0 vendor/composer/ClassLoader.php(377): __error(2, 'include(): Fail...', '/htdocs/contao4/...', 377, Array)
#1 vendor/composer/ClassLoader.php(377): Composer\Autoload\includeFile()
#2 vendor/composer/ClassLoader.php(269): Composer\Autoload\includeFile('/htdocs/contao4/...')
#3 [internal function]: Composer\Autoload\ClassLoader->loadClass('Contao\\BackendI...')
#4 web/contao/install.php(27): spl_autoload_call('Contao\\BackendI...')
#5 {main}
Fatal error: Class 'Contao\BackendInstall' not found in /htdocs/contao4/web/contao/install.php on line 27
```
| 1.0 | 4.0.0-alpha2: Fatal error: Class 'Contao\BackendInstall' not found in /htdocs/contao4/web/contao/install.php on line 27 - > <a href="https://github.com/BugBuster1701"><img src="https://avatars.githubusercontent.com/u/1218809?v=3" align="left" width="42" height="42" hspace="10"></img></a> Issue by [BugBuster1701](https://github.com/BugBuster1701) (imported from https://github.com/contao/contao/issues/10)
Saturday Jun 21, 2014 at 20:18 GMT
Calling install produces this message.
The alpha2 was installed using the release ZIP (green button).
The directory ```vendor/contao/module-core``` is **missing**.
```
Warning: include(/htdocs/contao4/vendor/contao/module-core/src/controllers/BackendInstall.php): failed to open stream: No such file or directory in vendor/composer/ClassLoader.php on line 377
#0 vendor/composer/ClassLoader.php(377): __error(2, 'include(/htdocs/contao4/...', '/htdocs/contao4/...', 377, Array)
#1 vendor/composer/ClassLoader.php(377): Composer\Autoload\includeFile()
#2 vendor/composer/ClassLoader.php(269): Composer\Autoload\includeFile('/htdocs/contao4/...')
#3 [internal function]: Composer\Autoload\ClassLoader->loadClass('Contao\\BackendI...')
#4 web/contao/install.php(27): spl_autoload_call('Contao\\BackendI...')
#5 {main}
Warning: include(): Failed opening '/htdocs/contao4/vendor/contao/module-core/src/controllers/BackendInstall.php' for inclusion (include_path='.:/usr/local/php5.4.10-cgi/lib/php') in vendor/composer/ClassLoader.php on line 377
#0 vendor/composer/ClassLoader.php(377): __error(2, 'include(): Fail...', '/htdocs/contao4/...', 377, Array)
#1 vendor/composer/ClassLoader.php(377): Composer\Autoload\includeFile()
#2 vendor/composer/ClassLoader.php(269): Composer\Autoload\includeFile('/htdocs/contao4/...')
#3 [internal function]: Composer\Autoload\ClassLoader->loadClass('Contao\\BackendI...')
#4 web/contao/install.php(27): spl_autoload_call('Contao\\BackendI...')
#5 {main}
Fatal error: Class 'Contao\BackendInstall' not found in /htdocs/contao4/web/contao/install.php on line 27
```
| defect | fatal error class contao backendinstall not found in htdocs web contao install php on line issue by imported from saturday jun at gmt install aufruf bringt diese meldung installiert wurde die mittels der release zip grüner button das verzeichnis vendor contao module core fehlt warning include htdocs vendor contao module core src controllers backendinstall php failed to open stream no such file or directory in vendor composer classloader php on line vendor composer classloader php error include htdocs htdocs array vendor composer classloader php composer autoload includefile vendor composer classloader php composer autoload includefile htdocs composer autoload classloader loadclass contao backendi web contao install php spl autoload call contao backendi main warning include failed opening htdocs vendor contao module core src controllers backendinstall php for inclusion include path usr local cgi lib php in vendor composer classloader php on line vendor composer classloader php error include fail htdocs array vendor composer classloader php composer autoload includefile vendor composer classloader php composer autoload includefile htdocs composer autoload classloader loadclass contao backendi web contao install php spl autoload call contao backendi main fatal error class contao backendinstall not found in htdocs web contao install php on line | 1 |
201 | 2,519,740,078 | IssuesEvent | 2015-01-18 09:01:08 | mbunkus/mtx-trac-import-test | https://api.github.com/repos/mbunkus/mtx-trac-import-test | closed | SRT subs with more than 1 empty lines break | C: mkvmerge P: normal R: fixed T: defect | **Reported by moritz on 16 Sep 2003 16:02 UTC**
They make the parser abort. | 1.0 | SRT subs with more than 1 empty lines break - **Reported by moritz on 16 Sep 2003 16:02 UTC**
They make the parser abort. | defect | srt subs with more than empty lines break reported by moritz on sep utc they make the parser abort | 1 |
819,325 | 30,729,013,276 | IssuesEvent | 2023-07-27 22:37:44 | r4ss/r4ss | https://api.github.com/repos/r4ss/r4ss | closed | `SS_write()` failed to write when `dir` pointed to recursive directory | SS_read/SS_write quick fix low priority | https://github.com/r4ss/r4ss/blob/2d48e892a7c5a2a79c26a1fa4c3e74ff0e90cc0b/R/SS_write.R#L55
failed when the `dir` input pointed to a non-existent nested directory. Can I add `recursive = TRUE` to the above line? | 1.0 | `SS_write()` failed to write when `dir` pointed to recursive directory - https://github.com/r4ss/r4ss/blob/2d48e892a7c5a2a79c26a1fa4c3e74ff0e90cc0b/R/SS_write.R#L55
failed when the `dir` input pointed to a non-existent nested directory. Can I add `recursive = TRUE` to the above line? | non_defect | ss write failed to write when dir pointed to recursive directory failed when dir input was to a non existent nested directory can i add recursive true to the above line | 0 |
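The fix the record above asks about is recursive directory creation — in R, `dir.create(..., recursive = TRUE)` creates missing parent directories before the write. The equivalent behaviour, sketched illustratively in Python (the helper name is hypothetical), is:

```python
import os
import tempfile

def ensure_dir(path):
    # Create `path` and any missing parents; succeed silently if it
    # already exists — the behaviour `recursive = TRUE` adds in R.
    os.makedirs(path, exist_ok=True)
    return path
```

A writer that calls something like `ensure_dir` (or its R counterpart) before opening output files will no longer fail when handed a nested directory that does not exist yet, and the call is idempotent, so repeated writes to the same location are safe.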
51,525 | 13,207,508,428 | IssuesEvent | 2020-08-14 23:22:42 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | glshovel quits on Help-> About -> Close (Trac #576) | Incomplete Migration Migrated from Trac defect glshovel | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/576">https://code.icecube.wisc.edu/projects/icecube/ticket/576</a>, reported by blaufussand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-05-11T22:19:13",
"_ts": "1305152353000000",
"description": "When running the GLshovel, Help-> about brings up a nice \ndialog box about the program. But the \"Close\" button for this dialog\nwill quit the GLShovel rather than just close the dialog box.\n\nReported on EL5 at Pole, and confirmed as well on Ubu 8.10 at UMD.\n\nReported version number also seems quite random.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2009-11-18T19:05:13",
"component": "glshovel",
"summary": "glshovel quits on Help-> About -> Close",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
| 1.0 | glshovel quits on Help-> About -> Close (Trac #576) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/576">https://code.icecube.wisc.edu/projects/icecube/ticket/576</a>, reported by blaufussand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-05-11T22:19:13",
"_ts": "1305152353000000",
"description": "When running the GLshovel, Help-> about brings up a nice \ndialog box about the program. But the \"Close\" button for this dialog\nwill quit the GLShovel rather than just close the dialog box.\n\nReported on EL5 at Pole, and confirmed as well on Ubu 8.10 at UMD.\n\nReported version number also seems quite random.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2009-11-18T19:05:13",
"component": "glshovel",
"summary": "glshovel quits on Help-> About -> Close",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
| defect | glshovel quits on help about close trac migrated from json status closed changetime ts description when running the glshovel help about brings up a nice ndialog box about the program but the close button for this dialog nwill quit the glshovel rather than just close the dialog box n nreported on at pole and confirmed as well on ubu at umd n nreported version number also seems quite random reporter blaufuss cc resolution fixed time component glshovel summary glshovel quits on help about close priority normal keywords milestone owner troy type defect | 1 |
549,143 | 16,086,575,925 | IssuesEvent | 2021-04-26 12:01:56 | microsoft/fluentui | https://api.github.com/repos/microsoft/fluentui | closed | In <GroupedList/>, a group with no items can't be selected | Component: GroupedList Needs: Discussion Priority 3: Low Resolution: Soft Close Type: Bug :bug: | <!--
Thanks for contacting us! We're here to help.
Before you report an issue, check if it's been reported before:
* Search: https://github.com/OfficeDev/office-ui-fabric-react/search?type=Issues
* Search by area or component: https://github.com/OfficeDev/office-ui-fabric-react/issues/labels
Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report.
-->
### Environment Information
- **Package version(s)**: Latest
- **Browser and OS versions**: N/A
### Please provide a reproduction of the bug in a codepen:
https://codepen.io/mshindal/pen/NWKmpKv
<!--
Providing an isolated reproduction of the bug in a codepen makes it much easier for us to help you. Here are some ways to get started:
* Go to https://aka.ms/fabricpen for a starter codepen
* You can also use the "Export to Codepen" feature for the various components in our documentation site.
* See http://codepen.io/dzearing/pens/public/?grid_type=list for a variety of examples
Alternatively, you can also use https://aka.ms/fabricdemo to get permanent repro links if the repro occurs with an example.
(A permanent link is preferable to "use the website" as the website can change.)
-->
#### Actual behavior:
Group with no items is not selectable
#### Expected behavior:
Group with no items is selectable
### Priorities and help requested:
Are you willing to submit a PR to fix? Yes
Requested priority: Normal
Products/sites affected: N/A
| 1.0 | In <GroupedList/>, a group with no items can't be selected - <!--
Thanks for contacting us! We're here to help.
Before you report an issue, check if it's been reported before:
* Search: https://github.com/OfficeDev/office-ui-fabric-react/search?type=Issues
* Search by area or component: https://github.com/OfficeDev/office-ui-fabric-react/issues/labels
Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report.
-->
### Environment Information
- **Package version(s)**: Latest
- **Browser and OS versions**: N/A
### Please provide a reproduction of the bug in a codepen:
https://codepen.io/mshindal/pen/NWKmpKv
<!--
Providing an isolated reproduction of the bug in a codepen makes it much easier for us to help you. Here are some ways to get started:
* Go to https://aka.ms/fabricpen for a starter codepen
* You can also use the "Export to Codepen" feature for the various components in our documentation site.
* See http://codepen.io/dzearing/pens/public/?grid_type=list for a variety of examples
Alternatively, you can also use https://aka.ms/fabricdemo to get permanent repro links if the repro occurs with an example.
(A permanent link is preferable to "use the website" as the website can change.)
-->
#### Actual behavior:
Group with no items is not selectable
#### Expected behavior:
Group with no items is selectable
### Priorities and help requested:
Are you willing to submit a PR to fix? Yes
Requested priority: Normal
Products/sites affected: N/A
| non_defect | in a group with no items can t be selected thanks for contacting us we re here to help before you report an issue check if it s been reported before search search by area or component note that if you do not provide enough information to reproduce the issue we may not be able to take action on your report environment information package version s latest browser and os versions n a please provide a reproduction of the bug in a codepen providing an isolated reproduction of the bug in a codepen makes it much easier for us to help you here are some ways to get started go to for a starter codepen you can also use the export to codepen feature for the various components in our documentation site see for a variety of examples alternatively you can also use to get permanent repro links if the repro occurs with an example a permanent link is preferable to use the website as the website can change actual behavior group with no items is not selectable expected behavior group with no items is selectable priorities and help requested are you willing to submit a pr to fix yes requested priority normal products sites affected n a | 0 |
67,390 | 20,961,608,841 | IssuesEvent | 2022-03-27 21:48:46 | abedmaatalla/sipdroid | https://api.github.com/repos/abedmaatalla/sipdroid | closed | Dial Tone | Priority-Medium Type-Defect auto-migrated | ```
Dial tones work, but are not accurate.
E.g., when I call my cell phone's voice mail and try to enter the password using
the keypad, it will generate a tone; sometimes it doesn't recognize the tone.
If I use the house phone to call it, it seems to be fine.
```
Original issue reported on code.google.com by `riceball...@gmail.com` on 4 Mar 2013 at 4:23
| 1.0 | Dial Tone - ```
Dial tones work, but are not accurate.
E.g., when I call my cell phone's voice mail and try to enter the password using
the keypad, it will generate a tone; sometimes it doesn't recognize the tone.
If I use the house phone to call it, it seems to be fine.
```
Original issue reported on code.google.com by `riceball...@gmail.com` on 4 Mar 2013 at 4:23
| defect | dial tone dial tone work but not accurate ex when i call my cell phone s voice mail and try to enter the password using the keypad it will generate a tone sometime it doesn t recognize the tone if i use the house phone to call it it seems to be fine original issue reported on code google com by riceball gmail com on mar at | 1 |
12,507 | 2,702,224,750 | IssuesEvent | 2015-04-06 03:55:27 | LenShustek/arduino-playtune | https://api.github.com/repos/LenShustek/arduino-playtune | closed | Directory Issues when compiling Playtune | auto-migrated Priority-Medium Type-Defect | ```
Hello,
Thanks for the great program- I'm looking forward to trying it out.
I'm wondering if you could help me get started. I have created a folder called
"Playtune" in my Arduino Libraries folder. When I try to compile the
"test_nano.pde" code, I get the errors shown in the attached picture.
I'm running Windows Vista 64 bit. You state in the Miditones documentation that
there is a compiling issue, but this is Playtunes so I assume there is no
problem with 64 bit since it's not stated in the readme.
I made sure to set my folder location in the IDE preferences to the following:
C:\Users\Preston\Desktop\Arduino-1.0
I also tried deleting the Arduino folder and unziping it again to start fresh.
Inside the Playtune library folder I placed the "Playtune.h" and "Playtune.cpp"
files.
I appreciate your help.
Preston
```
Original issue reported on code.google.com by `pyesc...@asu.edu` on 13 Apr 2012 at 7:37
Attachments:
* [playtune.png](https://storage.googleapis.com/google-code-attachments/arduino-playtune/issue-1/comment-0/playtune.png)
| 1.0 | Directory Issues when compiling Playtune - ```
Hello,
Thanks for the great program- I'm looking forward to trying it out.
I'm wondering if you could help me get started. I have created a folder called
"Playtune" in my Arduino Libraries folder. When I try to compile the
"test_nano.pde" code, I get the errors shown in the attached picture.
I'm running Windows Vista 64 bit. You state in the Miditones documentation that
there is a compiling issue, but this is Playtunes so I assume there is no
problem with 64 bit since it's not stated in the readme.
I made sure to set my folder location in the IDE preferences to the following:
C:\Users\Preston\Desktop\Arduino-1.0
I also tried deleting the Arduino folder and unziping it again to start fresh.
Inside the Playtune library folder I placed the "Playtune.h" and "Playtune.cpp"
files.
I appreciate your help.
Preston
```
Original issue reported on code.google.com by `pyesc...@asu.edu` on 13 Apr 2012 at 7:37
Attachments:
* [playtune.png](https://storage.googleapis.com/google-code-attachments/arduino-playtune/issue-1/comment-0/playtune.png)
| defect | directory issues when compiling playtune hello thanks for the great program i m looking forward to trying it out i m wondering if you could help me get started i have created a folder called playtune in my arduino libraries folder when i try to compile the test nano pde code i get the errors shown in the attached picture i m running windows vista bit you state in the miditones documentation that there is a compiling issue but this is playtunes so i assume there is no problem with bit since it not stated in the readme i made sure to set my folder location in the ide preferences to the following c users preston desktop arduino i also tried deleting the arduino folder and unziping it again to start fresh inside the playtune library folder i placed the playtune h and playtune cpp files i appreciate your help preston original issue reported on code google com by pyesc asu edu on apr at attachments | 1 |
305,554 | 23,119,847,135 | IssuesEvent | 2022-07-27 20:14:21 | PennLINC/xcp_d | https://api.github.com/repos/PennLINC/xcp_d | closed | detrend | documentation enhancement | Hello developer,
I am writing to confirm whether detrend is done using xcp_abcd by default. It seems like it is done based on the source code but I did not find this info in the method section in the sub-ID.html file.
The following is the code to do the detrend if I understand correctly.
# xcp_abcd/interfaces/regression.py
# line 78 ~ 80
orderx =np.floor(1+ data_matrix.shape[1]*self.inputs.tr/150)
dd_data = demean_detrend_data(data=data_matrix,TR=self.inputs.tr,order=orderx)
confound = demean_detrend_data(data=confound,TR=self.inputs.tr,order=orderx)
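For context, a minimal numpy sketch of what a demean/detrend step of this kind does. The actual `demean_detrend_data` implementation is not shown in this issue, so the function below is an illustrative assumption, not xcp_d's code:

```python
import numpy as np

def demean_detrend(data, order=1):
    # Remove the mean and a polynomial trend of the given order from each
    # row (one timeseries per row). Illustrative sketch only -- not the
    # actual xcp_d demean_detrend_data implementation.
    x = np.arange(data.shape[1])
    out = np.empty_like(data, dtype=float)
    for i, row in enumerate(data):
        coefs = np.polynomial.polynomial.polyfit(x, row, deg=int(order))
        trend = np.polynomial.polynomial.polyval(x, coefs)
        out[i] = row - trend  # subtracting the fit also removes the mean
    return out

# A purely linear signal is removed entirely (up to numerical error):
ramp = np.vstack([np.linspace(0, 10, 50), np.linspace(5, -5, 50)])
print(np.allclose(demean_detrend(ramp, order=1), 0))  # True
```

With a polynomial of the fitted order present in the data, the residual is zero up to numerical error, which is the defining behavior of a detrend step.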
The following is the method section in the sub-ID.html file.
Post-processing of fMRIPrep outputs
For each of the nine CIFTI runs found per subject (across all tasks and sessions), the following post-processing was performed: before nuissance regression and filtering any volumes with framewise-displacement greater than 10.0 mm (Power et al. 2014; Satterthwaite et al. 2013) were flagged as outlier and excluded from nuissance regression. In total, 36 nuisance regressors were selected from the nuisance confound matrices of fMRIPrep output. These nuisance regressors included six motion parameters, global signal, the mean white matter, the mean CSF signal with their temporal derivatives, and the quadratic expansion of six motion parameters, tissues signals and their temporal derivatives (Ciric et al. 2018, 2017; Satterthwaite et al. 2013). These nuisance regressors were regressed from the BOLD data using linear regression - as implemented in Scikit-Learn 0.24.2 (Pedregosa et al. 2011). Residual timeseries from this regression were then band-pass filtered to retain signals within the 0.01-0.08 Hz frequency band. The processed BOLD was smoothed using Connectome Workbench with a gaussian kernel size of 6.0 mm (FWHM).
Many thanks for your confirmation.
Xiuyi
| 1.0 | detrend - Hello developer,
I am writing to confirm whether detrend is done using xcp_abcd by default. It seems like it is done based on the source code but I did not find this info in the method section in the sub-ID.html file.
The following is the code to do the detrend if I understand correctly.
# xcp_abcd/interfaces/regression.py
# line 78 ~ 80
orderx =np.floor(1+ data_matrix.shape[1]*self.inputs.tr/150)
dd_data = demean_detrend_data(data=data_matrix,TR=self.inputs.tr,order=orderx)
confound = demean_detrend_data(data=confound,TR=self.inputs.tr,order=orderx)
The following is the method section in the sub-ID.html file.
Post-processing of fMRIPrep outputs
For each of the nine CIFTI runs found per subject (across all tasks and sessions), the following post-processing was performed: before nuissance regression and filtering any volumes with framewise-displacement greater than 10.0 mm (Power et al. 2014; Satterthwaite et al. 2013) were flagged as outlier and excluded from nuissance regression. In total, 36 nuisance regressors were selected from the nuisance confound matrices of fMRIPrep output. These nuisance regressors included six motion parameters, global signal, the mean white matter, the mean CSF signal with their temporal derivatives, and the quadratic expansion of six motion parameters, tissues signals and their temporal derivatives (Ciric et al. 2018, 2017; Satterthwaite et al. 2013). These nuisance regressors were regressed from the BOLD data using linear regression - as implemented in Scikit-Learn 0.24.2 (Pedregosa et al. 2011). Residual timeseries from this regression were then band-pass filtered to retain signals within the 0.01-0.08 Hz frequency band. The processed BOLD was smoothed using Connectome Workbench with a gaussian kernel size of 6.0 mm (FWHM).
Many thanks for your confirmation.
Xiuyi
| non_defect | detrend hello deverlopper i am writing to confirm whether detrend is done using xcp abcd by default it seems like it is done based on the source code but i did not find this info in the method section in the sub id html file the following is the code to do the detrend if i understand correctly xcp abcd interfaces regression py line orderx np floor data matrix shape self inputs tr dd data demean detrend data data data matrix tr self inputs tr order orderx confound demean detrend data data confound tr self inputs tr order orderx the following is the method section in the sub id html file post processing of fmriprep outputs for each of the nine cifti runs found per subject across all tasks and sessions the following post processing was performed before nuissance regression and filtering any volumes with framewise displacement greater than mm power et al satterthwaite et al were flagged as outlier and excluded from nuissance regression in total nuisance regressors were selected from the nuisance confound matrices of fmriprep output these nuisance regressors included six motion parameters global signal the mean white matter the mean csf signal with their temporal derivatives and the quadratic expansion of six motion parameters tissues signals and their temporal derivatives ciric et al satterthwaite et al these nuisance regressors were regressed from the bold data using linear regression as implemented in scikit learn pedregosa et al residual timeseries from this regression were then band pass filtered to retain signals within the hz frequency band the processed bold was smoothed using connectome workbench with a gaussian kernel size of mm fwhm many thanks for your confirmation xiuyi | 0 |
28,618 | 5,311,288,930 | IssuesEvent | 2017-02-13 02:44:21 | junichi11/netbeans-gitignore-io-plugin | https://api.github.com/repos/junichi11/netbeans-gitignore-io-plugin | closed | Plugin nbm does not contain plugin description | defect | The nbm distributable does not contain any description metadata. This means when viewing the plugin on NetBeans update centre it is unclear what the plugin does. The provided link does go to the github site which does provide a reasonable description from the README.md
Please add a description to the "OpenIDE-Module-Short-Description", perhaps just the contents of the README.
| 1.0 | Plugin nbm does not contain plugin description - The nbm distributable does not contain any description metadata. This means when viewing the plugin on NetBeans update centre it is unclear what the plugin does. The provided link does go to the github site which does provide a reasonable description from the README.md
Please add a description to the "OpenIDE-Module-Short-Description", perhaps just the contents of the README.
| defect | plugin nbm does not contain plugin description the nbm distributable does not contain any description metadata this means when viewing the plugin on netbeans update centre it is unclear what the plugin does the provided link does go to the github site which does provide a reasonable description from the readme md please add a description to the openide module short description perhaps just the contents of the readme | 1 |
269,827 | 28,960,307,029 | IssuesEvent | 2023-05-10 01:31:24 | praneethpanasala/linux | https://api.github.com/repos/praneethpanasala/linux | reopened | CVE-2020-10766 (Medium) detected in linuxv4.19 | Mend: dependency security vulnerability | ## CVE-2020-10766 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.19</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A logic bug flaw was found in Linux kernel before 5.8-rc1 in the implementation of SSBD. A bug in the logic handling allows an attacker with a local account to disable SSBD protection during a context switch when additional speculative execution mitigations are in place. This issue was introduced when the per task/process conditional STIPB switching was added on top of the existing SSBD switching. The highest threat from this vulnerability is to confidentiality.
<p>Publish Date: 2020-09-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10766>CVE-2020-10766</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
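As a cross-check, the 5.5 base score above follows directly from the CVSS v3.0 base-score formula. A small sketch of that arithmetic, with metric weights taken from the CVSS v3.0 specification (this is an independent recomputation, not tooling from the report):

```python
import math

def roundup(x):
    # CVSS v3 "roundup": smallest one-decimal value >= x
    return math.ceil(x * 10) / 10

# Weights for AV:L / AC:L / PR:L (scope unchanged) / UI:N and C:H / I:N / A:N
av, ac, pr, ui = 0.55, 0.77, 0.62, 0.85
conf, integ, avail = 0.56, 0.0, 0.0

iss = 1 - (1 - conf) * (1 - integ) * (1 - avail)
impact = 6.42 * iss                       # scope-unchanged impact sub-score
exploitability = 8.22 * av * ac * pr * ui
base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)  # 5.5
```

The same metric vector rendered in the details above (Local/Low/Low/None, Confidentiality High only) reproduces the reported 5.5.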
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10766">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10766</a></p>
<p>Release Date: 2020-09-23</p>
<p>Fix Resolution: v5.8-rc1,v4.4.228,v4.9.228,v4.14.185,v4.19.129,v5.4.47,v5.7.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-10766 (Medium) detected in linuxv4.19 - ## CVE-2020-10766 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.19</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A logic bug flaw was found in Linux kernel before 5.8-rc1 in the implementation of SSBD. A bug in the logic handling allows an attacker with a local account to disable SSBD protection during a context switch when additional speculative execution mitigations are in place. This issue was introduced when the per task/process conditional STIPB switching was added on top of the existing SSBD switching. The highest threat from this vulnerability is to confidentiality.
<p>Publish Date: 2020-09-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10766>CVE-2020-10766</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10766">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10766</a></p>
<p>Release Date: 2020-09-23</p>
<p>Fix Resolution: v5.8-rc1,v4.4.228,v4.9.228,v4.14.185,v4.19.129,v5.4.47,v5.7.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details a logic bug flaw was found in linux kernel before in the implementation of ssbd a bug in the logic handling allows an attacker with a local account to disable ssbd protection during a context switch when additional speculative execution mitigations are in place this issue was introduced when the per task process conditional stipb switching was added on top of the existing ssbd switching the highest threat from this vulnerability is to confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
78,605 | 9,778,425,574 | IssuesEvent | 2019-06-07 12:01:14 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [APM] Show annotation on graphs when transaction occurred | :apm design | *Original comment by @formgeist:*
It would be cool to have an annotation, or time marker about when the currently displayed transaction took place. That makes it easier to correlate the transaction with the metrics. | 1.0 | [APM] Show annotation on graphs when transaction occurred - *Original comment by @formgeist:*
It would be cool to have an annotation, or time marker about when the currently displayed transaction took place. That makes it easier to correlate the transaction with the metrics. | non_defect | show annotation on graphs when transaction occurred original comment by formgeist it would be cool to have an annotation or time marker about when the currently displayed transaction took place that makes it easier to correlate the transaction with the metrics | 0 |
54,947 | 14,078,444,758 | IssuesEvent | 2020-11-04 13:34:59 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | [MOBILE DESIGN]: GIBCT EYB - Responsive tables SHOULD have HTML updated for better mobile presentation | 508-defect-3 508-issue-semantic-markup 508/Accessibility Awaiting Feedback BAH-EYB bug | # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
<!--
Enter an issue title using the format [ERROR TYPE]: Brief description of the problem
---
[SCREENREADER]: Edit buttons need aria-label for context
[KEYBOARD]: Add another user link will not receive keyboard focus
[AXE-CORE]: Heading levels should increase by one
[COGNITION]: Error messages should be more specific
[COLOR]: Blue button on blue background does not have sufficient contrast ratio
---
-->
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. -->
**VFS Point of Contact:** _Trevor_
## User Story or Problem Statement
<!-- Example: As a user with cognitive considerations, I expect to see a label and input pairing consistently styled as throughout the rest of the site, with the label just above the text/email/search input or to the right of a radio/checkbox input, so that I am clearly able to understand what entry is expected. -->
As a mobile user, I want to see a header for the responsive table rows, so I understand what each data point means.
## Details
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The responsive table that's deployed on the GIBCT needs a couple of modifications to be fully inline with the recommended version that will become a design system component. Specifically:
1. The `tbody > tr > th` needs a `role="rowheader"` attribute
1. The `tbody > tr > th` needs a `<dfn>` added that is hidden at desktop size, and shown at mobile size
## Acceptance Criteria
- [ ] Code is updated to match the HTML snippet and [comment on 13825](https://github.com/department-of-veterans-affairs/va.gov-team/issues/13825#issuecomment-700810625) from VSA specialist
- [ ] DFN labels are appearing at mobile sizes
- [ ] HTML validates in the W3C validator
- [ ] Zero axe-core violations are shown
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
## Environment
* https://staging.va.gov/gi-bill-comparison-tool/profile/11006124 or any other profile view that has the responsive table
## Solution (if known)
```diff
<tbody>
<tr role="row">
- <th class="school-name-cell" scope="row" role="cell" tabindex="-1">...</th>
+ <th class="school-name-cell" role="rowheader" scope="row" tabindex="-1"> <!-- add rowheader role; set scope of th to row -->
+ <dfn>SMC letter designation</dfn> <!-- provides the column heading for mobile, but hides in desktop view -->
+ <span>SMC-K</span> <!-- the value of the row header -->
</th>
<td role="cell"><dfn>Monthly payment (in U.S. $)</dfn><span class="currency">110.31</span></td>
<td role="cell"><dfn>How this payment variation works</dfn><span class="paragraph">If you qualify for SMC-K, we add this rate to your basic disability compensation rate for any disability rating from 0% to 100%. We also add this rate to all SMC basic rates except SMC-O, SMC-Q, and SMC-R. You may receive 1 to 3 SMC-K awards in addition to basic and SMC rates.</span></td>
</tr>
```
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| 1.0 | [MOBILE DESIGN]: GIBCT EYB - Responsive tables SHOULD have HTML updated for better mobile presentation - # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
<!--
Enter an issue title using the format [ERROR TYPE]: Brief description of the problem
---
[SCREENREADER]: Edit buttons need aria-label for context
[KEYBOARD]: Add another user link will not receive keyboard focus
[AXE-CORE]: Heading levels should increase by one
[COGNITION]: Error messages should be more specific
[COLOR]: Blue button on blue background does not have sufficient contrast ratio
---
-->
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. -->
**VFS Point of Contact:** _Trevor_
## User Story or Problem Statement
<!-- Example: As a user with cognitive considerations, I expect to see a label and input pairing consistently styled as throughout the rest of the site, with the label just above the text/email/search input or to the right of a radio/checkbox input, so that I am clearly able to understand what entry is expected. -->
As a mobile user, I want to see a header for the responsive table rows, so I understand what each data point means.
## Details
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The responsive table that's deployed on the GIBCT needs a couple of modifications to be fully inline with the recommended version that will become a design system component. Specifically:
1. The `tbody > tr > th` needs a `role="rowheader"` attribute
1. The `tbody > tr > th` needs a `<dfn>` added that is hidden at desktop size, and shown at mobile size
## Acceptance Criteria
- [ ] Code is updated to match the HTML snippet and [comment on 13825](https://github.com/department-of-veterans-affairs/va.gov-team/issues/13825#issuecomment-700810625) from VSA specialist
- [ ] DFN labels are appearing at mobile sizes
- [ ] HTML validates in the W3C validator
- [ ] Zero axe-core violations are shown
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
## Environment
* https://staging.va.gov/gi-bill-comparison-tool/profile/11006124 or any other profile view that has the responsive table
## Solution (if known)
```diff
<tbody>
<tr role="row">
- <th class="school-name-cell" scope="row" role="cell" tabindex="-1">...</th>
+ <th class="school-name-cell" role="rowheader" scope="row" tabindex="-1"> <!-- add rowheader role; set scope of th to row -->
+ <dfn>SMC letter designation</dfn> <!-- provides the column heading for mobile, but hides in desktop view -->
+ <span>SMC-K</span> <!-- the value of the row header -->
</th>
<td role="cell"><dfn>Monthly payment (in U.S. $)</dfn><span class="currency">110.31</span></td>
<td role="cell"><dfn>How this payment variation works</dfn><span class="paragraph">If you qualify for SMC-K, we add this rate to your basic disability compensation rate for any disability rating from 0% to 100%. We also add this rate to all SMC basic rates except SMC-O, SMC-Q, and SMC-R. You may receive 1 to 3 SMC-K awards in addition to basic and SMC rates.</span></td>
</tr>
```
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| defect | gibct eyb responsive tables should have html updated for better mobile presentation enter an issue title using the format brief description of the problem edit buttons need aria label for context add another user link will not receive keyboard focus heading levels should increase by one error messages should be more specific blue button on blue background does not have sufficient contrast ratio feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements point of contact vfs point of contact trevor user story or problem statement as a mobile user i want to see a header for the responsive table rows so i understand what each data point means details the responsive table that s deployed on the gibct needs a couple of modifications to be fully inline with the recommended version that will become a design system component specifically the tbody tr th needs a role rowheader attribute the tbody tr th needs a added that is hidden at desktop size and shown at mobile size acceptance criteria code is updated to match the html snippet and from vsa specialist dfn labels are appearing at mobile sizes html validates in the validator zero axe core violations are shown definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix environment or any other profile view that has the responsive table solution if known diff smc letter designation smc k monthly payment in nbsp u s nbsp how this payment variation works if you qualify for smc k we add this rate to your basic disability compensation rate for any disability rating from to we also add this rate to all smc basic rates except smc o smc q and smc r you may receive to smc k awards in addition to basic and smc rates screenshots or trace logs | 1 |
159,137 | 20,036,652,437 | IssuesEvent | 2022-02-02 12:38:43 | kapseliboi/dapp | https://api.github.com/repos/kapseliboi/dapp | opened | CVE-2021-37713 (High) detected in tar-4.4.13.tgz | security vulnerability | ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- web3-1.2.11.tgz (Root Library)
- web3-bzz-1.2.11.tgz
- swarm-js-0.1.40.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/dapp/commit/79de7acd382466c6348d970d41ce91b47fc3366d">79de7acd382466c6348d970d41ce91b47fc3366d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
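The drive-letter issue described above is a special case of a general requirement for archive extractors: every resolved entry path must be verified to remain inside the extraction directory. A minimal sketch of such a containment check in Python (illustrative only -- not node-tar's actual fix, which additionally normalizes Windows drive letters):

```python
import os

def is_within(base, entry):
    # True if `entry`, resolved against `base`, stays inside `base`.
    base = os.path.realpath(base)
    target = os.path.realpath(os.path.join(base, entry))
    return os.path.commonpath([base, target]) == base

print(is_within("/tmp/extract", "docs/readme.txt"))   # True
print(is_within("/tmp/extract", "../../etc/passwd"))  # False
```

Resolving first and comparing afterwards is the key ordering: checking the raw entry string for `..` is exactly the insufficient logic the advisory describes.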
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
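For reference, the 8.6 above is reproducible from the listed metrics with the CVSS v3.0 base-score equations (weights taken from the v3.0 specification; Scope: Changed selects the adjusted impact formula):

```python
import math

# CVSS v3.0 weights for AV:L / AC:L / PR:N / UI:R and High C/I/A impacts.
AV, AC, PR, UI = 0.55, 0.77, 0.85, 0.62
C = I = A = 0.56

iss = 1 - (1 - C) * (1 - I) * (1 - A)
# Scope: Changed variant of the impact sub-score.
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
exploitability = 8.22 * AV * AC * PR * UI
base = min(1.08 * (impact + exploitability), 10)   # 1.08 multiplier: scope changed
score = math.ceil(base * 10) / 10                  # CVSS rounds *up* to one decimal
# score evaluates to 8.6, matching the report
```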
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (web3): 1.3.0-rc.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37713 (High) detected in tar-4.4.13.tgz - ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.13.tgz</b></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- web3-1.2.11.tgz (Root Library)
  - web3-bzz-1.2.11.tgz
    - swarm-js-0.1.40.tgz
      - :x: **tar-4.4.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/dapp/commit/79de7acd382466c6348d970d41ce91b47fc3366d">79de7acd382466c6348d970d41ce91b47fc3366d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
- Scope: Changed
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (web3): 1.3.0-rc.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file package json path to vulnerable library node modules tar package json dependency hierarchy tgz root library bzz tgz swarm js tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted this is in part accomplished by sanitizing absolute paths of entries within the archive skipping archive entries that contain path portions and resolving the sanitized paths against the extraction target directory this logic was insufficient on windows systems when extracting tar files that contained a path that was not an absolute path but specified a drive letter different from the extraction target such as c some path if the drive letter does not match the extraction target for example d extraction dir then the result of path resolve extractiondirectory entrypath would resolve against the current working directory on the c drive rather than the extraction target directory additionally a portion of the path could occur immediately after the drive letter such as c foo and was not properly sanitized by the logic that checked for within the normalized and split portions of the path this only affects users of node tar on windows systems these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar there is no reasonable way to work 
around this issue without performing the same path normalization procedures that node tar now does users are encouraged to upgrade to the latest patched versions of node tar rather than attempt to sanitize paths themselves publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution rc step up your open source security game with whitesource | 0 |
6,119 | 2,610,221,512 | IssuesEvent | 2015-02-26 19:10:17 | chrsmith/somefinders | https://api.github.com/repos/chrsmith/somefinders | opened | utorrent rus setup exe.txt | auto-migrated Priority-Medium Type-Defect | ```
'''Всеволод Галкин'''
Hi everyone, can anyone tell me where I can find
.utorrent rus setup exe.txt. I've seen it somewhere already
'''Валентин Шилов'''
Here, take the link http://bit.ly/1ar0hU7
'''Арий Петров'''
It asks you to enter your mobile number! Isn't that dangerous?
'''Влас Андреев'''
No, it doesn't affect your balance
'''Велимир Соколов'''
Nah, it's all OK, nothing was deducted from my account
File information: utorrent rus setup exe.txt
Uploaded: this month
Times downloaded: 897
Rating: 106
Average download speed: 250
Similar files: 35
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 10:51 | 1.0 | utorrent rus setup exe.txt - ```
'''Всеволод Галкин'''
Hi everyone, can anyone tell me where I can find
.utorrent rus setup exe.txt. I've seen it somewhere already
'''Валентин Шилов'''
Here, take the link http://bit.ly/1ar0hU7
'''Арий Петров'''
It asks you to enter your mobile number! Isn't that dangerous?
'''Влас Андреев'''
No, it doesn't affect your balance
'''Велимир Соколов'''
Nah, it's all OK, nothing was deducted from my account
File information: utorrent rus setup exe.txt
Uploaded: this month
Times downloaded: 897
Rating: 106
Average download speed: 250
Similar files: 35
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 10:51 | defect | utorrent rus setup exe txt всеволод галкин привет всем не подскажите где можно найти utorrent rus setup exe txt где то видел уже валентин шилов вот держи линк арий петров просит ввести номер мобилы не опасно ли это влас андреев не это не влияет на баланс велимир соколов неа все ок у меня ничего не списало информация о файле utorrent rus setup exe txt загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at | 1 |
2,547 | 2,607,927,157 | IssuesEvent | 2015-02-26 00:25:09 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | Impropper CSS paths created in rewriteCSSUrls in Win Vista / XAMPP | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Use XAMPP as your server on a windows machine
What is the expected output? What do you see instead?
Paths are created for CSS urls with the rewriteCSSUrls function... but on a
Windows machine using XAMPP, the forward slashes are created as
backslashes, so the rewritten paths are not readable via CSS.
What version of the product are you using? On what operating system?
Windows Vista running XAMPP
Please provide any additional information below.
That's it :)
```
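The class of fix here is to emit URL-style forward slashes regardless of which separator the OS uses when joining paths. Minify is PHP, so this is only an illustrative Python sketch of the normalization a rewriteCSSUrls-style function needs:

```python
import ntpath

def to_css_url(path):
    """CSS url() values always use forward slashes, even when built on Windows."""
    return path.replace('\\', '/')

# A path assembled with Windows separators (as under XAMPP on Vista)...
joined = ntpath.join('css\\img', 'logo.png')   # 'css\\img\\logo.png'
# ...is unusable in a stylesheet until the separators are normalized:
url = to_css_url(joined)                       # 'css/img/logo.png'
```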
-----
Original issue reported on code.google.com by `ballsa...@gmail.com` on 5 Feb 2008 at 4:52 | 1.0 | Impropper CSS paths created in rewriteCSSUrls in Win Vista / XAMPP - ```
What steps will reproduce the problem?
1. Use XAMPP as your server on a windows machine
What is the expected output? What do you see instead?
Paths are created for CSS urls with the rewriteCSSUrls function... but on a
Windows machine using XAMPP, the forward slashes are created as
backslashes, so the rewritten paths are not readable via CSS.
What version of the product are you using? On what operating system?
Windows Vista running XAMPP
Please provide any additional information below.
That's it :)
```
-----
Original issue reported on code.google.com by `ballsa...@gmail.com` on 5 Feb 2008 at 4:52 | defect | impropper css paths created in rewritecssurls in win vista xampp what steps will reproduce the problem use xampp as your server on a windows machine what is the expected output what do you see instead paths are created for css urls with the rewritecssurls function but on a windows machine using xampp the forward slashes are created as backslashes this not correctly rendering paths readable via css what version of the product are you using on what operating system windows vista running xampp please provide any additional information below that s it original issue reported on code google com by ballsa gmail com on feb at | 1 |
109,725 | 23,814,691,155 | IssuesEvent | 2022-09-05 04:53:41 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Bug]: Lot of deviation in time taken to setup autocomplete | Bug Performance High Autocomplete Needs Triaging FE Coders Pod | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
`startAutoComplete` took 3600ms in one profile and 2400ms in another profile. That is a 50% deviation, while the rest of the code has around a 2.5% deviation.
It would help with performance monitoring and performance in general if this deviation can be reduced.
Profiles can be found in the slack thread.
https://theappsmith.slack.com/archives/C024GUDM0LT/p1655467501490139
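The 50% figure in the description is the spread between the two timings relative to the faster run; the arithmetic, using the numbers quoted above:

```python
def percent_deviation(a_ms, b_ms):
    """Relative spread between two timings, as a percentage of the smaller one."""
    lo, hi = sorted((a_ms, b_ms))
    return (hi - lo) / lo * 100

start_autocomplete = percent_deviation(3600, 2400)   # 50.0
```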
### Steps To Reproduce
Check the profiles on slack thread attached.
### Public Sample App
_No response_
### Version
Both / Latest | 1.0 | [Bug]: Lot of deviation in time taken to setup autocomplete - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
`startAutoComplete` took 3600ms in one profile and 2400ms in another profile. That is a 50% deviation, while the rest of the code has around a 2.5% deviation.
It would help with performance monitoring and performance in general if this deviation can be reduced.
Profiles can be found in the slack thread.
https://theappsmith.slack.com/archives/C024GUDM0LT/p1655467501490139
### Steps To Reproduce
Check the profiles on slack thread attached.
### Public Sample App
_No response_
### Version
Both / Latest | non_defect | lot of deviation in time taken to setup autocomplete is there an existing issue for this i have searched the existing issues description startautocomplete took in one profile and in other profile that is deviation while the rest of the code has around deviation it would help with performance monitoring and performance in general if this deviation can be reduced profiles can be found in the slack thread steps to reproduce check the profiles on slack thread attached public sample app no response version both latest | 0 |
51,416 | 13,207,468,777 | IssuesEvent | 2020-08-14 23:13:18 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | GEANT4 tool (Trac #380) | Incomplete Migration Migrated from Trac combo core defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/380">https://code.icecube.wisc.edu/projects/icecube/ticket/380</a>, reported by olivas and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-22T18:26:26",
"_ts": "1416680786794380",
"description": "Having trouble installing GEANT4 and therefore verifying that g4-tankresponse builds in IceSim. geant4_4.9.5 seems to install just fine on Ubuntu 11.10, but cmake can't find it when building g4-tankresponse. It can't find \"liblist\" which just doesn't exist in 4.9.5 where cmake is looking for it. Not sure if this is a build issue or whether geant4.cmake just needs to be adapted for 4.9.5.\n\nCan't seem to build 4.9.3 either as it dies with the error:\n\n---> Verifying checksum(s) for geant4_4.9.3\nError: Checksum (md5) mismatch for G4ABLA.3.0.tar.gz\nError: Checksum (sha1) mismatch for G4ABLA.3.0.tar.gz\nError: Target com.apple.checksum returned: Unable to verify file checksums\n\nTrying 4.9.4 to see if that's any better...\n\n",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"time": "2012-03-13T04:08:42",
"component": "combo core",
"summary": "GEANT4 tool",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
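The build failure in the ticket is a checksum mismatch on a downloaded source tarball. As a generic illustration (a Python sketch, not the MacPorts code that printed those errors), verification amounts to comparing per-algorithm digests; the payload below is a stand-in:

```python
import hashlib
import os
import tempfile

def verify_checksums(path, expected):
    """Compare a file's digests against expected {algorithm: hexdigest} values."""
    for algo, want in expected.items():
        h = hashlib.new(algo)
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 16), b''):
                h.update(chunk)
        if h.hexdigest() != want:
            return False   # analogous to "Checksum (md5) mismatch for G4ABLA.3.0.tar.gz"
    return True

# Demo against a throwaway file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'stand-in payload')
    name = tmp.name
ok = verify_checksums(name, {'md5': hashlib.md5(b'stand-in payload').hexdigest()})
mismatch = verify_checksums(name, {'md5': '0' * 32})
os.unlink(name)
```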
| 1.0 | GEANT4 tool (Trac #380) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/380">https://code.icecube.wisc.edu/projects/icecube/ticket/380</a>, reported by olivas and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-22T18:26:26",
"_ts": "1416680786794380",
"description": "Having trouble installing GEANT4 and therefore verifying that g4-tankresponse builds in IceSim. geant4_4.9.5 seems to install just fine on Ubuntu 11.10, but cmake can't find it when building g4-tankresponse. It can't find \"liblist\" which just doesn't exist in 4.9.5 where cmake is looking for it. Not sure if this is a build issue or whether geant4.cmake just needs to be adapted for 4.9.5.\n\nCan't seem to build 4.9.3 either as it dies with the error:\n\n---> Verifying checksum(s) for geant4_4.9.3\nError: Checksum (md5) mismatch for G4ABLA.3.0.tar.gz\nError: Checksum (sha1) mismatch for G4ABLA.3.0.tar.gz\nError: Target com.apple.checksum returned: Unable to verify file checksums\n\nTrying 4.9.4 to see if that's any better...\n\n",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"time": "2012-03-13T04:08:42",
"component": "combo core",
"summary": "GEANT4 tool",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | tool trac migrated from json status closed changetime ts description having trouble installing and therefore verifying that tankresponse builds in icesim seems to install just fine on ubuntu but cmake can t find it when building tankresponse it can t find liblist which just doesn t exist in where cmake is looking for it not sure if this is a build issue or whether cmake just needs to be adapted for n ncan t seem to build either as it dies with the error n n verifying checksum s for nerror checksum mismatch for tar gz nerror checksum mismatch for tar gz nerror target com apple checksum returned unable to verify file checksums n ntrying to see if that s any better n n reporter olivas cc resolution fixed time component combo core summary tool priority normal keywords milestone owner nega type defect | 1 |
17,171 | 4,148,330,269 | IssuesEvent | 2016-06-15 10:33:13 | arangodb/arangodb | https://api.github.com/repos/arangodb/arangodb | closed | TRAVERSAL and GRAPH_TRAVERSAL documentation inconsistencies | 2_Question documentation | Having read the wonderful blogpost http://jsteemann.github.io/blog/2015/01/28/using-custom-visitors-in-aql-graph-traversals/ about using custom visitors when doing graph traversals, I stumbled upon the different documentation parts for the TRAVERSAL (see https://docs.arangodb.com/Aql/GraphFunctions.html) and GRAPH_TRAVERSAL (see https://docs.arangodb.com/Aql/GraphOperations.html) AQL functions.
It seems that many of the options that are documented for TRAVERSAL can also be used with the GRAPH_TRAVERSAL function, but aren't documented there, e.g.
- visitor
- visitorReturnsResults
- data
- followEdges
- ...
Would it be feasible to simply copy the documentation parts for these options over to the GRAPH_TRAVERSAL documentation section to make clear they can be used with GRAPH_TRAVERSAL, too?
Or are these options not officially supported there?
Additionally, it would be helpful to add links between GraphFunctions and GraphOperations as it is not clear at first sight that there exist two different types of graph-related traversal functions that can be used.
I'm not sure wether or how this ticket here is related to #1019 | 1.0 | TRAVERSAL and GRAPH_TRAVERSAL documentation inconsistencies - Having read the wonderful blogpost http://jsteemann.github.io/blog/2015/01/28/using-custom-visitors-in-aql-graph-traversals/ about using custom visitors when doing graph traversals, I stumbled upon the different documentation parts for the TRAVERSAL (see https://docs.arangodb.com/Aql/GraphFunctions.html) and GRAPH_TRAVERSAL (see https://docs.arangodb.com/Aql/GraphOperations.html) AQL functions.
It seems that many of the options that are documented for TRAVERSAL can also be used with the GRAPH_TRAVERSAL function, but aren't documented there, e.g.
- visitor
- visitorReturnsResults
- data
- followEdges
- ...
Would it be feasible to simply copy the documentation parts for these options over to the GRAPH_TRAVERSAL documentation section to make clear they can be used with GRAPH_TRAVERSAL, too?
Or are these options not officially supported there?
Additionally, it would be helpful to add links between GraphFunctions and GraphOperations as it is not clear at first sight that there exist two different types of graph-related traversal functions that can be used.
I'm not sure wether or how this ticket here is related to #1019 | non_defect | traversal and graph traversal documentation inconsistencies having read the wonderful blogpost about using custom visitors when doing graph traversals i stumbled upon the different documentation parts for the traversal see and graph traversal see aql functions seems a lot of the options that can be used with and that are documented for traversal can also be used with the graph traversal function but aren t documented there e g visitor visitorreturnsresults data followedges would it be feasible to simply copy the documentation parts for these options over to the graph traversal documentation section to make clear they can be used with graph traversal too or are these options not officially supported there additionally it would be helpful to add links between graphfunctions and graphoperations as it is not clear at the first sight that there exist two different types of graph related traversal functions that can be used i m not sure wether or how this ticket here is related to | 0 |
22,176 | 3,606,567,815 | IssuesEvent | 2016-02-04 11:56:24 | BigBadaboom/androidsvg | https://api.github.com/repos/BigBadaboom/androidsvg | closed | Parse failure (file included) | bug Priority-Medium Type-Defect | I noticed the SVG parser crashes when trying to load this SVG file:
http://soft.vub.ac.be/~mstevens/tmp/crashing_svg.zip
Here's the relevant part of the stacktrace:
```
java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String java.lang.StringBuilder.toString()' on a null object reference
at com.caverock.androidsvg.SVGParser.endElement(SVGParser.java:596)
at org.apache.harmony.xml.ExpatParser.endElement(ExpatParser.java:156)
at org.apache.harmony.xml.ExpatParser.appendBytes(Native Method)
at org.apache.harmony.xml.ExpatParser.parseFragment(ExpatParser.java:513)
at org.apache.harmony.xml.ExpatParser.parseDocument(ExpatParser.java:474)
at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:316)
at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:279)
at com.caverock.androidsvg.SVGParser.parse(SVGParser.java:394)
at com.caverock.androidsvg.SVG.getFromResource(SVG.java:187)
```
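The trace bottoms out in `SVGParser.endElement` with a `StringBuilder` that was apparently never initialized before `toString()` ran. A generic illustration of that failure class and its guard, written against Python's SAX API rather than AndroidSVG's actual code:

```python
import xml.sax

class TextCollector(xml.sax.ContentHandler):
    """SAX handlers accumulate character data between start/end callbacks.
    If the buffer is only created for *some* elements, endElement can hit a
    None buffer: the same failure class as the StringBuilder NPE above."""
    def __init__(self):
        super().__init__()
        self.buf = None
        self.texts = []
    def startElement(self, name, attrs):
        if name == 'text':               # buffer only for elements we care about
            self.buf = []
    def characters(self, content):
        if self.buf is not None:
            self.buf.append(content)
    def endElement(self, name):
        if name == 'text' and self.buf is not None:  # guard avoids the NPE analogue
            self.texts.append(''.join(self.buf))
            self.buf = None

handler = TextCollector()
# An element that *ends* without the buffer ever being created is harmless here:
xml.sax.parseString(b'<svg><desc>skip</desc><text>hello</text></svg>', handler)
```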
| 1.0 | Parse failure (file included) - I noticed the SVG parser crashes when trying to load this SVG file:
http://soft.vub.ac.be/~mstevens/tmp/crashing_svg.zip
Here's the relevant part of the stacktrace:
```
java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String java.lang.StringBuilder.toString()' on a null object reference
at com.caverock.androidsvg.SVGParser.endElement(SVGParser.java:596)
at org.apache.harmony.xml.ExpatParser.endElement(ExpatParser.java:156)
at org.apache.harmony.xml.ExpatParser.appendBytes(Native Method)
at org.apache.harmony.xml.ExpatParser.parseFragment(ExpatParser.java:513)
at org.apache.harmony.xml.ExpatParser.parseDocument(ExpatParser.java:474)
at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:316)
at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:279)
at com.caverock.androidsvg.SVGParser.parse(SVGParser.java:394)
at com.caverock.androidsvg.SVG.getFromResource(SVG.java:187)
```
| defect | parse failure file included i noticed the svg parser crashes when trying to load this svg file here s the relevant part of the stacktrace java lang nullpointerexception attempt to invoke virtual method java lang string java lang stringbuilder tostring on a null object reference at com caverock androidsvg svgparser endelement svgparser java at org apache harmony xml expatparser endelement expatparser java at org apache harmony xml expatparser appendbytes native method at org apache harmony xml expatparser parsefragment expatparser java at org apache harmony xml expatparser parsedocument expatparser java at org apache harmony xml expatreader parse expatreader java at org apache harmony xml expatreader parse expatreader java at com caverock androidsvg svgparser parse svgparser java at com caverock androidsvg svg getfromresource svg java | 1 |
409,560 | 27,743,175,727 | IssuesEvent | 2023-03-15 15:28:52 | toursbylocals/wiki | https://api.github.com/repos/toursbylocals/wiki | opened | WIKI - Custom Tours - Business Rules Documentation | documentation SPIKE | **Background**
One of the differentiators that ToursByLocals has is that any tour can be customized to accommodate the customer's needs/wishes. In addition, a custom tour could be created from scratch to be something completely different from the listings a supplier has.
We have already determined that custom tours would be created from the messaging system but there are certain business rules that still need to be documented and agreed across the company.
**ACS**
- [ ] Document business rules for custom tours
- [ ] Validate with stakeholders (Support, QEM and TX)
Some business rules as examples:
- Expiration or not of a custom tour
- Reutilization or not of same custom tour
- Changes to a custom tour once it's booked | 1.0 | WIKI - Custom Tours - Business Rules Documentation - **Background**
One of the differentiators that ToursByLocals has is that any tour can be customized to accommodate the customer's needs/wishes. In addition, a custom tour could be created from scratch to be something completely different from the listings a supplier has.
We have already determined that custom tours would be created from the messaging system but there are certain business rules that still need to be documented and agreed across the company.
**ACS**
- [ ] Document business rules for custom tours
- [ ] Validate with stakeholders (Support, QEM and TX)
Some business rules as examples:
- Expiration or not of a custom tour
- Reutilization or not of same custom tour
- Changes to a custom tour once it's booked | non_defect | wiki custom tours business rules documentation background one of the differentiator that toursbylocals has is that any tour can be customized to accommodate the customer s needs wishes in addition a custom tour could be created from the scratch to be something completely different than the listings a supplier has we have already determined that custom tours would be created from the messaging system but there are certain business rules that still need to be documented and agreed across the company acs document business rules for custom tours validate with stakeholders support qem and tx some business rules as examples expiration or not of a custom tour reutilization or not of same custom tour changes to a custom tour once it s booked | 0 |
71,675 | 23,758,160,966 | IssuesEvent | 2022-09-01 06:22:25 | SAP/fundamental-ngx | https://api.github.com/repos/SAP/fundamental-ngx | closed | bug: FDP Dynamic Page / FD Tab List: wrong initial `tabChange` event | Defect Hunting dxp | #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
When an FDP Dynamic Page / FD Tab List is loaded initially, a `tabChange` event is always triggered for the first tab. While this might be expected, this event is also triggered for the first tab even if the selected tab was changed programmatically before with `setSelectedTab`.
Therefore, we first get an event for the new tab, then an event for the first tab, which is wrong.
Additionally, the tab is not switched correctly to the new tab.
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
Angular 14.0.2, Fundamentals 0.36.0-rc.35
#### If this is a bug, please provide steps for reproducing it.
https://stackblitz.com/edit/angular-e3989p?devToolsHeight=33&file=src/app/platform-dynamic-page-tabbed-example.component.ts
See the log in the console, first the correct event for the new tab is shown, then an event for the first tab.
Additionally, the tab is not switched correctly to the second tab.
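To make the reported sequence concrete, here is a toy Python model of the behavior described above (not the component's actual implementation): a queued initial event fires even though the selection was already changed programmatically, producing a second, stale event and resetting the selection:

```python
class BuggyTabList:
    """Toy model of the report: the 'tabChange' for tab 0 is queued on
    creation and fires unconditionally on load, even after a programmatic
    selection change, so observers see [new_tab, first_tab]."""
    def __init__(self, tabs):
        self.tabs = tabs
        self.selected = 0
        self.events = []
        self._initial_pending = True      # queued before any selection change
    def set_selected_tab(self, index):    # analogue of setSelectedTab
        self.selected = index
        self.events.append(self.tabs[index])
    def load(self):
        # Bug: the queued initial event fires regardless of current selection,
        # and it also resets the view back to the first tab.
        if self._initial_pending:
            self.events.append(self.tabs[0])
            self.selected = 0
            self._initial_pending = False

tabs = BuggyTabList(['info', 'details'])
tabs.set_selected_tab(1)   # events: ['details']
tabs.load()                # events: ['details', 'info']  <- spurious second event
```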
#### Please provide relevant source code if applicable.
#### Is there anything else we should know?
| 1.0 | bug: FDP Dynamic Page / FD Tab List: wrong initial `tabChange` event - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
When an FDP Dynamic Page / FD Tab List is loaded initially, a `tabChange` event is always triggered for the first tab. While this might be expected, this event is also triggered for the first tab even if the selected tab was changed programmatically before with `setSelectedTab`.
Therefore, we first get an event for the new tab, then an event for the first tab, which is wrong.
Additionally, the tab is not switched correctly to the new tab.
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
Angular 14.0.2, Fundamentals 0.36.0-rc.35
#### If this is a bug, please provide steps for reproducing it.
https://stackblitz.com/edit/angular-e3989p?devToolsHeight=33&file=src/app/platform-dynamic-page-tabbed-example.component.ts
See the log in the console, first the correct event for the new tab is shown, then an event for the first tab.
Additionally, the tab is not switched correctly to the second tab.
#### Please provide relevant source code if applicable.
#### Is there anything else we should know?
| defect | bug fdp dynamic page fd tab list wrong initial tabchange event is this a bug enhancement or feature request bug briefly describe your proposal when an fdp dynamic page fd tab list is loaded initially a tabchange event is always triggered for the first tab while this might be expected this event is also triggered for the first tab even if the selected tab was changed programmatically before with setselectedtab therefore we first get an event for the new tab then an event for the first tab which is wrong additionally the tab is not switched correctly to the new tab which versions of angular and fundamental library for angular are affected if this is a feature request use current version angular fundamentals rc if this is a bug please provide steps for reproducing it see the log in the console first the correct event for the new tab is shown then an event for the first tab additionally the tab is not switched correctly to the second tab please provide relevant source code if applicable is there anything else we should know | 1 |
19,247 | 13,210,222,973 | IssuesEvent | 2020-08-15 15:47:42 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | jenkins_job_info validate_certs failed | affects_2.9 bot_closed bug collection collection:community.general module needs_collection_redirect needs_triage python3 support:community web_infrastructure | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I use the module `jenkins_job_info` on my personal Jenkins server. I have to use the `validate_certs` argument set to False, but my task sends me an error:
```
Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
jenkins_job_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = /home/jlebris/.ansible.cfg
configured module search path = ['/home/jlebris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/virtualenv/python3.7/molecule/lib/python3.7/site-packages/ansible
executable location = /opt/virtualenv/python3.7/molecule/bin/ansible
python version = 3.7.5 (default, Nov 15 2019, 17:00:24) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_SSH_CONTROL_PATH(/home/jlebris/.ansible.cfg) = %(directory)s/%%h-%%r
CACHE_PLUGIN(/home/jlebris/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/jlebris/.ansible.cfg) = /tmp/ansible_fact_cache
CACHE_PLUGIN_TIMEOUT(/home/jlebris/.ansible.cfg) = 86400
DEFAULT_FORKS(/home/jlebris/.ansible.cfg) = 10
DEFAULT_GATHERING(/home/jlebris/.ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/jlebris/.ansible.cfg) = ['/home/jlebris/inventories/hosts']
DEFAULT_VAULT_PASSWORD_FILE(/home/jlebris/.ansible.cfg) = /home/jlebris/.ansible/.vault_pass
DISPLAY_SKIPPED_HOSTS(/home/jlebris/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/jlebris/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task with jenkins_job_info with validate_certs set to false. The URL used has to have a self-signed SSL certificate.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Check if job {{ item }} exists"
jenkins_job_info:
name: "{{ item }}"
token: "{{ templates.jenkins.admin_token }}"
url: "{{ templates.jenkins.url }}"
user: "{{ templates.jenkins.admin_user }}"
validate_certs: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
the task has to succeed
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The error was: requests.exceptions.SSLError: HTTPSConnectionPool(host='my-jenkins.com', port=443): Max retries exceeded with url: /crumbIssuer/api/json (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))
```
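The traceback above is raised by `requests` underneath the module, which suggests `validate_certs: False` is not being forwarded to the HTTP layer. For comparison, this is what skipping verification for a self-signed certificate looks like in plain Python with only the standard library; it is a workaround sketch of the intended behavior, not the Ansible module's actual code:

```python
import ssl
import urllib.request


def unverified_context() -> ssl.SSLContext:
    """Build an SSLContext that skips certificate verification.

    This mirrors what ``validate_certs: False`` is expected to do.
    Only suitable for self-signed certificates on trusted networks.
    """
    ctx = ssl.create_default_context()
    # check_hostname must be disabled before verify_mode can be relaxed.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx


# Hypothetical usage against a self-signed endpoint:
# urllib.request.urlopen("https://my-jenkins.com", context=unverified_context())
```

A fix in the module would presumably need to thread an equivalent "do not verify" setting down to the python-jenkins/requests session it creates.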
| 1.0 | jenkins_job_info validate_certs failed - <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I use the module `jenkins_job_info` on my personal Jenkins server. I have to use the `validate_certs` argument set to False, but my task sends me an error:
```
Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
jenkins_job_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = /home/jlebris/.ansible.cfg
configured module search path = ['/home/jlebris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/virtualenv/python3.7/molecule/lib/python3.7/site-packages/ansible
executable location = /opt/virtualenv/python3.7/molecule/bin/ansible
python version = 3.7.5 (default, Nov 15 2019, 17:00:24) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_SSH_CONTROL_PATH(/home/jlebris/.ansible.cfg) = %(directory)s/%%h-%%r
CACHE_PLUGIN(/home/jlebris/.ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/jlebris/.ansible.cfg) = /tmp/ansible_fact_cache
CACHE_PLUGIN_TIMEOUT(/home/jlebris/.ansible.cfg) = 86400
DEFAULT_FORKS(/home/jlebris/.ansible.cfg) = 10
DEFAULT_GATHERING(/home/jlebris/.ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/jlebris/.ansible.cfg) = ['/home/jlebris/inventories/hosts']
DEFAULT_VAULT_PASSWORD_FILE(/home/jlebris/.ansible.cfg) = /home/jlebris/.ansible/.vault_pass
DISPLAY_SKIPPED_HOSTS(/home/jlebris/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/jlebris/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task with jenkins_job_info with validate_certs set to false. The URL used has to have a self-signed SSL certificate.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Check if job {{ item }} exists"
jenkins_job_info:
name: "{{ item }}"
token: "{{ templates.jenkins.admin_token }}"
url: "{{ templates.jenkins.url }}"
user: "{{ templates.jenkins.admin_user }}"
validate_certs: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
the task has to succeed
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The error was: requests.exceptions.SSLError: HTTPSConnectionPool(host='my-jenkins.com', port=443): Max retries exceeded with url: /crumbIssuer/api/json (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))
```
| non_defect | jenkins job info validate certs failed summary i use the module jenkins job info on my personnal jenkins server i have to use validate certs argument set to false but my task send me an error caused by sslerror sslerror u certificate verify failed ssl c issue type bug report component name jenkins job info ansible version ansible config file home jlebris ansible cfg configured module search path ansible python module location opt virtualenv molecule lib site packages ansible executable location opt virtualenv molecule bin ansible python version default nov configuration ansible ssh control path home jlebris ansible cfg directory s h r cache plugin home jlebris ansible cfg jsonfile cache plugin connection home jlebris ansible cfg tmp ansible fact cache cache plugin timeout home jlebris ansible cfg default forks home jlebris ansible cfg default gathering home jlebris ansible cfg smart default host list home jlebris ansible cfg default vault password file home jlebris ansible cfg home jlebris ansible vault pass display skipped hosts home jlebris ansible cfg false host key checking home jlebris ansible cfg false os environment distributor id debian description debian gnu linux buster release codename buster steps to reproduce create a task with jenkins job info with validate certs set to false the url used has to have a self signed ssl certificate yaml name check if job item exists jenkins job info name item token templates jenkins admin token url templates jenkins url user templates jenkins admin user validate certs false expected results the task have to succeed actual results the error was requests exceptions sslerror httpsconnectionpool host my jenkins com port max retries exceeded with url crumbissuer api json caused by sslerror sslerror u certificate verify failed ssl c | 0 |
70,766 | 23,311,922,206 | IssuesEvent | 2022-08-08 09:02:01 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | opened | Scrolling up when at the top of the list makes FAB disappear | T-Defect Z-AppLayout | ### Steps to reproduce
1. Make sure you're scrolled up to the top of the list
2. Scroll up using one finger
### Outcome
#### What did you expect?
For the FAB to always be visible
#### What happened instead?
The FAB disappears for the duration of the finger interaction with the screen
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
_No response_
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No | 1.0 | Scrolling up when at the top of the list makes FAB disappear - ### Steps to reproduce
1. Make sure you're scrolled up to the top of the list
2. Scroll up using one finger
### Outcome
#### What did you expect?
For the FAB to always be visible
#### What happened instead?
The FAB disappears for the duration of the finger interaction with the screen
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
_No response_
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No | defect | scrolling up when at the top of the list make fab dissapear steps to reproduce make sure you re scrolled up to the top of the list scroll up using one finger outcome what did you expect for the fab to always be visible what happened instead the fab dissapear for the duration of the finger interaction with the screen your phone model no response operating system version no response application version and app store no response homeserver no response will you send logs no are you willing to provide a pr no | 1 |
139,070 | 12,836,693,888 | IssuesEvent | 2020-07-07 14:43:01 | michbur/tidysq | https://api.github.com/repos/michbur/tidysq | opened | Create flow diagram for construct_sq() | documentation | Currently (as of 7 VII 2020) method `construct_sq()` has four parameters and their usage varies greatly depending on other parameters. All cases are described in documentation, but even clearest of lists have limited clarity. Thus I'd recommend creating a flow diagram that illustrates all possible (and impossible) parameter combinations. It could be used within vignettes, Github readme and possibly cheatsheet. | 1.0 | Create flow diagram for construct_sq() - Currently (as of 7 VII 2020) method `construct_sq()` has four parameters and their usage varies greatly depending on other parameters. All cases are described in documentation, but even clearest of lists have limited clarity. Thus I'd recommend creating a flow diagram that illustrates all possible (and impossible) parameter combinations. It could be used within vignettes, Github readme and possibly cheatsheet. | non_defect | create flow diagram for construct sq currently as of vii method construct sq has four parameters and their usage varies greatly depending on other parameters all cases are described in documentation but even clearest of lists have limited clarity thus i d recommend creating a flow diagram that illustrates all possible and impossible parameter combinations it could be used within vignettes github readme and possibly cheatsheet | 0 |
290,111 | 25,037,544,105 | IssuesEvent | 2022-11-04 17:20:08 | NationalSecurityAgency/skills-service | https://api.github.com/repos/NationalSecurityAgency/skills-service | closed | Styling prepended text in a block quote causes validation to fail | bug review test | Styling the text with bold or italics in a block quote will cause validation to fail | 1.0 | Styling prepended text in a block quote causes validation to fail - Styling the text with bold or italics in a block quote will cause validation to fail | non_defect | styling prepended text in a block quote causes validation to fail styling the text with bold or italics in a block quote will cause validation to fail | 0 |
48,801 | 13,184,745,000 | IssuesEvent | 2020-08-12 20:01:00 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | PFServers hang after the crash of 1-2 fpslaveXX nodes (Trac #236) | Incomplete Migration Migrated from Trac defect jeb + pnf | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/236
, reported by blaufuss and owned by tschmidt_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-05-25T13:50:15",
"description": "On occasion, like when a G6 fpslaveXX node crashes due to memory\nerrors, the drop out of this slaves pfclients will cause the PFServer(s) to hang\nand lock data processing. \n\nPFserver should not be stopped by a single client node crashing or being\nturned off.\n\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1337953815000000",
"component": "jeb + pnf",
"summary": "PFServers hang after the crash of 1-2 fpslaveXX nodes",
"priority": "normal",
"keywords": "",
"time": "2010-12-01T21:33:38",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| 1.0 | PFServers hang after the crash of 1-2 fpslaveXX nodes (Trac #236) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/236
, reported by blaufuss and owned by tschmidt_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-05-25T13:50:15",
"description": "On occasion, like when a G6 fpslaveXX node crashes due to memory\nerrors, the drop out of this slaves pfclients will cause the PFServer(s) to hang\nand lock data processing. \n\nPFserver should not be stopped by a single client node crashing or being\nturned off.\n\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1337953815000000",
"component": "jeb + pnf",
"summary": "PFServers hang after the crash of 1-2 fpslaveXX nodes",
"priority": "normal",
"keywords": "",
"time": "2010-12-01T21:33:38",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| defect | pfservers hang after the crash of fpslavexx nodes trac migrated from reported by blaufuss and owned by tschmidt json status closed changetime description on occasion like when a fpslavexx node crashes due to memory nerrors the drop out of this slaves pfclients will cause the pfserver s to hang nand lock data processing n npfserver should not be stopped by a single client node crashing or being nturned off n n reporter blaufuss cc resolution fixed ts component jeb pnf summary pfservers hang after the crash of fpslavexx nodes priority normal keywords time milestone owner tschmidt type defect | 1 |
654,202 | 21,640,931,243 | IssuesEvent | 2022-05-05 18:42:13 | googleapis/release-please | https://api.github.com/repos/googleapis/release-please | closed | Node peer dependencies cause spurious version bumps | priority: p2 type: bug |
#### Environment details
- OS: macOS 12.2.1
- Node.js version: 16.14.0
- npm version: 8.5.4
- `release-please` version: 13.13.0
#### Steps to reproduce
1. Create a node workspace with two components, one which is in the `peerDependencies` field of the other (typically you'd use a version range like `1.x`, though this bug happens regardless.) Let's say that `bar` has a peer dependency on `foo`.
2. Make a `feat` commit for package `foo`, causing release-please to create a release PR that bumps the minor version of `foo`.
3. Observe that `bar` will also have a patch version bump in the release PR, but with no change to the version of `foo` in `peerDependencies` field, and a changelog consisting of a 'Dependencies' header and an empty body.
An example of this behaviour can be seen in our repository here: https://github.com/Financial-Times/dotcom-tool-kit/pull/209. Take `@dotcom-tool-kit/babel` as an example in this case: it's only been bumped because its peer dependency, `dotcom-tool-kit`, has been bumped, but no code was changed (excluding `@dotcom-tool-kit/babel`'s version number in its `package.json`.)
### Why this is happening
This is happening because peer dependencies are [included](https://github.com/googleapis/release-please/blob/main/src/plugins/node-workspace.ts#L314) in the dependency graph created in the node workspace plugin, but not included in the [`localDependencies` field from `@lerna/package-graph`](https://github.com/lerna/lerna/blob/main/core/package-graph/index.js#L50) which is [used](https://github.com/googleapis/release-please/blob/main/src/plugins/node-workspace.ts#L235) as the map of all dependencies that are then updated. So peer dependencies are considered when determining what packages need to have their dependencies updated within a workspace, but are not considered when actually doing the update.
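The asymmetry described above can be illustrated with a small self-contained sketch (Python purely for illustration; release-please itself is TypeScript, and all names below are made up): counting `peerDependencies` edges when walking the workspace graph pulls `bar` into the bump set even though nothing later rewrites its peer range.

```python
def packages_needing_bump(changed, packages, include_peers):
    """Return the set of packages to release, given directly changed ones.

    `packages` maps name -> {"deps": [...], "peerDeps": [...]}.
    Transitively adds any package that depends on a bumped package.
    """
    to_bump = set(changed)
    grew = True
    while grew:
        grew = False
        for name, meta in packages.items():
            edges = meta["deps"] + (meta["peerDeps"] if include_peers else [])
            if name not in to_bump and to_bump.intersection(edges):
                to_bump.add(name)
                grew = True
    return to_bump


workspace = {
    "foo": {"deps": [], "peerDeps": []},
    "bar": {"deps": [], "peerDeps": ["foo"]},  # peer dependency only
}

# Counting peer edges drags `bar` in; excluding them leaves only `foo`.
print(sorted(packages_needing_bump({"foo"}, workspace, include_peers=True)))   # ['bar', 'foo']
print(sorted(packages_needing_bump({"foo"}, workspace, include_peers=False)))  # ['foo']
```

Under this reading, the spurious patch bump comes from the first traversal mode, while the actual `package.json` update step behaves like the second.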
### Proposed solution
I'd say peer dependencies should not be considered when determining which packages need dependency bumps – as stated earlier, peer dependencies very rarely use caret version ranges, instead opting to [give ranges as wide as possible](https://nodejs.org/en/blog/npm/peer-dependencies/#using-peer-dependencies) to be as compatible as possible. This is something that is more appropriate to manage manually, rather than being automated by a tool like release-please. | 1.0 | Node peer dependencies cause spurious version bumps -
#### Environment details
- OS: macOS 12.2.1
- Node.js version: 16.14.0
- npm version: 8.5.4
- `release-please` version: 13.13.0
#### Steps to reproduce
1. Create a node workspace with two components, one which is in the `peerDependencies` field of the other (typically you'd use a version range like `1.x`, though this bug happens regardless.) Let's say that `bar` has a peer dependency on `foo`.
2. Make a `feat` commit for package `foo`, causing release-please to create a release PR that bumps the minor version of `foo`.
3. Observe that `bar` will also have a patch version bump in the release PR, but with no change to the version of `foo` in `peerDependencies` field, and a changelog consisting of a 'Dependencies' header and an empty body.
An example of this behaviour can be seen in our repository here: https://github.com/Financial-Times/dotcom-tool-kit/pull/209. Take `@dotcom-tool-kit/babel` as an example in this case: it's only been bumped because its peer dependency, `dotcom-tool-kit`, has been bumped, but no code was changed (excluding `@dotcom-tool-kit/babel`'s version number in its `package.json`.)
### Why this is happening
This is happening because peer dependencies are [included](https://github.com/googleapis/release-please/blob/main/src/plugins/node-workspace.ts#L314) in the dependency graph created in the node workspace plugin, but not included in the [`localDependencies` field from `@lerna/package-graph`](https://github.com/lerna/lerna/blob/main/core/package-graph/index.js#L50) which is [used](https://github.com/googleapis/release-please/blob/main/src/plugins/node-workspace.ts#L235) as the map of all dependencies that are then updated. So peer dependencies are considered when determining what packages need to have their dependencies updated within a workspace, but are not considered when actually doing the update.
### Proposed solution
I'd say peer dependencies should not be considered when determining which packages need dependency bumps – as stated earlier, peer dependencies very rarely use caret version ranges, instead opting to [give ranges as wide as possible](https://nodejs.org/en/blog/npm/peer-dependencies/#using-peer-dependencies) to be as compatible as possible. This is something that is more appropriate to manage manually, rather than being automated by a tool like release-please. | non_defect | node peer dependencies cause spurious version bumps environment details os macos node js version npm version release please version steps to reproduce create a node workspace with two components one which is in the peerdependencies field of the other typically you d use a version range like x though this bug happens regardless let s say that bar has a peer dependency on foo make a feat commit for package foo causing release please to create a release pr that bumps the minor version of foo observe that bar will also have a patch version bump in the release pr but with no change to the version of foo in peerdependencies field and a changelog consisting of a dependencies header and an empty body an example of this behaviour can be seen in our repository here take dotcom tool kit babel as an example in this case it s only been bumped because its peer dependency dotcom tool kit has been bumped but no code was changed excluding dotcom tool kit babel s version number in its package json why this is happening this is happening because peer dependencies are in the dependency graph created in the node workspace plugin but not included in the which is as the map of all dependencies that are then updated so peer dependencies are considered when determining what packages need to have their dependencies updated within a workspace but are not considered when actually doing the update proposed solution i d say peer dependencies should not be considered when determining which packages need dependency bumps – as 
stated earlier peer dependencies very rarely use caret version ranges instead opting to to be as compatible as possible this is something that is more appropriate to manage manually rather than being automated by a tool like release please | 0 |
45,941 | 13,055,825,719 | IssuesEvent | 2020-07-30 02:50:57 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | Problems with PPC in GPU mode on Ubuntu 10.04 (Trac #298) | Incomplete Migration Migrated from Trac combo simulation defect | Migrated from https://code.icecube.wisc.edu/ticket/298
```json
{
"status": "closed",
"changetime": "2012-02-02T16:12:13",
"description": "First error, compiling:\n/usr/local/cuda/lib/libcudart.so: could not read symbols: File in wrong format\nIt was entering the lib instead of the lib64 directory. Removed the lib directory and created a symlink lib->lib64. Then everything compiled without errors.\n\nThe NVIDIA tests for the card (GPU computing SDK) compile and run with or without the change previously described.\n\nAfter compiling, testing the example script (with the proper path modifications) stops at:\nProcessing files: [filename]\nCUDA error: invalid device function\n\nNo more information.\n\nSystem details:\nUbuntu 10.04 x86_64 2.6.32-33 generic\nNvidia development driver 270.41.19\nCUDA toolkit 4.0.17\nPorts v4\ncmake 2.8.4\nPPC revision V00-00-03\nIcesim 02-05-09 RC\n\nI have tried so far using as well\nPPC r7352, same error\nCUDA toolkit 3.2 (specific for Ubuntu 10.04), same errors\nand the few ideas that the internet could give me. No success.\n\n",
"reporter": "icecube",
"cc": "",
"resolution": "wontfix",
"_ts": "1328199133000000",
"component": "combo simulation",
"summary": "Problems with PPC in GPU mode on Ubuntu 10.04",
"priority": "normal",
"keywords": "ppc gpu ubuntu cuda",
"time": "2011-07-21T16:57:52",
"milestone": "",
"owner": "yanezjua@ifh.de",
"type": "defect"
}
```
| 1.0 | Problems with PPC in GPU mode on Ubuntu 10.04 (Trac #298) - Migrated from https://code.icecube.wisc.edu/ticket/298
```json
{
"status": "closed",
"changetime": "2012-02-02T16:12:13",
"description": "First error, compiling:\n/usr/local/cuda/lib/libcudart.so: could not read symbols: File in wrong format\nIt was entering the lib instead of the lib64 directory. Removed the lib directory and created a symlink lib->lib64. Then everything compiled without errors.\n\nThe NVIDIA tests for the card (GPU computing SDK) compile and run with or without the change previously described.\n\nAfter compiling, testing the example script (with the proper path modifications) stops at:\nProcessing files: [filename]\nCUDA error: invalid device function\n\nNo more information.\n\nSystem details:\nUbuntu 10.04 x86_64 2.6.32-33 generic\nNvidia development driver 270.41.19\nCUDA toolkit 4.0.17\nPorts v4\ncmake 2.8.4\nPPC revision V00-00-03\nIcesim 02-05-09 RC\n\nI have tried so far using as well\nPPC r7352, same error\nCUDA toolkit 3.2 (specific for Ubuntu 10.04), same errors\nand the few ideas that the internet could give me. No success.\n\n",
"reporter": "icecube",
"cc": "",
"resolution": "wontfix",
"_ts": "1328199133000000",
"component": "combo simulation",
"summary": "Problems with PPC in GPU mode on Ubuntu 10.04",
"priority": "normal",
"keywords": "ppc gpu ubuntu cuda",
"time": "2011-07-21T16:57:52",
"milestone": "",
"owner": "yanezjua@ifh.de",
"type": "defect"
}
```
| defect | problems with ppc in gpu mode on ubuntu trac migrated from json status closed changetime description first error compiling n usr local cuda lib libcudart so could not read symbols file in wrong format nit was entering the lib instead of the directory removed the lib directory and created a symlink lib then everything compiled without errors n nthe nvidia tests for the card gpu computing sdk compile and run with or without the change previously described n nafter compiling testing the example script with the proper path modifications stops at nprocessing files ncuda error invalid device function n nno more information n nsystem details nubuntu generic nnvidia development driver ncuda toolkit nports ncmake nppc revision nicesim rc n ni have tried so far using as well nppc same error ncuda toolkit specific for ubuntu same errors nand the few ideas that the internet could give me no success n n reporter icecube cc resolution wontfix ts component combo simulation summary problems with ppc in gpu mode on ubuntu priority normal keywords ppc gpu ubuntu cuda time milestone owner yanezjua ifh de type defect | 1 |
271,987 | 8,494,449,761 | IssuesEvent | 2018-10-28 21:36:26 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Improve dependency management of MathematicalProgram back-ends | priority: medium team: manipulation type: feature request | The dependency management of MathematicalProgram back-ends is not ideal:
- the test_tags and bazel.rc bits are easy to get wrong;
- users who know they only want a specific back-end or two can't easily just use what they want;
- declaring tests that should pass when mixed with multiple back-ends is error-prone.
All of these should be solvable though improvements to the BUILD rules and C++ interface design. | 1.0 | Improve dependency management of MathematicalProgram back-ends - The dependency management of MathematicalProgram back-ends is not ideal:
- the test_tags and bazel.rc bits are easy to get wrong;
- users who know they only want a specific back-end or two can't easily just use what they want;
- declaring tests that should pass when mixed with multiple back-ends is error-prone.
All of these should be solvable though improvements to the BUILD rules and C++ interface design. | non_defect | improve dependency management of mathematicalprogram back ends the dependency management of mathematicalprogram back ends is not ideal the test tags and bazel rc bits are easy to get wrong users who know they only want a specific back end or two can t easily just use what they want declaring tests that should pass when mixed with multiple back ends is error prone all of these should be solvable though improvements to the build rules and c interface design | 0 |
73,878 | 24,847,012,550 | IssuesEvent | 2022-10-26 16:39:48 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Private space creation fails with M_LIMIT_EXCEEDED error | T-Defect | ### Steps to reproduce
1. Where are you starting? What can you see?
Simple Space creation fails
<img width="521" alt="image" src="https://user-images.githubusercontent.com/9841565/198084279-6ad11b8f-6654-4351-af01-2dcbf202e689.png">
3. What do you click?
Just created a new account, and started the flow to create a private space.
### Outcome
#### What did you expect?
That it worked
Android is failing too.
So probably some server change regarding room creation?
#### What happened instead?
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
matrix.org
### Will you send logs?
Yes | 1.0 | Private space creation fails with M_LIMIT_EXCEEDED error - ### Steps to reproduce
1. Where are you starting? What can you see?
Simple Space creation fails
<img width="521" alt="image" src="https://user-images.githubusercontent.com/9841565/198084279-6ad11b8f-6654-4351-af01-2dcbf202e689.png">
3. What do you click?
Just created a new account, and started the flow to create a private space.
### Outcome
#### What did you expect?
That it worked
Android is failing too.
So probably some server change regarding room creation?
#### What happened instead?
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
matrix.org
### Will you send logs?
Yes | defect | private space creation fails with m limit exceeded error steps to reproduce where are you starting what can you see simple space creation fails img width alt image src what do you click just create a new account and started the flow to create a private space outcome what did you expect that it worked android is failing too so probably some server change regarding room creation what happened instead operating system no response browser information no response url for webapp no response application version no response homeserver matrix org will you send logs yes | 1 |
44,729 | 12,360,609,501 | IssuesEvent | 2020-05-17 15:58:54 | nsalomonis/altanalyze | https://api.github.com/repos/nsalomonis/altanalyze | closed | Error encountered: Exon or junction is zero | Priority-Medium Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. I generated exon coordinate .bed file from Altanalyze.
2. Sort that file using BEDTools.
3. Re-running the Altanalyze
What is the expected output? What do you see instead?
No output.
What version of the product are you using? On what operating system?
AltAnalyze v2.0.6 - Windows
Please provide any additional information below.
The .bam file and junctions.bed (from Tophat) is already in the folder. The
Exon file had been generated from Altanalyze and then used BedTools. Now
re-running the process arising the problem. Looking forward for your support.
Thank You!
```
Original issue reported on code.google.com by `asma.bio...@gmail.com` on 27 May 2012 at 12:27
| 1.0 | Error encountered: Exon or junction is zero - ```
What steps will reproduce the problem?
1. I generated exon coordinate .bed file from Altanalyze.
2. Sort that file using BEDTools.
3. Re-running the Altanalyze
What is the expected output? What do you see instead?
No output.
What version of the product are you using? On what operating system?
AltAnalyze v2.0.6 - Windows
Please provide any additional information below.
The .bam file and junctions.bed (from Tophat) is already in the folder. The
Exon file had been generated from Altanalyze and then used BedTools. Now
re-running the process arising the problem. Looking forward for your support.
Thank You!
```
Original issue reported on code.google.com by `asma.bio...@gmail.com` on 27 May 2012 at 12:27
| defect | error encountered exon or junction is zero what steps will reproduce the problem i generated exon coordinate bed file from altanalyze sort that file using bedtools re running the altanalyze what is the expected output what do you see instead no ouput what version of the product are you using on what operating system altanalyze windows please provide any additional information below the bam file and junctions bed from tophat is already in the folder the exon file had been generated from altanalyze and then used bedtools now re running the process arising the problem looking forward for your support thank you original issue reported on code google com by asma bio gmail com on may at | 1 |
287,556 | 21,659,723,374 | IssuesEvent | 2022-05-06 17:50:06 | LSDOlab/csdl | https://api.github.com/repos/LSDOlab/csdl | reopened | TypeError of the `csdl_om` simulator object | documentation help wanted | <!--
Thank you for filing a bug report! Please provide a short summary of the
bug, along with any information you feel relevant to replicating the
bug.
Before filing a bug report, please ensure that your report does not fall
into a category for any of the other issue templates.
-->
Hi Victor,
I was trying to build a `CustomImplicitOperation` object corresponding to the OpenMDAO `ImplicitComponent` class. I assume they are using the same methodology except for the syntax differences.
However, I was having trouble simulating the model using `csdl_om`: the error popped up saying "TypeError: CSDL-OM only accepts CSDL Model specifications to construct a Simulator." Here is the complete error message when running the `ex_custom.py` in the CSDL repository, with a simulator defined with it.
```py
Traceback (most recent call last):
File "csdl_test.py", line 50, in <module>
sim = Simulator(ExampleImplicitSimple())
File "/home/ru/csdl_om/csdl_om/core/simulator.py", line 49, in __init__
raise TypeError(
TypeError: CSDL-OM only accepts CSDL Model specifications to construct a Simulator.
```
And the complete code example I'm using is as below.
```py
from csdl import CustomExplicitOperation, CustomImplicitOperation, NewtonSolver, ScipyKrylov
import csdl
import numpy as np
class ExampleImplicitSimple(CustomImplicitOperation):
"""
:param var: x
"""
def define(self):
print("="*40)
print(" Running define()...")
print("="*40)
self.add_input('a', val=1.)
self.add_input('b', val=-4.)
self.add_input('c', val=3.)
self.add_output('x', val=0.)
self.declare_derivatives('x', 'x')
self.declare_derivatives('x', ['a', 'b', 'c'])
self.linear_solver = ScipyKrylov()
self.nonlinear_solver = NewtonSolver(solve_subsystems=False)
def evaluate_residuals(self, inputs, outputs, residuals):
print("="*40)
print(" Running evaluate_residual()...")
print("="*40)
x = outputs['x']
a = inputs['a']
b = inputs['b']
c = inputs['c']
residuals['x'] = a * x**2 + b * x + c
def compute_derivatives(self, inputs, outputs, derivatives):
print("="*40)
print(" Running compute_derivatives()...")
print("="*40)
a = inputs['a']
b = inputs['b']
x = outputs['x']
derivatives['x', 'a'] = x**2
derivatives['x', 'b'] = x
derivatives['x', 'c'] = 1.0
derivatives['x', 'x'] = 2 * a * x + b
from csdl_om import Simulator
# Generate an implementation.
sim = Simulator(ExampleImplicitSimple())
# Run simulation.
sim.run()
# Access values
print(sim['x'])
```
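For reference, the Newton iteration that `NewtonSolver` drives on this residual (r(x) = a·x² + b·x + c, with the dr/dx declared in `compute_derivatives`) can be sketched outside the framework; this is a plain-Python illustration, not csdl code:

```python
def newton_solve(a, b, c, x0=0.0, tol=1e-10, max_iter=50):
    """Newton iteration on the residual r(x) = a*x**2 + b*x + c,
    using the derivative dr/dx = 2*a*x + b from the example above."""
    x = x0
    for _ in range(max_iter):
        r = a * x**2 + b * x + c
        if abs(r) < tol:
            break
        x -= r / (2 * a * x + b)
    return x

# With the example's defaults a=1, b=-4, c=3 the roots are 1 and 3;
# starting from x0=0 the iteration converges to x = 1.
print(newton_solve(1.0, -4.0, 3.0))
```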
It seems that the `CustomImplicitOperation` object is not recognized by the simulator as a valid input, which should be a `model` object ([here](https://github.com/LSDOlab/csdl_om/blob/dc657773c2da3433106dbf69ff7f8e8cd50f276d/csdl_om/core/simulator.py#L48)). Or it could also be me using the simulator wrong. Please let me know what you think.
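The guard that raises this error can be reproduced generically. The classes below are hypothetical stand-ins, not the real csdl/csdl_om code, but they show why passing a custom operation where a `Model` specification is expected fails immediately in `__init__`:

```python
# Hypothetical stand-ins for the real csdl classes (illustration only).
class Model:
    pass

class CustomImplicitOperation:
    pass

class Simulator:
    def __init__(self, rep):
        # Mirrors the kind of guard described in simulator.py: anything
        # that is not a Model specification is rejected outright.
        if not isinstance(rep, Model):
            raise TypeError(
                "CSDL-OM only accepts CSDL Model specifications "
                "to construct a Simulator."
            )
        self.rep = rep

Simulator(Model())                        # accepted
try:
    Simulator(CustomImplicitOperation())  # rejected, as in the report
except TypeError as exc:
    print(exc)
```

Under that reading, the custom operation would need to be used from inside a `Model` rather than handed to the simulator directly.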
Best,
Ru
### Meta
<!--
Please include the commit id and timestamp for the commit in the version
of `csdl` and the CSDL compiler back end where this bug occurs.
-->
Here are the versions of [`CSDL`](https://github.com/LSDOlab/csdl/tree/17b79b977e1823c9b09733136352e42bc983f1b9) and [`CSDL-OM`](https://github.com/LSDOlab/csdl_om/tree/dc657773c2da3433106dbf69ff7f8e8cd50f276d) I was using, which are both the latest as of the issue report.
<details><summary><strong>Backtrace</strong></summary>
<p>
```
<backtrace>
```
</p>
</details>
| 1.0 | TypeError of the `csdl_om` simulator object - <!--
Thank you for filing a bug report! Please provide a short summary of the
bug, along with any information you feel relevant to replicating the
bug.
Before filing a bug report, please ensure that your report does not fall
into a category for any of the other issue templates.
-->
Hi Victor,
I was trying to build a `CustomImplicitOperation` object corresponding to the OpenMDAO `ImplicitComponent` class. I assume they are using the same methodology except for the syntax differences.
However, I was having trouble to simulate the model using `csdl_om`, that the error popped up saying "TypeError: CSDL-OM only accepts CSDL Model specifications to construct a Simulator." Here is the complete error message when running the `ex_custom.py` in the CSDL repository, with simulator defined with it.
```py
Traceback (most recent call last):
File "csdl_test.py", line 50, in <module>
sim = Simulator(ExampleImplicitSimple())
File "/home/ru/csdl_om/csdl_om/core/simulator.py", line 49, in __init__
raise TypeError(
TypeError: CSDL-OM only accepts CSDL Model specifications to construct a Simulator.
```
And the complete code example I'm using is as below.
```py
from csdl import CustomExplicitOperation, CustomImplicitOperation, NewtonSolver, ScipyKrylov
import csdl
import numpy as np
class ExampleImplicitSimple(CustomImplicitOperation):
"""
:param var: x
"""
def define(self):
print("="*40)
print(" Running define()...")
print("="*40)
self.add_input('a', val=1.)
self.add_input('b', val=-4.)
self.add_input('c', val=3.)
self.add_output('x', val=0.)
self.declare_derivatives('x', 'x')
self.declare_derivatives('x', ['a', 'b', 'c'])
self.linear_solver = ScipyKrylov()
self.nonlinear_solver = NewtonSolver(solve_subsystems=False)
def evaluate_residuals(self, inputs, outputs, residuals):
print("="*40)
print(" Running evaluate_residual()...")
print("="*40)
x = outputs['x']
a = inputs['a']
b = inputs['b']
c = inputs['c']
residuals['x'] = a * x**2 + b * x + c
def compute_derivatives(self, inputs, outputs, derivatives):
print("="*40)
print(" Running compute_derivatives()...")
print("="*40)
a = inputs['a']
b = inputs['b']
x = outputs['x']
derivatives['x', 'a'] = x**2
derivatives['x', 'b'] = x
derivatives['x', 'c'] = 1.0
derivatives['x', 'x'] = 2 * a * x + b
from csdl_om import Simulator
# Generate an implementation.
sim = Simulator(ExampleImplicitSimple())
# Run simulation.
sim.run()
# Access values
print(sim['x'])
```
It seems that the `CustomImplicitOperation` object is not recognized by the simulator as a valid input, which should be a `model` object ([here](https://github.com/LSDOlab/csdl_om/blob/dc657773c2da3433106dbf69ff7f8e8cd50f276d/csdl_om/core/simulator.py#L48)). Or it could also be me using the simulator wrong. Please let me know what you think.
Best,
Ru
### Meta
<!--
Please include the commit id and timestamp for the commit in the version
of `csdl` and the CSDL compiler back end where this bug occurs.
-->
Here are the versions of [`CSDL`](https://github.com/LSDOlab/csdl/tree/17b79b977e1823c9b09733136352e42bc983f1b9) and [`CSDL-OM`](https://github.com/LSDOlab/csdl_om/tree/dc657773c2da3433106dbf69ff7f8e8cd50f276d) I was using, which are both the latest as of the issue report.
<details><summary><strong>Backtrace</strong></summary>
<p>
```
<backtrace>
```
</p>
</details>
| non_defect | typeerror of the csdl om simulator object thank you for filing a bug report please provide a short summary of the bug along with any information you feel relevant to replicating the bug before filing a bug report please ensure that your report does not fall into a category for any of the other issue templates hi victor i was trying to build a customimplicitoperation object corresponding to the openmdao implicitcomponent class i assume they are using the same methodology except for the syntax differences however i was having trouble to simulate the model using csdl om that the error popped up saying typeerror csdl om only accepts csdl model specifications to construct a simulator here is the complete error message when running the ex custom py in the csdl repository with simulator defined with it py traceback most recent call last file csdl test py line in sim simulator exampleimplicitsimple file home ru csdl om csdl om core simulator py line in init raise typeerror typeerror csdl om only accepts csdl model specifications to construct a simulator and the complete code example i m using is as below py from csdl import customexplicitoperation customimplicitoperation newtonsolver scipykrylov import csdl import numpy as np class exampleimplicitsimple customimplicitoperation param var x def define self print print running define print self add input a val self add input b val self add input c val self add output x val self declare derivatives x x self declare derivatives x self linear solver scipykrylov self nonlinear solver newtonsolver solve subsystems false def evaluate residuals self inputs outputs residuals print print running evaluate residual print x outputs a inputs b inputs c inputs residuals a x b x c def compute derivatives self inputs outputs derivatives print print running compute derivatives print a inputs b inputs x outputs derivatives x derivatives x derivatives derivatives a x b from csdl om import simulator generate an implementation sim 
simulator exampleimplicitsimple run simulation sim run access values print sim it seems that the customimplicitoperation object is not recognized by the simulator as a valid input which should be a model object or it could also be me using the simulator wrong please let me know what you think best ru meta please include the commit id and timestamp for the commit in the version of csdl and the csdl compiler back end where this bug occurs here are the versions of and i was using which are both the latest as of the issue report backtrace | 0 |
53,318 | 13,261,398,330 | IssuesEvent | 2020-08-20 19:49:35 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | gulliver-modules examples need to be fixed (Trac #1178) | Migrated from Trac combo reconstruction defect | the two python scripts in resources/examples are old and dont run: gulliview_demo.py trace.py
In addition, all of the python files in resources/scripts work fine but they all are icetray scripts so they should be moved to examples, or if they are unit tests moved to resources/test
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1178">https://code.icecube.wisc.edu/projects/icecube/ticket/1178</a>, reported by kjmeagherand owned by kkrings</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"_ts": "1550067117911749",
"description": "the two python scripts in resources/examples are old and dont run: gulliview_demo.py trace.py\n\nIn addition, all of the python files in resources/scripts work fine but they all are icetray scripts so they should be moved to examples, or if they are unit tests moved to resources/test",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"time": "2015-08-19T11:07:23",
"component": "combo reconstruction",
"summary": "gulliver-modules examples need to be fixed",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "kkrings",
"type": "defect"
}
```
</p>
</details>
| 1.0 | gulliver-modules examples need to be fixed (Trac #1178) - the two python scripts in resources/examples are old and dont run: gulliview_demo.py trace.py
In addition, all of the python files in resources/scripts work fine but they all are icetray scripts so they should be moved to examples, or if they are unit tests moved to resources/test
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1178">https://code.icecube.wisc.edu/projects/icecube/ticket/1178</a>, reported by kjmeagherand owned by kkrings</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"_ts": "1550067117911749",
"description": "the two python scripts in resources/examples are old and dont run: gulliview_demo.py trace.py\n\nIn addition, all of the python files in resources/scripts work fine but they all are icetray scripts so they should be moved to examples, or if they are unit tests moved to resources/test",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"time": "2015-08-19T11:07:23",
"component": "combo reconstruction",
"summary": "gulliver-modules examples need to be fixed",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "kkrings",
"type": "defect"
}
```
</p>
</details>
| defect | gulliver modules examples need to be fixed trac the two python scripts in resources examples are old and dont run gulliview demo py trace py in addition all of the python files in resources scripts work fine but they all are icetray scripts so they should be moved to examples or if they are unit tests moved to resources test migrated from json status closed changetime ts description the two python scripts in resources examples are old and dont run gulliview demo py trace py n nin addition all of the python files in resources scripts work fine but they all are icetray scripts so they should be moved to examples or if they are unit tests moved to resources test reporter kjmeagher cc resolution fixed time component combo reconstruction summary gulliver modules examples need to be fixed priority blocker keywords milestone owner kkrings type defect | 1 |
44,803 | 12,393,017,163 | IssuesEvent | 2020-05-20 14:49:30 | telemundo/jira-sync | https://api.github.com/repos/telemundo/jira-sync | closed | GST-44 Testing Sri | BOT DEFECT | ## This ticket is to test big header.
### I would like to test small header.
**_As a QA Engineer i would test bold description.This is for testing only_ .**
### QA Checklist.
- [x] QA Tested on Chrome.
- [x] QA tested on forefox.
- [x] QA tested on Safari.
- [ ] QA tested on IE.
### Min items to be uploaded.
- Sample Urls.
- Any Images available.
- References.
### Below Information if available.
1. High level smoke tests.
2. Any use case from product.
3. Different elements to be tested.
> 
`$element_id = '#' . $this->getSession()->getPage()->find('xpath', '//*[@name="' . $ftype['field_name'] . '"]/preceding-sibling::a')->getAttribute('id');`
[telemundo](https://staging.newsapp.telemundo.com/noticias)
| 1.0 | GST-44 Testing Sri - ## This ticket is to test big header.
### I would like to test small header.
**_As a QA Engineer i would test bold description.This is for testing only_ .**
### QA Checklist.
- [x] QA Tested on Chrome.
- [x] QA tested on forefox.
- [x] QA tested on Safari.
- [ ] QA tested on IE.
### Min items to be uploaded.
- Sample Urls.
- Any Images available.
- References.
### Below Information if available.
1. High level smoke tests.
2. Any use case from product.
3. Different elements to be tested.
> 
`$element_id = '#' . $this->getSession()->getPage()->find('xpath', '//*[@name="' . $ftype['field_name'] . '"]/preceding-sibling::a')->getAttribute('id');`
[telemundo](https://staging.newsapp.telemundo.com/noticias)
| defect | gst testing sri this ticket is to test big header i would like to test small header as a qa engineer i would test bold description this is for testing only qa checklist qa tested on chrome qa tested on forefox qa tested on safari qa tested on ie min items to be uploaded sample urls any images available references below information if available high level smoke tests any use case from product different elements to be tested element id this getsession getpage find xpath preceding sibling a getattribute id | 1 |
38,390 | 12,537,727,348 | IssuesEvent | 2020-06-05 04:25:34 | varora1406/ecmascript-is-number | https://api.github.com/repos/varora1406/ecmascript-is-number | closed | CVE-2020-8116 (High) detected in dot-prop-3.0.0.tgz, dot-prop-4.2.0.tgz | security vulnerability | ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>dot-prop-3.0.0.tgz</b>, <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>
<details><summary><b>dot-prop-3.0.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-3.0.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-3.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ecmascript-is-number/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/ecmascript-is-number/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-17.0.8.tgz (Root Library)
- commit-analyzer-8.0.1.tgz
- conventional-changelog-angular-5.0.10.tgz
- compare-func-1.3.4.tgz
- :x: **dot-prop-3.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ecmascript-is-number/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/ecmascript-is-number/node_modules/npm/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-cli-5.3.1.tgz (Root Library)
- update-notifier-3.0.1.tgz
- configstore-4.0.0.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/varora1406/ecmascript-is-number/commit/2e939637b353a481d523a78341450c71d13b71bf">2e939637b353a481d523a78341450c71d13b71bf</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
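The fix shipped in dot-prop 5.1.1 amounts to refusing the path segments that prototype-pollution payloads rely on (`__proto__`, `prototype`, `constructor`). A minimal Python sketch of such a safe dot-path setter (not the dot-prop source, which is JavaScript):

```python
# Keys abused by prototype-pollution payloads in JS dot-path setters.
DISALLOWED = {"__proto__", "prototype", "constructor"}

def set_path(obj, path, value):
    """Set a value in nested dicts at a dot-delimited path,
    rejecting the disallowed key names above."""
    keys = path.split(".")
    if any(k in DISALLOWED for k in keys):
        raise ValueError(f"disallowed key in path: {path!r}")
    for k in keys[:-1]:
        obj = obj.setdefault(k, {})
    obj[keys[-1]] = value

cfg = {}
set_path(cfg, "server.port", 8080)
print(cfg)                                   # {'server': {'port': 8080}}
try:
    set_path(cfg, "__proto__.polluted", True)
except ValueError as exc:
    print(exc)
```

In the vulnerable versions, a path like `__proto__.polluted` walks into the object's prototype and adds the property to every object, which is what CVE-2020-8116 describes.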
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-8116 (High) detected in dot-prop-3.0.0.tgz, dot-prop-4.2.0.tgz - ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>dot-prop-3.0.0.tgz</b>, <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>
<details><summary><b>dot-prop-3.0.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-3.0.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-3.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ecmascript-is-number/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/ecmascript-is-number/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-17.0.8.tgz (Root Library)
- commit-analyzer-8.0.1.tgz
- conventional-changelog-angular-5.0.10.tgz
- compare-func-1.3.4.tgz
- :x: **dot-prop-3.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ecmascript-is-number/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/ecmascript-is-number/node_modules/npm/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-cli-5.3.1.tgz (Root Library)
- update-notifier-3.0.1.tgz
- configstore-4.0.0.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/varora1406/ecmascript-is-number/commit/2e939637b353a481d523a78341450c71d13b71bf">2e939637b353a481d523a78341450c71d13b71bf</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in dot prop tgz dot prop tgz cve high severity vulnerability vulnerable libraries dot prop tgz dot prop tgz dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm ecmascript is number package json path to vulnerable library tmp ws scm ecmascript is number node modules dot prop package json dependency hierarchy semantic release tgz root library commit analyzer tgz conventional changelog angular tgz compare func tgz x dot prop tgz vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm ecmascript is number package json path to vulnerable library tmp ws scm ecmascript is number node modules npm node modules dot prop package json dependency hierarchy semantic release cli tgz root library update notifier tgz configstore tgz x dot prop tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dot prop step up your open source security game with whitesource | 0 |
36,109 | 7,861,205,397 | IssuesEvent | 2018-06-21 23:01:00 | StrikeNP/trac_test | https://api.github.com/repos/StrikeNP/trac_test | closed | RICO crashes when l_stats is set to .false. (Trac #21) | Migrated from Trac clubb_src defect dschanen@uwm.edu | I fixed ticket #15, which got rid of the problem of all cases crashing when l_stats was set to .false. However, the RICO case still crashes (at any time step, including the standard 60 sec. time step) when l_stats is set to .false. However, the cause the problem is entirely different from the problem in ticket #15. The RICO runs crash before they ever get started (before the first timestep) due to a NaN found in hydrometeor array Ncm. The failure is reported by the call to invalid_model_arrays from run_clubb on the first time through. Thus, it is before the first call to advance_clubb_core and even before the first call to advance_clubb_forcings. Thus, it must be an initialization problem; perhaps Ncm is uninitialized with NaN values in the memory. Since Ncm hasn't been updated yet, the NaN checker finds NaN values and reports an error. However, the odd thing is that this problem only occurs when l_stats is .false. When l_stats in .true., the RICO case runs just fine. I am working on this problem, but I would like to know if anyone has any ideas. Thanks.
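The failure mode described above (an invalid-array check tripping on values that were never initialized) can be sketched as follows. This is an illustrative Python version of a checker like `invalid_model_arrays`, not the CLUBB Fortran:

```python
import math

def invalid_model_arrays(**fields):
    """Return the names of model fields that contain NaN, the way the
    checker described above reports them."""
    return [name for name, values in fields.items()
            if any(math.isnan(v) for v in values)]

# Mimic an array left uninitialized (NaN-filled memory) next to a
# properly initialized one.
ncm = [float("nan")] * 5
rtm = [0.0] * 5
print(invalid_model_arrays(Ncm=ncm, rtm=rtm))   # ['Ncm']
```

This matches the diagnosis in the report: if `Ncm` is only initialized along the `l_stats = .true.` code path, the checker fires before the first timestep whenever `l_stats = .false.`, and the fix is to initialize the array explicitly rather than rely on memory contents.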
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/21
```json
{
"status": "closed",
"changetime": "2009-05-22T22:33:17",
"description": "I fixed ticket #15, which got rid of the problem of all cases crashing when l_stats was set to .false. However, the RICO case still crashes (at any time step, including the standard 60 sec. time step) when l_stats is set to .false. However, the cause the problem is entirely different from the problem in ticket #15. The RICO runs crash before they ever get started (before the first timestep) due to a NaN found in hydrometeor array Ncm. The failure is reported by the call to invalid_model_arrays from run_clubb on the first time through. Thus, it is before the first call to advance_clubb_core and even before the first call to advance_clubb_forcings. Thus, it must be an initialization problem; perhaps Ncm is uninitialized with NaN values in the memory. Since Ncm hasn't been updated yet, the NaN checker finds NaN values and reports an error. However, the odd thing is that this problem only occurs when l_stats is .false. When l_stats in .true., the RICO case runs just fine. I am working on this problem, but I would like to know if anyone has any ideas. Thanks.",
"reporter": "bmg2@uwm.edu",
"cc": "bmg2@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1243031597000000",
"component": "clubb_src",
"summary": "RICO crashes when l_stats is set to .false.",
"priority": "major",
"keywords": "RICO, l_stats, statistics, initialization",
"time": "2009-05-09T23:37:16",
"milestone": "",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
| 1.0 | RICO crashes when l_stats is set to .false. (Trac #21) - I fixed ticket #15, which got rid of the problem of all cases crashing when l_stats was set to .false. However, the RICO case still crashes (at any time step, including the standard 60 sec. time step) when l_stats is set to .false. However, the cause the problem is entirely different from the problem in ticket #15. The RICO runs crash before they ever get started (before the first timestep) due to a NaN found in hydrometeor array Ncm. The failure is reported by the call to invalid_model_arrays from run_clubb on the first time through. Thus, it is before the first call to advance_clubb_core and even before the first call to advance_clubb_forcings. Thus, it must be an initialization problem; perhaps Ncm is uninitialized with NaN values in the memory. Since Ncm hasn't been updated yet, the NaN checker finds NaN values and reports an error. However, the odd thing is that this problem only occurs when l_stats is .false. When l_stats in .true., the RICO case runs just fine. I am working on this problem, but I would like to know if anyone has any ideas. Thanks.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/21
```json
{
"status": "closed",
"changetime": "2009-05-22T22:33:17",
"description": "I fixed ticket #15, which got rid of the problem of all cases crashing when l_stats was set to .false. However, the RICO case still crashes (at any time step, including the standard 60 sec. time step) when l_stats is set to .false. However, the cause the problem is entirely different from the problem in ticket #15. The RICO runs crash before they ever get started (before the first timestep) due to a NaN found in hydrometeor array Ncm. The failure is reported by the call to invalid_model_arrays from run_clubb on the first time through. Thus, it is before the first call to advance_clubb_core and even before the first call to advance_clubb_forcings. Thus, it must be an initialization problem; perhaps Ncm is uninitialized with NaN values in the memory. Since Ncm hasn't been updated yet, the NaN checker finds NaN values and reports an error. However, the odd thing is that this problem only occurs when l_stats is .false. When l_stats in .true., the RICO case runs just fine. I am working on this problem, but I would like to know if anyone has any ideas. Thanks.",
"reporter": "bmg2@uwm.edu",
"cc": "bmg2@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1243031597000000",
"component": "clubb_src",
"summary": "RICO crashes when l_stats is set to .false.",
"priority": "major",
"keywords": "RICO, l_stats, statistics, initialization",
"time": "2009-05-09T23:37:16",
"milestone": "",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
| defect | rico crashes when l stats is set to false trac i fixed ticket which got rid of the problem of all cases crashing when l stats was set to false however the rico case still crashes at any time step including the standard sec time step when l stats is set to false however the cause the problem is entirely different from the problem in ticket the rico runs crash before they ever get started before the first timestep due to a nan found in hydrometeor array ncm the failure is reported by the call to invalid model arrays from run clubb on the first time through thus it is before the first call to advance clubb core and even before the first call to advance clubb forcings thus it must be an initialization problem perhaps ncm is uninitialized with nan values in the memory since ncm hasn t been updated yet the nan checker finds nan values and reports an error however the odd thing is that this problem only occurs when l stats is false when l stats in true the rico case runs just fine i am working on this problem but i would like to know if anyone has any ideas thanks attachments migrated from json status closed changetime description i fixed ticket which got rid of the problem of all cases crashing when l stats was set to false however the rico case still crashes at any time step including the standard sec time step when l stats is set to false however the cause the problem is entirely different from the problem in ticket the rico runs crash before they ever get started before the first timestep due to a nan found in hydrometeor array ncm the failure is reported by the call to invalid model arrays from run clubb on the first time through thus it is before the first call to advance clubb core and even before the first call to advance clubb forcings thus it must be an initialization problem perhaps ncm is uninitialized with nan values in the memory since ncm hasn t been updated yet the nan checker finds nan values and reports an error however the odd thing is that 
this problem only occurs when l stats is false when l stats in true the rico case runs just fine i am working on this problem but i would like to know if anyone has any ideas thanks reporter uwm edu cc uwm edu resolution verified by v larson ts component clubb src summary rico crashes when l stats is set to false priority major keywords rico l stats statistics initialization time milestone owner dschanen uwm edu type defect | 1 |
658 | 2,577,888,674 | IssuesEvent | 2015-02-12 19:44:29 | jasonhall/google-styleguide | https://api.github.com/repos/jasonhall/google-styleguide | opened | Missing a word in the `var` section of the JavaScript style guide | auto-migrated Priority-Medium Type-Defect | ```
"as it not supported" -> "as it is not supported"
```
-----
Original issue reported on code.google.com by scalesjo...@gmail.com on 24 Aug 2013 at 12:29
Attachments:
* [javascriptguide.xml.patch](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-18/comment-0/javascriptguide.xml.patch)
| 1.0 | Missing a word in the `var` section of the JavaScript style guide - ```
"as it not supported" -> "as it is not supported"
```
-----
Original issue reported on code.google.com by scalesjo...@gmail.com on 24 Aug 2013 at 12:29
Attachments:
* [javascriptguide.xml.patch](https://storage.googleapis.com/google-code-attachments/google-styleguide/issue-18/comment-0/javascriptguide.xml.patch)
| defect | missing a word in the var section of the javascript style guide as it not supported as it is not supported original issue reported on code google com by scalesjo gmail com on aug at attachments | 1 |
68,662 | 17,374,720,980 | IssuesEvent | 2021-07-30 19:04:13 | arindam-m/pyslapi | https://api.github.com/repos/arindam-m/pyslapi | closed | Blender 2.9 support | build required | Thank you for bringing us such an amazing plugin, but we need you to improve this plugin even more, for example with the released version 2.9 of blender. | 1.0 | Blender 2.9 support - Thank you for bringing us such an amazing plugin, but we need you to improve this plugin even more, for example with the released version 2.9 of blender. | non_defect | blender support thank you for bringing us such an amazing plugin but we need you to improve this plugin even more for example with the released version of blender | 0 |
6,609 | 2,610,257,711 | IssuesEvent | 2015-02-26 19:22:13 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | How to remove acne with laser in Shenzhen | auto-migrated Priority-Medium Type-Defect | ```
How to remove acne with laser in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818,
24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain. It builds on
the Korean secret formula Hanfang Keyan, a state-licensed therapeutic cosmetic brand and premier
acne-removal product. The Hanfang Keyan professional acne-removal chain combines the Korean formula
with a professional "no rebound" healthy acne-removal technique and an advanced "deluxe colour-light"
instrument, pioneering contract-guaranteed professional treatment of pimples and acne in China and
successfully clearing the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:44 | 1.0 | How to remove acne with laser in Shenzhen - ```
How to remove acne with laser in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818,
24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain. It builds on
the Korean secret formula Hanfang Keyan, a state-licensed therapeutic cosmetic brand and premier
acne-removal product. The Hanfang Keyan professional acne-removal chain combines the Korean formula
with a professional "no rebound" healthy acne-removal technique and an advanced "deluxe colour-light"
instrument, pioneering contract-guaranteed professional treatment of pimples and acne in China and
successfully clearing the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:44 | defect | how to remove acne with laser in shenzhen how to remove acne with laser in shenzhen shenzhen hanfang keyan national hotline hour qq shenzhen hanfang keyan is a professional acne removal chain it builds on the korean secret formula hanfang keyan a state licensed therapeutic cosmetic brand and premier acne removal product the hanfang keyan professional acne removal chain combines the korean formula with a professional no rebound healthy acne removal technique and an advanced deluxe colour light instrument pioneering contract guaranteed professional treatment of pimples and acne in china and successfully clearing the acne on many customers faces original issue reported on code google com by szft com on may at | 1
8,205 | 26,453,349,575 | IssuesEvent | 2023-01-16 12:52:02 | rancher/elemental | https://api.github.com/repos/rancher/elemental | closed | Research - Use autoscaling for self-hosted runners in public cloud | area/automation kind/QA | Right now, we have self hosted runners in the public cloud; the machines are started and stopped on demand.
It is a good first step, but when we add more tests we will need something better.
We can think about autoscaling the runners; it is possible with the GitHub API, and there are some resources on how to achieve it.
For instance:
https://www.dev-eth0.de/2021/03/09/autoscaling-gitlab-runner-instances-on-google-cloud-platform/
https://medium.com/philips-technology-blog/scaling-github-action-runners-a4a45f7c67a6
https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/
But it's low priority at the moment. | 1.0 | Research - Use autoscaling for self-hosted runners in public cloud - Right now, we have self hosted runners in the public cloud; the machines are started and stopped on demand.
It is a good first step, but when we add more tests we will need something better.
We can think about autoscaling the runners; it is possible with the GitHub API, and there are some resources on how to achieve it.
For instance:
https://www.dev-eth0.de/2021/03/09/autoscaling-gitlab-runner-instances-on-google-cloud-platform/
https://medium.com/philips-technology-blog/scaling-github-action-runners-a4a45f7c67a6
https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/
But it's low priority at the moment. | non_defect | research use autoscaling for self hosted runners in public cloud right now we have self hosted runners in the public cloud the machines are started and stopped on demand it is a good first step but when we add more tests we will need something better we can think about autoscaling the runners it is possible with the github api and there are some resources on how to achieve it for instance but it s low priority at the moment | 0
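The webhook-driven setups linked in the record above all reduce to the same scaling decision. As a rough sketch (the names `desiredRunnerCount` and `scalingDelta` are hypothetical, and the actual GitHub API calls and cloud VM provisioning are omitted), the core policy can be as small as:

```typescript
// Decide how many self-hosted runners should exist for a given number
// of queued workflow jobs. Bounds keep cloud cost and machine churn sane.
function desiredRunnerCount(
  queuedJobs: number,
  minRunners: number,
  maxRunners: number
): number {
  // One runner per queued job, clamped to [minRunners, maxRunners].
  return Math.min(maxRunners, Math.max(minRunners, queuedJobs));
}

// How many machines to start (positive) or stop (negative) to move
// from the current fleet size to the desired one.
function scalingDelta(
  current: number,
  queuedJobs: number,
  minRunners: number,
  maxRunners: number
): number {
  return desiredRunnerCount(queuedJobs, minRunners, maxRunners) - current;
}
```

A webhook handler for `workflow_job` events, such as the ephemeral-runner setup in the last link describes, could feed the queued-job count into this policy on every event and start or stop that many machines.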
266,895 | 23,266,990,424 | IssuesEvent | 2022-08-04 18:24:59 | secure-foundations/verus | https://api.github.com/repos/secure-foundations/verus | closed | Instability in `state_machines/dist_rwlock.rs` | test-coverage | Example `state_machines/dist_rwlock.rs` is currently ignored because it fails with z3 4.10.1 | 1.0 | Instability in `state_machines/dist_rwlock.rs` - Example `state_machines/dist_rwlock.rs` is currently ignored because it fails with z3 4.10.1 | non_defect | instability in state machines dist rwlock rs example state machines dist rwlock rs is currently ignored because it fails with | 0 |
304,613 | 23,074,163,954 | IssuesEvent | 2022-07-25 21:13:55 | oneapi-src/oneAPI-samples | https://api.github.com/repos/oneapi-src/oneAPI-samples | opened | Guided ISO3DFD name update | documentation | # Summary
"guided iso3dfd GPUOptimization Sample" is missing a space.
# URLs
https://github.com/oneapi-src/oneAPI-samples/tree/master/DirectProgramming/DPC%2B%2B/StructuredGrids/guided_iso3dfd_GPUOptimization#readme
@KanclerzPiotr - please review. | 1.0 | Guided ISO3DFD name update - # Summary
"guided iso3dfd GPUOptimization Sample" is missing a space.
# URLs
https://github.com/oneapi-src/oneAPI-samples/tree/master/DirectProgramming/DPC%2B%2B/StructuredGrids/guided_iso3dfd_GPUOptimization#readme
@KanclerzPiotr - please review. | non_defect | guided name update summary guided gpuoptimization sample is missing a space urls kanclerzpiotr please review | 0 |
743,465 | 25,900,458,326 | IssuesEvent | 2022-12-15 04:54:55 | wso2/api-manager | https://api.github.com/repos/wso2/api-manager | closed | APIM 4.2.0 Pre Alpha Release Testing | Type/Task Priority/Normal Component/APIM 4.2.0-alpha | ### Description
This is to track APIM 4.2.0 release testing progress of @piyumaldk
### Tests
- [x] API Categories
- [x] Pass a Custom Authorization Token to the Backend
- [x] External Stores
- [x] Subscription blocking/unblocking
- [x] SOAP APIs Creation/Invocation
- [x] Create api from OpenAPI definition
- [x] Application properties
- [ ] Identity management features
- [x] Forgot Password
### Created Issues
1. https://github.com/wso2/api-manager/issues/981
2. https://github.com/wso2/api-manager/issues/982
3. https://github.com/wso2/api-manager/issues/984
### Created Documentation Issues
1. https://github.com/wso2/docs-apim/issues/6424
2. https://github.com/wso2/docs-apim/issues/6426
3. https://github.com/wso2/docs-apim/issues/6451
4. https://github.com/wso2/docs-apim/issues/6456
### Affected Component
APIM
### Version
4.2.0-M1
### Related Issues
https://github.com/wso2/api-manager/issues/1073
### Suggested Labels
_No response_ | 1.0 | APIM 4.2.0 Pre Alpha Release Testing - ### Description
This is to track APIM 4.2.0 release testing progress of @piyumaldk
### Tests
- [x] API Categories
- [x] Pass a Custom Authorization Token to the Backend
- [x] External Stores
- [x] Subscription blocking/unblocking
- [x] SOAP APIs Creation/Invocation
- [x] Create api from OpenAPI definition
- [x] Application properties
- [ ] Identity management features
- [x] Forgot Password
### Created Issues
1. https://github.com/wso2/api-manager/issues/981
2. https://github.com/wso2/api-manager/issues/982
3. https://github.com/wso2/api-manager/issues/984
### Created Documentation Issues
1. https://github.com/wso2/docs-apim/issues/6424
2. https://github.com/wso2/docs-apim/issues/6426
3. https://github.com/wso2/docs-apim/issues/6451
4. https://github.com/wso2/docs-apim/issues/6456
### Affected Component
APIM
### Version
4.2.0-M1
### Related Issues
https://github.com/wso2/api-manager/issues/1073
### Suggested Labels
_No response_ | non_defect | apim pre alpha release testing description this is to track apim release testing progress of piyumaldk tests api categories pass a custom authorization token to the backend external stores subscription blocking unblocking soap apis creation invocation create api from openapi definition application properties identity management features forgot password created issues created documentation issues affected component apim version related issues suggested labels no response | 0 |
21,232 | 3,476,604,074 | IssuesEvent | 2015-12-27 03:21:42 | abortz/google-dnswall | https://api.github.com/repos/abortz/google-dnswall | closed | Filter GFW or ISP fake IP's perhaps? | auto-migrated Priority-Medium Type-Defect | ```
128.121.146.100
128.121.146.228
168.143.162.100
168.143.162.116
168.143.162.68
202.106.1.2
202.181.7.85
203.161.230.171
209.145.54.50
211.94.66.147
216.234.179.13
4.36.66.178
64.33.88.161
Some of the DNS resolves to fake IP to redirect request to ISP's ad, and
China's GFW may censor ALL TCP/UDP port 53 traffic to the fake IP's above.
```
Original issue reported on code.google.com by `electron...@gmail.com` on 16 Jul 2009 at 1:26 | 1.0 | Filter GFW or ISP fake IP's perhaps? - ```
128.121.146.100
128.121.146.228
168.143.162.100
168.143.162.116
168.143.162.68
202.106.1.2
202.181.7.85
203.161.230.171
209.145.54.50
211.94.66.147
216.234.179.13
4.36.66.178
64.33.88.161
Some of the DNS resolves to fake IP to redirect request to ISP's ad, and
China's GFW may censor ALL TCP/UDP port 53 traffic to the fake IP's above.
```
Original issue reported on code.google.com by `electron...@gmail.com` on 16 Jul 2009 at 1:26 | defect | filter gfw or isp fake ip s perhaps some of the dns resolves to fake ip to redirect request to isp s ad and china s gfw may censor all tcp udp port traffic to the fake ip s above original issue reported on code google com by electron gmail com on jul at | 1 |
43,411 | 11,706,784,976 | IssuesEvent | 2020-03-08 00:47:10 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | Race condition when building combined module | C: Modules P: normal T: defect | ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
CIVET runs "make all builds -j <n>" when building the module. While we already have a dependency from builds to all, we are experiencing a race condition between two different processes attempting to build the combined executable.
I believe the problem is that we run sub-makes on the individual module directories in parallel to build all of the individual executables for running all of the tests. Serializing this build takes a very long time. The conflict comes when we manually reach into the combined folder to build the combined executable while also building the individual directories.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
This is difficult to reproduce but it does occur with some amount of regularity.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
Moderate annoyance and causes failures in CIVET.
| 1.0 | Race condition when building combined module - ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
CIVET runs "make all builds -j <n>" when building the module. While we already have a dependency from builds to all, we are experiencing a race condition between two different processes attempting to build the combined executable.
I believe the problem is that we run sub-makes on the individual module directories in parallel to build all of the individual executables for running all of the tests. Serializing this build takes a very long time. The conflict comes when we manually reach into the combined folder to build the combined executable while also building the individual directories.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
This is difficult to reproduce but it does occur with some amount of regularity.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
Moderate annoyance and causes failures in CIVET.
| defect | race condition when building combined module bug description civet runs make all builds j when building the module while we already have a dependency from builds to all we are experiencing a race condition between two different processes attempting to build the combined executable i believe the problem is that we run sub makes on the individual module directories in parallel to build all of the individual executables for running all of the tests serializing this build takes a very long time the conflict comes when we manually reach into the combined folder to build the combined executable while also building the individual directories steps to reproduce this is difficult to reproduce but it does occur with some amount of regularity impact moderate annoyance and causes failures in civet | 1 |
461,710 | 13,234,751,450 | IssuesEvent | 2020-08-18 16:49:46 | CDH-Studio/UpSkill | https://api.github.com/repos/CDH-Studio/UpSkill | opened | Properly catch axios failed calls in the backend | enhancement medium priority | **Is your feature request related to a problem? Please describe.**
Right now, if GEDS and Keycloak axios calls in the backend fail, they spit out hundreds of lines of unnecessary logs

**Describe the solution you'd like**
Properly catch the async calls and display a smaller error message, for the logs in the backend to be more readable | 1.0 | Properly catch axios failed calls in the backend - **Is your feature request related to a problem? Please describe.**
Right now, if GEDS and Keycloak axios calls in the backend fail, they spit out hundreds of lines of unnecessary logs

**Describe the solution you'd like**
Properly catch the async calls and display a smaller error message, for the logs in the backend to be more readable | non_defect | properly catch axios failed calls in the backend is your feature request related to a problem please describe right now if geds and keycloak axios calls in the backend fail they spit out hundreds of lines of unnecessary logs describe the solution you d like properly catch the async calls and display a smaller error message for the logs in the backend to be more readable | 0
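A minimal sketch of what the record above asks for (the names `summarizeError` and `safeCall` are hypothetical, and `HttpishError` only imitates the fields read off an axios-like error, not the real axios typings): catch the rejected promise and log one short line instead of dumping the whole error object.

```typescript
// Minimal shape of the fields we read off an axios-like error;
// hypothetical, not the real axios type definitions.
interface HttpishError {
  message: string;
  response?: { status: number };
}

// Collapse a failed HTTP call into a single readable log line instead of
// printing the entire error object (which, for axios, carries the full
// request and response and can span hundreds of lines).
function summarizeError(service: string, err: HttpishError): string {
  const status = err.response ? ` (status ${err.response.status})` : "";
  return `${service} call failed${status}: ${err.message}`;
}

// Wrap an async call so a failure is logged concisely and a fallback
// value is returned instead of crashing the request handler.
async function safeCall<T>(
  service: string,
  call: () => Promise<T>,
  fallback: T
): Promise<T> {
  try {
    return await call();
  } catch (err) {
    console.error(summarizeError(service, err as HttpishError));
    return fallback;
  }
}
```

On an outage, something like `await safeCall("GEDS", () => fetchProfiles(), [])` would then emit a single line such as `GEDS call failed (status 503): ...` rather than the multi-screen dump shown in the screenshot.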
8,386 | 2,611,495,512 | IssuesEvent | 2015-02-27 05:35:11 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | opened | Sniper rifle cancels laser sight tool | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Activate laser sight
2. Shoot from sniper rifle
What is the expected output? What do you see instead?
I should be able to shoot second time with sniper rifle using laser sight, also
with any other weapons in multiattack mode
```
Original issue reported on code.google.com by `unC0Rr` on 31 Jul 2012 at 3:21 | 1.0 | Sniper rifle cancels laser sight tool - ```
What steps will reproduce the problem?
1. Activate laser sight
2. Shoot from sniper rifle
What is the expected output? What do you see instead?
I should be able to shoot second time with sniper rifle using laser sight, also
with any other weapons in multiattack mode
```
Original issue reported on code.google.com by `unC0Rr` on 31 Jul 2012 at 3:21 | defect | sniper rifle cancels laser sight tool what steps will reproduce the problem activate laser sight shoot from sniper rifle what is the expected output what do you see instead i should be able to shoot second time with sniper rifle using laser sight also with any other weapons in multiattack mode original issue reported on code google com by on jul at | 1 |
78,198 | 10,053,273,041 | IssuesEvent | 2019-07-21 15:24:20 | iancarpenter/HelloWorldDG | https://api.github.com/repos/iancarpenter/HelloWorldDG | closed | Add books to README.md | documentation | One section that would be neat to add to this page is your favourite books | 1.0 | Add books to README.md - One section that would be neat to add to this page is your favourite books | non_defect | add books to readme md one section that would be neat to add to this page is your favourite books | 0 |
72,028 | 18,976,222,178 | IssuesEvent | 2021-11-20 02:45:58 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | Build did NOT complete successfully | stat:awaiting response type:build/install stalled subtype:centos TF 2.4 | Hi,
I'm using python Python 3.6.8 on CentOS 7, my bazel version is bazel 3.1.0. Using "git checkout r2.4" and trying to make the C++ interface I get the following error:
`INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=237
INFO: Reading rc options for 'build' from /home/u211355/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/u211355/tensorflow/.bazelrc:
'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=tensorflow_enable_mlir_generated_gpu_kernels=0 --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=short_logs --config=v2
INFO: Reading rc options for 'build' from /home/u211355/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3.6 --action_env PYTHON_LIB_PATH=/usr/lib/python3.6/site-packages --python_path=/usr/bin/python3.6 --config=xla --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:short_logs in file /home/u211355/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/u211355/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:xla in file /home/u211355/tensorflow/.bazelrc: --define=with_xla_support=true
INFO: Found applicable config definition build:linux in file /home/u211355/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
INFO: Found applicable config definition build:dynamic_kernels in file /home/u211355/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: Rule 'io_bazel_rules_go' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1557349968 -0400"
DEBUG: Repository io_bazel_rules_go instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule git_repository defined at:
/home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
INFO: Build options --action_env, --define, and --host_copt have changed, discarding analysis cache.
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/f402e682d0ef5598eeffc9a21a691b03e602ff58.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
INFO: Analyzed target //tensorflow:libtensorflow_cc.so (211 packages loaded, 20042 targets configured).
INFO: Found 1 target...
ERROR: /home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/external/com_google_protobuf/BUILD:110:1: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1): gcc failed: error executing command
(cd /home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/execroot/org_tensorflow && \
exec env - \
LD_LIBRARY_PATH=/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-7/root/usr/lib64/dyninst:/opt/rh/devtoolset-7/root/usr/lib/dyninst:/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64/dyninst:/opt/rh/devtoolset-9/root/usr/lib/dyninst:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib \
PATH=/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-7/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-8/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-9/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-10/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/home/u211355/miniconda3/condabin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/.local/bin:/home/u211355/bin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util \
PWD=/proc/self/cwd \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.d '-frandom-seed=bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o' -iquote external/com_google_protobuf -iquote bazel-out/host/bin/external/com_google_protobuf -isystem external/com_google_protobuf/src -isystem bazel-out/host/bin/external/com_google_protobuf/src -g0 -w -g0 '-std=c++14' -DHAVE_PTHREAD -DHAVE_ZLIB -Woverloaded-virtual -Wno-sign-compare -Wno-unused-function -Wno-write-strings -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/com_google_protobuf/src/google/protobuf/any_lite.cc -o bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o)
Execution platform: @local_execution_config_platform//:platform
gcc: error: unrecognized command line option '-std=c++14'
Target //tensorflow:libtensorflow_cc.so failed to build
ERROR: /home/u211355/tensorflow/tensorflow/BUILD:820:1 C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1): gcc failed: error executing command
(cd /home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/execroot/org_tensorflow && \
exec env - \
LD_LIBRARY_PATH=/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-7/root/usr/lib64/dyninst:/opt/rh/devtoolset-7/root/usr/lib/dyninst:/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64/dyninst:/opt/rh/devtoolset-9/root/usr/lib/dyninst:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib \
PATH=/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-7/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-8/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-9/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-10/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/home/u211355/miniconda3/condabin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/.local/bin:/home/u211355/bin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util \
PWD=/proc/self/cwd \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.d '-frandom-seed=bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o' -iquote external/com_google_protobuf -iquote bazel-out/host/bin/external/com_google_protobuf -isystem external/com_google_protobuf/src -isystem bazel-out/host/bin/external/com_google_protobuf/src -g0 -w -g0 '-std=c++14' -DHAVE_PTHREAD -DHAVE_ZLIB -Woverloaded-virtual -Wno-sign-compare -Wno-unused-function -Wno-write-strings -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/com_google_protobuf/src/google/protobuf/any_lite.cc -o bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o)
Execution platform: @local_execution_config_platform//:platform
INFO: Elapsed time: 55.118s, Critical Path: 0.02s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
`
I would appreciate your help with this error. I have also tried different versions of GCC from 7-10 and that also didn't solve the error.
Thanks
Amir | 1.0 | Build did NOT complete successfully - Hi,
I'm using python Python 3.6.8 on CentOS 7, my bazel version is bazel 3.1.0. Using "git checkout r2.4" and trying to make the C++ interface I get the following error:
`INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=237
INFO: Reading rc options for 'build' from /home/u211355/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/u211355/tensorflow/.bazelrc:
'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=tensorflow_enable_mlir_generated_gpu_kernels=0 --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=short_logs --config=v2
INFO: Reading rc options for 'build' from /home/u211355/tensorflow/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3.6 --action_env PYTHON_LIB_PATH=/usr/lib/python3.6/site-packages --python_path=/usr/bin/python3.6 --config=xla --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:short_logs in file /home/u211355/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/u211355/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:xla in file /home/u211355/tensorflow/.bazelrc: --define=with_xla_support=true
INFO: Found applicable config definition build:linux in file /home/u211355/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
INFO: Found applicable config definition build:dynamic_kernels in file /home/u211355/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: Rule 'io_bazel_rules_go' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1557349968 -0400"
DEBUG: Repository io_bazel_rules_go instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule git_repository defined at:
/home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
INFO: Build options --action_env, --define, and --host_copt have changed, discarding analysis cache.
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/f402e682d0ef5598eeffc9a21a691b03e602ff58.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
INFO: Analyzed target //tensorflow:libtensorflow_cc.so (211 packages loaded, 20042 targets configured).
INFO: Found 1 target...
ERROR: /home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/external/com_google_protobuf/BUILD:110:1: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1): gcc failed: error executing command
(cd /home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/execroot/org_tensorflow && \
exec env - \
LD_LIBRARY_PATH=/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-7/root/usr/lib64/dyninst:/opt/rh/devtoolset-7/root/usr/lib/dyninst:/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64/dyninst:/opt/rh/devtoolset-9/root/usr/lib/dyninst:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib \
PATH=/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-7/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-8/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-9/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-10/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/home/u211355/miniconda3/condabin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/.local/bin:/home/u211355/bin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util \
PWD=/proc/self/cwd \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.d '-frandom-seed=bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o' -iquote external/com_google_protobuf -iquote bazel-out/host/bin/external/com_google_protobuf -isystem external/com_google_protobuf/src -isystem bazel-out/host/bin/external/com_google_protobuf/src -g0 -w -g0 '-std=c++14' -DHAVE_PTHREAD -DHAVE_ZLIB -Woverloaded-virtual -Wno-sign-compare -Wno-unused-function -Wno-write-strings -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/com_google_protobuf/src/google/protobuf/any_lite.cc -o bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o)
Execution platform: @local_execution_config_platform//:platform
gcc: error: unrecognized command line option '-std=c++14'
Target //tensorflow:libtensorflow_cc.so failed to build
ERROR: /home/u211355/tensorflow/tensorflow/BUILD:820:1 C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1): gcc failed: error executing command
(cd /home/u211355/.cache/bazel/_bazel_u211355/4bc589a879d91108e775a9cd349c5c7c/execroot/org_tensorflow && \
exec env - \
LD_LIBRARY_PATH=/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-7/root/usr/lib64/dyninst:/opt/rh/devtoolset-7/root/usr/lib/dyninst:/opt/rh/devtoolset-7/root/usr/lib64:/opt/rh/devtoolset-7/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-8/root/usr/lib64/dyninst:/opt/rh/devtoolset-8/root/usr/lib/dyninst:/opt/rh/devtoolset-8/root/usr/lib64:/opt/rh/devtoolset-8/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-9/root/usr/lib64/dyninst:/opt/rh/devtoolset-9/root/usr/lib/dyninst:/opt/rh/devtoolset-9/root/usr/lib64:/opt/rh/devtoolset-9/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib:/opt/rh/devtoolset-10/root/usr/lib64/dyninst:/opt/rh/devtoolset-10/root/usr/lib/dyninst:/opt/rh/devtoolset-10/root/usr/lib64:/opt/rh/devtoolset-10/root/usr/lib \
PATH=/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-7/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-8/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-9/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/opt/sge/bin:/opt/sge/bin/lx-amd64:/opt/rh/devtoolset-10/root/usr/bin:/home/u211355/xcrysden-1.5.60-bin-semishared:/home/u211355/miniconda3/condabin:/opt/sge/bin:/opt/sge/bin/lx-amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/.local/bin:/home/u211355/bin:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util:/home/u211355/xcrysden-1.5.60-bin-semishared/scripts:/home/u211355/xcrysden-1.5.60-bin-semishared/util \
PWD=/proc/self/cwd \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.d '-frandom-seed=bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o' -iquote external/com_google_protobuf -iquote bazel-out/host/bin/external/com_google_protobuf -isystem external/com_google_protobuf/src -isystem bazel-out/host/bin/external/com_google_protobuf/src -g0 -w -g0 '-std=c++14' -DHAVE_PTHREAD -DHAVE_ZLIB -Woverloaded-virtual -Wno-sign-compare -Wno-unused-function -Wno-write-strings -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/com_google_protobuf/src/google/protobuf/any_lite.cc -o bazel-out/host/bin/external/com_google_protobuf/_objs/protobuf_lite/any_lite.o)
Execution platform: @local_execution_config_platform//:platform
INFO: Elapsed time: 55.118s, Critical Path: 0.02s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
`
I would appreciate your help with this error. I have also tried different versions of GCC from 7-10 and that also didn't solve the error.
Thanks
Amir | non_defect | build did not complete successfully hi i m using python python on centos my bazel version is bazel using git checkout and trying to make the c interface i get the following error info options provided by the client inherited common options isatty terminal columns info reading rc options for build from home tensorflow bazelrc inherited common options experimental repo remote exec info reading rc options for build from home tensorflow bazelrc build options apple platform type macos define framework shared object true define open source build true java toolchain third party toolchains java tf java toolchain host java toolchain third party toolchains java tf java toolchain define tensorflow enable mlir generated gpu kernels define use fast cpp protos true define allow oversize protos true spawn strategy standalone c opt announce rc define grpc no ares true noincompatible remove legacy whole archive noincompatible prohibit enable platform specific config config short logs config info reading rc options for build from home tensorflow tf configure bazelrc build options action env python bin path usr bin action env python lib path usr lib site packages python path usr bin config xla action env tf configure ios info found applicable config definition build short logs in file home tensorflow bazelrc output filter dont match anything info found applicable config definition build in file home tensorflow bazelrc define tf api version action env behavior info found applicable config definition build xla in file home tensorflow bazelrc define with xla support true info found applicable config definition build linux in file home tensorflow bazelrc copt w host copt w define prefix usr define libdir prefix lib define includedir prefix include define protobuf include path prefix include cxxopt std c host cxxopt std c config dynamic kernels info found applicable config definition build dynamic kernels in file home tensorflow bazelrc define dynamic loaded kernels 
true copt dautoload dynamic kernels debug rule io bazel rules go indicated that a canonical reproducible form can be obtained by modifying arguments shallow since debug repository io bazel rules go instantiated at no stack record rule instantiation callstack not enabled repository rule git repository defined at home cache bazel bazel external bazel tools tools build defs repo git bzl in info build options action env define and host copt have changed discarding analysis cache warning download from failed class com google devtools build lib bazel repository downloader unrecoverablehttpexception get returned not found info analyzed target tensorflow libtensorflow cc so packages loaded targets configured info found target error home cache bazel bazel external com google protobuf build c compilation of rule com google protobuf protobuf lite failed exit gcc failed error executing command cd home cache bazel bazel execroot org tensorflow exec env ld library path opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib path home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared opt sge bin opt sge bin lx home xcrysden bin semishared opt 
sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared home condabin opt sge bin opt sge bin lx usr local bin usr bin usr local sbin usr sbin home xcrysden bin semishared scripts home xcrysden bin semishared util home local bin home bin home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util pwd proc self cwd usr bin gcc u fortify source fstack protector wall wunused but set parameter wno free nonheap object fno omit frame pointer d fortify source dndebug ffunction sections fdata sections std c md mf bazel out host bin external com google protobuf objs protobuf lite any lite d frandom seed bazel out host bin external com google protobuf objs protobuf lite any lite o iquote external com google protobuf iquote bazel out host bin external com google protobuf isystem external com google protobuf src isystem bazel out host bin external com google protobuf src w std c dhave pthread dhave zlib woverloaded virtual wno sign compare wno unused function wno write strings fno canonical system headers wno builtin macro redefined d date redacted d timestamp redacted d time redacted c external com google protobuf src google protobuf any lite cc o bazel out host bin external com google protobuf objs protobuf lite any lite o execution platform local execution config platform platform gcc error unrecognized command line option std c target tensorflow libtensorflow cc so failed to build error home tensorflow tensorflow build c compilation of rule com google protobuf protobuf lite failed exit gcc failed error executing command cd home cache bazel bazel execroot org tensorflow 
exec env ld library path opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr opt rh devtoolset root usr lib opt rh devtoolset root usr dyninst opt rh devtoolset root usr lib dyninst opt rh devtoolset root usr opt rh devtoolset root usr lib path home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared opt sge bin opt sge bin lx home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared opt sge bin opt sge bin lx opt rh devtoolset root usr bin home xcrysden bin semishared home condabin opt sge bin opt sge bin lx usr local bin usr bin usr local sbin usr sbin home xcrysden bin semishared scripts home xcrysden bin semishared util home local bin home bin home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util home xcrysden bin semishared scripts home xcrysden bin semishared util pwd proc self cwd usr bin gcc u fortify source fstack protector wall wunused but set parameter wno free nonheap object fno omit frame pointer d fortify source dndebug ffunction sections fdata sections std c md mf bazel out host bin 
external com google protobuf objs protobuf lite any lite d frandom seed bazel out host bin external com google protobuf objs protobuf lite any lite o iquote external com google protobuf iquote bazel out host bin external com google protobuf isystem external com google protobuf src isystem bazel out host bin external com google protobuf src w std c dhave pthread dhave zlib woverloaded virtual wno sign compare wno unused function wno write strings fno canonical system headers wno builtin macro redefined d date redacted d timestamp redacted d time redacted c external com google protobuf src google protobuf any lite cc o bazel out host bin external com google protobuf objs protobuf lite any lite o execution platform local execution config platform platform info elapsed time critical path info processes failed build did not complete successfully i would appreciate your help with this error i have also tried different versions of gcc from and that also didn t solve the error thanks amir | 0 |
48,878 | 13,184,765,270 | IssuesEvent | 2020-08-12 20:03:10 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | TTrigger compiler error (Trac #372) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/372
, reported by koskinen and owned by dima_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-05-22T14:17:31",
"description": "While compiling the trunk version of TTrigger I get the following error\n\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:317: error: inline function \u2018lstat64\u2019 declared but never defined\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:286: error: inline function \u2018fstatat64\u2019 declared but never defined\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:255: error: inline function \u2018fstat64\u2019 declared but never defined\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:250: error: inline function \u2018stat64\u2019 declared but never defined\n\nThe previous error is not generated when compiling in 'optimized' mode. \n\ngcc version 4.4.2",
"reporter": "koskinen",
"cc": "mdunkman@gmail.com",
"resolution": "duplicate",
"_ts": "1337696251000000",
"component": "combo reconstruction",
"summary": "TTrigger compiler error",
"priority": "minor",
"keywords": "",
"time": "2012-03-06T14:56:06",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| 1.0 | TTrigger compiler error (Trac #372) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/372
, reported by koskinen and owned by dima_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-05-22T14:17:31",
"description": "While compiling the trunk version of TTrigger I get the following error\n\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:317: error: inline function \u2018lstat64\u2019 declared but never defined\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:286: error: inline function \u2018fstatat64\u2019 declared but never defined\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:255: error: inline function \u2018fstat64\u2019 declared but never defined\n/gpfs/apps/x86_64-rhel5/gcc/4.4.2/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.2/include-fixed/sys/stat.h:250: error: inline function \u2018stat64\u2019 declared but never defined\n\nThe previous error is not generated when compiling in 'optimized' mode. \n\ngcc version 4.4.2",
"reporter": "koskinen",
"cc": "mdunkman@gmail.com",
"resolution": "duplicate",
"_ts": "1337696251000000",
"component": "combo reconstruction",
"summary": "TTrigger compiler error",
"priority": "minor",
"keywords": "",
"time": "2012-03-06T14:56:06",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| defect | ttrigger compiler error trac migrated from reported by koskinen and owned by dima json status closed changetime description while compiling the trunk version of ttrigger i get the following error n n gpfs apps gcc bin lib gcc unknown linux gnu include fixed sys stat h error inline function declared but never defined n gpfs apps gcc bin lib gcc unknown linux gnu include fixed sys stat h error inline function declared but never defined n gpfs apps gcc bin lib gcc unknown linux gnu include fixed sys stat h error inline function declared but never defined n gpfs apps gcc bin lib gcc unknown linux gnu include fixed sys stat h error inline function declared but never defined n nthe previous error is not generated when compiling in optimized mode n ngcc version reporter koskinen cc mdunkman gmail com resolution duplicate ts component combo reconstruction summary ttrigger compiler error priority minor keywords time milestone owner dima type defect | 1 |
71,574 | 23,700,887,850 | IssuesEvent | 2022-08-29 18:52:37 | idaholab/raven | https://api.github.com/repos/idaholab/raven | closed | [DEFECT] Cannot request multiple VaR | priority_minor defect | ### Thank you for the defect report
- [X] I am using the latest version of `RAVEN`.
- [X] I have read the [Wiki](https://github.com/idaholab/raven/wiki).
- [X] I have created a [minimum, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example)
that demonstrates the defect.
### Defect Description
Requesting multiple value at risk (VaR) or expected shortfall (conditional VaR or CVaR) values should work like percentile, but it doesn't.
### Steps to Reproduce
Try requesting VaR for different threshold values, it doesn't work.
### Expected Behavior
Return the requested VaR values.
### Screenshots and Input Files
_No response_
### OS
Windows
### OS Version
_No response_
### Dependency Manager
CONDA
### For Change Control Board: Issue Review
- [x] Is it tagged with a type: defect or task?
- [x] Is it tagged with a priority: critical, normal or minor?
- [x] If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
### For Change Control Board: Issue Closure
- [x] If the issue is a defect, is the defect fixed?
- [x] If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [x] If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [x] If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [x] If the issue is being closed without a pull request, has an explanation of why it is being closed been provided? | 1.0 | [DEFECT] Cannot request multiple VaR - ### Thank you for the defect report
- [X] I am using the latest version of `RAVEN`.
- [X] I have read the [Wiki](https://github.com/idaholab/raven/wiki).
- [X] I have created a [minimum, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example)
that demonstrates the defect.
### Defect Description
Requesting multiple value at risk (VaR) or expected shortfall (conditional VaR or CVaR) values should work like percentile, but it doesn't.
### Steps to Reproduce
Try requesting VaR for different threshold values, it doesn't work.
### Expected Behavior
Return the requested VaR values.
### Screenshots and Input Files
_No response_
### OS
Windows
### OS Version
_No response_
### Dependency Manager
CONDA
### For Change Control Board: Issue Review
- [x] Is it tagged with a type: defect or task?
- [x] Is it tagged with a priority: critical, normal or minor?
- [x] If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
### For Change Control Board: Issue Closure
- [x] If the issue is a defect, is the defect fixed?
- [x] If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [x] If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [x] If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [x] If the issue is being closed without a pull request, has an explanation of why it is being closed been provided? | defect | cannot request multiple var thank you for the defect report i am using the latest version of raven i have read the i have created a that demonstrates the defect defect description requesting multiple value at risk var or expected shortfall conditional var or cvar values should work like percentile but it doesn t steps to reproduce try requesting var for different threshold values it doesn t work expected behavior return the requested var values screenshots and input files no response os windows os version no response dependency manager conda for change control board issue review is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided | 1 |
41,905 | 10,699,681,674 | IssuesEvent | 2019-10-23 21:32:05 | avereon/xenon | https://api.github.com/repos/avereon/xenon | closed | There are regularly orphaned tools that need to be removed | bug / error / defect | While the process of restoring the UI appears to function correctly, there is always a warning, or set of warnings, about removing orphaned tools. This should not be occurring on a regular basis and needs to be investigated. | 1.0 | There are regularly orphaned tools that need to be removed - While the process of restoring the UI appears to function correctly, there is always a warning, or set of warnings, about removing orphaned tools. This should not be occurring on a regular basis and needs to be investigated. | defect | there are regularly orphaned tools that need to be removed while the process of restoring the ui appears to function correctly there is always a warning or set of warnings about removing orphaned tools this should not be occurring on a regular basis and needs to be investigated | 1 |
62,180 | 8,579,396,968 | IssuesEvent | 2018-11-13 09:03:50 | agr1002/GESPRO_GESTIONTAREAS | https://api.github.com/repos/agr1002/GESPRO_GESTIONTAREAS | closed | Grabar discos | documentation | - Código fuente app
- Ejecutable app
- Herramientas
Plataforma de desarrollo del algoritmo
Herramienta de conteo de abejas
- Documentación
Memoria
Anexos
JavaDoc
- Dataset de videos
**README**
- Repositorio GoBees
- Repositorio prototipos
- Repositorio dependencia OpenCV | 1.0 | Grabar discos - - Código fuente app
- Ejecutable app
- Herramientas
Plataforma de desarrollo del algoritmo
Herramienta de conteo de abejas
- Documentación
Memoria
Anexos
JavaDoc
- Dataset de videos
**README**
- Repositorio GoBees
- Repositorio prototipos
- Repositorio dependencia OpenCV | non_defect | grabar discos código fuente app ejecutable app herramientas plataforma de desarrollo del algoritmo herramienta de conteo de abejas documentación memoria anexos javadoc dataset de videos readme repositorio gobees repositorio prototipos repositorio dependencia opencv | 0 |
43,078 | 11,461,736,633 | IssuesEvent | 2020-02-07 12:39:39 | mozilla-lockwise/lockwise-ios | https://api.github.com/repos/mozilla-lockwise/lockwise-ios | opened | Startup crash on iOS 12.3.1 | defect | ## Steps to reproduce
1. Clean install Lockwise
2. Launch Lockwise
3. Tap "Skip" on the pop-up message
### Actual behavior
- Lockwise will crash.
### Device & build information
* Device: iPhone 7
* Build version: 1.7.2 (4059)
### Notes
- Not reproducible after first launch
- Not reproducible on iOS 13.3
Attachments:
[RPReplay_Final1581078877.MP4.zip](https://github.com/mozilla-lockwise/lockwise-ios/files/4170777/RPReplay_Final1581078877.MP4.zip) | 1.0 | Startup crash on iOS 12.3.1 - ## Steps to reproduce
1. Clean install Lockwise
2. Launch Lockwise
3. Tap "Skip" on the pop-up message
### Actual behavior
- Lockwise will crash.
### Device & build information
* Device: iPhone 7
* Build version: 1.7.2 (4059)
### Notes
- Not reproducible after first launch
- Not reproducible on iOS 13.3
Attachments:
[RPReplay_Final1581078877.MP4.zip](https://github.com/mozilla-lockwise/lockwise-ios/files/4170777/RPReplay_Final1581078877.MP4.zip) | defect | startup crash on ios steps to reproduce clean install lockwise launch lockwise tap skip on the pop up message actual behavior lockwise will crash device build information device iphone build version notes not reproducible after first launc not reproducible on ios attachments | 1 |
29,469 | 5,694,741,179 | IssuesEvent | 2017-04-15 16:05:33 | PiLoT-/pcgamer-minecraft-server-usa | https://api.github.com/repos/PiLoT-/pcgamer-minecraft-server-usa | closed | Promotions not announced to admins and mods | auto-migrated Priority-Low Type-Defect | ```
/promote and /demote aren't announced to the other mods and admins.
```
Original issue reported on code.google.com by `TheQu...@gmail.com` on 20 Dec 2014 at 6:14
| 1.0 | Promotions not announced to admins and mods - ```
/promote and /demote aren't announced to the other mods and admins.
```
Original issue reported on code.google.com by `TheQu...@gmail.com` on 20 Dec 2014 at 6:14
| defect | promotions not announced to admins and mods promote and demote aren t announced to the other mods and admins original issue reported on code google com by thequ gmail com on dec at | 1 |
25,702 | 4,417,714,281 | IssuesEvent | 2016-08-15 07:25:20 | snowie2000/mactype | https://api.github.com/repos/snowie2000/mactype | closed | Unicode strikethrough bug | auto-migrated Priority-Medium Type-Defect | ```
Everything is seen on the first screenshot: the unicode strikethrough symbol
isn’t rendered properly (for example Georgia font (like on screenshot),
Lucida Grande etc.).
This problem does not reproduce in all fonts: cf. screenshot 2 with Tahoma
font—there it’s rendered closer to normal (still not perfectly).
Also, I think that it’s unlikely that the problem is only with one symbol,
perhaps there is some class of symbols which do all fail
```
Original issue reported on code.google.com by `pechyo...@gmail.com` on 20 May 2012 at 4:31
Attachments:
* [mactype05.png](https://storage.googleapis.com/google-code-attachments/mactype/issue-11/comment-0/mactype05.png)
* [mactype06.png](https://storage.googleapis.com/google-code-attachments/mactype/issue-11/comment-0/mactype06.png)
| 1.0 | Unicode strikethrough bug - ```
Everything is seen on the first screenshot: the unicode strikethrough symbol
isn’t rendered properly (for example Georgia font (like on screenshot),
Lucida Grande etc.).
This problem does not reproduce in all fonts: cf. screenshot 2 with Tahoma
font—there it’s rendered closer to normal (still not perfectly).
Also, I think that it’s unlikely that the problem is only with one symbol,
perhaps there is some class of symbols which do all fail
```
Original issue reported on code.google.com by `pechyo...@gmail.com` on 20 May 2012 at 4:31
Attachments:
* [mactype05.png](https://storage.googleapis.com/google-code-attachments/mactype/issue-11/comment-0/mactype05.png)
* [mactype06.png](https://storage.googleapis.com/google-code-attachments/mactype/issue-11/comment-0/mactype06.png)
| defect | unicode strikethrough bug everything is seen on the first screenshot the unicode strikethrough symbol isn’t rendered properly for example georgia font like on screenshot lucida grande etc this problem does not reproduce in all fonts cf screenshot with tahoma font—there it’s rendered closer to normal still not perfectly also i think that it’s unlikely that the problem is only with one symbol perhaps there is some class of symbols which do all fail original issue reported on code google com by pechyo gmail com on may at attachments | 1 |
231,584 | 25,520,122,638 | IssuesEvent | 2022-11-28 19:42:48 | opensearch-project/opensearch-build | https://api.github.com/repos/opensearch-project/opensearch-build | reopened | CVE-2022-25182 (High) detected in workflow-cps-global-lib-2.14.jar | security vulnerability | ## CVE-2022-25182 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>workflow-cps-global-lib-2.18.jar</b></p></summary>
<p>The Jenkins Plugins Parent POM Project</p>
<p>Library home page: <a href="https://github.com/jenkinsci/workflow-cps-global-lib-plugin">https://github.com/jenkinsci/workflow-cps-global-lib-plugin</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /ches/modules-2/files-2.1/org.jenkins-ci.plugins.workflow/workflow-cps-global-lib/2.18/920e6d6256274fd71039a72e9550d0f61340fbc2/workflow-cps-global-lib-2.18.jar</p>
<p>
Dependency Hierarchy:
- :x: **workflow-cps-global-lib-2.18.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A sandbox bypass vulnerability in Jenkins Pipeline: Shared Groovy Libraries Plugin 552.vd9cc05b8a2e1 and earlier allows attackers with Item/Configure permission to execute arbitrary code on the Jenkins controller JVM using specially crafted library names if a global Pipeline library is already configured.
<p>Publish Date: 2022-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25182>CVE-2022-25182</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25182">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25182</a></p>
<p>Release Date: 2022-02-15</p>
<p>Fix Resolution: org.jenkins-ci.plugins.workflow:workflow-cps-global-lib
:2.21.1,561.va_ce0de3c2d69</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.jenkins-ci.plugins.workflow","packageName":"workflow-cps-global-lib","packageVersion":"2.18","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"org.jenkins-ci.plugins.workflow:workflow-cps-global-lib:2.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.jenkins-ci.plugins.workflow:workflow-cps-global-lib\n:2.21.1,561.va_ce0de3c2d69","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-25182","vulnerabilityDetails":"A sandbox bypass vulnerability in Jenkins Pipeline: Shared Groovy Libraries Plugin 552.vd9cc05b8a2e1 and earlier allows attackers with Item/Configure permission to execute arbitrary code on the Jenkins controller JVM using specially crafted library names if a global Pipeline library is already configured.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25182","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2022-25182 (High) detected in workflow-cps-global-lib-2.14.jar - ## CVE-2022-25182 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>workflow-cps-global-lib-2.18.jar</b></p></summary>
<p>The Jenkins Plugins Parent POM Project</p>
<p>Library home page: <a href="https://github.com/jenkinsci/workflow-cps-global-lib-plugin">https://github.com/jenkinsci/workflow-cps-global-lib-plugin</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /ches/modules-2/files-2.1/org.jenkins-ci.plugins.workflow/workflow-cps-global-lib/2.18/920e6d6256274fd71039a72e9550d0f61340fbc2/workflow-cps-global-lib-2.18.jar</p>
<p>
Dependency Hierarchy:
- :x: **workflow-cps-global-lib-2.18.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A sandbox bypass vulnerability in Jenkins Pipeline: Shared Groovy Libraries Plugin 552.vd9cc05b8a2e1 and earlier allows attackers with Item/Configure permission to execute arbitrary code on the Jenkins controller JVM using specially crafted library names if a global Pipeline library is already configured.
<p>Publish Date: 2022-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25182>CVE-2022-25182</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25182">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25182</a></p>
<p>Release Date: 2022-02-15</p>
<p>Fix Resolution: org.jenkins-ci.plugins.workflow:workflow-cps-global-lib
:2.21.1,561.va_ce0de3c2d69</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.jenkins-ci.plugins.workflow","packageName":"workflow-cps-global-lib","packageVersion":"2.18","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"org.jenkins-ci.plugins.workflow:workflow-cps-global-lib:2.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.jenkins-ci.plugins.workflow:workflow-cps-global-lib\n:2.21.1,561.va_ce0de3c2d69","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-25182","vulnerabilityDetails":"A sandbox bypass vulnerability in Jenkins Pipeline: Shared Groovy Libraries Plugin 552.vd9cc05b8a2e1 and earlier allows attackers with Item/Configure permission to execute arbitrary code on the Jenkins controller JVM using specially crafted library names if a global Pipeline library is already configured.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25182","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in workflow cps global lib jar cve high severity vulnerability vulnerable library workflow cps global lib jar the jenkins plugins parent pom project library home page a href path to dependency file build gradle path to vulnerable library ches modules files org jenkins ci plugins workflow workflow cps global lib workflow cps global lib jar dependency hierarchy x workflow cps global lib jar vulnerable library found in base branch main vulnerability details a sandbox bypass vulnerability in jenkins pipeline shared groovy libraries plugin and earlier allows attackers with item configure permission to execute arbitrary code on the jenkins controller jvm using specially crafted library names if a global pipeline library is already configured 
publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org jenkins ci plugins workflow workflow cps global lib va check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org jenkins ci plugins workflow workflow cps global lib isminimumfixversionavailable true minimumfixversion org jenkins ci plugins workflow workflow cps global lib n va isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails a sandbox bypass vulnerability in jenkins pipeline shared groovy libraries plugin and earlier allows attackers with item configure permission to execute arbitrary code on the jenkins controller jvm using specially crafted library names if a global pipeline library is already configured vulnerabilityurl | 0 |
48,278 | 13,067,594,281 | IssuesEvent | 2020-07-31 00:57:53 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | weighting - crashes with AttributeError under Python3 (Trac #2120) | Migrated from Trac combo simulation defect |
```text
463/470 Test #463: weighting::compare_oneweight.py ................................***Failed 1.25 sec
Traceback (most recent call last):
File "/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py", line 84, in <module>
check_oneweight(dataset)
File "/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py", line 43, in check_oneweight
generator = weighting.from_simprod(dataset)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 774, in from_simprod
**nugen_kwargs)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 593, in NeutrinoGenerator
return GenerationProbabilityCollection(probs).to_PDG()
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in to_PDG
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in <listcomp>
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 68, in to_PDG
if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:
AttributeError: 'int' object has no attribute 'name'
Start 464: weighting::corsika_weight_calculator.py
464/470 Test #464: weighting::corsika_weight_calculator.py ........................***Failed 1.65 sec
INFO (Python): Got a file for dataset 10285: http://icecube:skua@convey.icecube.wisc.edu/data/sim/IceCube/2011/generated/CORSIKA-in-ice/10285/00000-00999/IC86.2011_corsika.010285.000025.i3.bz2 (compare_oneweight.py:38 in get_random_filename)
Traceback (most recent call last):
File "/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/corsika_weight_calculator.py", line 37, in <module>
opts.dataset = weighting.from_simprod(opts.dataset)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 832, in from_simprod
height=length, radius=radius)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 478, in FiveComponent
return GenerationProbabilityCollection([PowerLaw(g, emin*mlo, emax*mhi, nevents=n, area=area, particle_type=p) for mlo, mhi, g, n, p in zip(lower_energy_scale, upper_energy_scale, gamma, nshower, pt)]).to_PDG()
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in to_PDG
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in <listcomp>
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 68, in to_PDG
if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:
AttributeError: 'int' object has no attribute 'name'
```
Migrated from https://code.icecube.wisc.edu/ticket/2120
```json
{
"status": "closed",
"changetime": "2019-04-17T08:15:12",
"description": "{{{\n463/470 Test #463: weighting::compare_oneweight.py ................................***Failed 1.25 sec\nTraceback (most recent call last):\n File \"/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py\", line 84, in <module>\n check_oneweight(dataset)\n File \"/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py\", line 43, in check_oneweight\n generator = weighting.from_simprod(dataset)\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 774, in from_simprod\n **nugen_kwargs)\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 593, in NeutrinoGenerator\n return GenerationProbabilityCollection(probs).to_PDG()\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in to_PDG\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in <listcomp>\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 68, in to_PDG\n if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:\nAttributeError: 'int' object has no attribute 'name'\n\n Start 464: weighting::corsika_weight_calculator.py\n464/470 Test #464: weighting::corsika_weight_calculator.py ........................***Failed 1.65 sec\nINFO (Python): Got a file for dataset 10285: http://icecube:skua@convey.icecube.wisc.edu/data/sim/IceCube/2011/generated/CORSIKA-in-ice/10285/00000-00999/IC86.2011_corsika.010285.000025.i3.bz2 (compare_oneweight.py:38 in get_random_filename)\nTraceback (most recent call last):\n File \"/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/corsika_weight_calculator.py\", line 37, in <module>\n opts.dataset = weighting.from_simprod(opts.dataset)\n File 
\"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 832, in from_simprod\n height=length, radius=radius)\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 478, in FiveComponent\n return GenerationProbabilityCollection([PowerLaw(g, emin*mlo, emax*mhi, nevents=n, area=area, particle_type=p) for mlo, mhi, g, n, p in zip(lower_energy_scale, upper_energy_scale, gamma, nshower, pt)]).to_PDG()\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in to_PDG\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in <listcomp>\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 68, in to_PDG\n if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:\nAttributeError: 'int' object has no attribute 'name'\n}}}\n",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1555488912998122",
"component": "combo simulation",
"summary": "weighting - crashes with AttributeError under Python3",
"priority": "normal",
"keywords": "",
"time": "2017-12-04T20:09:47",
"milestone": "Long-Term Future",
"owner": "jvansanten",
"type": "defect"
}
```
| 1.0 | weighting - crashes with AttributeError under Python3 (Trac #2120) -
```text
463/470 Test #463: weighting::compare_oneweight.py ................................***Failed 1.25 sec
Traceback (most recent call last):
File "/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py", line 84, in <module>
check_oneweight(dataset)
File "/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py", line 43, in check_oneweight
generator = weighting.from_simprod(dataset)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 774, in from_simprod
**nugen_kwargs)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 593, in NeutrinoGenerator
return GenerationProbabilityCollection(probs).to_PDG()
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in to_PDG
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in <listcomp>
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 68, in to_PDG
if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:
AttributeError: 'int' object has no attribute 'name'
Start 464: weighting::corsika_weight_calculator.py
464/470 Test #464: weighting::corsika_weight_calculator.py ........................***Failed 1.65 sec
INFO (Python): Got a file for dataset 10285: http://icecube:skua@convey.icecube.wisc.edu/data/sim/IceCube/2011/generated/CORSIKA-in-ice/10285/00000-00999/IC86.2011_corsika.010285.000025.i3.bz2 (compare_oneweight.py:38 in get_random_filename)
Traceback (most recent call last):
File "/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/corsika_weight_calculator.py", line 37, in <module>
opts.dataset = weighting.from_simprod(opts.dataset)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 832, in from_simprod
height=length, radius=radius)
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 478, in FiveComponent
return GenerationProbabilityCollection([PowerLaw(g, emin*mlo, emax*mhi, nevents=n, area=area, particle_type=p) for mlo, mhi, g, n, p in zip(lower_energy_scale, upper_energy_scale, gamma, nshower, pt)]).to_PDG()
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in to_PDG
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 195, in <listcomp>
spectra += [s.to_PDG() for s in v]
File "/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py", line 68, in to_PDG
if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:
AttributeError: 'int' object has no attribute 'name'
```
Migrated from https://code.icecube.wisc.edu/ticket/2120
```json
{
"status": "closed",
"changetime": "2019-04-17T08:15:12",
"description": "{{{\n463/470 Test #463: weighting::compare_oneweight.py ................................***Failed 1.25 sec\nTraceback (most recent call last):\n File \"/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py\", line 84, in <module>\n check_oneweight(dataset)\n File \"/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/compare_oneweight.py\", line 43, in check_oneweight\n generator = weighting.from_simprod(dataset)\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 774, in from_simprod\n **nugen_kwargs)\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 593, in NeutrinoGenerator\n return GenerationProbabilityCollection(probs).to_PDG()\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in to_PDG\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in <listcomp>\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 68, in to_PDG\n if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:\nAttributeError: 'int' object has no attribute 'name'\n\n Start 464: weighting::corsika_weight_calculator.py\n464/470 Test #464: weighting::corsika_weight_calculator.py ........................***Failed 1.65 sec\nINFO (Python): Got a file for dataset 10285: http://icecube:skua@convey.icecube.wisc.edu/data/sim/IceCube/2011/generated/CORSIKA-in-ice/10285/00000-00999/IC86.2011_corsika.010285.000025.i3.bz2 (compare_oneweight.py:38 in get_random_filename)\nTraceback (most recent call last):\n File \"/build/buildslave/paimon/Arch_Linux/source/weighting/resources/test/corsika_weight_calculator.py\", line 37, in <module>\n opts.dataset = weighting.from_simprod(opts.dataset)\n File 
\"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 832, in from_simprod\n height=length, radius=radius)\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 478, in FiveComponent\n return GenerationProbabilityCollection([PowerLaw(g, emin*mlo, emax*mhi, nevents=n, area=area, particle_type=p) for mlo, mhi, g, n, p in zip(lower_energy_scale, upper_energy_scale, gamma, nshower, pt)]).to_PDG()\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in to_PDG\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 195, in <listcomp>\n spectra += [s.to_PDG() for s in v]\n File \"/build/buildslave/paimon/Arch_Linux/build/lib/icecube/weighting/weighting.py\", line 68, in to_PDG\n if self.particle_type is not None and getattr(PDGCode, self.particle_type.name) == self.particle_type:\nAttributeError: 'int' object has no attribute 'name'\n}}}\n",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1555488912998122",
"component": "combo simulation",
"summary": "weighting - crashes with AttributeError under Python3",
"priority": "normal",
"keywords": "",
"time": "2017-12-04T20:09:47",
"milestone": "Long-Term Future",
"owner": "jvansanten",
"type": "defect"
}
```
| defect | weighting crashes with attributeerror under trac text test weighting compare oneweight py failed sec traceback most recent call last file build buildslave paimon arch linux source weighting resources test compare oneweight py line in check oneweight dataset file build buildslave paimon arch linux source weighting resources test compare oneweight py line in check oneweight generator weighting from simprod dataset file build buildslave paimon arch linux build lib icecube weighting weighting py line in from simprod nugen kwargs file build buildslave paimon arch linux build lib icecube weighting weighting py line in neutrinogenerator return generationprobabilitycollection probs to pdg file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg spectra file build buildslave paimon arch linux build lib icecube weighting weighting py line in spectra file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg if self particle type is not none and getattr pdgcode self particle type name self particle type attributeerror int object has no attribute name start weighting corsika weight calculator py test weighting corsika weight calculator py failed sec info python got a file for dataset compare oneweight py in get random filename traceback most recent call last file build buildslave paimon arch linux source weighting resources test corsika weight calculator py line in opts dataset weighting from simprod opts dataset file build buildslave paimon arch linux build lib icecube weighting weighting py line in from simprod height length radius radius file build buildslave paimon arch linux build lib icecube weighting weighting py line in fivecomponent return generationprobabilitycollection to pdg file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg spectra file build buildslave paimon arch linux build lib icecube weighting weighting py line in spectra file 
build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg if self particle type is not none and getattr pdgcode self particle type name self particle type attributeerror int object has no attribute name migrated from json status closed changetime description test weighting compare oneweight py failed sec ntraceback most recent call last n file build buildslave paimon arch linux source weighting resources test compare oneweight py line in n check oneweight dataset n file build buildslave paimon arch linux source weighting resources test compare oneweight py line in check oneweight n generator weighting from simprod dataset n file build buildslave paimon arch linux build lib icecube weighting weighting py line in from simprod n nugen kwargs n file build buildslave paimon arch linux build lib icecube weighting weighting py line in neutrinogenerator n return generationprobabilitycollection probs to pdg n file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg n spectra n file build buildslave paimon arch linux build lib icecube weighting weighting py line in n spectra n file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg n if self particle type is not none and getattr pdgcode self particle type name self particle type nattributeerror int object has no attribute name n n start weighting corsika weight calculator py test weighting corsika weight calculator py failed sec ninfo python got a file for dataset compare oneweight py in get random filename ntraceback most recent call last n file build buildslave paimon arch linux source weighting resources test corsika weight calculator py line in n opts dataset weighting from simprod opts dataset n file build buildslave paimon arch linux build lib icecube weighting weighting py line in from simprod n height length radius radius n file build buildslave paimon arch linux build lib icecube weighting weighting py 
line in fivecomponent n return generationprobabilitycollection to pdg n file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg n spectra n file build buildslave paimon arch linux build lib icecube weighting weighting py line in n spectra n file build buildslave paimon arch linux build lib icecube weighting weighting py line in to pdg n if self particle type is not none and getattr pdgcode self particle type name self particle type nattributeerror int object has no attribute name n n reporter nega cc resolution fixed ts component combo simulation summary weighting crashes with attributeerror under priority normal keywords time milestone long term future owner jvansanten type defect | 1 |
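The weighting traceback in the row above fails because `self.particle_type` arrives as a plain `int` under Python 3, and `int` has no `.name` attribute to pass to `getattr(PDGCode, ...)`. A minimal sketch of the kind of defensive coercion that avoids this: note the `PDGCode` class below is a hypothetical `IntEnum` stand-in for icetray's boost-python enum, with illustrative member names and values only.

```python
from enum import IntEnum

# Hypothetical stand-in for icetray's PDGCode enum; the real PDGCode is a
# boost-python binding, so these member names and values are illustrative.
class PDGCode(IntEnum):
    PPlus = 2212
    NuMu = 14

def normalize_particle_type(particle_type):
    """Accept either a PDGCode member or a bare int.

    Under Python 3 the particle type can show up as a plain int, which has
    no `.name` attribute (the AttributeError in the traceback above), so we
    coerce ints through the enum instead of calling `.name` on them.
    """
    if particle_type is None:
        return None
    if isinstance(particle_type, PDGCode):
        return particle_type
    # Enum call syntax looks the member up by value, raising ValueError
    # for unknown codes rather than failing later with an AttributeError.
    return PDGCode(int(particle_type))
```

This is a sketch of the failure mode, not the actual fix that closed the ticket (the ticket only records `"resolution": "fixed"` without a diff).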
187,100 | 22,031,530,963 | IssuesEvent | 2022-05-28 00:39:00 | vincenzodistasio97/Slack-Clone | https://api.github.com/repos/vincenzodistasio97/Slack-Clone | closed | WS-2019-0307 (Medium) detected in mem-1.1.0.tgz - autoclosed | security vulnerability | ## WS-2019-0307 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/npm/node_modules/mem/package.json,/server/node_modules/npm/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- npm-6.4.1.tgz (Root Library)
- libnpx-10.2.0.tgz
- yargs-11.0.0.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/Slack-Clone/commit/125be6381c29e8f8e1d4b2fed216db288fad9798">125be6381c29e8f8e1d4b2fed216db288fad9798</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In 'mem' before v4.0.0 there is a Denial of Service (DoS) vulnerability as a result of a failure in removal old values from the cache.
<p>Publish Date: 2018-08-27
<p>URL: <a href=https://github.com/sindresorhus/mem/commit/da4e4398cb27b602de3bd55f746efa9b4a31702b>WS-2019-0307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1084">https://www.npmjs.com/advisories/1084</a></p>
<p>Release Date: 2018-08-27</p>
<p>Fix Resolution (mem): 4.0.0</p>
<p>Direct dependency fix Resolution (npm): 8.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0307 (Medium) detected in mem-1.1.0.tgz - autoclosed - ## WS-2019-0307 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/npm/node_modules/mem/package.json,/server/node_modules/npm/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- npm-6.4.1.tgz (Root Library)
- libnpx-10.2.0.tgz
- yargs-11.0.0.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/Slack-Clone/commit/125be6381c29e8f8e1d4b2fed216db288fad9798">125be6381c29e8f8e1d4b2fed216db288fad9798</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In 'mem' before v4.0.0 there is a Denial of Service (DoS) vulnerability as a result of a failure to remove old values from the cache.
<p>Publish Date: 2018-08-27
<p>URL: <a href="https://github.com/sindresorhus/mem/commit/da4e4398cb27b602de3bd55f746efa9b4a31702b">WS-2019-0307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1084">https://www.npmjs.com/advisories/1084</a></p>
<p>Release Date: 2018-08-27</p>
<p>Fix Resolution (mem): 4.0.0</p>
<p>Direct dependency fix Resolution (npm): 8.7.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | ws medium detected in mem tgz autoclosed ws medium severity vulnerability vulnerable library mem tgz memoize functions an optimization used to speed up consecutive function calls by caching the result of calls with identical input library home page a href path to dependency file client package json path to vulnerable library client node modules npm node modules mem package json server node modules npm node modules mem package json dependency hierarchy npm tgz root library libnpx tgz yargs tgz os locale tgz x mem tgz vulnerable library found in head commit a href found in base branch master vulnerability details in mem before there is a denial of service dos vulnerability as a result of a failure in removal old values from the cache publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mem direct dependency fix resolution npm step up your open source security game with whitesource | 0 |
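The mem advisory above (WS-2019-0307) boils down to a cache that only ever grows: entries older than `maxAge` are treated as stale on lookup but never deleted. A minimal sketch of that failure mode, assuming nothing about mem's actual internals (the names and structure here are illustrative only, not mem's source):

```javascript
// Illustrative memoizer with the flaw the advisory describes: stale
// entries are ignored on read but never evicted, so the cache grows
// without bound when callers supply many distinct keys.
function leakyMemoize(fn, maxAge) {
  const cache = new Map();
  const memoized = (key) => {
    const hit = cache.get(key);
    if (hit !== undefined && Date.now() - hit.time < maxAge) {
      return hit.value; // fresh hit: reuse cached result
    }
    // Miss or stale: recompute and store. Nothing ever calls
    // cache.delete(), which is the denial-of-service vector -- memory
    // use is proportional to the number of distinct keys ever seen.
    const value = fn(key);
    cache.set(key, { value, time: Date.now() });
    return value;
  };
  memoized.cache = cache; // exposed so the growth is observable
  return memoized;
}
```

The suggested fix (mem 4.0.0) addresses exactly this by removing expired entries; bumping the direct npm dependency to 8.7.0 pulls in the fixed version transitively.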
10,804 | 2,622,190,791 | IssuesEvent | 2015-03-04 00:22:58 | byzhang/cudpp | https://api.github.com/repos/byzhang/cudpp | closed | Sorting Error | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
- unzip gtc_to_sort_test.tar.gz in NVIDIA_CUDA_SDK project folder
1. make (~/NVIDIA_CUDA_SDK/projects/gtc_to_sort_test/)
2. execution (~/NVIDIA_CUDA_SDK/bin/linux/release)
(e.g.: ./gtc_sort_test
~/NVIDIA_CUDA_SDK/projects/gtc_to_sort_test/input/1.txt 5 30 32)
3.
What is the expected output? What do you see instead?
[expected output]
Finished reading input file.
mi: 161795, mgrid: 32449
Sorting : Success
0.0425751 s Checksum: 0.000000
Sorting : Success
0.0424822 s Checksum: 0.000000
Sorting : Success
0.0425396 s Checksum: 0.000000
Sorting : Success
0.0425428 s Checksum: 0.000000
Sorting : Success
0.0427907 s Checksum: 0.000000
Sorting : Success
0.042537 s Checksum: 0.000000
Sorting : Success
0.0425729 s Checksum: 0.000000
Sorting : Success
0.0425132 s Checksum: 0.000000
Sorting : Success
0.0426874 s Checksum: 0.000000
Sorting : Success
0.0428964 s Checksum: 0.000000
=== Performance summary: BENCH_GPU A0 5057 blocks 32 threads/block ===
0.0286377 Gflops
Min: 0.0424822 s -- 0.674 Gflop/s
Mean: 0.0426137 s -- 0.672 Gflop/s
Max: 0.0428964 s -- 0.668 Gflop/s
Stddev: 0.000134837 s (+/- 0.3164%)
[output]
Finished reading input file.
mi: 161795, mgrid: 32449
Sorting : Success
0.0426965 s Checksum: 0.000000
Sorting : Success
0.0425468 s Checksum: 0.000000
Sorting : Success
0.0426379 s Checksum: 0.000000
Sorting : Success
0.0425811 s Checksum: 0.000000
Sorting : Success
0.0426666 s Checksum: 0.000000
Unordered key[983]: 138 > key[984]: 27
Sorting : FAIL
0.0436186 s Checksum: 0.000000
Unordered key[45]: 6392 > key[46]: 6384
Sorting : FAIL
0.0434239 s Checksum: 0.000000
Unordered key[147]: 3 > key[148]: 0
Sorting : FAIL
0.0435097 s Checksum: 0.000000
Unordered key[210]: 218 > key[211]: 0
Sorting : FAIL
0.0436116 s Checksum: 0.000000
Unordered key[132]: 14 > key[133]: 0
Sorting : FAIL
0.0435575 s Checksum: 0.000000
=== Performance summary: BENCH_GPU A0 5057 blocks 32 threads/block ===
0.0286377 Gflops
Min: 0.0425468 s -- 0.673 Gflop/s
Mean: 0.043085 s -- 0.665 Gflop/s
Max: 0.0436186 s -- 0.657 Gflop/s
Stddev: 0.000488756 s (+/- 1.134%)
What version of the product are you using? On what operating system?
GTX280
Ubuntu 8.04
cuda 2.2
Please provide any additional information below.
Sorting error occurs sometimes like the output example above.
Besides, the same error occurs in cudpp1.1 and cudpp1.1.1 test program as
well.
```
Original issue reported on code.google.com by `leeher...@gmail.com` on 19 Feb 2010 at 7:17
Attachments:
* [gtc_to_sort_test.tar.gz](https://storage.googleapis.com/google-code-attachments/cudpp/issue-48/comment-0/gtc_to_sort_test.tar.gz)
| 1.0 | Sorting Error - ```
What steps will reproduce the problem?
- unzip gtc_to_sort_test.tar.gz in NVIDIA_CUDA_SDK project folder
1. make (~/NVIDIA_CUDA_SDK/projects/gtc_to_sort_test/)
2. execution (~/NVIDIA_CUDA_SDK/bin/linux/release)
(e.g.: ./gtc_sort_test
~/NVIDIA_CUDA_SDK/projects/gtc_to_sort_test/input/1.txt 5 30 32)
3.
What is the expected output? What do you see instead?
[expected output]
Finished reading input file.
mi: 161795, mgrid: 32449
Sorting : Success
0.0425751 s Checksum: 0.000000
Sorting : Success
0.0424822 s Checksum: 0.000000
Sorting : Success
0.0425396 s Checksum: 0.000000
Sorting : Success
0.0425428 s Checksum: 0.000000
Sorting : Success
0.0427907 s Checksum: 0.000000
Sorting : Success
0.042537 s Checksum: 0.000000
Sorting : Success
0.0425729 s Checksum: 0.000000
Sorting : Success
0.0425132 s Checksum: 0.000000
Sorting : Success
0.0426874 s Checksum: 0.000000
Sorting : Success
0.0428964 s Checksum: 0.000000
=== Performance summary: BENCH_GPU A0 5057 blocks 32 threads/block ===
0.0286377 Gflops
Min: 0.0424822 s -- 0.674 Gflop/s
Mean: 0.0426137 s -- 0.672 Gflop/s
Max: 0.0428964 s -- 0.668 Gflop/s
Stddev: 0.000134837 s (+/- 0.3164%)
[output]
Finished reading input file.
mi: 161795, mgrid: 32449
Sorting : Success
0.0426965 s Checksum: 0.000000
Sorting : Success
0.0425468 s Checksum: 0.000000
Sorting : Success
0.0426379 s Checksum: 0.000000
Sorting : Success
0.0425811 s Checksum: 0.000000
Sorting : Success
0.0426666 s Checksum: 0.000000
Unordered key[983]: 138 > key[984]: 27
Sorting : FAIL
0.0436186 s Checksum: 0.000000
Unordered key[45]: 6392 > key[46]: 6384
Sorting : FAIL
0.0434239 s Checksum: 0.000000
Unordered key[147]: 3 > key[148]: 0
Sorting : FAIL
0.0435097 s Checksum: 0.000000
Unordered key[210]: 218 > key[211]: 0
Sorting : FAIL
0.0436116 s Checksum: 0.000000
Unordered key[132]: 14 > key[133]: 0
Sorting : FAIL
0.0435575 s Checksum: 0.000000
=== Performance summary: BENCH_GPU A0 5057 blocks 32 threads/block ===
0.0286377 Gflops
Min: 0.0425468 s -- 0.673 Gflop/s
Mean: 0.043085 s -- 0.665 Gflop/s
Max: 0.0436186 s -- 0.657 Gflop/s
Stddev: 0.000488756 s (+/- 1.134%)
What version of the product are you using? On what operating system?
GTX280
Ubuntu 8.04
cuda 2.2
Please provide any additional information below.
Sorting error occurs sometimes like the output example above.
Besides, the same error occurs in cudpp1.1 and cudpp1.1.1 test program as
well.
```
Original issue reported on code.google.com by `leeher...@gmail.com` on 19 Feb 2010 at 7:17
Attachments:
* [gtc_to_sort_test.tar.gz](https://storage.googleapis.com/google-code-attachments/cudpp/issue-48/comment-0/gtc_to_sort_test.tar.gz)
| defect | sorting error what steps will reproduce the problem unzip gtc to sort test tar gz in nvidia cuda sdk project folder make nvidia cuda sdk projects gtc to sort test execution nvidia cuda sdk bin linux release e g gtc sort test nvidia cuda sdk projects gtc to sort test input txt what is the expected output what do you see instead finished reading input file mi mgrid sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum performance summary bench gpu blocks threads block gflops min s gflop s mean s gflop s max s gflop s stddev s finished reading input file mi mgrid sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum sorting success s checksum unordered key key sorting fail s checksum unordered key key sorting fail s checksum unordered key key sorting fail s checksum unordered key key sorting fail s checksum unordered key key sorting fail s checksum performance summary bench gpu blocks threads block gflops min s gflop s mean s gflop s max s gflop s stddev s what version of the product are you using on what operating system ubuntu cuda please provide any additional information below sorting error occurs sometimes like the ouput example above besides the same error occurs in and test program as well original issue reported on code google com by leeher gmail com on feb at attachments | 1 |
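The `Unordered key[i]: a > key[j]: b` lines in the failing runs above come from a verification pass that scans the supposedly sorted output for the first adjacent inversion. That check can be sketched as follows (in JavaScript for brevity; the original harness is CUDA/C++, so this reconstruction is illustrative, not the project's actual code):

```javascript
// Scan a supposedly sorted array and report the first adjacent pair
// that is out of order, in the same format as the test log above.
// Returns null when no inversion exists (i.e. the array is sorted).
function firstInversion(keys) {
  for (let i = 0; i + 1 < keys.length; i++) {
    if (keys[i] > keys[i + 1]) {
      return `Unordered key[${i}]: ${keys[i]} > key[${i + 1}]: ${keys[i + 1]}`;
    }
  }
  return null;
}
```

Note the comparison is strict (`>`), so runs of equal keys pass, matching the log's "key[i] > key[j]" wording.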
21,730 | 3,548,917,610 | IssuesEvent | 2016-01-20 16:09:50 | josecl/cool-php-captcha | https://api.github.com/repos/josecl/cool-php-captcha | closed | png captch | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. PNG captcha not working
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `binishhe...@gmail.com` on 14 Oct 2014 at 4:45 | 1.0 | png captch - ```
What steps will reproduce the problem?
1. PNG captcha not working
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `binishhe...@gmail.com` on 14 Oct 2014 at 4:45 | defect | png captch what steps will reproduce the problem png captch not wking what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by binishhe gmail com on oct at | 1 |
239,941 | 18,288,286,002 | IssuesEvent | 2021-10-05 12:48:19 | awaketogether/AI | https://api.github.com/repos/awaketogether/AI | opened | New Article about what will be the end of our EIP | documentation | **As a:** Beta spectator / Users
**I want to:** Have a brief view on what this team has been up to since the last sprint
**Description:**
The BETA version is ready; the supporters should know what we plan to release at the end of the project, what science we explored, and why we took this approach. People should also know what we could add to the current state of the art in the field. Writing such an article will also strengthen Awake's voice.
**Definition of done:**
- [ ] Write a paragraph in the article about: What we have.
- [ ] Write a paragraph in the article about: What we explored and why we chose that approach (services chosen/technical stack chosen...)
- [ ] Write a paragraph in the article about: What we hope to release.
- [ ] Write a paragraph in the article about: In which ways our work adds value.
- [ ] Finish with a paragraph in the article about: What we will release publicly and what will remain private.
**Estimated workload:** 1.5 W/D
| 1.0 | New Article about what will be the end of our EIP - **As a:** Beta spectator / Users
**I want to:** Have a brief view on what this team has been up to since the last sprint
**Description:**
The BETA version is ready; the supporters should know what we plan to release at the end of the project, what science we explored, and why we took this approach. People should also know what we could add to the current state of the art in the field. Writing such an article will also strengthen Awake's voice.
**Definition of done:**
- [ ] Write a paragraph in the article about: What we have.
- [ ] Write a paragraph in the article about: What we explored and why we chose that approach (services chosen/technical stack chosen...)
- [ ] Write a paragraph in the article about: What we hope to release.
- [ ] Write a paragraph in the article about: In which ways our work adds value.
- [ ] Finish with a paragraph in the article about: What we will release publicly and what will remain private.
**Estimated workload:** 1.5 W/D
| non_defect | new article about what will be the end of our eip as a beta spectator users i want to have a brief view on what this team has been up to since the last sprint description the beta version is ready the supporters should know what we plan to release at the end of a project what science did we explored and why this way people should also know what we could add to the current market s state of the art in the field writing such article will improve anyway our awake s voice definition of done write a paragraphe in the article about what do we have write a paragraphe in the article about what did we explored and why that way services chosen technical stack chosen write a paragraphe in the article about what do we hope release write a paragraphe in the article about in which way are we worth write a paragraphe in the article about in which way are we worth finish with a paragraphe in the article about what will we release for the public and the private estimated workload w d | 0 |
35,293 | 7,687,929,213 | IssuesEvent | 2018-05-17 07:50:09 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | SelectOneMenu: after refreshing the widget in an AJAX request the popup opens at wrong position | defect | If an AJAX request updates a SelectOneMenu widget and user opens the popup of that SelectOneMenu the popup opens at wrong position: top left corner of the body instead of aligned to the widget.
Reason:
In PF 6.1 SelectOneMenu had following function, called from init():
```
appendPanel: function() {
    var container = this.cfg.appendTo ? PrimeFaces.expressions.SearchExpressionFacade.resolveComponentsAsSelector(this.cfg.appendTo) : $(document.body);
    if (!container.is(this.jq)) {
        container.children(this.panelId).remove();
        this.panel.appendTo(container);
    }
}
```
This removed the old popup panel ('..._panel') before appending the new panel.
In PF 6.2 this method was removed and instead init() contains following code:
`PrimeFaces.utils.registerDynamicOverlay(this, this.panel, this.id + '_panel');`
The passed argument "this.panel" is a two-element list, because `this.panel = $(this.panelId)` finds both the old and the new popup panel. `PrimeFaces.utils.registerDynamicOverlay` doesn't work correctly with such a two-element list: it calls `PrimeFaces.utils.appendDynamicOverlay`, which does nothing because `(!elementParent.is(appendTo) && !appendTo.is(overlay))` evaluates to false for the two-element `overlay` variable.
## 1) Environment
- PrimeFaces version: 6.2.2
- Does it work on the newest released PrimeFaces version? No
- Does it work on the newest sources in GitHub? No
- Application server + version: Tomcat 8 (arbitrary)
- Affected browsers: IE, FF, Chrome (arbitrary)
## 2) Expected behavior
the popup panel opens aligned to the widget (as normal for a select one dropdown widget)
## 3) Actual behavior
the popup panel opens at the top left corner of the HTML body
## 4) Steps to reproduce
perform an AJAX call which updates the SelectOneMenu widget in it's response. Then open the SelectOneMenu's popup panel via clicking the widget.
## 5) Sample XHTML
```
<p:selectOneMenu id="mySom" value="#{bean.mySom}" effect="none">
<f:selectItems value="#{bean.selectItemsMySom}" />
</p:selectOneMenu>
<p:commandButton id="updateMySom" ajax="true" value="AJAX Action" action="#{bean.updateMySom}" process="@all" update="mySom" />
```
## 6) Sample bean
straight forward
| 1.0 | SelectOneMenu: after refreshing the widget in an AJAX request the popup opens at wrong position - If an AJAX request updates a SelectOneMenu widget and user opens the popup of that SelectOneMenu the popup opens at wrong position: top left corner of the body instead of aligned to the widget.
Reason:
In PF 6.1 SelectOneMenu had following function, called from init():
```
appendPanel: function() {
    var container = this.cfg.appendTo ? PrimeFaces.expressions.SearchExpressionFacade.resolveComponentsAsSelector(this.cfg.appendTo) : $(document.body);
    if (!container.is(this.jq)) {
        container.children(this.panelId).remove();
        this.panel.appendTo(container);
    }
}
```
This removed the old popup panel ('..._panel') before appending the new panel.
In PF 6.2 this method was removed and instead init() contains following code:
`PrimeFaces.utils.registerDynamicOverlay(this, this.panel, this.id + '_panel');`
The passed argument "this.panel" is a two-element list, because `this.panel = $(this.panelId)` finds both the old and the new popup panel. `PrimeFaces.utils.registerDynamicOverlay` doesn't work correctly with such a two-element list: it calls `PrimeFaces.utils.appendDynamicOverlay`, which does nothing because `(!elementParent.is(appendTo) && !appendTo.is(overlay))` evaluates to false for the two-element `overlay` variable.
## 1) Environment
- PrimeFaces version: 6.2.2
- Does it work on the newest released PrimeFaces version? No
- Does it work on the newest sources in GitHub? No
- Application server + version: Tomcat 8 (arbitrary)
- Affected browsers: IE, FF, Chrome (arbitrary)
## 2) Expected behavior
the popup panel opens aligned to the widget (as normal for a select one dropdown widget)
## 3) Actual behavior
the popup panel opens at the top left corner of the HTML body
## 4) Steps to reproduce
perform an AJAX call which updates the SelectOneMenu widget in it's response. Then open the SelectOneMenu's popup panel via clicking the widget.
## 5) Sample XHTML
```
<p:selectOneMenu id="mySom" value="#{bean.mySom}" effect="none">
<f:selectItems value="#{bean.selectItemsMySom}" />
</p:selectOneMenu>
<p:commandButton id="updateMySom" ajax="true" value="AJAX Action" action="#{bean.updateMySom}" process="@all" update="mySom" />
```
## 6) Sample bean
straight forward
| defect | selectonemenu after refreshing the widget in an ajax request the popup opens at wrong position if an ajax request updates a selectonemenu widget and user opens the popup of that selectonemenu the popup opens at wrong position top left corner of the body instead of aligned to the widget reason in pf selectonemenu had following function called from init appendpanel function var container this cfg appendto primefaces expressions searchexpressionfacade resolvecomponentsasselector this cfg appendto document body if container is this jq container children this panelid remove this panel appendto container this removed the old popup panel panel before appending the new panel in pf this method was removed and instead init contains following code primefaces utils registerdynamicoverlay this this panel this id panel the passed argument this panel is a two element list because this panel this panelid finds the old and the new popup panel primefaces utils registerdynamicoverlay doesn t work correctly with such a two element list it calls primefaces utils appenddynamicoverlay which does nothing because if elementparent is appendto appendto is overlay returns false for the two element overlay var environment primefaces version does it work on the newest released primefaces version no does it work on the newest sources in github no application server version tomcat arbitrary affected browsers ie ff chrome arbitrary expected behavior the popup panel opens aligned to the widget as normal for a select one dropdown widget actual behavior the popup panel opens at the top left corner of the html body steps to reproduce perform an ajax call which updates the selectonemenu widget in it s response then open the selectonemenu s popup panel via clicking the widget sample xhtml sample bean straight forward | 1 |
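The PF 6.1 `appendPanel` shown in the report above worked because it removed any stale copy of the panel from the container before appending the fresh one, so an ID lookup afterwards matched exactly one element. Modeling container children as a plain array (a hypothetical helper for illustration, not PrimeFaces' actual jQuery code), the invariant looks like this:

```javascript
// Append a freshly rendered overlay panel, first dropping any stale
// panel with the same id -- the step the report says PF 6.2 lost.
// Afterwards a lookup by id matches exactly one element, so position
// calculations can align the popup to its widget instead of the body.
function appendOverlay(children, panel) {
  const kept = children.filter((child) => child.id !== panel.id);
  kept.push(panel);
  return kept;
}
```

Without the filter step, every AJAX refresh leaves one more stale `..._panel` element behind, and an id-based selector returns a multi-element list, which is exactly the state the report's analysis describes.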
57,190 | 15,726,072,410 | IssuesEvent | 2021-03-29 10:48:36 | danmar/testissues | https://api.github.com/repos/danmar/testissues | opened | false memory leak with --all (destructor) (Trac #53) | False positive Incomplete Migration Migrated from Trac defect hyd_danmar | Migrated from https://trac.cppcheck.net/ticket/53
```json
{
"status": "closed",
"changetime": "2009-01-23T19:25:25",
"description": "cppcheck detects a memory leak with this code :\nclass A\n{\npublic:\n A();\n ~A();\n\nprivate:\n int* i;\n};\n\nA::A()\n{\n this->i = new int;\n}\n\nA::~A()\n{\n delete this->i;\n}\n\ncppcheck returns no error if the destructor is replaced by :\nA::~A()\n{\n delete i;\n}",
"reporter": "cbucher",
"cc": "",
"resolution": "fixed",
"_ts": "1232738725000000",
"component": "False positive",
"summary": "false memory leak with --all (destructor)",
"priority": "",
"keywords": "",
"time": "2009-01-23T09:22:10",
"milestone": "1.28",
"owner": "hyd_danmar",
"type": "defect"
}
```
| 1.0 | false memory leak with --all (destructor) (Trac #53) - Migrated from https://trac.cppcheck.net/ticket/53
```json
{
"status": "closed",
"changetime": "2009-01-23T19:25:25",
"description": "cppcheck detects a memory leak with this code :\nclass A\n{\npublic:\n A();\n ~A();\n\nprivate:\n int* i;\n};\n\nA::A()\n{\n this->i = new int;\n}\n\nA::~A()\n{\n delete this->i;\n}\n\ncppcheck returns no error if the destructor is replaced by :\nA::~A()\n{\n delete i;\n}",
"reporter": "cbucher",
"cc": "",
"resolution": "fixed",
"_ts": "1232738725000000",
"component": "False positive",
"summary": "false memory leak with --all (destructor)",
"priority": "",
"keywords": "",
"time": "2009-01-23T09:22:10",
"milestone": "1.28",
"owner": "hyd_danmar",
"type": "defect"
}
```
| defect | false memory leak with all destructor trac migrated from json status closed changetime description cppcheck detects a memory leak with this code nclass a n npublic n a n a n nprivate n int i n n na a n n this i new int n n na a n n delete this i n n ncppcheck returns no error if the destructor is replaced by na a n n delete i n reporter cbucher cc resolution fixed ts component false positive summary false memory leak with all destructor priority keywords time milestone owner hyd danmar type defect | 1 |
30,319 | 6,105,613,841 | IssuesEvent | 2017-06-21 00:27:33 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | Shell CLI startup override fires twice | Defect | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.7
* Platform and Target: OS X, cake server, php 5.6.30
### What you did
baked a shell (no plugin) and overrode the startup function:
```
namespace App\Shell;

use Cake\Console\Shell;

/**
 * Wallet shell command.
 */
class WalletShell extends Shell
{
    public function startup()
    {
        $this->out('hi');
        return 0;
    }
}
```
### What happened
└──╼ bin/cake wallet startup
hi
hi
### What you expected to happen
└──╼ bin/cake wallet startup
hi
| 1.0 | Shell CLI startup override fires twice - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.7
* Platform and Target: OS X, cake server, php 5.6.30
### What you did
baked a shell (no plugin) and overrode the startup function:
```
namespace App\Shell;

use Cake\Console\Shell;

/**
 * Wallet shell command.
 */
class WalletShell extends Shell
{
    public function startup()
    {
        $this->out('hi');
        return 0;
    }
}
```
### What happened
└──╼ bin/cake wallet startup
hi
hi
### What you expected to happen
└──╼ bin/cake wallet startup
hi
| defect | shell cli startup override fires twice this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target os x cake server php what you did baked a shell no plugin and overrided the startup function namespace app shell use cake console shell wallet shell command class walletshell extends shell public function startup this out hi return what happened └──╼ bin cake wallet startup hi hi what you expected to happen └──╼ bin cake wallet startup hi | 1 |
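The duplicate `hi` in the CakePHP report above is consistent with a dispatcher that first runs the `startup()` lifecycle hook and then invokes the requested subcommand by name, so `bin/cake wallet startup` executes `startup()` twice. A toy sketch of that dispatch shape (illustrative only, not CakePHP's actual console runner):

```javascript
// Toy shell dispatcher: the lifecycle hook always runs, then the named
// subcommand runs. When the subcommand IS the hook, it fires twice.
function runShell(shell, command) {
  const output = [];
  const out = (line) => output.push(line);
  shell.startup(out);   // lifecycle hook, invoked unconditionally
  shell[command](out);  // then the subcommand the user asked for
  return output;
}
```

With `command = 'startup'` the hook and the subcommand are the same method, reproducing the doubled output; any other subcommand name would print `hi` only once.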
56,222 | 14,983,531,301 | IssuesEvent | 2021-01-28 17:20:04 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | l2arc_feed is constantly writing to the cache device | Status: Triage Needed Type: Defect | <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | archlinux
Distribution Version | 2021.01.01
Linux Kernel | 5.4.63 and 5.10.10
Architecture | x86_64
ZFS Version | 6fffc88bf and fd95af8d
SPL Version | n/a
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
I'm seeing a behavior that I didn't notice until I rebooted for a kernel update. I switched from the 5.4 branch to the 5.10 branch and updated to the latest ZFS HEAD: fd95af8d
The cache device in my pool was being written to once per second at a single page: 4k. I booted up a VM and recreated this by just creating a new pool with a cache and observing it write once per second as well.
I dug through the commits from the build I was running from early March 2020, to the build I upgraded from in September 2020 and then to today, 2021-01-25, as they pertained to module/zfs/arc.c. I ripped out several commits that were parts of larger efforts, but was unable to get the constant writing to stop. From my test VM, the pool is created and then I immediately watch "zpool iostat" and have used other tools like iotop and iostat which confirm there is indeed a write occurring every second.
I'm sure this is undesirable behavior, if not a bug.
### Describe how to reproduce the problem
NOTE: the a.raw, etc, are off by one due to the VM having a root volume for the archlinux install, so the raw files map to vdb+.
```
$ for N in {a..e}; do qemu-img create -f raw /vol/kvm/user/archlinux-zfs/$N.raw 100G; done
Formatting '/vol/kvm/user/archlinux-zfs/a.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/b.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/c.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/d.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/e.raw', fmt=raw size=107374182400
$ /opt/kvm.pl --kernel=/boot/linux-5.10.10-host+ archlinux
[root@zfs51010test ~]# uname -a
Linux zfs51010test 5.10.10-host+ #213 SMP Sun Jan 24 17:53:15 CST 2021 x86_64 GNU/Linux
[root@zfs5463test ~]# ls -la /usr/local/zfs
lrwxrwxrwx 1 root root 13 Jan 25 12:00 /usr/local/zfs -> zfs-fd95af8d
[root@zfs51010test ~]# zpool create -f vol /dev/vd[bcde] cache /dev/vdf
[ 431.296776] vdb: vdb1 vdb9
[ 431.440321] vdc: vdc1 vdc9
[ 431.585677] vdd: vdd1 vdd9
[ 431.715964] vde: vde1 vde9
[ 431.857756] vdf: vdf1 vdf9
[root@zfs51010test ~]# zpool iostat -v 1 | grep vdf
vdf 0 100G 1 1 38.5K 7.35K
vdf 0 100G 0 0 0 503
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
^C
```
The writes are occurring once every second, indefinitely. I tried this on my older kernel, 5.4.63, and the behavior is identical.
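A quick way to confirm this symptom from a captured `zpool iostat -v 1` log is to pull out the cache device's rows and check whether the write-bandwidth column is nonzero in every one-second sample. A small sketch of that check (JavaScript, operating on the text capture; the column position assumes the default `zpool iostat -v` layout, with write bandwidth last):

```javascript
// Return true when every sampled row for the named vdev shows nonzero
// write bandwidth (last column of `zpool iostat -v` output) -- i.e. the
// device is written on every one-second sample, as reported above.
function steadyWrites(capture, vdev) {
  const writes = capture
    .split('\n')
    .map((line) => line.trim().split(/\s+/))
    .filter((fields) => fields[0] === vdev)
    .map((fields) => parseFloat(fields[fields.length - 1]));
  return writes.length > 0 && writes.every((w) => w > 0);
}
```

`parseFloat` deliberately tolerates suffixed values such as `7.35K`, since only zero vs. nonzero matters for this check.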
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
None. This isn't a crash but instead runtime behavior.
| 1.0 | l2arc_feed is constantly writing to the cache device - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | archlinux
Distribution Version | 2021.01.01
Linux Kernel | 5.4.63 and 5.10.10
Architecture | x86_64
ZFS Version | 6fffc88bf and fd95af8d
SPL Version | n/a
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
I'm seeing a behavior that I didn't notice until I rebooted for a kernel update. I switched from the 5.4 branch to the 5.10 branch and updated to the latest ZFS HEAD: fd95af8d
The cache device in my pool was being written to once per second at a single page: 4k. I booted up a VM and recreated this by just creating a new pool with a cache and observing it write once per second as well.
I dug through the commits from the build I was running from early March 2020, to the build I upgraded from in September 2020 and then to today, 2021-01-25, as they pertained to module/zfs/arc.c. I ripped out several commits that were parts of larger efforts, but was unable to get the constant writing to stop. From my test VM, the pool is created and then I immediately watch "zpool iostat" and have used other tools like iotop and iostat which confirm there is indeed a write occurring every second.
I'm sure this is undesirable behavior, if not a bug.
### Describe how to reproduce the problem
NOTE: the a.raw, etc, are off by one due to the VM having a root volume for the archlinux install, so the raw files map to vdb+.
```
$ for N in {a..e}; do qemu-img create -f raw /vol/kvm/user/archlinux-zfs/$N.raw 100G; done
Formatting '/vol/kvm/user/archlinux-zfs/a.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/b.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/c.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/d.raw', fmt=raw size=107374182400
Formatting '/vol/kvm/user/archlinux-zfs/e.raw', fmt=raw size=107374182400
$ /opt/kvm.pl --kernel=/boot/linux-5.10.10-host+ archlinux
[root@zfs51010test ~]# uname -a
Linux zfs51010test 5.10.10-host+ #213 SMP Sun Jan 24 17:53:15 CST 2021 x86_64 GNU/Linux
[root@zfs5463test ~]# ls -la /usr/local/zfs
lrwxrwxrwx 1 root root 13 Jan 25 12:00 /usr/local/zfs -> zfs-fd95af8d
[root@zfs51010test ~]# zpool create -f vol /dev/vd[bcde] cache /dev/vdf
[ 431.296776] vdb: vdb1 vdb9
[ 431.440321] vdc: vdc1 vdc9
[ 431.585677] vdd: vdd1 vdd9
[ 431.715964] vde: vde1 vde9
[ 431.857756] vdf: vdf1 vdf9
[root@zfs51010test ~]# zpool iostat -v 1 | grep vdf
vdf 0 100G 1 1 38.5K 7.35K
vdf 0 100G 0 0 0 503
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
vdf 0 100G 0 0 0 511
^C
```
The writes are occurring once every second, indefinitely. I tried this on my older kernel, 5.4.63, and the behavior is identical.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
None. This isn't a crash but instead runtime behavior.
| defect | feed is constantly writing to the cache device thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name archlinux distribution version linux kernel and architecture zfs version and spl version n a commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing i m seeing a behavior that i didn t notice until i rebooted for a kernel update i switched between to the branch and updated to the latest zfs head the cache device in my pool was being written to once per second at a single page i booted up a vm and recreated this by just creating a new pool with a cache and observing it write once per second as well i dug through the commits from the build i was running from early march to the build i upgraded from at september and then to today as they pertained to moudle zfs arc c i ripped out several commits that were parts of larger efforts but was unable to get the constant writing to stop from my test vm the pool is created and then i immediately watch zpool iostat and have used other tools like iotop and iostat which confirm there is indeed a write occurring every second i m sure this is undesirable behavior if not a bug describe how to reproduce the problem note the a raw etc are off by one due to the vm having a root volume for the archlinux install so the raw files map to vdb for n in a e do qemu img create f raw vol kvm user archlinux zfs n raw done formatting vol kvm user archlinux zfs a raw fmt raw size formatting vol kvm user archlinux zfs b raw fmt raw size formatting vol kvm user archlinux zfs c raw fmt raw size formatting vol kvm user archlinux zfs d raw fmt raw size formatting vol kvm user archlinux zfs e raw fmt raw size opt kvm pl 
kernel boot linux host archlinux uname a linux host smp sun jan cst gnu linux ls la usr local zfs lrwxrwxrwx root root jan usr local zfs zfs zpool create f vol dev vd cache dev vdf vdb vdc vdd vde vdf zpool iostat v grep vdf vdf vdf vdf vdf vdf vdf vdf vdf vdf vdf vdf vdf vdf c the writes are occurring once every second indefinitely i tried this on my older kernel and the behavior is identical include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with none this isn t a crash but instead runtime behavior | 1 |
14,089 | 2,789,892,544 | IssuesEvent | 2015-05-08 22:13:20 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | opened | Alaska does not show on Marker mode | Priority-Medium Type-Defect | Original [issue 451](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=451) created by orwant on 2010-11-05T07:09:03.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. Create a marker for Anchorage in Alaska
2. The Map will show there is a 'hit' for USA, which includes Alaska
3. If you click on the USA, it only shows USA and not Alaska at all with markers.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
Geomap
<b>Are you using the test environment (version 1.1)?</b>
No
<b>What operating system and browser are you using?</b>
All browsers (IE7,8,9, Firefox, Chrome)
Windows 7
Additional Comments
This is not so good because we have a bunch of clients in Alaska and they want to be able to have a visual representation of the data applicable to them.
| 1.0 | Alaska does not show on Marker mode - Original [issue 451](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=451) created by orwant on 2010-11-05T07:09:03.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. Create a marker for Anchorage in Alaska
2. The Map will show there is a 'hit' for USA, which includes Alaska
3. If you click on the USA, it only shows USA and not Alaska at all with markers.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
Geomap
<b>Are you using the test environment (version 1.1)?</b>
No
<b>What operating system and browser are you using?</b>
All browsers (IE7,8,9, Firefox, Chrome)
Windows 7
Additional Comments
This is not so good because we have a bunch of clients in Alaska and they want to be able to have a visual representation of the data applicable to them.
| defect | alaska does not show on marker mode original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code create a marker for anchorage in alaska the map will show there is a hit for usa which includes alaska if you click on the usa it only shows usa and not alaska at all with markers what component is this issue related to piechart linechart datatable query etc geomap are you using the test environment version no what operating system and browser are you using all browsers firefox chrome windows additional comments this is not so good because we have a bunch of clients in alaska and they want to be able to have a visual representation of the data applicable to them | 1 |
712,888 | 24,509,916,513 | IssuesEvent | 2022-10-10 20:15:21 | MelchiorDahrk/MMM2022 | https://api.github.com/repos/MelchiorDahrk/MMM2022 | closed | Broken Velothi cobblestone foundation | priority-2 | Uneven cobblestone pieces. Maybe that can just be put on top of flat stuff for easy of use. Maybe look at the dwemer road assets? | 1.0 | Broken Velothi cobblestone foundation - Uneven cobblestone pieces. Maybe that can just be put on top of flat stuff for easy of use. Maybe look at the dwemer road assets? | non_defect | broken velothi cobblestone foundation uneven cobblestone pieces maybe that can just be put on top of flat stuff for easy of use maybe look at the dwemer road assets | 0 |
230,261 | 18,527,319,109 | IssuesEvent | 2021-10-20 22:28:29 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/text·ts - detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type text "is one of" operator should filter 2 text if both are set as exceptions | failed-test Team:SIEM Team: SecuritySolution Team:Detection Rules | A test failed on a tracked branch
```
Error: expected 200 "OK", got 409 "Conflict"
at Test._assertStatus (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:283:11)
at Test.assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:173:18)
at assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:131:12)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:906:18
at IncomingMessage.<anonymous> (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/parsers/json.js:19:7)
at IncomingMessage.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
First failure: [CI Build - master](https://buildkite.com/elastic/kibana-hourly/builds/1654#77505f58-71ec-4919-818d-f66ce74fd9d5)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/text·ts","test.name":"detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type text \"is one of\" operator should filter 2 text if both are set as exceptions","test.failCount":1}} --> | 1.0 | Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/text·ts - detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type text "is one of" operator should filter 2 text if both are set as exceptions - A test failed on a tracked branch
```
Error: expected 200 "OK", got 409 "Conflict"
at Test._assertStatus (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:283:11)
at Test.assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:173:18)
at assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:131:12)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:906:18
at IncomingMessage.<anonymous> (/opt/local-ssd/buildkite/builds/kb-cigroup-6-ad0cfbfd5d19950a/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/parsers/json.js:19:7)
at IncomingMessage.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
First failure: [CI Build - master](https://buildkite.com/elastic/kibana-hourly/builds/1654#77505f58-71ec-4919-818d-f66ce74fd9d5)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/exception_operators_data_types/text·ts","test.name":"detection engine api security and spaces enabled Detection exceptions data types and operators Rule exception operators for data type text \"is one of\" operator should filter 2 text if both are set as exceptions","test.failCount":1}} --> | non_defect | failing test x pack detection engine api integration tests x pack test detection engine api integration security and spaces tests exception operators data types text·ts detection engine api security and spaces enabled detection exceptions data types and operators rule exception operators for data type text is one of operator should filter text if both are set as exceptions a test failed on a tracked branch error expected ok got conflict at test assertstatus opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at test assertfunction opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at test assert opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at assert opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at test request callback opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest node modules superagent lib node index js at opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest node modules superagent lib node index js at incomingmessage opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest node modules superagent lib node parsers json js at incomingmessage 
emit node events at endreadablent node internal streams readable at processticksandrejections node internal process task queues first failure | 0 |
5,439 | 7,156,447,870 | IssuesEvent | 2018-01-26 16:19:23 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | opened | Network/Unifying conflation options used in core don't match those applied by the services | Category: Services Priority: Medium Status: Defined Type: Bug | conflateAdvOps.json needs updating | 1.0 | Network/Unifying conflation options used in core don't match those applied by the services - conflateAdvOps.json needs updating | non_defect | network unifying conflation options used in core don t match those applied by the services conflateadvops json needs updating | 0 |
165,801 | 6,286,523,858 | IssuesEvent | 2017-07-19 13:11:10 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | reopened | Update theme and icons for editing workflow | enhancement pending review Priority: Blocker Project: C040 | Provide theme changes for:
- [ ] editing functionalities (icons, tools, featuregrid)
- [ ] animations | 1.0 | Update theme and icons for editing workflow - Provide theme changes for:
- [ ] editing functionalities (icons, tools, featuregrid)
- [ ] animations | non_defect | update theme and icons for editing workflow provide theme changes for editing functionalities icons tools featuregrid animations | 0 |
23,417 | 3,814,176,935 | IssuesEvent | 2016-03-28 11:25:59 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | opened | A few Int64 bugs | defect | http://forums.bridge.net/forum/bridge-net-pro/bugs/1924
The Int64 functionality has not been released yet. So, it is okay to combine a few issues into one. But each issue is going to be covered by individual unit tests. | 1.0 | A few Int64 bugs - http://forums.bridge.net/forum/bridge-net-pro/bugs/1924
The Int64 functionality has not been released yet. So, it is okay to combine a few issues into one. But each issue is going to be covered by individual unit tests. | defect | a few bugs the functionality has not been released yet so it is okay to combine a few issues into one but each issue is going to be covered by individual unit tests | 1 |
7,178 | 2,610,355,904 | IssuesEvent | 2015-02-26 19:55:15 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | closed | HTML Tags issues | auto-migrated Priority-Medium Type-Defect | ```
What's the problem?
Why Scibefire remove paragraph tag <p></p>? Is bug or error or it's features?
I hope scribefire don't remove or change any html tags
What browser are you using?
Firefox 3.6.23
What version of ScribeFire are you running?
Scribefire next 1.9
```
-----
Original issue reported on code.google.com by `tonit...@gmail.com` on 9 Oct 2011 at 2:28
* Merged into: #174 | 1.0 | HTML Tags issues - ```
What's the problem?
Why Scibefire remove paragraph tag <p></p>? Is bug or error or it's features?
I hope scribefire don't remove or change any html tags
What browser are you using?
Firefox 3.6.23
What version of ScribeFire are you running?
Scribefire next 1.9
```
-----
Original issue reported on code.google.com by `tonit...@gmail.com` on 9 Oct 2011 at 2:28
* Merged into: #174 | defect | html tags issues what s the problem why scibefire remove paragraph tag is bug or error or it s features i hope scribefire don t remove or change any html tags what browser are you using firefox what version of scribefire are you running scribefire next original issue reported on code google com by tonit gmail com on oct at merged into | 1 |
57,977 | 16,238,367,819 | IssuesEvent | 2021-05-07 05:50:01 | DependencyTrack/dependency-track | https://api.github.com/repos/DependencyTrack/dependency-track | closed | Vulnerabilities list contents | defect p1 | The defect may already be reported! Please search for the defect before creating one.
### Current Behavior:
I would like to create a list of some third party SW that we use in our solution and I would like to start with our datsbase (postgres and oracle), I cannot find an automated way to generate
I created manually a bom.xml that contains 2 component with CPE ( no purl) for 2 databases, the internal scan results are the following:
cpe:/a:oracle:database_server:11.2.0.4 -> 1 vulnerability
cpe:/a:postgresql:postgresql:10.0-> 4 vulnerabilities
If I query the NVD database using the same CPEs I obtainthe following results :
https://services.nvd.nist.gov/rest/json/cves/1.0?cpeMatchString=cpe:/a:oracle:database_server:11.2.0.4 -> 112 results
https://services.nvd.nist.gov/rest/json/cves/1.0?cpeMatchString=cpe:/a:postgresql:postgresql:10.0 -> 25 results
Is there a reason for a so relevant difference?
### Steps to Reproduce:
Create a project
Import the attached bom file
View component vulnerabiliteis
### Expected Behavior:
I expected to obtain a comparable amount of vulnerabilities of NIST REST API results
### Environment:
- Dependency-Track Version:4.2.1
- Distribution: Docker
- BOM Format & Version:SBOM 1.2
- Database Server: H2
- Browser: Firefox
[bomOracle.zip](https://github.com/DependencyTrack/dependency-track/files/6382707/bomOracle.zip)
### Additional Details:
(e.g. detailed explanation, stacktraces, relate
[bomOracle.zip](https://github.com/DependencyTrack/dependency-track/files/6382730/bomOracle.zip)
d issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)
| 1.0 | Vulnerabilities list contents - The defect may already be reported! Please search for the defect before creating one.
### Current Behavior:
I would like to create a list of some third party SW that we use in our solution and I would like to start with our datsbase (postgres and oracle), I cannot find an automated way to generate
I created manually a bom.xml that contains 2 component with CPE ( no purl) for 2 databases, the internal scan results are the following:
cpe:/a:oracle:database_server:11.2.0.4 -> 1 vulnerability
cpe:/a:postgresql:postgresql:10.0-> 4 vulnerabilities
If I query the NVD database using the same CPEs I obtainthe following results :
https://services.nvd.nist.gov/rest/json/cves/1.0?cpeMatchString=cpe:/a:oracle:database_server:11.2.0.4 -> 112 results
https://services.nvd.nist.gov/rest/json/cves/1.0?cpeMatchString=cpe:/a:postgresql:postgresql:10.0 -> 25 results
Is there a reason for a so relevant difference?
### Steps to Reproduce:
Create a project
Import the attached bom file
View component vulnerabiliteis
### Expected Behavior:
I expected to obtain a comparable amount of vulnerabilities of NIST REST API results
### Environment:
- Dependency-Track Version:4.2.1
- Distribution: Docker
- BOM Format & Version:SBOM 1.2
- Database Server: H2
- Browser: Firefox
[bomOracle.zip](https://github.com/DependencyTrack/dependency-track/files/6382707/bomOracle.zip)
### Additional Details:
(e.g. detailed explanation, stacktraces, relate
[bomOracle.zip](https://github.com/DependencyTrack/dependency-track/files/6382730/bomOracle.zip)
d issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)
| defect | vulnerabilities list contents the defect may already be reported please search for the defect before creating one current behavior i would like to create a list of some third party sw that we use in our solution and i would like to start with our datsbase postgres and oracle i cannot find an automated way to generate i created manually a bom xml that contains component with cpe no purl for databases the internal scan results are the following cpe a oracle database server vulnerability cpe a postgresql postgresql vulnerabilities if i query the nvd database using the same cpes i obtainthe following results results results is there a reason for a so relevant difference steps to reproduce create a project import the attached bom file view component vulnerabiliteis expected behavior i expected to obtain a comparable amount of vulnerabilities of nist rest api results environment dependency track version distribution docker bom format version sbom database server browser firefox additional details e g detailed explanation stacktraces relate d issues suggestions how to fix links for us to have context eg stackoverflow gitter etc | 1 |
38,493 | 6,673,006,075 | IssuesEvent | 2017-10-04 13:47:24 | JGCRI/gcamdata | https://api.github.com/repos/JGCRI/gcamdata | closed | Help needed with L232.industry assumption file units | documentation energy | The chunk reads in assumption files `energy/A32.fuelprefElasticity.csv` and `socioeconomics/A32.inc_elas_output` which I'm not sure about the units.
In `energy/A32.fuelprefElasticity.csv`:
`share` -- Unitless?
`fuelprefElasticity` -- Unitless?
In `socioeconomics/A32.inc_elas_output`:
`inc_elas` -- Unitless?
Thanks! | 1.0 | Help needed with L232.industry assumption file units - The chunk reads in assumption files `energy/A32.fuelprefElasticity.csv` and `socioeconomics/A32.inc_elas_output` which I'm not sure about the units.
In `energy/A32.fuelprefElasticity.csv`:
`share` -- Unitless?
`fuelprefElasticity` -- Unitless?
In `socioeconomics/A32.inc_elas_output`:
`inc_elas` -- Unitless?
Thanks! | non_defect | help needed with industry assumption file units the chunk reads in assumption files energy fuelprefelasticity csv and socioeconomics inc elas output which i m not sure about the units in energy fuelprefelasticity csv share unitless fuelprefelasticity unitless in socioeconomics inc elas output inc elas unitless thanks | 0 |