Dataset schema (column name, dtype, and value classes or string-length range):

| column | dtype | values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 … 832k |
| id | float64 | 2.49B … 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 5 … 112 |
| repo_url | string | lengths 34 … 141 |
| action | string | 3 classes |
| title | string | lengths 1 … 757 |
| labels | string | lengths 4 … 664 |
| body | string | lengths 3 … 261k |
| index | string | 10 classes |
| text_combine | string | lengths 96 … 261k |
| label | string | 2 classes |
| text | string | lengths 96 … 232k |
| binary_label | int64 | 0 … 1 |

Sample rows follow.
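A minimal sketch of working with this schema in pandas. The tiny in-memory frame below is illustrative, not real rows from the dump; it checks the apparent invariant that `binary_label` is 1 exactly when `label` is `defect`:

```python
import pandas as pd

# Illustrative stand-in rows mirroring the schema above (not real data).
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent", "IssuesEvent"],
    "action": ["closed", "opened", "closed"],
    "label": ["defect", "non_defect", "defect"],
    "binary_label": [1, 0, 1],
})

# binary_label appears to encode label: 1 for "defect", 0 otherwise.
derived = (df["label"] == "defect").astype(int)
assert (derived == df["binary_label"]).all()

defect_rate = df["binary_label"].mean()  # share of defect rows
```

In the sample rows shown below, every `defect` row indeed carries `binary_label` 1 and every `non_defect` row carries 0, consistent with this reading.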
---
**Row 826** · id 2,594,139,088 · IssuesEvent · 2015-02-20 00:05:58

repo: BALL-Project/ball (https://api.github.com/repos/BALL-Project/ball)
action: closed · labels: C: BALL Core P: major R: fixed T: defect · index: 1.0
label: defect · binary_label: 1
title: acetic acid should not be recognized as an amino acid

body:

**Reported by akdehof on 3 Dec 38777218 04:00 UTC**
Currently Peptides::isAminoAcid recognizes ACE-residues as amino acids.
This is probably wrong :-)

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

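The `text` column looks like a mechanical normalization of `text_combine`: lowercased, with punctuation and digits stripped and whitespace collapsed. The function below is a hypothetical reconstruction, not taken from the dataset's source; it reproduces the English rows exactly, though the Chinese and Russian rows show that the real rule must also keep non-ASCII letters, which this ASCII-only sketch does not.

```python
import re

def normalize(text: str) -> str:
    """Approximate the `text` column from `text_combine`:
    lowercase, drop everything but ASCII letters, collapse whitespace.
    (Hypothetical reconstruction; the real pipeline evidently keeps
    non-ASCII letters too, as the Chinese/Russian rows show.)"""
    return " ".join(re.sub(r"[^a-z]+", " ", text.lower()).split())

# Row 826's text_combine, reproduced from the dump above.
combined = ("acetic acid should not be recognized as an amino acid - "
            "**Reported by akdehof on 3 Dec 38777218 04:00 UTC**\n"
            "Currently Peptides::isAminoAcid recognizes ACE-residues as amino acids.\n"
            "This is probably wrong :-)")
```

Applied to row 826, this yields exactly the row's `text` value, including the dropped digits in the garbled timestamp.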
---
**Row 13,967** · id 16,740,306,977 · IssuesEvent · 2021-06-11 08:58:01

repo: STEllAR-GROUP/hpx (https://api.github.com/repos/STEllAR-GROUP/hpx)
action: closed · labels: category: algorithms project: GSoC type: compatibility issue type: enhancement · index: True
label: non_defect · binary_label: 0
title: Separate the datapar algorithms

body:

Currently, our parallel algorithms support being used with the `datapar` execution policy. This is a remnant of an older standardization proposal. This implementation is currently tightly entangled with the implementation of the parallel base algorithms.
We should do two things:
- separate the datapar implementations and expose them through separate algorithm specializations (based on the `tag_invoke` customization point mechanism we have implemented)
- adapt the implementation to support (and rely on) the data-parallel types introduced by [N4755](https://wg21.link/n4755), section 9; this implies removing `datapar` as it is today
This work could also go hand-in-hand with #2271.

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 1,900** · id 2,603,973,167 · IssuesEvent · 2015-02-24 19:00:48

repo: chrsmith/nishazi6 (https://api.github.com/repos/chrsmith/nishazi6)
action: opened · labels: auto-migrated Priority-Medium Type-Defect · index: 1.0
label: defect · binary_label: 1
title: 沈阳龟头长水泡怎么办 (Chinese; "What to do about blisters on the glans in Shenyang")

body (a Chinese hospital-advertisement spam block, translated; several characters are mojibake-damaged in the source):

```
What to do about blisters on the glans in Shenyang. Shenyang Military Region
Political Department Hospital, STD [department]. TEL: 024-31023308. Founded in
1946, with 68 years devoted to the research and treatment of sexually
transmitted diseases. [Located] at No. 32 Erwei Road, Shenhe District,
Shenyang. [...] a long-established, well-equipped, authoritative hospital with
many experts, integrating prevention, health care, treatment, research and
rehabilitation. One of the first state [accredited] military hospitals and
first batch of national standardized medical units; a teaching hospital of the
Fourth Military Medical University and [...]nan University and other
well-known institutions. [...] twice awarded collective merit [...].
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:04

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 329,979** · id 24,241,252,707 · IssuesEvent · 2022-09-27 06:54:00

repo: keptn-sandbox/sumologic-service (https://api.github.com/repos/keptn-sandbox/sumologic-service)
action: opened · labels: documentation · index: 1.0
label: non_defect · binary_label: 0
title: Add info about auto-signoff in the README

body:

## Summary
We have a DCO check which runs on every PR to check if the commit has been signed off.
Doing
```
git commit --amend --signoff
```
or something like
```
git rebase HEAD~2 --signoff
```
is inconvenient. Signing off can be automated by adding the following hook in the local `.git` folder:
```bash
#!/bin/sh
#
# An example hook script to prepare the commit log message.
# Called by "git commit" with the name of the file that has the
# commit message, followed by the description of the commit
# message's source. The hook's purpose is to edit the commit
# message file. If the hook fails with a non-zero status,
# the commit is aborted.
#
# To enable this hook, rename this file to "prepare-commit-msg".
# This hook includes three examples. The first comments out the
# "Conflicts:" part of a merge commit.
#
# The second includes the output of "git diff --name-status -r"
# into the message, just before the "git status" output. It is
# commented because it doesn't cope with --amend or with squashed
# commits.
#
# The third example adds a Signed-off-by line to the message, that can
# still be edited. This is rarely a good idea.
SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
```
This needs to be put in the `.git/hooks/prepare-commit-msg` file (you need to create a new file called `prepare-commit-msg`) and give it permission to execute using:
```
chmod +x .git/hooks/prepare-commit-msg
```
## TODO
- [ ] Add info about this in the README after https://github.com/keptn-sandbox/sumologic-service#testing-cloud-events (create a heading called `Auto signoff commit messages`)
- [ ] Raise a PR and get it merged

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 7,232** · id 2,610,359,321 · IssuesEvent · 2015-02-26 19:56:18

repo: chrsmith/scribefire-chrome (https://api.github.com/repos/chrsmith/scribefire-chrome)
action: opened · labels: auto-migrated Priority-Medium Type-Defect · index: 1.0
label: defect · binary_label: 1
title: Lost entire post

body:

```
What's the problem?
Lost entire post. I had composed a long post (for Wordpress.com) and then went
off to do other things. One of these things is debugging a problem I've been
having with certain Javascript functions being disabled - so I started
disabling extensions (one of course was Scribefire).
After re-enabling it, I noticed that the window was gone. Starting a new
Scribefire window showed it to be empty and no way to retrieve the lost post.
I found this article
http://leggetter.posterous.com/how-to-recover-a-lost-scribefire-noteblog-pos
but there is no equivalent in Linux, and searching the directory for Chromium
showed up no SQLite files and no apparent database files that might contain the
lost post.
Scribefire SHOULD automatically save the current post, retain it, and
automatically reload it when it is started. It should also automatically save
it as a draft post to the destination blog.
This is NOT the first bug post on this, nor is it the first time I've lost a
post!
The idea that Scribefire is losing posts this many years after it was started
is inconcievable. The fix is simple: save it! Word does it, and vi has done it
for longer than some bloggers have been alive. Why can't Scribefire do it too?
What browser are you using?
Google Chromium 14.0.835.202 (Developer Build 103287 Linux) Ubuntu 10.10
What version of ScribeFire are you running?
4
```
-----
Original issue reported on code.google.com by `ddouth...@gmail.com` on 6 Jan 2012 at 1:59

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 5,918** · id 2,610,217,908 · IssuesEvent · 2015-02-26 19:09:20

repo: chrsmith/somefinders (https://api.github.com/repos/chrsmith/somefinders)
action: opened · labels: auto-migrated Priority-Medium Type-Defect · index: 1.0
label: defect · binary_label: 1
title: скачати dxgi.dll для кал оф дьюти.rar (filename; "download dxgi.dll for Call of Duty.rar")

body (a Russian-language file-download spam thread, translated):

```
'''Avgust Andreev'''
Hi everyone, can anyone tell me where to find
"скачати dxgi.dll для кал оф дьюти.rar"? I've seen it
somewhere already
'''Ateist Lazarev'''
Here, take the link http://bit.ly/177p3HZ
'''Boleslav Blinov'''
Thanks, seems to be it, but it asks me to enter a phone number
'''Apollinariy Komarov'''
Nah, it's all fine, nothing was charged to me
'''Gertrud Sysoev'''
Nah, it's all fine, nothing was charged to me
File info: скачати dxgi.dll для кал оф дьюти.rar
Uploaded: this month
Downloads: 1316
Rating: 144
Average download speed: 1328
Similar files: 28
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 12:31

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 44,387** · id 12,124,674,033 · IssuesEvent · 2020-04-22 14:30:11

repo: snowplow/snowplow-objc-tracker (https://api.github.com/repos/snowplow/snowplow-objc-tracker)
action: closed · labels: priority:high status:completed type:defect · index: 1.0
label: defect · binary_label: 1
title: Replace deprecated CTTelephonyNetworkInfo methods

body:

iOS 12 added support for e-sims which means a device can have multiple providers. APIs were added/deprecated in `CTTelephonyNetworkInfo` to support this.
`SPUtilities` makes use of two deprecated APIs:
* `-[CTTelephonyNetworkInfo currentRadioAccessTechnology]`
* `-[CTTelephonyNetworkInfo subscriberCellularProvider]`

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 171,955** · id 6,497,285,966 · IssuesEvent · 2017-08-22 13:32:15

repo: seanvree/logarr (https://api.github.com/repos/seanvree/logarr)
action: closed · labels: enhancement help wanted Priority: HIGH · index: 1.0
label: non_defect · binary_label: 0
title: FEAT: Clear search results/highlighting

body:

Issue: User must refresh page in order to 'clear' search results/highlighting.
Suggestion: Implement a feature to clear search results/highlighting w/o having to refresh page.

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 176,198** · id 13,630,562,765 · IssuesEvent · 2020-09-24 16:37:41

repo: microcks/microcks (https://api.github.com/repos/microcks/microcks)
action: closed · labels: component/install component/tests kind/feature type/Part · index: 1.0
label: non_defect · binary_label: 0
title: Update Helm Chart and Operator for asynchronous API testing

body:

As part of #257, we now need a `microcks-async-minion` Kubernetes `Service` so that the webapp component of Microcks will be able to launch tests on a minion.
The Helm Chart and Operator need to be released as well.

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 64,528** · id 18,724,649,562 · IssuesEvent · 2021-11-03 15:10:26

repo: primefaces/primefaces (https://api.github.com/repos/primefaces/primefaces)
action: closed · labels: defect implementation-specific · index: 1.0
label: defect · binary_label: 1
title: DataTable: filter/sort - wrong manipulation of list elements on Mojarra without filteredValue defined (non-lazy)

body:

### 1) Environment
PrimeFaces version: primefaces 10.0.0
Application server: jetty 9.4.44 + version: mojarra-2.3
### 2) Expected behavior
Run the sample project.
Follow these steps:
1. filter on name with value BB2
2. change all BB2 row values to BB3, press Save [result=OK]
3. remove filter BB2, press Save [result=OK]
4. sort on code, press Save [result=WRONG] data rows switch with primefaces 10!
The result must be:
509, EUR, BB, BB3, A
512, EUR, BB, BB3, B
515, EUR, BB, BB3, C
516, EUR, AA, AA, D
517, EUR, AA, AA, E
### 3) Actual behavior
The result is:
509, USA, AA, AA, A
512, USA, AA, AA, B
515, EUR, BB, BB3, C
516, EUR, BB, BB3, D
517, EUR, BB, BB3, E
### 4) Steps to reproduce
See sample project.
This behavior is only in primefaces 10, primefaces 8 is fine.
As a remark I must mention this is only one example. Different combinations of sorting and filtering data
give different results, even in combination with multiViewState. Also the behavior changes somehow in later primefaces versions. Still managed to get it wrong in PrimeFaces-10.0.7 elite (latest version).
### 5) Sample XHTML / Sample bean
See attached projects.
Primefaces 10:
[primefaces-test-datatable.zip](https://github.com/primefaces/primefaces/files/7389069/primefaces-test-datatable.zip)
Primefaces 8 (works as expected):
[primefaces-test-datatable-8.0.zip](https://github.com/primefaces/primefaces/files/7389071/primefaces-test-datatable-8.0.zip)
run with:
mvn clean jetty:run -Pmojarra23

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 819,544** · id 30,741,314,259 · IssuesEvent · 2023-07-28 11:45:10

repo: PRIME-TU-Delft/Open-LA-Applets (https://api.github.com/repos/PRIME-TU-Delft/Open-LA-Applets)
action: opened · labels: Low priority · index: 1.0
label: non_defect · binary_label: 0
title: [Issue]: Improve search

body:

### What needs to change?
Search is now automatically updating the URL; this is not a desired behaviour. It should only change the URL when the search form is submitted.
### Does this issue relate or depend on other issues?
_No response_

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

---
**Row 151,528** · id 19,654,642,000 · IssuesEvent · 2022-01-10 11:10:13

repo: theWhiteFox/react-tic-tac-toe (https://api.github.com/repos/theWhiteFox/react-tic-tac-toe)
action: opened · labels: security vulnerability · index: True
label: non_defect · binary_label: 0
title: CVE-2021-3803 (High) detected in nth-check-1.0.2.tgz

body:

## CVE-2021-3803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nth-check-1.0.2.tgz</b></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
  - html-webpack-plugin-4.0.0-beta.11.tgz
    - pretty-error-2.1.1.tgz
      - renderkid-2.0.3.tgz
        - css-select-1.2.0.tgz
          - :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/theWhiteFox/react-tic-tac-toe/commit/e334886398cc42d1544bbf041c785f1d8e0d5d00">e334886398cc42d1544bbf041c785f1d8e0d5d00</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)

(text_combine and text repeat the title and body, verbatim and normalized; omitted.)

39,299
| 9,381,362,429
|
IssuesEvent
|
2019-04-04 19:25:56
|
CenturyLinkCloud/mdw
|
https://api.github.com/repos/CenturyLinkCloud/mdw
|
opened
|
Activities API returns incorrect total count
|
defect
|
Incorrect count query:
```java
protected String buildActivityCountQuery(Query query, Date start) {
StringBuilder sqlBuff = new StringBuilder();
sqlBuff.append("SELECT count(pi.process_instance_id) ");
```
|
1.0
|
Activities API returns incorrect total count - Incorrect count query:
```java
protected String buildActivityCountQuery(Query query, Date start) {
StringBuilder sqlBuff = new StringBuilder();
sqlBuff.append("SELECT count(pi.process_instance_id) ");
```
|
defect
|
activities api returns incorrect total count incorrect count query java protected string buildactivitycountquery query query date start stringbuilder sqlbuff new stringbuilder sqlbuff append select count pi process instance id
| 1
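The MDW record above shows a count query built over `pi.process_instance_id`; a common cause of an inflated total is counting joined rows instead of distinct instances. A minimal, hypothetical Java sketch of that difference — the data and names are invented for illustration and are not taken from the MDW codebase, and the record itself does not state what the actual fix was:

```java
import java.util.List;

public class CountDemo {
    // Simulated join result: each process instance id repeats once per joined activity row.
    static final List<Long> JOINED_INSTANCE_IDS = List.of(1L, 1L, 2L, 2L, 2L, 3L);

    // Analogous to SELECT count(pi.process_instance_id): every joined row is counted.
    static long countRows() {
        return JOINED_INSTANCE_IDS.size();
    }

    // Analogous to SELECT count(DISTINCT pi.process_instance_id): one count per instance.
    static long countDistinct() {
        return JOINED_INSTANCE_IDS.stream().distinct().count();
    }

    public static void main(String[] args) {
        // Prints: 6 joined rows, 3 distinct instances
        System.out.println(countRows() + " joined rows, " + countDistinct() + " distinct instances");
    }
}
```

The sketch only illustrates why a row count and an instance count diverge once a one-to-many join is involved.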
|
22,328
| 3,634,560,409
|
IssuesEvent
|
2016-02-11 18:27:25
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
PropapageDown does not work correctly in Tree
|
5.2.20 5.3.7 defect
|
In checkbox mode, setting propagate down to false still selects descendants.
|
1.0
|
PropapageDown does not work correctly in Tree - In checkbox mode, setting propagate down to false still selects descendants.
|
defect
|
propapagedown does not work correctly in tree in checkbox mode setting propagate down to false still selects descendants
| 1
|
52,667
| 13,224,891,388
|
IssuesEvent
|
2020-08-17 20:03:27
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
pyroot plays nice with numpy (Trac #126)
|
Migrated from Trac defect documentation
|
recently discovered. very easy to make root histograms
from numpy data that has already had cuts/etc applied.
need examples/docs.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/126">https://code.icecube.wisc.edu/projects/icecube/ticket/126</a>, reported by troyand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T18:49:16",
"_ts": "1351709356000000",
"description": "recently discovered. very easy to make root histograms\nfrom numpy data that has already had cuts/etc applied.\nneed examples/docs.",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"time": "2008-09-07T13:40:39",
"component": "documentation",
"summary": "pyroot plays nice with numpy",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
pyroot plays nice with numpy (Trac #126) - recently discovered. very easy to make root histograms
from numpy data that has already had cuts/etc applied.
need examples/docs.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/126">https://code.icecube.wisc.edu/projects/icecube/ticket/126</a>, reported by troyand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T18:49:16",
"_ts": "1351709356000000",
"description": "recently discovered. very easy to make root histograms\nfrom numpy data that has already had cuts/etc applied.\nneed examples/docs.",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"time": "2008-09-07T13:40:39",
"component": "documentation",
"summary": "pyroot plays nice with numpy",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
pyroot plays nice with numpy trac recently discovered very easy to make root histograms from numpy data that has already had cuts etc applied need examples docs migrated from json status closed changetime ts description recently discovered very easy to make root histograms nfrom numpy data that has already had cuts etc applied nneed examples docs reporter troy cc resolution wont or cant fix time component documentation summary pyroot plays nice with numpy priority normal keywords milestone owner troy type defect
| 1
|
399,192
| 27,230,173,781
|
IssuesEvent
|
2023-02-21 12:40:17
|
equinor/komodo
|
https://api.github.com/repos/equinor/komodo
|
closed
|
This repo has no docs
|
documentation
|
There doesn't seem to be any documentation for `komodo`. The closest thing I could find is https://fmu-docs.equinor.com/docs/komodo/index.html (not a public link), which I think is really documentation for `komodo-releases`.
I think the documentation could contain the following:
- Expand on the basic example in the README, see #257
- Explain how it relates to `komodoenv` and perhaps, with appropriate signals that it's not public, `komodo-releases` (likewise, those libraries should explain the relationship)
- Map the resulting release directory structure, files, etc.
- Explain all of the arguments to the `kmd` command, because most of them do not have expensive help in the CLI.
- Document the various commands: komodo-check-pypi, komodo-insert-proposals, komodo-post-messages, komodo-check-symlinks, komodo-lint, komodo-reverse-deps, komodo-clean-repository, komodo-lint-maturity, komodo-snyk-test, komodo-create-symlinks, komodo-lint-package-status, komodo-suggest-symlinks, komodo-extract-dep-graph, komodo-non-deployed, komodo-transpiler
- Show how (and if!) `komodo` is intended to be used as a library.
|
1.0
|
This repo has no docs - There doesn't seem to be any documentation for `komodo`. The closest thing I could find is https://fmu-docs.equinor.com/docs/komodo/index.html (not a public link), which I think is really documentation for `komodo-releases`.
I think the documentation could contain the following:
- Expand on the basic example in the README, see #257
- Explain how it relates to `komodoenv` and perhaps, with appropriate signals that it's not public, `komodo-releases` (likewise, those libraries should explain the relationship)
- Map the resulting release directory structure, files, etc.
- Explain all of the arguments to the `kmd` command, because most of them do not have expensive help in the CLI.
- Document the various commands: komodo-check-pypi, komodo-insert-proposals, komodo-post-messages, komodo-check-symlinks, komodo-lint, komodo-reverse-deps, komodo-clean-repository, komodo-lint-maturity, komodo-snyk-test, komodo-create-symlinks, komodo-lint-package-status, komodo-suggest-symlinks, komodo-extract-dep-graph, komodo-non-deployed, komodo-transpiler
- Show how (and if!) `komodo` is intended to be used as a library.
|
non_defect
|
this repo has no docs there doesn t seem to be any documentation for komodo the closest thing i could find is not a public link which i think is really documentation for komodo releases i think the documentation could contain the following expand on the basic example in the readme see explain how it relates to komodoenv and perhaps with appropriate signals that it s not public komodo releases likewise those libraries should explain the relationship map the resulting release directory structure files etc explain all of the arguments to the kmd command because most of them do not have expensive help in the cli document the various commands komodo check pypi komodo insert proposals komodo post messages komodo check symlinks komodo lint komodo reverse deps komodo clean repository komodo lint maturity komodo snyk test komodo create symlinks komodo lint package status komodo suggest symlinks komodo extract dep graph komodo non deployed komodo transpiler show how and if komodo is intended to be used as a library
| 0
|
18,129
| 3,025,475,450
|
IssuesEvent
|
2015-08-03 08:54:52
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
reopened
|
IMap.destroy() - map is destroyed and then immediately recreated
|
Team: Core Type: Defect
|
Creating this issue as suggested by @bilalyasar (see [discussion on the Hazelcast Google Groups forum] (https://groups.google.com/forum/#!topic/hazelcast/G1P43tORJPA))
Hazelcast version that you use: ```3.4.2```, ```3.5```
Cluster size: ```1```
Number of the clients: ```1```
Version of Java: ```OracleJDK 1.7.0_80, x86_64```, ```OracleJDK 1.8.0_45, x86_64```
Operating system: ```OS X Yosemite 10.10.3 (14D136)```
Logs and stack traces: See the console output from executing the TestNG test case below
Detailed description of the steps to reproduce your issue: See the TestNG test case below
Unit test with the hazelcast.xml file:
```java
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertNotNull;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.DistributedObjectEvent;
import com.hazelcast.core.DistributedObjectListener;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
public class IMapDestroyTest {
private HazelcastInstance server;
private HazelcastInstance client;
@BeforeClass
public void setUpBeforeClass() {
server = Hazelcast.newHazelcastInstance();
sleep(2000);
client = HazelcastClient.newHazelcastClient();
client.addDistributedObjectListener(new DistributedObjectListener() {
@Override public void distributedObjectDestroyed(final DistributedObjectEvent event) {
System.out.println("\n\tdistributedObjectDestroyed(): " + event);
}
@Override public void distributedObjectCreated(final DistributedObjectEvent event) {
System.out.println("\n\tdistributedObjectCreated(): " + event);
}
});
}
@AfterClass
public void tearDownAfterClass() {
client.shutdown();
server.shutdown();
}
@Test
public void testIMapDestroy() {
System.out.print("Creating 'foo' map... ");
IMap<String, String> fooMap = client.getMap("foo");
assertNotNull(fooMap);
System.out.println("done.");
System.out.println("All known distributed objects: " + client.getDistributedObjects());
System.out.print("Inserting an entry... ");
fooMap.put("fu", "bar");
System.out.print("done.\nRetrieving the entry... ");
String value = fooMap.get("fu");
assertEquals(value, "bar");
System.out.print("done.\nDestroying the map... ");
fooMap.destroy();
System.out.println("done.");
System.out.println("All known distributed objects: " + client.getDistributedObjects());
}
private void sleep(final int millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
```
Console output:
```
Creating 'foo' map...
distributedObjectCreated(): DistributedObjectEvent{eventType=CREATED, serviceName='hz:impl:mapService', distributedObject=IMap{name='foo'}}
done.
All known distributed objects: [IMap{name='foo'}]
Inserting an entry... done.
Retrieving the entry... done.
Destroying the map...
distributedObjectDestroyed(): DistributedObjectEvent{eventType=DESTROYED, serviceName='hz:impl:mapService', distributedObject=IMap{name='foo'}}
distributedObjectCreated(): DistributedObjectEvent{eventType=CREATED, serviceName='hz:impl:mapService', distributedObject=IMap{name='foo'}}
done.
All known distributed objects: [IMap{name='foo'}]
```
|
1.0
|
IMap.destroy() - map is destroyed and then immediately recreated - Creating this issue as suggested by @bilalyasar (see [discussion on the Hazelcast Google Groups forum] (https://groups.google.com/forum/#!topic/hazelcast/G1P43tORJPA))
Hazelcast version that you use: ```3.4.2```, ```3.5```
Cluster size: ```1```
Number of the clients: ```1```
Version of Java: ```OracleJDK 1.7.0_80, x86_64```, ```OracleJDK 1.8.0_45, x86_64```
Operating system: ```OS X Yosemite 10.10.3 (14D136)```
Logs and stack traces: See the console output from executing the TestNG test case below
Detailed description of the steps to reproduce your issue: See the TestNG test case below
Unit test with the hazelcast.xml file:
```java
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertNotNull;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.DistributedObjectEvent;
import com.hazelcast.core.DistributedObjectListener;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
public class IMapDestroyTest {
private HazelcastInstance server;
private HazelcastInstance client;
@BeforeClass
public void setUpBeforeClass() {
server = Hazelcast.newHazelcastInstance();
sleep(2000);
client = HazelcastClient.newHazelcastClient();
client.addDistributedObjectListener(new DistributedObjectListener() {
@Override public void distributedObjectDestroyed(final DistributedObjectEvent event) {
System.out.println("\n\tdistributedObjectDestroyed(): " + event);
}
@Override public void distributedObjectCreated(final DistributedObjectEvent event) {
System.out.println("\n\tdistributedObjectCreated(): " + event);
}
});
}
@AfterClass
public void tearDownAfterClass() {
client.shutdown();
server.shutdown();
}
@Test
public void testIMapDestroy() {
System.out.print("Creating 'foo' map... ");
IMap<String, String> fooMap = client.getMap("foo");
assertNotNull(fooMap);
System.out.println("done.");
System.out.println("All known distributed objects: " + client.getDistributedObjects());
System.out.print("Inserting an entry... ");
fooMap.put("fu", "bar");
System.out.print("done.\nRetrieving the entry... ");
String value = fooMap.get("fu");
assertEquals(value, "bar");
System.out.print("done.\nDestroying the map... ");
fooMap.destroy();
System.out.println("done.");
System.out.println("All known distributed objects: " + client.getDistributedObjects());
}
private void sleep(final int millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
```
Console output:
```
Creating 'foo' map...
distributedObjectCreated(): DistributedObjectEvent{eventType=CREATED, serviceName='hz:impl:mapService', distributedObject=IMap{name='foo'}}
done.
All known distributed objects: [IMap{name='foo'}]
Inserting an entry... done.
Retrieving the entry... done.
Destroying the map...
distributedObjectDestroyed(): DistributedObjectEvent{eventType=DESTROYED, serviceName='hz:impl:mapService', distributedObject=IMap{name='foo'}}
distributedObjectCreated(): DistributedObjectEvent{eventType=CREATED, serviceName='hz:impl:mapService', distributedObject=IMap{name='foo'}}
done.
All known distributed objects: [IMap{name='foo'}]
```
|
defect
|
imap destroy map is destroyed and then immediately recreated creating this issue as suggested by bilalyasar see hazelcast version that you use cluster size number of the clients version of java oraclejdk oraclejdk operating system os x yosemite logs and stack traces see the console output from executing the testng test case below detailed description of the steps to reproduce your issue see the testng test case below unit test with the hazelcast xml file java import static org testng assert assertequals import static org testng assert assertnotnull import org testng annotations afterclass import org testng annotations beforeclass import org testng annotations test import com hazelcast client hazelcastclient import com hazelcast core distributedobjectevent import com hazelcast core distributedobjectlistener import com hazelcast core hazelcast import com hazelcast core hazelcastinstance import com hazelcast core imap public class imapdestroytest private hazelcastinstance server private hazelcastinstance client beforeclass public void setupbeforeclass server hazelcast newhazelcastinstance sleep client hazelcastclient newhazelcastclient client adddistributedobjectlistener new distributedobjectlistener override public void distributedobjectdestroyed final distributedobjectevent event system out println n tdistributedobjectdestroyed event override public void distributedobjectcreated final distributedobjectevent event system out println n tdistributedobjectcreated event afterclass public void teardownafterclass client shutdown server shutdown test public void testimapdestroy system out print creating foo map imap foomap client getmap foo assertnotnull foomap system out println done system out println all known distributed objects client getdistributedobjects system out print inserting an entry foomap put fu bar system out print done nretrieving the entry string value foomap get fu assertequals value bar system out print done ndestroying the map foomap destroy system out println done system out println all known distributed objects client getdistributedobjects private void sleep final int millis try thread sleep millis catch interruptedexception e thread currentthread interrupt console output creating foo map distributedobjectcreated distributedobjectevent eventtype created servicename hz impl mapservice distributedobject imap name foo done all known distributed objects inserting an entry done retrieving the entry done destroying the map distributedobjectdestroyed distributedobjectevent eventtype destroyed servicename hz impl mapservice distributedobject imap name foo distributedobjectcreated distributedobjectevent eventtype created servicename hz impl mapservice distributedobject imap name foo done all known distributed objects
| 1
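The destroy-then-immediately-recreated sequence in the Hazelcast record is the signature of a get-or-create proxy registry: once a named distributed object is destroyed, the very next lookup of that name transparently creates a fresh proxy. A self-contained, hypothetical Java sketch of that pattern — not Hazelcast's actual implementation, and only one plausible reading of the report's event log:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ProxyRegistryDemo {
    static final ConcurrentMap<String, Object> PROXIES = new ConcurrentHashMap<>();
    static int creations = 0;

    // Get-or-create semantics: any lookup after destroy() silently recreates the proxy.
    static synchronized Object getMap(String name) {
        return PROXIES.computeIfAbsent(name, n -> {
            creations++;
            return new Object(); // stand-in for a real distributed-map proxy
        });
    }

    static synchronized void destroy(String name) {
        PROXIES.remove(name);
    }

    public static void main(String[] args) {
        getMap("foo");   // created
        destroy("foo");  // destroyed...
        getMap("foo");   // ...and recreated by the next lookup
        // Prints: proxy created 2 times
        System.out.println("proxy created " + creations + " times");
    }
}
```

Under this reading, any component that touches the name `foo` again after `destroy()` — a listener, bookkeeping, or `getDistributedObjects()` — is itself a lookup, which would explain why the map reappears in the report's final listing.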
|
62,005
| 17,023,830,518
|
IssuesEvent
|
2021-07-03 04:04:20
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Potlatch 1 removed from dropdown
|
Component: website Priority: major Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 8.08pm, Saturday, 13th October 2012]**
If I'm not mistaken, it's still the only way to find deleted ways in an area. So it shouldn't be removed unless this feature is added to PL2.
|
1.0
|
Potlatch 1 removed from dropdown - **[Submitted to the original trac issue database at 8.08pm, Saturday, 13th October 2012]**
If I'm not mistaken, it's still the only way to find deleted ways in an area. So it shouldn't be removed unless this feature is added to PL2.
|
defect
|
potlatch removed from dropdown if i m not mistaken it s still the only way to find deleted ways in an area so it shouldn t be removed unless this feature is added to
| 1
|
26,817
| 4,793,196,110
|
IssuesEvent
|
2016-10-31 17:30:22
|
nelenkov/wwwjdic
|
https://api.github.com/repos/nelenkov/wwwjdic
|
closed
|
it just times out and doesn't work at all
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead? A search result. A
message saying there was an unexpected error and it timed out. Also these
numbers: 130.194.64.145:80
What version of the product are you using? On what operating system? I have an
Android but I don't know which one.
If possible, provide a stack trace (logcat).
As I can only test on HTC Magic and Nexus One, I can't reproduce device
specific issues. Without a stack trace, there is usually nothing I can do.
So do attach logcat output. That or, send me a phone :)
Please provide any additional information below.
```
Original issue reported on code.google.com by `Katwoods...@gmail.com` on 20 Oct 2014 at 4:19
|
1.0
|
it just times out and doesn't work at all - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead? A search result. A
message saying there was an unexpected error and it timed out. Also these
numbers: 130.194.64.145:80
What version of the product are you using? On what operating system? I have an
Android but I don't know which one.
If possible, provide a stack trace (logcat).
As I can only test on HTC Magic and Nexus One, I can't reproduce device
specific issues. Without a stack trace, there is usually nothing I can do.
So do attach logcat output. That or, send me a phone :)
Please provide any additional information below.
```
Original issue reported on code.google.com by `Katwoods...@gmail.com` on 20 Oct 2014 at 4:19
|
defect
|
it just times out and doesn t work at all what steps will reproduce the problem what is the expected output what do you see instead a search result a message saying there was an unexpected error and it timed out also these numbers what version of the product are you using on what operating system i have an android but i don t know which one if possible provide a stack trace logcat as i can only test on htc magic and nexus one i can t reproduce device specific issues without a stack trace there is usually nothing i can do so do attach logcat output that or send me a phone please provide any additional information below original issue reported on code google com by katwoods gmail com on oct at
| 1
|
141,934
| 5,447,153,884
|
IssuesEvent
|
2017-03-07 12:46:40
|
vtyulb/BSA-Analytics
|
https://api.github.com/repos/vtyulb/BSA-Analytics
|
closed
|
New bug
|
High priority
|
In pulsar fourier analytics mode, when loading blocks, the blue arrows jump to the very beginning of the file in almost all modules and beams, skipping even the 1-second harmonic.
|
1.0
|
New bug - In pulsar fourier analytics mode, when loading blocks, the blue arrows jump to the very beginning of the file in almost all modules and beams, skipping even the 1-second harmonic.
|
non_defect
|
new bug in pulsar fourier analytics mode when loading blocks the blue arrows jump to the very beginning of the file in almost all modules and beams skipping even the harmonic
| 0
|
24,639
| 4,053,706,226
|
IssuesEvent
|
2016-05-24 09:35:16
|
octavian-paraschiv/protone-suite
|
https://api.github.com/repos/octavian-paraschiv/protone-suite
|
closed
|
Inconsistency between time displayed in Player / media library and actual media position on some MP3 VBR files.
|
Category-Runtime OS-All Priority-P2 ReportSource-DevQA Resolution-Resolved Type-Defect
|
```
This bug is known since version 1.1.x.
Behavior: on some MP3 VBR files the Media Services report an incorrect time
elapsed for the played file. This time has no relation with the actual position
within the file and can yield to confusion amongst users.
```
Original issue reported on code.google.com by `octavian...@gmail.com` on 18 Apr 2013 at 4:43
|
1.0
|
Inconsistency between time displayed in Player / media library and actual media position on some MP3 VBR files. - ```
This bug is known since version 1.1.x.
Behavior: on some MP3 VBR files the Media Services report an incorrect time
elapsed for the played file. This time has no relation with the actual position
within the file and can yield to confusion amongst users.
```
Original issue reported on code.google.com by `octavian...@gmail.com` on 18 Apr 2013 at 4:43
|
defect
|
inconsistency between time displayed in player media library and actual media position on some vbr files this bug is known since version x behavior on some vbr files the media services report an incorrect time elapsed for the played file this time has no relation with the actual position within the file and can yield to confusion amongst users original issue reported on code google com by octavian gmail com on apr at
| 1
|
28,504
| 12,867,767,847
|
IssuesEvent
|
2020-07-10 07:35:27
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
azurerm_app_service_virtual_network_swift_connection Write Network Config HTTP 500 error
|
service/app-service
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
Terraform version **0.12.28**
AzureRM Provider version **2.17.0**
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_app_service_virtual_network_swift_connection`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
terraform {
backend "azurerm" {
}
}
provider "azurerm" {
version = "=2.17.0"
features {}
}
# Data resource to existing Resource Group
data "azurerm_resource_group" "env-resourcegroup" {
name = "MyResourceGroup"
}
# Data resource to existing vNet
data "azurerm_virtual_network" "vnet" {
name = "vNet-AppServDO-001"
resource_group_name = "MyResourceGroup"
}
# Data resource to existing vNet subnet
data "azurerm_subnet" "vnetsubnet" {
name = "default"
virtual_network_name = data.azurerm_virtual_network.vnet.name
resource_group_name = "MyResourceGroup"
}
# Define App Service Plan
resource "azurerm_app_service_plan" "go-1" {
name = "APPPLAN-AppServDO-Plan01"
location = "WestEurope"
resource_group_name = data.azurerm_resource_group.env-resourcegroup.name
sku {
tier = "PremiumV2"
size = "P1v2"
}
}
# Define WebApp 1
resource "azurerm_app_service" "go-1" {
name = "WA-AppServDO-Site01"
location = "WestEurope"
resource_group_name = data.azurerm_resource_group.env-resourcegroup.name
app_service_plan_id = azurerm_app_service_plan.go-1.id
client_affinity_enabled = "true"
site_config {
scm_type = "VSTSRM"
always_on = "true"
use_32_bit_worker_process = "false"
}
}
resource "azurerm_app_service_virtual_network_swift_connection" "go-1" {
app_service_id = azurerm_app_service.go-1.id
subnet_id = data.azurerm_subnet.vnetsubnet.id
```
### Debug Output
```
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m40s elapsed]
Error: Error creating/updating App Service VNet association between "WA-AppServDO-Site01" (Resource Group "MyResourceGroup") and Virtual Network "vNet-AppServDO-001": web.AppsClient#CreateOrUpdateSwiftVirtualNetworkConnection: Failure responding to request: StatusCode=500 -- Original Error: autorest/azure: Service returned an error. Status=500 Code="" Message="An error has occurred."
on main.tf line 71, in resource "azurerm_app_service_virtual_network_swift_connection" "go-1":
71: resource "azurerm_app_service_virtual_network_swift_connection" "go-1" {
##[error]Error: The process 'C:\hostedtoolcache\windows\terraform\0.12.28\x64\terraform.exe' failed with exit code 1
Finishing: Terraform : apply
```
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
The App Service 'WA-AppServDO-Site01' is added to the subnet named 'default' in the vNet named 'vNet-AppServDO-001'
### Actual Behavior
Apply retries and produces errors inside the Azure Activity Log of the App Service 'WA-AppServDO-Site01'
Example
```
Operation name Write Network Config
Error code InternalServerError
Message An error has occurred.
```
The JSON inside these messages does not illustrate any useful error code, other than the HTTP 500
e.g.
```
"operationName": {
"value": "Microsoft.Web/sites/networkConfig/write",
"localizedValue": "Microsoft.Web/sites/networkConfig/write"
"status": {
"value": "Failed",
"localizedValue": "Failed"
},
"subStatus": {
"value": "InternalServerError",
"localizedValue": "Internal Server Error (HTTP Status Code: 500)"
},
"properties": {
"statusCode": "InternalServerError",
"serviceRequestId": null,
"statusMessage": "{\"Message\":\"An error has occurred.\"}",
"eventCategory": "Administrative"
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Create manually a resource group called "MyResourceGroup" or similar
2. Create the vNet called "vNet-AppServDO-001" or similar, with a "default" subnet (created as standard)
3. `terraform apply`
I have tried West Europe and UK South locations, and different App Service Plan SKUs, but with the same issue.
To work around this issue I am running the following Azure CLI command outside of Terraform:
`az webapp vnet-integration add --name WA-AppServDO-Site01 --resource-group MyResourceGroup --subnet 'default' --vnet vNet-AppServDO-001`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in a Azure China/Germany/Government? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
2.0
|
azurerm_app_service_virtual_network_swift_connection Write Network Config HTTP 500 error - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
Terraform version **0.12.28**
AzureRM Provider version **2.17.0**
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_app_service_virtual_network_swift_connection`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
terraform {
backend "azurerm" {
}
}
provider "azurerm" {
version = "=2.17.0"
features {}
}
# Data resource to existing Resource Group
data "azurerm_resource_group" "env-resourcegroup" {
name = "MyResourceGroup"
}
# Data resource to existing vNet
data "azurerm_virtual_network" "vnet" {
name = "vNet-AppServDO-001"
resource_group_name = "MyResourceGroup"
}
# Data resource to existing vNet subnet
data "azurerm_subnet" "vnetsubnet" {
name = "default"
virtual_network_name = data.azurerm_virtual_network.vnet.name
resource_group_name = "MyResourceGroup"
}
# Define App Service Plan
resource "azurerm_app_service_plan" "go-1" {
name = "APPPLAN-AppServDO-Plan01"
location = "WestEurope"
resource_group_name = data.azurerm_resource_group.env-resourcegroup.name
sku {
tier = "PremiumV2"
size = "P1v2"
}
}
# Define WebApp 1
resource "azurerm_app_service" "go-1" {
name = "WA-AppServDO-Site01"
location = "WestEurope"
resource_group_name = data.azurerm_resource_group.env-resourcegroup.name
app_service_plan_id = azurerm_app_service_plan.go-1.id
client_affinity_enabled = "true"
site_config {
scm_type = "VSTSRM"
always_on = "true"
use_32_bit_worker_process = "false"
}
}
resource "azurerm_app_service_virtual_network_swift_connection" "go-1" {
app_service_id = azurerm_app_service.go-1.id
subnet_id = data.azurerm_subnet.vnetsubnet.id
}
```
### Debug Output
```
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [2m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [3m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [4m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [5m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m40s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [6m50s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m0s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m10s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m20s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m30s elapsed]
azurerm_app_service_virtual_network_swift_connection.go-1: Still creating... [7m40s elapsed]
Error: Error creating/updating App Service VNet association between "WA-AppServDO-Site01" (Resource Group "MyResourceGroup") and Virtual Network "vNet-AppServDO-001": web.AppsClient#CreateOrUpdateSwiftVirtualNetworkConnection: Failure responding to request: StatusCode=500 -- Original Error: autorest/azure: Service returned an error. Status=500 Code="" Message="An error has occurred."
on main.tf line 71, in resource "azurerm_app_service_virtual_network_swift_connection" "go-1":
71: resource "azurerm_app_service_virtual_network_swift_connection" "go-1" {
##[error]Error: The process 'C:\hostedtoolcache\windows\terraform\0.12.28\x64\terraform.exe' failed with exit code 1
Finishing: Terraform : apply
```
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
The App Service 'WA-AppServDO-Site01' is added to the subnet named 'default' in the vNet named 'vNet-AppServDO-001'
### Actual Behavior
Apply retries and produces errors inside the Azure Activity Log of the App Service 'WA-AppServDO-Site01'
Example
```
Operation name Write Network Config
Error code InternalServerError
Message An error has occurred.
```
The JSON inside these messages does not illustrate any useful error code, other than the HTTP 500
e.g.
```
"operationName": {
"value": "Microsoft.Web/sites/networkConfig/write",
"localizedValue": "Microsoft.Web/sites/networkConfig/write"
"status": {
"value": "Failed",
"localizedValue": "Failed"
},
"subStatus": {
"value": "InternalServerError",
"localizedValue": "Internal Server Error (HTTP Status Code: 500)"
},
"properties": {
"statusCode": "InternalServerError",
"serviceRequestId": null,
"statusMessage": "{\"Message\":\"An error has occurred.\"}",
"eventCategory": "Administrative"
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Create manually a resource group called "MyResourceGroup" or similar
2. Create the vNet called "vNet-AppServDO-001" or similar, with a "default" subnet (created as standard)
3. `terraform apply`
I have tried West Europe and UK South locations, and different App Service Plan SKUs, but with the same issue.
To work around this issue I am running the following Azure CLI command outside of Terraform:
`az webapp vnet-integration add --name WA-AppServDO-Site01 --resource-group MyResourceGroup --subnet 'default' --vnet vNet-AppServDO-001`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in a Azure China/Germany/Government? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
non_defect
|
azurerm app service virtual network swift connection write network config http error please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version terraform version azurerm provider version affected resource s azurerm app service virtual network swift connection terraform configuration files hcl terraform backend azurerm provider azurerm version features data resource to existing resource group data azurerm resource group env resourcegroup name myresourcegroup data resource to existing vnet data azurerm virtual network vnet name vnet appservdo resource group name myresourcegroup data resource to existing vnet subnet data azurerm subnet vnetsubnet name default virtual network name data azurerm virtual network vnet name resource group name myresourcegroup define app service plan resource azurerm app service plan go name appplan appservdo location westeurope resource group name data azurerm resource group env resourcegroup name sku tier size define webapp resource azurerm app service go name wa appservdo location westeurope resource group name data azurerm resource group env resourcegroup name app service plan id azurerm app service plan go id client affinity enabled true site config scm type vstsrm always on true use bit worker process false resource azurerm app service virtual network swift connection go app service id azurerm app service go id subnet id data 
azurerm subnet vnetsubnet id debug output azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network 
swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating azurerm app service virtual network swift connection go still creating error error creating updating app service vnet association between wa appservdo resource group myresourcegroup and virtual network vnet appservdo web appsclient createorupdateswiftvirtualnetworkconnection failure responding to request statuscode original error autorest azure service returned an error status code message an error has occurred on main tf line in resource azurerm app service virtual network swift connection go resource azurerm app service virtual network swift connection go error the process c hostedtoolcache windows terraform terraform exe failed with exit code finishing terraform apply panic output expected behavior the app service wa appservdo is added to the subnet named default in the vnet named vnet appservdo actual behavior apply retries and produces errors inside the azure activity log of the app service wa appservdo example operation name write network config error code internalservererror message an error has occurred the json inside these messages does not illustrate any useful error code other than the http e g operationname value microsoft web sites networkconfig write localizedvalue microsoft web sites networkconfig write status value failed localizedvalue failed substatus value internalservererror localizedvalue internal server error http status code properties statuscode internalservererror servicerequestid null statusmessage message an error has occurred eventcategory administrative steps to reproduce create 
manually a resource group called myresourcegroup or similar create the vnet called vnet appservdo or similar with a default subnet created as standard terraform apply i have tried west europe and uk south locations and different app service plan skus but with the same issue to work around this issue i am running the following azure cli command outside of terraform az webapp vnet integration add name wa appservdo resource group myresourcegroup subnet default vnet vnet appservdo important factoids references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here such as vendor documentation
| 0
|
18,969
| 3,113,644,070
|
IssuesEvent
|
2015-09-03 00:56:23
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
outline view first item always has its offset set to zero
|
Analyzer-Server Area-Analyzer Priority-Medium Resolution-Fixed Type-Defect
|
When getting a tree of outline items for a dart file, the first item in the file always has its offset set to 0. We sync the outline view UI to the file selection using this info. This means that whenever the user is in the first ~20 ish lines of a file, we indicate to them that they are in the first item. It often actually starts 10-20 lines below the first line.
|
1.0
|
outline view first item always has its offset set to zero - When getting a tree of outline items for a dart file, the first item in the file always has its offset set to 0. We sync the outline view UI to the file selection using this info. This means that whenever the user is in the first ~20 ish lines of a file, we indicate to them that they are in the first item. It often actually starts 10-20 lines below the first line.
|
defect
|
outline view first item always has its offset set to zero when getting a tree of outline items for a dart file the first item in the file always has its offset set to we sync the outline view ui to the file selection using this info this means that whenever the user is in the first ish lines of a file we indicate to them that they are in the first item it often actually starts lines below the first line
| 1
|
10,819
| 2,622,191,558
|
IssuesEvent
|
2015-03-04 00:23:18
|
byzhang/cudpp
|
https://api.github.com/repos/byzhang/cudpp
|
closed
|
cudpp_testrig on MS Visual studio 9.0
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. installing CUDA 3.2 drivers, toolkit and SDK.
2. compiling cudppsrc1.1.1
3. compiling dua_testrig project in downloaded cudpp files.
What is the expected output? What do you see instead?
On compiling i see an error
1>LINK : fatal error LNK1104: cannot open file 'uiAccess='false' /DEBUG
/PDB:../../bin/x64/Debug/cudpp_testrig_vc90.pdb'
What version of the product are you using? On what operating system?
Mine is 64 bit HP Z800 workstation with a Tesla C1060 card. I am using Windows
7.
Please provide any additional information below.
```
Original issue reported on code.google.com by `itabhiya...@gmail.com` on 29 Oct 2010 at 9:55
|
1.0
|
cudpp_testrig on MS Visual studio 9.0 - ```
What steps will reproduce the problem?
1. installing CUDA 3.2 drivers, toolkit and SDK.
2. compiling cudppsrc1.1.1
3. compiling dua_testrig project in downloaded cudpp files.
What is the expected output? What do you see instead?
On compiling i see an error
1>LINK : fatal error LNK1104: cannot open file 'uiAccess='false' /DEBUG
/PDB:../../bin/x64/Debug/cudpp_testrig_vc90.pdb'
What version of the product are you using? On what operating system?
Mine is 64 bit HP Z800 workstation with a Tesla C1060 card. I am using Windows
7.
Please provide any additional information below.
```
Original issue reported on code.google.com by `itabhiya...@gmail.com` on 29 Oct 2010 at 9:55
|
defect
|
cudpp testrig on ms visual studio what steps will reproduce the problem installing ucda drivers toolkit and sdk compiling compiling dua testrig project in downloaded cudpp files what is the expected output what do you see instead on compiling i see an error link fatal error cannot open file uiaccess false debug pdb bin debug cudpp testrig pdb what version of the product are you using on what operating system mine is bit hp workstation with a tesla card i am using windows please provide any additional information below original issue reported on code google com by itabhiya gmail com on oct at
| 1
|
177,737
| 29,046,893,407
|
IssuesEvent
|
2023-05-13 17:25:43
|
lichtmetzger/iwar-theme
|
https://api.github.com/repos/lichtmetzger/iwar-theme
|
closed
|
Come up with a basic design
|
design
|
Before I can do anything useful in Bootstrap, I need a good design template.
That means:
- Defining colors (primary, secondary, highlight)
- Font Family (probably "Uni Neue", because it looks Sci-Fi and I already have a license for that)
- Buttons
- Main Menu
- Site Width
- etc...
|
1.0
|
Come up with a basic design - Before I can do anything useful in Bootstrap, I need a good design template.
That means:
- Defining colors (primary, secondary, highlight)
- Font Family (probably "Uni Neue", because it looks Sci-Fi and I already have a license for that)
- Buttons
- Main Menu
- Site Width
- etc...
|
non_defect
|
come up with a basic design before i can do anything useful in bootstrap i need a good design template that means defining colors primary secondary highlight font family probably uni neue because it looks sci fi and i already have a license for that buttons main menu site width etc
| 0
|
101,320
| 16,503,219,842
|
IssuesEvent
|
2021-05-25 16:16:41
|
idonthaveafifaaddiction/automerge-conflict-detector-plugin
|
https://api.github.com/repos/idonthaveafifaaddiction/automerge-conflict-detector-plugin
|
opened
|
CVE-2020-36181 (High) detected in jackson-databind-2.9.10.5.jar
|
security vulnerability
|
## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: automerge-conflict-detector-plugin/pom.xml</p>
<p>Path to vulnerable library: 20210525161335_CXNUSD/downloadResource_DAPBMU/20210525161438/jackson-databind-2.9.10.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.10.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/idonthaveafifaaddiction/automerge-conflict-detector-plugin/commit/b54fd03fd52baedb4e95e356419a51cd677014b1">b54fd03fd52baedb4e95e356419a51cd677014b1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10.5","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.10.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36181","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-36181 (High) detected in jackson-databind-2.9.10.5.jar - ## CVE-2020-36181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: automerge-conflict-detector-plugin/pom.xml</p>
<p>Path to vulnerable library: 20210525161335_CXNUSD/downloadResource_DAPBMU/20210525161438/jackson-databind-2.9.10.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.10.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/idonthaveafifaaddiction/automerge-conflict-detector-plugin/commit/b54fd03fd52baedb4e95e356419a51cd677014b1">b54fd03fd52baedb4e95e356419a51cd677014b1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/3004">https://github.com/FasterXML/jackson-databind/issues/3004</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.10.5","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.10.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36181","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file automerge conflict detector plugin pom xml path to vulnerable library cxnusd downloadresource dapbmu jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds vulnerabilityurl
| 0
|
37,577
| 8,318,914,332
|
IssuesEvent
|
2018-09-25 15:47:12
|
IATI/IATI-Codelists-NonEmbedded
|
https://api.github.com/repos/IATI/IATI-Codelists-NonEmbedded
|
closed
|
ResultVocabulary: Add Codes
|
Additional code complete enhancement
|
**NOTE:** This Codelist does not currently exist. There is a separate issue (#183) to make it be a thing.
Make the following changes to the ResultVocabulary Codelist:
**Add Code:**
- [x] Code: `99`
- [x] Name: `Reporting Organisation`
|
1.0
|
ResultVocabulary: Add Codes - **NOTE:** This Codelist does not currently exist. There is a separate issue (#183) to make it be a thing.
Make the following changes to the ResultVocabulary Codelist:
**Add Code:**
- [x] Code: `99`
- [x] Name: `Reporting Organisation`
|
non_defect
|
resultvocabulary add codes note this codelist does not currently exist there is a separate issue to make it be a thing make the following changes to the resultvocabulary codelist add code code name reporting organisation
| 0
|
24,679
| 4,073,689,634
|
IssuesEvent
|
2016-05-28 00:01:21
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Boolean.FalseString and Boolean.TrueString give undefined
|
defect
|
### Expected
`False`
`True`
### Actual
`undefined`
`undefined`
### Steps To Reproduce
Example ([Live](http://live.bridge.net/#18e7ca06740ab917f37d55a751e2d524)):
```csharp
public class App
{
[Ready]
public static void Main()
{
Console.WriteLine(bool.FalseString);
Console.WriteLine(bool.TrueString);
}
}
```
JavaScript output:
```javascript
console.log(Boolean.falseString);
console.log(Boolean.trueString);
```
|
1.0
|
Boolean.FalseString and Boolean.TrueString give undefined - ### Expected
`False`
`True`
### Actual
`undefined`
`undefined`
### Steps To Reproduce
Example ([Live](http://live.bridge.net/#18e7ca06740ab917f37d55a751e2d524)):
```csharp
public class App
{
[Ready]
public static void Main()
{
Console.WriteLine(bool.FalseString);
Console.WriteLine(bool.TrueString);
}
}
```
JavaScript output:
```javascript
console.log(Boolean.falseString);
console.log(Boolean.trueString);
```
|
defect
|
boolean falsestring and boolean truestring give undefined expected false true actual undefined undefined steps to reproduce example csharp public class app public static void main console writeline bool falsestring console writeline bool truestring javascript output javascript console log boolean falsestring console log boolean truestring
| 1
|
59,290
| 17,019,334,593
|
IssuesEvent
|
2021-07-02 16:20:02
|
disruptek/cps
|
https://api.github.com/repos/disruptek/cps
|
closed
|
arc/orc leak?
|
nim compiler defect
|
In the little program below two continuations are created, both put on a little work queue. One by doing a dequeue push directly, the other by calling a proc that does exactly that.
One leaks, the other does not.
```nim
import cps
import std/[deques]
# Basic CPS stuff
type
C* = ref object of Continuation
evq*: Evq
Evq* = ref object
work*: Deque[C]
# An object type with destructor
type
ThingObj = object
name: string
Thing = ref ThingObj
proc `=destroy`(t: var ThingObj) =
echo "destroyed ", t.name
# Push proc. Part of the problem? Change this proc into a template and the issue goes away.
proc push*(evq: Evq, c: C) =
evq.work.addLast c
# Happy little CPS proc creating a Thing
proc flop(name: string) {.cps:C.} =
echo "make ", name
var t = Thing(name: name)
echo "done ", name
# Some event queue
var myevq = Evq()
# This thing will leak
myevq.push whelp flop("one")
# But this thing will not
myevq.work.addLast whelp flop("two")
# Pump my queue
while true:
while myevq.work.len > 0:
discard trampoline(myevq.work.popFirst)
```
Expected output:
```
make one
done one
destroyed one
make two
done two
destroyed two
```
Actual output:
```
make one
done one
make two
done two
destroyed two
```
|
1.0
|
arc/orc leak? - In the little program below two continuations are created, both put on a little work queue. One by doing a dequeue push directly, the other by calling a proc that does exactly that.
One leaks, the other does not.
```nim
import cps
import std/[deques]
# Basic CPS stuff
type
C* = ref object of Continuation
evq*: Evq
Evq* = ref object
work*: Deque[C]
# An object type with destructor
type
ThingObj = object
name: string
Thing = ref ThingObj
proc `=destroy`(t: var ThingObj) =
echo "destroyed ", t.name
# Push proc. Part of the problem? Change this proc into a template and the issue goes away.
proc push*(evq: Evq, c: C) =
evq.work.addLast c
# Happy little CPS proc creating a Thing
proc flop(name: string) {.cps:C.} =
echo "make ", name
var t = Thing(name: name)
echo "done ", name
# Some event queue
var myevq = Evq()
# This thing will leak
myevq.push whelp flop("one")
# But this thing will not
myevq.work.addLast whelp flop("two")
# Pump my queue
while true:
while myevq.work.len > 0:
discard trampoline(myevq.work.popFirst)
```
Expected output:
```
make one
done one
destroyed one
make two
done two
destroyed two
```
Actual output:
```
make one
done one
make two
done two
destroyed two
```
|
defect
|
arc orc leak in the little program below two continuations are created both put on a little work queue one by doing a dequeue push directly the other by calling a proc that does exactly that one leaks the other does not nim import cps import std basic cps stuff type c ref object of continuation evq evq evq ref object work deque an object type with destructor type thingobj object name string thing ref thingobj proc destroy t var thingobj echo destroyed t name push proc part of the problem change this proc into a template an the issue goes away proc push evq evq c c evq work addlast c happy little cps proc creating a thing proc flop name string cps c echo make name var t thing name name echo done name some event queue var myevq evq this thing will leak myevq push whelp flop one but this thing will not myevq work addlast whelp flop two pump my queue while true while myevq work len discard trampoline myevq work popfirst expected output make one done one destroyed one make two done two destroyed two actual output make one done one make two done two destroyed two
| 1
|
70,053
| 22,827,245,373
|
IssuesEvent
|
2022-07-12 09:39:57
|
twisted/twisted
|
https://api.github.com/repos/twisted/twisted
|
closed
|
Broken link to deliverBody in web client howto
|
defect priority-normal new web documentation easy
|
|[<img alt="jesstess's avatar" src="https://avatars.githubusercontent.com/u/188336?s=50" width="50" height="50">](https://github.com/jesstess)| @jesstess reported|
|-|-|
|Trac ID|trac#5681|
|Type|defect|
|Created|2012-05-17 03:11:22Z|
The API link to http://twistedmatrix.com/documents/12.0.0/api/twisted.web.client.Response.deliverBody.html in the Receiving Responses section of http://twistedmatrix.com/documents/current/web/howto/client.html is a 404.
I'd expect that to work, but apparently http://twistedmatrix.com/documents/current/api/twisted.web.client.Response.html#deliverBody is the correct reference.
Attachments:
* [ticket-5681.patch](https://raw.githubusercontent.com/twisted/twistedmatrix.com-trac-attachments/trunk/ticket/18a/18acb3b42033486c04115500cc799cb48c7bfc05/84c3d416aa1506a6c5e5442f6c8fd193b7232188.patch) (1144 bytes) - added by hellojamieshin on 2013-04-24 11:55:24Z -
<details><summary>Searchable metadata</summary>
```
trac-id__5681 5681
type__defect defect
reporter__jesstess jesstess
priority__normal normal
milestone__
branch__
branch_author__
status__new new
resolution__None None
component__web web
keywords__documentation_easy documentation easy
time__1337224282000000 1337224282000000
changetime__1367241131000000 1367241131000000
version__None None
owner__hellojamieshin hellojamieshin
cc__jknight cc__mwhudson
```
</details>
|
1.0
|
Broken link to deliverBody in web client howto - |[<img alt="jesstess's avatar" src="https://avatars.githubusercontent.com/u/188336?s=50" width="50" height="50">](https://github.com/jesstess)| @jesstess reported|
|-|-|
|Trac ID|trac#5681|
|Type|defect|
|Created|2012-05-17 03:11:22Z|
The API link to http://twistedmatrix.com/documents/12.0.0/api/twisted.web.client.Response.deliverBody.html in the Receiving Responses section of http://twistedmatrix.com/documents/current/web/howto/client.html is a 404.
I'd expect that to work, but apparently http://twistedmatrix.com/documents/current/api/twisted.web.client.Response.html#deliverBody is the correct reference.
Attachments:
* [ticket-5681.patch](https://raw.githubusercontent.com/twisted/twistedmatrix.com-trac-attachments/trunk/ticket/18a/18acb3b42033486c04115500cc799cb48c7bfc05/84c3d416aa1506a6c5e5442f6c8fd193b7232188.patch) (1144 bytes) - added by hellojamieshin on 2013-04-24 11:55:24Z -
<details><summary>Searchable metadata</summary>
```
trac-id__5681 5681
type__defect defect
reporter__jesstess jesstess
priority__normal normal
milestone__
branch__
branch_author__
status__new new
resolution__None None
component__web web
keywords__documentation_easy documentation easy
time__1337224282000000 1337224282000000
changetime__1367241131000000 1367241131000000
version__None None
owner__hellojamieshin hellojamieshin
cc__jknight cc__mwhudson
```
</details>
|
defect
|
broken link to deliverbody in web client howto jesstess reported trac id trac type defect created the api link to in the receiving responses section of is a i d expect that to work but apparently is the correct reference attachments bytes added by hellojamieshin on searchable metadata trac id type defect defect reporter jesstess jesstess priority normal normal milestone branch branch author status new new resolution none none component web web keywords documentation easy documentation easy time changetime version none none owner hellojamieshin hellojamieshin cc jknight cc mwhudson
| 1
|
300,957
| 22,706,526,583
|
IssuesEvent
|
2022-07-05 15:04:40
|
hashgraph/guardian
|
https://api.github.com/repos/hashgraph/guardian
|
closed
|
Defining the process of Linting Rules.
|
documentation technical task
|
### Problem description
Before making a release, we need to have code stability without bugs and empty functions etc.
### Requirements
To get rid of code errors and achieve Code quality, we need to implement Linting rules. Hence, we will be defining the process of implementing linting rules in our code.
### Definition of done
defining and implementing linting rules in a correct manner.
### Acceptance criteria
Achieving High code quality.
|
1.0
|
Defining the process of Linting Rules. - ### Problem description
Before making a release, we need to have code stability without bugs and empty functions etc.
### Requirements
To get rid of code errors and achieve Code quality, we need to implement Linting rules. Hence, we will be defining the process of implementing linting rules in our code.
### Definition of done
defining and implementing linting rules in a correct manner.
### Acceptance criteria
Achieving High code quality.
|
non_defect
|
defining the process of linting rules problem description before making a release we need to have code stability without bugs and empty functions etc requirements to get rid of code errors and achieve code quality we need to implement linting rules hence we will be defining the process of implementing linting rules in our code definition of done defining and implementing linting rules in a correct manner acceptance criteria achieving high code quality
| 0
|
340,481
| 10,272,752,115
|
IssuesEvent
|
2019-08-23 17:19:51
|
eclipse/codewind
|
https://api.github.com/repos/eclipse/codewind
|
closed
|
Update Codewind on Kube install docs for the Tekton integration
|
area/docs area/portal kind/bug priority/hot
|
The Tekton integration in PFE that was recently delivered introduced new setup steps if installing Codewind on Che:
```
oc apply -f tekton/codewind-che-plugin/setup/install_che/codewind-tektonrole.yaml
oc apply -f tekton/codewind-che-plugin/setup/install_che/codewind-tektonbinding.yaml
```
Our manual install docs on https://www.eclipse.org/codewind/installoncloud.html need to be updated to reflect that.
|
1.0
|
Update Codewind on Kube install docs for the Tekton integration - The Tekton integration in PFE that was recently delivered introduced new setup steps if installing Codewind on Che:
```
oc apply -f tekton/codewind-che-plugin/setup/install_che/codewind-tektonrole.yaml
oc apply -f tekton/codewind-che-plugin/setup/install_che/codewind-tektonbinding.yaml
```
Our manual install docs on https://www.eclipse.org/codewind/installoncloud.html need to be updated to reflect that.
|
non_defect
|
update codewind on kube install docs for the tekton integration the tekton integration in pfe that was recently delivered introduced new setup steps if installing codewind on che oc apply f tekton codewind che plugin setup install che codewind tektonrole yaml oc apply f tekton codewind che plugin setup install che codewind tektonbinding yaml our manual install docs on need to be updated to reflect that
| 0
|
38,593
| 8,922,133,510
|
IssuesEvent
|
2019-01-21 12:05:22
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Cast is not applied when it should be
|
C: Functionality E: All Editions P: Medium T: Defect
|
### Expected behavior and actual behavior:
The documentation of Field.div states:
> If this is a numeric field, then the result is a number of the same type as this field.
Meaning that `Field<Integer>.div(Field<Double>)` results in an integer. However, PostgreSQL disagrees:
```
postgres=# select 5::integer / 2::double precision;
?column?
--------------------
2.5000000000000000
(1 row)
```
Casting the result to Integer, in order to pass it to a function that requires an Integer then fails, since jooq will eat the cast, as it thinks the result is already Integer.
### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve):
```
SelectSelectStep<Record1<Integer>> select = dslContext.select(DSL.val(7).div(2.0).cast(SQLDataType.INTEGER).mod(2));
Record1<Integer> fetchAny = select.fetchAny();
```
Should work, but doesn't because:
`org.jooq.exception.DataAccessException: SQL [select mod((? / ?), ?)]; ERROR: function mod(double precision, integer) does not exist
`
Note that the cast is gone from the SQL.
### Versions:
- jOOQ: 3.11.7
- Java: any
- Database (include vendor): PostgreSQL 9.5
- OS: any
- JDBC Driver (include name if inofficial driver): postgresql-9.4.1212
|
1.0
|
Cast is not applied when it should be - ### Expected behavior and actual behavior:
The documentation of Field.div states:
> If this is a numeric field, then the result is a number of the same type as this field.
Meaning that `Field<Integer>.div(Field<Double>)` results in an integer. However, PostgreSQL disagrees:
```
postgres=# select 5::integer / 2::double precision;
?column?
--------------------
2.5000000000000000
(1 row)
```
Casting the result to Integer, in order to pass it to a function that requires an Integer then fails, since jooq will eat the cast, as it thinks the result is already Integer.
### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve):
```
SelectSelectStep<Record1<Integer>> select = dslContext.select(DSL.val(7).div(2.0).cast(SQLDataType.INTEGER).mod(2));
Record1<Integer> fetchAny = select.fetchAny();
```
Should work, but doesn't because:
`org.jooq.exception.DataAccessException: SQL [select mod((? / ?), ?)]; ERROR: function mod(double precision, integer) does not exist
`
Note that the cast is gone from the SQL.
### Versions:
- jOOQ: 3.11.7
- Java: any
- Database (include vendor): PostgreSQL 9.5
- OS: any
- JDBC Driver (include name if inofficial driver): postgresql-9.4.1212
|
defect
|
cast is not applied when it should be expected behavior and actual behavior the documentation of field div states if this is a numeric field then the result is a number of the same type as this field meaning that field div field results in an integer however postgresql disagrees postgres select integer double precision column row casting the result to integer in order to pass it to a function that requires an integer then fails since jooq will eat the cast as it thinks the result is already integer steps to reproduce the problem if possible create an mcve selectselectstep select dslcontext select dsl val div cast sqldatatype integer mod fetchany select fetchany should work but doesn t because org jooq exception dataaccessexception sql error function mod double precision integer does not exist note that the cast is gone from the sql versions jooq java any database include vendor postgresql os any jdbc driver include name if inofficial driver postgresql
| 1
|
8,003
| 2,611,071,632
|
IssuesEvent
|
2015-02-27 00:33:25
|
alistairreilly/andors-trail
|
https://api.github.com/repos/alistairreilly/andors-trail
|
closed
|
Updated polish translation
|
auto-migrated Type-Defect
|
```
new version of strings_about.xml
```
Original issue reported on code.google.com by `daniels....@gmail.com` on 21 Dec 2013 at 2:53
Attachments:
* [strings_about.xml](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-355/comment-0/strings_about.xml)
|
1.0
|
Updated polish translation - ```
new version of strings_about.xml
```
Original issue reported on code.google.com by `daniels....@gmail.com` on 21 Dec 2013 at 2:53
Attachments:
* [strings_about.xml](https://storage.googleapis.com/google-code-attachments/andors-trail/issue-355/comment-0/strings_about.xml)
|
defect
|
updated polish translation new version of strings about xml original issue reported on code google com by daniels gmail com on dec at attachments
| 1
|
536
| 2,564,147,479
|
IssuesEvent
|
2015-02-06 17:49:39
|
scientia-est-potentia/tickachu
|
https://api.github.com/repos/scientia-est-potentia/tickachu
|
closed
|
Change step 7b in checkout when status is not 4 or 5 (paid / sent)
|
!must do .Defect @checkout
|
Occurs when an order is deleted / expires; it still shows paid / to pay info.
|
1.0
|
Change step 7b in checkout when status is not 4 or 5 (paid / sent) - Occurs when an order is deleted / expires; it still shows paid / to pay info.
|
defect
|
change step in checkout when status is not or paid sent occurs when an order is deleted expires it still shows paid to pay info
| 1
|
268,124
| 23,346,820,894
|
IssuesEvent
|
2022-08-09 18:49:06
|
blrrryface/blrrryface.github.io
|
https://api.github.com/repos/blrrryface/blrrryface.github.io
|
opened
|
关于双指针的一些小思考 | Blrrryface
|
Gitalk /post/test/
|
https://blrrryface.github.io/post/test/
test 会尽快到 萨达很快就副科级放点绿色卡接发的数据库浪费的时间弗兰克圣诞节里看风景里看风景时大连科技烦死了大姐夫看来都是荆防颗粒的设计费看垃圾毒素浪费就是大佬积分萨迪克了房间里萨克的就
|
1.0
|
关于双指针的一些小思考 | Blrrryface - https://blrrryface.github.io/post/test/
test 会尽快到 萨达很快就副科级放点绿色卡接发的数据库浪费的时间弗兰克圣诞节里看风景里看风景时大连科技烦死了大姐夫看来都是荆防颗粒的设计费看垃圾毒素浪费就是大佬积分萨迪克了房间里萨克的就
|
non_defect
|
关于双指针的一些小思考 blrrryface test 会尽快到 萨达很快就副科级放点绿色卡接发的数据库浪费的时间弗兰克圣诞节里看风景里看风景时大连科技烦死了大姐夫看来都是荆防颗粒的设计费看垃圾毒素浪费就是大佬积分萨迪克了房间里萨克的就
| 0
|
129,937
| 27,593,291,176
|
IssuesEvent
|
2023-03-09 03:05:28
|
alibaba/nacos
|
https://api.github.com/repos/alibaba/nacos
|
closed
|
ServerLoaderController注释有误
|
contribution welcome kind/code quality
|
## Issue Description
如下所示,/nacos/v2/core/reloadClient接口的注释有误,对应的方法为com.alibaba.nacos.core.controller.ServerLoaderController#reloadClient,该接口用于重新平衡server端的连接数,但注释的意思是获得当前server端的状态:
```java
/**
* Get server state of current server.
*
* @return state json.
*/
@Secured(resource = Commons.NACOS_CORE_CONTEXT_V2 + "/loader", action = ActionTypes.WRITE)
@GetMapping("/reloadClient")
public ResponseEntity<String> reloadSingle(@RequestParam String connectionId,
@RequestParam(value = "redirectAddress", required = false) String redirectAddress) {
connectionManager.loadSingle(connectionId, redirectAddress);
return ResponseEntity.ok().body("success");
}
```
|
1.0
|
ServerLoaderController注释有误 - ## Issue Description
如下所示,/nacos/v2/core/reloadClient接口的注释有误,对应的方法为com.alibaba.nacos.core.controller.ServerLoaderController#reloadClient,该接口用于重新平衡server端的连接数,但注释的意思是获得当前server端的状态:
```java
/**
* Get server state of current server.
*
* @return state json.
*/
@Secured(resource = Commons.NACOS_CORE_CONTEXT_V2 + "/loader", action = ActionTypes.WRITE)
@GetMapping("/reloadClient")
public ResponseEntity<String> reloadSingle(@RequestParam String connectionId,
@RequestParam(value = "redirectAddress", required = false) String redirectAddress) {
connectionManager.loadSingle(connectionId, redirectAddress);
return ResponseEntity.ok().body("success");
}
```
|
non_defect
|
serverloadercontroller注释有误 issue description 如下所示, nacos core reloadclient接口的注释有误,对应的方法为com alibaba nacos core controller serverloadercontroller reloadclient,该接口用于重新平衡server端的连接数,但注释的意思是获得当前server端的状态: java get server state of current server return state json secured resource commons nacos core context loader action actiontypes write getmapping reloadclient public responseentity reloadsingle requestparam string connectionid requestparam value redirectaddress required false string redirectaddress connectionmanager loadsingle connectionid redirectaddress return responseentity ok body success
| 0
|
41,317
| 21,630,538,958
|
IssuesEvent
|
2022-05-05 09:15:17
|
DaveTCode/GBADotnet
|
https://api.github.com/repos/DaveTCode/GBADotnet
|
opened
|
Move whatever parts of DMA can be onto the scheduler
|
performance
|
Ticking the DMA controller every cycle is a really significant performance hit for the rest of the application. It must be possible to move at least the startup delay onto the scheduler and it might be possible to offload more of it as well
|
True
|
Move whatever parts of DMA can be onto the scheduler - Ticking the DMA controller every cycle is a really significant performance hit for the rest of the application. It must be possible to move at least the startup delay onto the scheduler and it might be possible to offload more of it as well
|
non_defect
|
move whatever parts of dma can be onto the scheduler ticking the dma controller every cycle is a really significant performance hit for the rest of the application it must be possible to move at least the startup delay onto the scheduler and it might be possible to offload more of it as well
| 0
|
12,982
| 2,732,346,283
|
IssuesEvent
|
2015-04-17 04:48:05
|
rasmus/fast-member
|
https://api.github.com/repos/rasmus/fast-member
|
closed
|
TypeAccessor supports private access but MemberSet is locked to public members
|
auto-migrated Priority-Medium Type-Defect
|
```
Fix is to perform the same check for private types when building a MemberSet
```
Original issue reported on code.google.com by `daniel.crenna` on 14 Jan 2015 at 2:31
|
1.0
|
TypeAccessor supports private access but MemberSet is locked to public members - ```
Fix is to perform the same check for private types when building a MemberSet
```
Original issue reported on code.google.com by `daniel.crenna` on 14 Jan 2015 at 2:31
|
defect
|
typeaccessor supports private access but memberset is locked to public members fix is to perform the same check for private types when building a memberset original issue reported on code google com by daniel crenna on jan at
| 1
|
5,692
| 2,610,193,763
|
IssuesEvent
|
2015-02-26 19:01:11
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
介绍去色斑的小妙招
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
寂静的夜,静的似酒令人微醺,都说:夜晚可以使许多人变��
�,诗人,智者,哲学家。我喜欢夜,但变不了诗人,哲学家�
��可思维总跟哲学有关。那一刻,坐在夜色的深处,寂静的怀
里,一种简单的形式,看书写字或冥想,我便成了自己,一��
�真正的自己。那一刻,仿佛有无数双天使的羽翼温柔地呵护�
��我。我可以自由地穿梭于古今,可以任爱恨悲欢汹涌而来;
可以只倾情于一片落叶,一只蚂蚁。可以拥有一个独立的精��
�世界。此刻的夜,不浅薄,此刻的夜,很深沉。此刻,宇宙�
��像一位穿着黑袍神秘的父亲,而我是他多梦的孩子。梦里的
我皮肤光滑,没有烦人的色斑,肌肤像婴儿一样!去色斑的��
�妙招,
《客户案例》
去除黄褐斑的简单方法,
现在社会生活压力大,责任已经不仅仅是男人的,女人也要��
�担家庭责任,我就是这样。每天除了忙于工作还要照顾好家�
��,就很少有时间关注自己的身体状况和皮肤情况了,有一段
时间我的月经开始不正常了,在这期间不知不觉我脸上竟然��
�了很多很多的色斑,而且是大片大片的,说真的我开始害怕�
��,不知道自己出了什么问题,是不是身体出了什么毛病呢?</
br>
电视上的,商场里的祛斑产品我基本上都用过,但是效��
�却怎么也看不到,而且这斑一直坚持了好几年还是没有去掉�
��国外的一些大牌的护肤品也用了很多,可是护肤效果还不错
,但是祛斑的效果却怎么都看不到,而且我的斑一直不断的��
�多,满脸都是,逛街我都不敢去了。到底有没有有效的祛斑�
��品,我无数次的问自己。但是每次祛斑失败后我都告诉自己
,要坚持,放弃就是失败。虽然皮肤越来越差但是我依然没��
�放弃。后来,我的一个朋友从上海回来找我玩,知道我一直�
��斑烦恼后,向我推荐了黛芙薇尔。我一直很相信我的这个朋
友,就毫不犹豫的买了三个周期的黛芙薇尔。</br>
用了一个月,感觉效果还不错,斑点虽然淡化的不怎么��
�显,不过皮肤明显的变白了很多。后来就一次性买了两个周�
��的,坚持使用完三个周期的,脸上的黄褐斑不仅淡化的看不
见了,皮肤也变好了很多呢。现在走在大街上,自信满满,��
�情也变得格外的好!怎样才可以祛黄褐斑,去黄褐斑的好方法
。
阅读了去色斑的小妙招,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
去色斑的小妙招,同时为您分享祛斑小方法
生活中尽量“隔热”,夏日外出打太阳伞、戴遮阳帽,做完��
�后清洗面部和手臂,尤其注意清洗被热油溅到的部位,烫油�
��造成永久性的黄褐斑,对中老年人尤为重要,所以应立即用
凉水冲洗干净。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:46
|
1.0
|
介绍去色斑的小妙招 - ```
《摘要》
寂静的夜,静的似酒令人微醺,都说:夜晚可以使许多人变��
�,诗人,智者,哲学家。我喜欢夜,但变不了诗人,哲学家�
��可思维总跟哲学有关。那一刻,坐在夜色的深处,寂静的怀
里,一种简单的形式,看书写字或冥想,我便成了自己,一��
�真正的自己。那一刻,仿佛有无数双天使的羽翼温柔地呵护�
��我。我可以自由地穿梭于古今,可以任爱恨悲欢汹涌而来;
可以只倾情于一片落叶,一只蚂蚁。可以拥有一个独立的精��
�世界。此刻的夜,不浅薄,此刻的夜,很深沉。此刻,宇宙�
��像一位穿着黑袍神秘的父亲,而我是他多梦的孩子。梦里的
我皮肤光滑,没有烦人的色斑,肌肤像婴儿一样!去色斑的��
�妙招,
《客户案例》
去除黄褐斑的简单方法,
现在社会生活压力大,责任已经不仅仅是男人的,女人也要��
�担家庭责任,我就是这样。每天除了忙于工作还要照顾好家�
��,就很少有时间关注自己的身体状况和皮肤情况了,有一段
时间我的月经开始不正常了,在这期间不知不觉我脸上竟然��
�了很多很多的色斑,而且是大片大片的,说真的我开始害怕�
��,不知道自己出了什么问题,是不是身体出了什么毛病呢?</
br>
电视上的,商场里的祛斑产品我基本上都用过,但是效��
�却怎么也看不到,而且这斑一直坚持了好几年还是没有去掉�
��国外的一些大牌的护肤品也用了很多,可是护肤效果还不错
,但是祛斑的效果却怎么都看不到,而且我的斑一直不断的��
�多,满脸都是,逛街我都不敢去了。到底有没有有效的祛斑�
��品,我无数次的问自己。但是每次祛斑失败后我都告诉自己
,要坚持,放弃就是失败。虽然皮肤越来越差但是我依然没��
�放弃。后来,我的一个朋友从上海回来找我玩,知道我一直�
��斑烦恼后,向我推荐了黛芙薇尔。我一直很相信我的这个朋
友,就毫不犹豫的买了三个周期的黛芙薇尔。</br>
用了一个月,感觉效果还不错,斑点虽然淡化的不怎么��
�显,不过皮肤明显的变白了很多。后来就一次性买了两个周�
��的,坚持使用完三个周期的,脸上的黄褐斑不仅淡化的看不
见了,皮肤也变好了很多呢。现在走在大街上,自信满满,��
�情也变得格外的好!怎样才可以祛黄褐斑,去黄褐斑的好方法
。
阅读了去色斑的小妙招,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
去色斑的小妙招,同时为您分享祛斑小方法
生活中尽量“隔热”,夏日外出打太阳伞、戴遮阳帽,做完��
�后清洗面部和手臂,尤其注意清洗被热油溅到的部位,烫油�
��造成永久性的黄褐斑,对中老年人尤为重要,所以应立即用
凉水冲洗干净。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:46
|
defect
|
介绍去色斑的小妙招 《摘要》 寂静的夜,静的似酒令人微醺,都说:夜晚可以使许多人变�� �,诗人,智者,哲学家。我喜欢夜,但变不了诗人,哲学家� ��可思维总跟哲学有关。那一刻,坐在夜色的深处,寂静的怀 里,一种简单的形式,看书写字或冥想,我便成了自己,一�� �真正的自己。那一刻,仿佛有无数双天使的羽翼温柔地呵护� ��我。我可以自由地穿梭于古今,可以任爱恨悲欢汹涌而来; 可以只倾情于一片落叶,一只蚂蚁。可以拥有一个独立的精�� �世界。此刻的夜,不浅薄,此刻的夜,很深沉。此刻,宇宙� ��像一位穿着黑袍神秘的父亲,而我是他多梦的孩子。梦里的 我皮肤光滑,没有烦人的色斑,肌肤像婴儿一样!去色斑的�� �妙招, 《客户案例》 去除黄褐斑的简单方法 现在社会生活压力大,责任已经不仅仅是男人的,女人也要�� �担家庭责任,我就是这样。每天除了忙于工作还要照顾好家� ��,就很少有时间关注自己的身体状况和皮肤情况了,有一段 时间我的月经开始不正常了,在这期间不知不觉我脸上竟然�� �了很多很多的色斑,而且是大片大片的,说真的我开始害怕� ��,不知道自己出了什么问题,是不是身体出了什么毛病呢 br 电视上的,商场里的祛斑产品我基本上都用过,但是效�� �却怎么也看不到,而且这斑一直坚持了好几年还是没有去掉� ��国外的一些大牌的护肤品也用了很多,可是护肤效果还不错 ,但是祛斑的效果却怎么都看不到,而且我的斑一直不断的�� �多,满脸都是,逛街我都不敢去了。到底有没有有效的祛斑� ��品,我无数次的问自己。但是每次祛斑失败后我都告诉自己 ,要坚持,放弃就是失败。虽然皮肤越来越差但是我依然没�� �放弃。后来,我的一个朋友从上海回来找我玩,知道我一直� ��斑烦恼后,向我推荐了黛芙薇尔。我一直很相信我的这个朋 友,就毫不犹豫的买了三个周期的黛芙薇尔。 用了一个月,感觉效果还不错,斑点虽然淡化的不怎么�� �显,不过皮肤明显的变白了很多。后来就一次性买了两个周� ��的,坚持使用完三个周期的,脸上的黄褐斑不仅淡化的看不 见了,皮肤也变好了很多呢。现在走在大街上,自信满满,�� �情也变得格外的好 怎样才可以祛黄褐斑,去黄褐斑的好方法 。 阅读了去色斑的小妙招,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� 
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 去色斑的小妙招,同时为您分享祛斑小方法 生活中尽量“隔热”,夏日外出打太阳伞、戴遮阳帽,做完�� �后清洗面部和手臂,尤其注意清洗被热油溅到的部位,烫油� ��造成永久性的黄褐斑,对中老年人尤为重要,所以应立即用 凉水冲洗干净。 original issue reported on code google com by additive gmail com on jul at
| 1
|
48,189
| 13,067,507,020
|
IssuesEvent
|
2020-07-31 00:40:57
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
filterscript trunk: Error: »const class OMKey« has no member named »IsIceTop« (Trac #1930)
|
Migrated from Trac combo reconstruction defect
|
Hi,
while building the current trunk I get this:
```text
[ 68%] Built target filter-tools
[ 68%] Built target tensor-of-inertia
[ 68%] Built target gulliver-bootstrap
[ 70%] Built target portia
[ 70%] Building CXX object filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o
/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx: In Elementfunktion »bool I3CosmicRayFilter_13::KeepEvent(I3Frame&)«:
/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx:59:16: Fehler: »const class OMKey« has no member named »IsIceTop«
if(omKey.IsIceTop()){
^~~~~~~~
make[2]: *** [filterscripts/CMakeFiles/filterscripts.dir/build.make:111: filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o] Fehler 1
make[1]: *** [CMakeFiles/Makefile2:7382: filterscripts/CMakeFiles/filterscripts.dir/all] Fehler 2
make: *** [Makefile:128: all] Fehler 2
```
Svn Info:
```text
[flauber@Shion src]$ svn info
Pfad: .
Wurzelpfad der Arbeitskopie: /home/flauber/IceCube/icerec_trunk/src
URL: http://code.icecube.wisc.edu/svn/meta-projects/icerec/trunk
Relative URL: ^/meta-projects/icerec/trunk
Basis des Projektarchivs: http://code.icecube.wisc.edu/svn
UUID des Projektarchivs: 16731396-06f5-0310-8873-f7f720988828
Revision: 152558
Knotentyp: Verzeichnis
Plan: normal
Letzter Autor: nega
Letzte geänderte Rev: 151396
Letztes Änderungsdatum: 2016-11-09 15:23:57 +0100 (Mi, 09. Nov 2016)
```
Migrated from https://code.icecube.wisc.edu/ticket/1930
```json
{
"status": "closed",
"changetime": "2017-01-10T17:17:57",
"description": "Hi,\n\nwhile building the current trunk I get this:\n\n{{{\n[ 68%] Built target filter-tools\n[ 68%] Built target tensor-of-inertia\n[ 68%] Built target gulliver-bootstrap\n[ 70%] Built target portia\n[ 70%] Building CXX object filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx: In Elementfunktion \u00bbbool I3CosmicRayFilter_13::KeepEvent(I3Frame&)\u00ab:\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx:59:16: Fehler: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab\n if(omKey.IsIceTop()){\n ^~~~~~~~\nmake[2]: *** [filterscripts/CMakeFiles/filterscripts.dir/build.make:111: filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o] Fehler 1\nmake[1]: *** [CMakeFiles/Makefile2:7382: filterscripts/CMakeFiles/filterscripts.dir/all] Fehler 2\nmake: *** [Makefile:128: all] Fehler 2\n}}}\n\n\n\nSvn Info:\n{{{\n[flauber@Shion src]$ svn info\nPfad: .\nWurzelpfad der Arbeitskopie: /home/flauber/IceCube/icerec_trunk/src\nURL: http://code.icecube.wisc.edu/svn/meta-projects/icerec/trunk\nRelative URL: ^/meta-projects/icerec/trunk\nBasis des Projektarchivs: http://code.icecube.wisc.edu/svn\nUUID des Projektarchivs: 16731396-06f5-0310-8873-f7f720988828\nRevision: 152558\nKnotentyp: Verzeichnis\nPlan: normal\nLetzter Autor: nega\nLetzte ge\u00e4nderte Rev: 151396\nLetztes \u00c4nderungsdatum: 2016-11-09 15:23:57 +0100 (Mi, 09. Nov 2016)\n}}}",
"reporter": "flauber",
"cc": "",
"resolution": "invalid",
"_ts": "1484068677758096",
"component": "combo reconstruction",
"summary": "filterscript trunk: Error: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab",
"priority": "normal",
"keywords": "",
"time": "2017-01-10T16:33:42",
"milestone": "",
"owner": "",
"type": "defect"
}
```
|
1.0
|
filterscript trunk: Error: »const class OMKey« has no member named »IsIceTop« (Trac #1930) - Hi,
while building the current trunk I get this:
```text
[ 68%] Built target filter-tools
[ 68%] Built target tensor-of-inertia
[ 68%] Built target gulliver-bootstrap
[ 70%] Built target portia
[ 70%] Building CXX object filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o
/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx: In Elementfunktion »bool I3CosmicRayFilter_13::KeepEvent(I3Frame&)«:
/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx:59:16: Fehler: »const class OMKey« has no member named »IsIceTop«
if(omKey.IsIceTop()){
^~~~~~~~
make[2]: *** [filterscripts/CMakeFiles/filterscripts.dir/build.make:111: filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o] Fehler 1
make[1]: *** [CMakeFiles/Makefile2:7382: filterscripts/CMakeFiles/filterscripts.dir/all] Fehler 2
make: *** [Makefile:128: all] Fehler 2
```
Svn Info:
```text
[flauber@Shion src]$ svn info
Pfad: .
Wurzelpfad der Arbeitskopie: /home/flauber/IceCube/icerec_trunk/src
URL: http://code.icecube.wisc.edu/svn/meta-projects/icerec/trunk
Relative URL: ^/meta-projects/icerec/trunk
Basis des Projektarchivs: http://code.icecube.wisc.edu/svn
UUID des Projektarchivs: 16731396-06f5-0310-8873-f7f720988828
Revision: 152558
Knotentyp: Verzeichnis
Plan: normal
Letzter Autor: nega
Letzte geänderte Rev: 151396
Letztes Änderungsdatum: 2016-11-09 15:23:57 +0100 (Mi, 09. Nov 2016)
```
Migrated from https://code.icecube.wisc.edu/ticket/1930
```json
{
"status": "closed",
"changetime": "2017-01-10T17:17:57",
"description": "Hi,\n\nwhile building the current trunk I get this:\n\n{{{\n[ 68%] Built target filter-tools\n[ 68%] Built target tensor-of-inertia\n[ 68%] Built target gulliver-bootstrap\n[ 70%] Built target portia\n[ 70%] Building CXX object filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx: In Elementfunktion \u00bbbool I3CosmicRayFilter_13::KeepEvent(I3Frame&)\u00ab:\n/home/flauber/IceCube/icerec_trunk/src/filterscripts/private/filterscripts/I3CosmicRayFilter_13.cxx:59:16: Fehler: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab\n if(omKey.IsIceTop()){\n ^~~~~~~~\nmake[2]: *** [filterscripts/CMakeFiles/filterscripts.dir/build.make:111: filterscripts/CMakeFiles/filterscripts.dir/private/filterscripts/I3CosmicRayFilter_13.cxx.o] Fehler 1\nmake[1]: *** [CMakeFiles/Makefile2:7382: filterscripts/CMakeFiles/filterscripts.dir/all] Fehler 2\nmake: *** [Makefile:128: all] Fehler 2\n}}}\n\n\n\nSvn Info:\n{{{\n[flauber@Shion src]$ svn info\nPfad: .\nWurzelpfad der Arbeitskopie: /home/flauber/IceCube/icerec_trunk/src\nURL: http://code.icecube.wisc.edu/svn/meta-projects/icerec/trunk\nRelative URL: ^/meta-projects/icerec/trunk\nBasis des Projektarchivs: http://code.icecube.wisc.edu/svn\nUUID des Projektarchivs: 16731396-06f5-0310-8873-f7f720988828\nRevision: 152558\nKnotentyp: Verzeichnis\nPlan: normal\nLetzter Autor: nega\nLetzte ge\u00e4nderte Rev: 151396\nLetztes \u00c4nderungsdatum: 2016-11-09 15:23:57 +0100 (Mi, 09. Nov 2016)\n}}}",
"reporter": "flauber",
"cc": "",
"resolution": "invalid",
"_ts": "1484068677758096",
"component": "combo reconstruction",
"summary": "filterscript trunk: Error: \u00bbconst class OMKey\u00ab has no member named \u00bbIsIceTop\u00ab",
"priority": "normal",
"keywords": "",
"time": "2017-01-10T16:33:42",
"milestone": "",
"owner": "",
"type": "defect"
}
```
|
defect
|
filterscript trunk error »const class omkey« has no member named »isicetop« trac hi while building the current trunk i get this text built target filter tools built target tensor of inertia built target gulliver bootstrap built target portia building cxx object filterscripts cmakefiles filterscripts dir private filterscripts cxx o home flauber icecube icerec trunk src filterscripts private filterscripts cxx in elementfunktion »bool keepevent « home flauber icecube icerec trunk src filterscripts private filterscripts cxx fehler »const class omkey« has no member named »isicetop« if omkey isicetop make fehler make fehler make fehler svn info text svn info pfad wurzelpfad der arbeitskopie home flauber icecube icerec trunk src url relative url meta projects icerec trunk basis des projektarchivs uuid des projektarchivs revision knotentyp verzeichnis plan normal letzter autor nega letzte geänderte rev letztes änderungsdatum mi nov migrated from json status closed changetime description hi n nwhile building the current trunk i get this n n n built target filter tools n built target tensor of inertia n built target gulliver bootstrap n built target portia n building cxx object filterscripts cmakefiles filterscripts dir private filterscripts cxx o n home flauber icecube icerec trunk src filterscripts private filterscripts cxx in elementfunktion keepevent n home flauber icecube icerec trunk src filterscripts private filterscripts cxx fehler class omkey has no member named n if omkey isicetop n nmake fehler nmake fehler nmake fehler n n n n nsvn info n n svn info npfad nwurzelpfad der arbeitskopie home flauber icecube icerec trunk src nurl url meta projects icerec trunk nbasis des projektarchivs des projektarchivs nrevision nknotentyp verzeichnis nplan normal nletzter autor nega nletzte ge rev nletztes mi nov n reporter flauber cc resolution invalid ts component combo reconstruction summary filterscript trunk error class omkey has no member named priority normal keywords time 
milestone owner type defect
| 1
|
78,792
| 27,760,647,195
|
IssuesEvent
|
2023-03-16 07:59:07
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
PR builder broken even when build is a success.
|
Type: Defect
|
The PR builder is broken even though the actual build is a success.
An example of this is the following. This PR only contains a Javadoc change.
https://github.com/hazelcast/hazelcast/pull/23911
The build artifacts can be found here:
https://s3.console.aws.amazon.com/s3/buckets/j-artifacts/Hazelcast-pr-builder/17043/Hazelcast-pr-builder-17043.zip
Some snippets from the build:
```
[INFO] Reactor Summary for Hazelcast Root 5.3.0-SNAPSHOT:
[INFO]
[INFO] Hazelcast Root ..................................... SUCCESS [ 22.029 s]
[INFO] hazelcast-tpc-engine ............................... SUCCESS [ 27.873 s]
[INFO] hazelcast-archunit-rules ........................... SUCCESS [ 7.220 s]
[INFO] hazelcast .......................................... SUCCESS [38:34 min]
[INFO] hazelcast-spring ................................... SUCCESS [ 9.036 s]
[INFO] hazelcast-spring-tests ............................. SUCCESS [ 54.444 s]
[INFO] hazelcast-build-utils .............................. SUCCESS [ 10.972 s]
[INFO] hazelcast-jet-extensions ........................... SUCCESS [ 4.557 s]
[INFO] hazelcast-jet-kafka ................................ SUCCESS [02:18 min]
[INFO] hazelcast-jet-mongodb .............................. SUCCESS [02:40 min]
[INFO] hazelcast-jet-avro ................................. SUCCESS [ 19.810 s]
[INFO] hazelcast-jet-csv .................................. SUCCESS [ 11.449 s]
[INFO] hazelcast-jet-hadoop-core .......................... SUCCESS [ 30.100 s]
[INFO] hazelcast-sql ...................................... SUCCESS [04:15 min]
[INFO] hazelcast-jet-cdc-debezium ......................... SUCCESS [ 52.973 s]
[INFO] hazelcast-jet-cdc-mysql ............................ SUCCESS [ 39.064 s]
[INFO] hazelcast-jet-cdc-postgres ......................... SUCCESS [ 35.860 s]
[INFO] hazelcast-jet-elasticsearch-6 ...................... SUCCESS [ 29.772 s]
[INFO] hazelcast-jet-elasticsearch-7 ...................... SUCCESS [02:12 min]
[INFO] hazelcast-jet-hadoop-dist .......................... SUCCESS [ 7.228 s]
[INFO] hazelcast-jet-files-azure .......................... SUCCESS [ 7.096 s]
[INFO] hazelcast-jet-files-gcs ............................ SUCCESS [ 7.958 s]
[INFO] hazelcast-jet-files-s3 ............................. SUCCESS [ 7.616 s]
[INFO] hazelcast-jet-hadoop ............................... SUCCESS [ 6.561 s]
[INFO] hazelcast-jet-hadoop-all ........................... SUCCESS [ 8.116 s]
[INFO] hazelcast-3-connector-root ......................... SUCCESS [ 2.410 s]
[INFO] hazelcast-3-connector-interface .................... SUCCESS [ 3.804 s]
[INFO] hazelcast-3-connector-impl ......................... SUCCESS [ 6.624 s]
[INFO] hazelcast-3-connector-common ....................... SUCCESS [01:40 min]
[INFO] hazelcast-jet-kafka-connect ........................ SUCCESS [ 35.140 s]
[INFO] hazelcast-jet-kinesis .............................. SUCCESS [02:28 min]
[INFO] hazelcast-mapstore ................................. SUCCESS [01:32 min]
[INFO] hazelcast-jet-s3 ................................... SUCCESS [ 42.622 s]
[INFO] hazelcast-jet-grpc ................................. SUCCESS [ 23.056 s]
[INFO] hazelcast-jet-protobuf ............................. SUCCESS [ 13.825 s]
[INFO] hazelcast-jet-python ............................... SUCCESS [ 55.701 s]
[INFO] hazelcast-distribution ............................. SUCCESS [ 18.822 s]
[INFO] hazelcast-it ....................................... SUCCESS [ 2.316 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 48:23 min (Wall Clock)
[INFO] Finished at: 2023-03-13T09:24:22Z
[INFO] ------------------------------------------------------------------------
```
And the end:
```
[INFO]
[INFO] <<< maven-source-plugin:3.2.1:jar (attach-sources) < generate-sources @ modulepath-tests <<<
[INFO]
[INFO]
[INFO] --- maven-source-plugin:3.2.1:jar (attach-sources) @ modulepath-tests ---
[INFO] Building jar: /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/target/modulepath-tests-5.3.0-SNAPSHOT-sources.jar
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M9:integration-test (default) @ modulepath-tests ---
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M9:verify (default) @ modulepath-tests ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ modulepath-tests ---
[INFO] Installing /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/target/modulepath-tests-5.3.0-SNAPSHOT.jar to /home/jenkins/.m2/repository/com/hazelcast/modulepath-tests/5.3.0-SNAPSHOT/modulepath-tests-5.3.0-SNAPSHOT.jar
[INFO] Installing /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/pom.xml to /home/jenkins/.m2/repository/com/hazelcast/modulepath-tests/5.3.0-SNAPSHOT/modulepath-tests-5.3.0-SNAPSHOT.pom
[INFO] Installing /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/target/modulepath-tests-5.3.0-SNAPSHOT-sources.jar to /home/jenkins/.m2/repository/com/hazelcast/modulepath-tests/5.3.0-SNAPSHOT/modulepath-tests-5.3.0-SNAPSHOT-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.510 s (Wall Clock)
[INFO] Finished at: 2023-03-13T09:24:45Z
[INFO] ------------------------------------------------------------------------
```
So the actual Maven build was a success. But the PR builder still reports the build as a failure:

|
1.0
|
PR builder broken even when build is a success. - The PR builder is broken even though the actual build is a success.
An example of this is the following. This PR only contains a Javadoc change.
https://github.com/hazelcast/hazelcast/pull/23911
The build artifacts can be found here:
https://s3.console.aws.amazon.com/s3/buckets/j-artifacts/Hazelcast-pr-builder/17043/Hazelcast-pr-builder-17043.zip
Some snippets from the build:
```
[INFO] Reactor Summary for Hazelcast Root 5.3.0-SNAPSHOT:
[INFO]
[INFO] Hazelcast Root ..................................... SUCCESS [ 22.029 s]
[INFO] hazelcast-tpc-engine ............................... SUCCESS [ 27.873 s]
[INFO] hazelcast-archunit-rules ........................... SUCCESS [ 7.220 s]
[INFO] hazelcast .......................................... SUCCESS [38:34 min]
[INFO] hazelcast-spring ................................... SUCCESS [ 9.036 s]
[INFO] hazelcast-spring-tests ............................. SUCCESS [ 54.444 s]
[INFO] hazelcast-build-utils .............................. SUCCESS [ 10.972 s]
[INFO] hazelcast-jet-extensions ........................... SUCCESS [ 4.557 s]
[INFO] hazelcast-jet-kafka ................................ SUCCESS [02:18 min]
[INFO] hazelcast-jet-mongodb .............................. SUCCESS [02:40 min]
[INFO] hazelcast-jet-avro ................................. SUCCESS [ 19.810 s]
[INFO] hazelcast-jet-csv .................................. SUCCESS [ 11.449 s]
[INFO] hazelcast-jet-hadoop-core .......................... SUCCESS [ 30.100 s]
[INFO] hazelcast-sql ...................................... SUCCESS [04:15 min]
[INFO] hazelcast-jet-cdc-debezium ......................... SUCCESS [ 52.973 s]
[INFO] hazelcast-jet-cdc-mysql ............................ SUCCESS [ 39.064 s]
[INFO] hazelcast-jet-cdc-postgres ......................... SUCCESS [ 35.860 s]
[INFO] hazelcast-jet-elasticsearch-6 ...................... SUCCESS [ 29.772 s]
[INFO] hazelcast-jet-elasticsearch-7 ...................... SUCCESS [02:12 min]
[INFO] hazelcast-jet-hadoop-dist .......................... SUCCESS [ 7.228 s]
[INFO] hazelcast-jet-files-azure .......................... SUCCESS [ 7.096 s]
[INFO] hazelcast-jet-files-gcs ............................ SUCCESS [ 7.958 s]
[INFO] hazelcast-jet-files-s3 ............................. SUCCESS [ 7.616 s]
[INFO] hazelcast-jet-hadoop ............................... SUCCESS [ 6.561 s]
[INFO] hazelcast-jet-hadoop-all ........................... SUCCESS [ 8.116 s]
[INFO] hazelcast-3-connector-root ......................... SUCCESS [ 2.410 s]
[INFO] hazelcast-3-connector-interface .................... SUCCESS [ 3.804 s]
[INFO] hazelcast-3-connector-impl ......................... SUCCESS [ 6.624 s]
[INFO] hazelcast-3-connector-common ....................... SUCCESS [01:40 min]
[INFO] hazelcast-jet-kafka-connect ........................ SUCCESS [ 35.140 s]
[INFO] hazelcast-jet-kinesis .............................. SUCCESS [02:28 min]
[INFO] hazelcast-mapstore ................................. SUCCESS [01:32 min]
[INFO] hazelcast-jet-s3 ................................... SUCCESS [ 42.622 s]
[INFO] hazelcast-jet-grpc ................................. SUCCESS [ 23.056 s]
[INFO] hazelcast-jet-protobuf ............................. SUCCESS [ 13.825 s]
[INFO] hazelcast-jet-python ............................... SUCCESS [ 55.701 s]
[INFO] hazelcast-distribution ............................. SUCCESS [ 18.822 s]
[INFO] hazelcast-it ....................................... SUCCESS [ 2.316 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 48:23 min (Wall Clock)
[INFO] Finished at: 2023-03-13T09:24:22Z
[INFO] ------------------------------------------------------------------------
```
And the end:
```
[INFO]
[INFO] <<< maven-source-plugin:3.2.1:jar (attach-sources) < generate-sources @ modulepath-tests <<<
[INFO]
[INFO]
[INFO] --- maven-source-plugin:3.2.1:jar (attach-sources) @ modulepath-tests ---
[INFO] Building jar: /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/target/modulepath-tests-5.3.0-SNAPSHOT-sources.jar
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M9:integration-test (default) @ modulepath-tests ---
[INFO]
[INFO] --- maven-failsafe-plugin:3.0.0-M9:verify (default) @ modulepath-tests ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ modulepath-tests ---
[INFO] Installing /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/target/modulepath-tests-5.3.0-SNAPSHOT.jar to /home/jenkins/.m2/repository/com/hazelcast/modulepath-tests/5.3.0-SNAPSHOT/modulepath-tests-5.3.0-SNAPSHOT.jar
[INFO] Installing /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/pom.xml to /home/jenkins/.m2/repository/com/hazelcast/modulepath-tests/5.3.0-SNAPSHOT/modulepath-tests-5.3.0-SNAPSHOT.pom
[INFO] Installing /home/jenkins/jenkins_slave/workspace/Hazelcast-pr-builder_2/modulepath-tests/target/modulepath-tests-5.3.0-SNAPSHOT-sources.jar to /home/jenkins/.m2/repository/com/hazelcast/modulepath-tests/5.3.0-SNAPSHOT/modulepath-tests-5.3.0-SNAPSHOT-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 17.510 s (Wall Clock)
[INFO] Finished at: 2023-03-13T09:24:45Z
[INFO] ------------------------------------------------------------------------
```
So the actual Maven build was a success. But the PR builder still reports the build as a failure:

|
defect
|
pr builder broken even when build is a success the pr builder is broken even though the actual build is a success an example of this is the following this pr only contains a javadoc change the build artifacts can be found here some snippets from the build reactor summary for hazelcast root snapshot hazelcast root success hazelcast tpc engine success hazelcast archunit rules success hazelcast success hazelcast spring success hazelcast spring tests success hazelcast build utils success hazelcast jet extensions success hazelcast jet kafka success hazelcast jet mongodb success hazelcast jet avro success hazelcast jet csv success hazelcast jet hadoop core success hazelcast sql success hazelcast jet cdc debezium success hazelcast jet cdc mysql success hazelcast jet cdc postgres success hazelcast jet elasticsearch success hazelcast jet elasticsearch success hazelcast jet hadoop dist success hazelcast jet files azure success hazelcast jet files gcs success hazelcast jet files success hazelcast jet hadoop success hazelcast jet hadoop all success hazelcast connector root success hazelcast connector interface success hazelcast connector impl success hazelcast connector common success hazelcast jet kafka connect success hazelcast jet kinesis success hazelcast mapstore success hazelcast jet success hazelcast jet grpc success hazelcast jet protobuf success hazelcast jet python success hazelcast distribution success hazelcast it success build success total time min wall clock finished at and the end maven source plugin jar attach sources generate sources modulepath tests maven source plugin jar attach sources modulepath tests building jar home jenkins jenkins slave workspace hazelcast pr builder modulepath tests target modulepath tests snapshot sources jar maven failsafe plugin integration test default modulepath tests maven failsafe plugin verify default modulepath tests maven install plugin install default install modulepath tests installing home jenkins jenkins slave workspace 
hazelcast pr builder modulepath tests target modulepath tests snapshot jar to home jenkins repository com hazelcast modulepath tests snapshot modulepath tests snapshot jar installing home jenkins jenkins slave workspace hazelcast pr builder modulepath tests pom xml to home jenkins repository com hazelcast modulepath tests snapshot modulepath tests snapshot pom installing home jenkins jenkins slave workspace hazelcast pr builder modulepath tests target modulepath tests snapshot sources jar to home jenkins repository com hazelcast modulepath tests snapshot modulepath tests snapshot sources jar build success total time s wall clock finished at so the actual maven build was a success but the pr builder still reports the build as a failure
| 1
|
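The Hazelcast row above shows a PR builder reporting failure even though every reactor module and the final summary print BUILD SUCCESS. A minimal sketch of deriving CI status from the Maven log text itself; this is a hypothetical helper, not the actual Jenkins PR-builder logic:

```python
def maven_build_succeeded(log_text: str) -> bool:
    """Decide CI status from Maven reactor output: succeed only when the
    log reports BUILD SUCCESS and never reports BUILD FAILURE.
    Illustrative sketch, not Hazelcast's real PR-builder check."""
    return "BUILD SUCCESS" in log_text and "BUILD FAILURE" not in log_text

log = "[INFO] BUILD SUCCESS\n[INFO] Total time: 48:23 min (Wall Clock)"
print(maven_build_succeeded(log))  # -> True
```

A check like this would have classified the quoted log as a success, which is the mismatch the issue reports.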
121,655
| 26,009,679,557
|
IssuesEvent
|
2022-12-20 23:36:41
|
greenplum-db/gpdb
|
https://api.github.com/repos/greenplum-db/gpdb
|
closed
|
Inconsistency in locking behavior between AOCSCompact() vs. AppendOnlyCompact()
|
type: bug version: 7X_ALPHA Priority 1 code-quality
|
### Greenplum version or build
6X_STABLE
In 6X_STABLE branch, both `AOCSCompact()` and `AppendOnlyCompact()` functions call `LockRelationAppendOnlySegmentFile()`, but they set the `dontWait` parameter differently. `AOCSCompact()` sets `dontWait` to true and skips any seg file that has been locked (https://github.com/greenplum-db/gpdb/blob/6X_STABLE/src/backend/access/aocs/aocs_compaction.c#L558), whereas `AppendOnlyCompact()` sets `dontWait` to false (https://github.com/greenplum-db/gpdb/blob/6X_STABLE/src/backend/access/appendonly/appendonly_compaction.c#L715). I am wondering if there is any potential locking issue.
|
1.0
|
Inconsistency in locking behavior between AOCSCompact() vs. AppendOnlyCompact() - ### Greenplum version or build
6X_STABLE
In 6X_STABLE branch, both `AOCSCompact()` and `AppendOnlyCompact()` functions call `LockRelationAppendOnlySegmentFile()`, but they set the `dontWait` parameter differently. `AOCSCompact()` sets `dontWait` to true and skips any seg file that has been locked (https://github.com/greenplum-db/gpdb/blob/6X_STABLE/src/backend/access/aocs/aocs_compaction.c#L558), whereas `AppendOnlyCompact()` sets `dontWait` to false (https://github.com/greenplum-db/gpdb/blob/6X_STABLE/src/backend/access/appendonly/appendonly_compaction.c#L715). I am wondering if there is any potential locking issue.
|
non_defect
|
inconsistency in locking behavior between aocscompact vs appendonlycompact greenplum version or build stable in stable branch both aocscompact and appendonlycompact functions call lockrelationappendonlysegmentfile but they set the dontwait parameter differently aocscompact sets dontwait to true and skips any seg file that has been locked whereas appendonlycompact sets dontwait to false i am wondering if there is any potential locking issue
| 0
|
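The `dontWait` distinction described in the Greenplum row above maps to non-blocking versus blocking lock acquisition. A minimal Python sketch of the two behaviors; the names here are hypothetical and this is not the actual Greenplum API:

```python
import threading

def compact_segfiles(segfiles, locks, dont_wait):
    """Sketch of the dontWait semantics described above (hypothetical
    names, not the real Greenplum code). Each seg file is guarded by a
    lock; dont_wait=True skips seg files whose lock is already held,
    while dont_wait=False blocks until the lock becomes free."""
    compacted = []
    for seg in segfiles:
        lock = locks[seg]
        if dont_wait:
            if not lock.acquire(blocking=False):
                continue            # AOCSCompact-style: skip a locked seg file
        else:
            lock.acquire()          # AppendOnlyCompact-style: wait for it
        try:
            compacted.append(seg)   # stand-in for the actual compaction work
        finally:
            lock.release()
    return compacted

locks = {s: threading.Lock() for s in (1, 2, 3)}
locks[2].acquire()                  # simulate another backend holding seg 2
print(compact_segfiles([1, 2, 3], locks, dont_wait=True))  # -> [1, 3]
```

The skip-on-conflict variant silently compacts fewer seg files, while the waiting variant compacts all of them but can stall behind another holder, which is the behavioral difference the issue asks about.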
309,768
| 26,678,182,892
|
IssuesEvent
|
2023-01-26 15:43:25
|
ntop/ntopng
|
https://api.github.com/repos/ntop/ntopng
|
closed
|
Invalid protocols in Port Analysis dropdown
|
Bug Ready to Test
|
Either remove from the list the protocols that have no ports (so everything except TCP/UDP), or add a 0 port as is done with ICMP, which IMHO is not a good idea

|
1.0
|
Invalid protocols in Port Analysis dropdown - Either remove from the list the protocols that have no ports (so everything except TCP/UDP), or add a 0 port as is done with ICMP, which IMHO is not a good idea

|
non_defect
|
invalid protocols in port analysis dropdown either you remove from the list protocols with no ports so all except tcp udp or add a port as with icmp that imho is not a good idea
| 0
|
141,315
| 18,957,805,909
|
IssuesEvent
|
2021-11-18 22:42:47
|
Recidiviz/pulse-dashboard
|
https://api.github.com/repos/Recidiviz/pulse-dashboard
|
opened
|
Security Alert - Package: node-forge; Severity: HIGH;
|
Subject: Security Severity: High Subject: Vulnerability
|
A new vulnerability has been reported by Dependabot. The criticality of this vulnerability is HIGH.
HIGH vulnerabilities have an SLA of 30 days according to our policy.
Affected package: node-forge
Ecosystem: NPM
Affected version range: < 0.10.0
Summary: Prototype Pollution in node-forge
Description: The package node-forge before 0.10.0 is vulnerable to Prototype Pollution via the util.setPath function. Note: Version 0.10.0 is a breaking change removing the vulnerable functions.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-92xj-mqp7-vmcj'}, {'type': 'CVE', 'value': 'CVE-2020-7720'}]
Fixed Version: 0.10.0
Created Date = November 18, 2021
***Additional Context***
https://github.com/Recidiviz/pulse-dashboard/security/dependabot?q=is%3Aopen+sort%3Anewest
|
True
|
Security Alert - Package: node-forge; Severity: HIGH; -
A new vulnerability has been reported by Dependabot. The criticality of this vulnerability is HIGH.
HIGH vulnerabilities have an SLA of 30 days according to our policy.
Affected package: node-forge
Ecosystem: NPM
Affected version range: < 0.10.0
Summary: Prototype Pollution in node-forge
Description: The package node-forge before 0.10.0 is vulnerable to Prototype Pollution via the util.setPath function. Note: Version 0.10.0 is a breaking change removing the vulnerable functions.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-92xj-mqp7-vmcj'}, {'type': 'CVE', 'value': 'CVE-2020-7720'}]
Fixed Version: 0.10.0
Created Date = November 18, 2021
***Additional Context***
https://github.com/Recidiviz/pulse-dashboard/security/dependabot?q=is%3Aopen+sort%3Anewest
|
non_defect
|
security alert package node forge severity high a new vulnerability has been reported by dependabot the criticality of this vulnerability is high high vulnerabilities have an sla of days according to our policy affected package node forge ecosystem npm affected version range summary prototype pollution in node forge description the package node forge before is vulnerable to prototype pollution via the util setpath function note version is a breaking change removing the vulnerable functions identifiers fixed version created date november additional context
| 0
|
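The node-forge row above describes prototype pollution through a `setPath`-style deep-assignment helper. The class of bug, and the guard pattern that defends against it, can be sketched in Python; note that node-forge's real fix in 0.10.0 was to remove the vulnerable functions entirely, and the helper below is illustrative only:

```python
FORBIDDEN = {"__proto__", "constructor", "prototype"}

def set_path(obj, path, value):
    """Sketch of a setPath-style helper with the guard that blocks the
    class of bug described above: reject path segments that, in a
    JavaScript prototype chain, would let attacker-controlled input
    climb out of the target object. Illustrative only."""
    parts = path.split(".")
    for part in parts:
        if part in FORBIDDEN:
            raise ValueError(f"unsafe path segment: {part}")
    for part in parts[:-1]:
        obj = obj.setdefault(part, {})
    obj[parts[-1]] = value

cfg = {}
set_path(cfg, "server.port", 8080)
print(cfg)  # -> {'server': {'port': 8080}}
```

Without the segment check, a path such as `__proto__.polluted` passed to the JavaScript original would write onto `Object.prototype` and affect every object in the process, which is why the advisory rates it HIGH.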
6,767
| 2,610,277,710
|
IssuesEvent
|
2015-02-26 19:28:50
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
insert image maintain aspect ratio
|
auto-migrated Milestone-1.9 Priority-High Type-Defect
|
```
It would be a great idea to modify the file insert window, such that when you
select an image to be inserted and modify one dimension (say the width), the
other dimension is automatically changed so that the aspect ratio stays the
same.
Thank you and keep up the good work!
```
-----
Original issue reported on code.google.com by `lucianim...@gmail.com` on 13 May 2011 at 5:54
|
1.0
|
insert image maintain aspect ratio - ```
It would be a great idea to modify the file insert window, such that when you
select an image to be inserted and modify one dimension (say the width), the
other dimension is automatically changed so that the aspect ratio stays the
same.
Thank you and keep up the good work!
```
-----
Original issue reported on code.google.com by `lucianim...@gmail.com` on 13 May 2011 at 5:54
|
defect
|
insert image maintain aspect ratio it would be a great idea to modify the file insert window such that when you select an image to be inserted and modify one dimension say the width the other dimension is automatically changed so that the aspect ratio stays the same thank you and keep up the good work original issue reported on code google com by lucianim gmail com on may at
| 1
|
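The ScribeFire feature request above asks that editing one image dimension automatically adjusts the other so the aspect ratio is preserved. The arithmetic is a single proportion; a small sketch with a hypothetical helper name, not ScribeFire's actual code:

```python
def scaled_dimensions(orig_w, orig_h, new_w=None, new_h=None):
    """Compute the missing dimension so the aspect ratio is preserved,
    as the feature request above describes. Pass exactly one of new_w
    or new_h; the other is derived from the original proportions."""
    if new_w is not None:
        return new_w, round(new_w * orig_h / orig_w)
    if new_h is not None:
        return round(new_h * orig_w / orig_h), new_h
    return orig_w, orig_h

print(scaled_dimensions(800, 600, new_w=400))  # -> (400, 300)
```

An insert-image dialog would call this whenever either dimension field changes, writing the returned pair back into both fields.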
8,067
| 2,611,450,812
|
IssuesEvent
|
2015-02-27 04:59:05
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
The hat sometimes disappears during the game
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. During weapon use, e.g. fire punch
2. When the animation of the hedge changes
What is the expected output? What do you see instead?
Not to disappear
What version of the product are you using? On what operating system?
0.9.13, Win7
```
Original issue reported on code.google.com by `joship...@gmail.com` on 15 Sep 2010 at 9:28
|
1.0
|
The hat sometimes disappears during the game - ```
What steps will reproduce the problem?
1. During weapon use, e.g. fire punch
2. When the animation of the hedge changes
What is the expected output? What do you see instead?
Not to disappear
What version of the product are you using? On what operating system?
0.9.13, Win7
```
Original issue reported on code.google.com by `joship...@gmail.com` on 15 Sep 2010 at 9:28
|
defect
|
the hat sometimes disappears during the game what steps will reproduce the problem during weapon using e g fire punch when the animation of the hedge changes what is the expected output what do you see instead not to disappear what version of the product are you using on what operating system original issue reported on code google com by joship gmail com on sep at
| 1
|
74,926
| 25,409,328,255
|
IssuesEvent
|
2022-11-22 17:35:10
|
FreeRADIUS/freeradius-server
|
https://api.github.com/repos/FreeRADIUS/freeradius-server
|
closed
|
Segmentation fault Freeradius 3.2.1 Robust proxy
|
defect v3.2.x
|
### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
When trying to send an accounting packet to a robust proxy, radius crashes with a segmentation fault
### Log output from the FreeRADIUS daemon
```shell
freeradius -Xxxxxx
Tue Nov 22 21:31:19 2022 : Debug: Server was built with:
Tue Nov 22 21:31:19 2022 : Debug: accounting : yes
Tue Nov 22 21:31:19 2022 : Debug: authentication : yes
Tue Nov 22 21:31:19 2022 : Debug: ascend-binary-attributes : yes
Tue Nov 22 21:31:19 2022 : Debug: coa : yes
Tue Nov 22 21:31:19 2022 : Debug: recv-coa-from-home-server : yes
Tue Nov 22 21:31:19 2022 : Debug: control-socket : yes
Tue Nov 22 21:31:19 2022 : Debug: detail : yes
Tue Nov 22 21:31:19 2022 : Debug: dhcp : yes
Tue Nov 22 21:31:19 2022 : Debug: dynamic-clients : yes
Tue Nov 22 21:31:19 2022 : Debug: osfc2 : no
Tue Nov 22 21:31:19 2022 : Debug: proxy : yes
Tue Nov 22 21:31:19 2022 : Debug: regex-pcre : yes
Tue Nov 22 21:31:19 2022 : Debug: regex-posix : no
Tue Nov 22 21:31:19 2022 : Debug: regex-posix-extended : no
Tue Nov 22 21:31:19 2022 : Debug: session-management : yes
Tue Nov 22 21:31:19 2022 : Debug: stats : yes
Tue Nov 22 21:31:19 2022 : Debug: systemd : no
Tue Nov 22 21:31:19 2022 : Debug: tcp : yes
Tue Nov 22 21:31:19 2022 : Debug: threads : yes
Tue Nov 22 21:31:19 2022 : Debug: tls : yes
Tue Nov 22 21:31:19 2022 : Debug: unlang : yes
Tue Nov 22 21:31:19 2022 : Debug: vmps : yes
Tue Nov 22 21:31:19 2022 : Debug: developer : no
Tue Nov 22 21:31:19 2022 : Debug: Server core libs:
Tue Nov 22 21:31:19 2022 : Debug: freeradius-server : 3.2.2
Tue Nov 22 21:31:19 2022 : Debug: talloc : 2.3.*
Tue Nov 22 21:31:19 2022 : Debug: ssl : 3.0.0g dev
Tue Nov 22 21:31:19 2022 : Debug: pcre : 8.45 2021-06-15
Tue Nov 22 21:31:19 2022 : Debug: Endianness:
Tue Nov 22 21:31:19 2022 : Debug: little
Tue Nov 22 21:31:19 2022 : Debug: Compilation flags:
Tue Nov 22 21:31:19 2022 : Debug: cppflags : -pipe -O3 -Wall -march=skylake -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector -mno-stackrealign -flto=4 -I/build/orionos/s64/include -I/build/orionos/s64/usr/include
Tue Nov 22 21:31:19 2022 : Debug: cflags : -I. -Isrc -include src/freeradius-devel/autoconf.h -include src/freeradius-devel/build.h -include src/freeradius-devel/features.h -include src/freeradius-devel/radpaths.h -fno-strict-aliasing -Wno-date-time -pipe -O3 -Wall -march=skylake -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector -mno-stackrealign -flto=4 -I/build/orionos/s64/include -I/build/orionos/s64/usr/include -Wall -std=c99 -D_GNU_SOURCE -D_REENTRANT -D_POSIX_PTHREAD_SEMANTICS -DOPENSSL_NO_KRB5 -DNDEBUG -DIS_MODULE=1
Tue Nov 22 21:31:19 2022 : Debug: ldflags : -L/build/orionos/s64/lib -L/build/orionos/s64/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: libs : -lcrypto -lssl -ltalloc -latomic -lpcre -lcap -lresolv -ldl -lpthread -lz -lmariadb -lp11-kit -lhogweed -lgmp -lidn -lidn2 -lffi -lgnutls -lnettle -lmysqlclient -lreadline -lssl -lcrypto -lncurses -ltinfo -lltdl -liconv -lexpat -lpcre -lbz2 -llzma -lxml2 -lnl-3 -lnl-genl-3 -lnl-route-3 -ltinfo -ltirpc -lssl -lcrypto -lboost_regex -lboost_serialization -lboost_wserialization -lboost_system -lcap -lpcap -lreadline
Tue Nov 22 21:31:19 2022 : Debug:
Tue Nov 22 21:31:19 2022 : Info: FreeRADIUS Version 3.2.2
Tue Nov 22 21:31:19 2022 : Info: Copyright (C) 1999-2022 The FreeRADIUS server project and contributors
Tue Nov 22 21:31:19 2022 : Info: There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Tue Nov 22 21:31:19 2022 : Info: PARTICULAR PURPOSE
Tue Nov 22 21:31:19 2022 : Info: You may redistribute copies of FreeRADIUS under the terms of the
Tue Nov 22 21:31:19 2022 : Info: GNU General Public License
Tue Nov 22 21:31:19 2022 : Info: For more information about these matters, see the file named COPYRIGHT
Tue Nov 22 21:31:19 2022 : Info: Starting - reading configuration files ...
Tue Nov 22 21:31:19 2022 : Debug: including dictionary file /etc/freeradius/dictionary
Tue Nov 22 21:31:19 2022 : Debug: including dictionary file /etc/freeradius/dictionary
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/proxy.conf
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/clients.conf
Tue Nov 22 21:31:19 2022 : Debug: including files in directory /etc/freeradius/modules/
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: including files in directory /etc/freeradius/sites-enabled/
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/sites-enabled/default
Tue Nov 22 21:31:19 2022 : Debug: main {
Tue Nov 22 21:31:19 2022 : Debug: security {
Tue Nov 22 21:31:19 2022 : Debug: user = "freerad"
Tue Nov 22 21:31:19 2022 : Debug: group = "freerad"
Tue Nov 22 21:31:19 2022 : Debug: allow_core_dumps = no
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[38]: The item 'max_attributes' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[39]: The item 'reject_delay' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[40]: The item 'status_server' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[41]: The item 'allow_vulnerable_openssl' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: name = "radiusd"
Tue Nov 22 21:31:19 2022 : Debug: prefix = "/usr"
Tue Nov 22 21:31:19 2022 : Debug: localstatedir = "/var"
Tue Nov 22 21:31:19 2022 : Debug: logdir = "/var/log/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: run_dir = "/var/run/freeradius"
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[2]: The item 'ignore_case' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[4]: The item 'sysconfdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[9]: The item 'log_file' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[10]: The item 'log_destination' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[12]: The item 'confdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[14]: The item 'libdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[15]: The item 'pidfile' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[16]: The item 'max_request_time' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[17]: The item 'cleanup_delay' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[18]: The item 'max_requests' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[29]: The item 'hostname_lookups' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[30]: The item 'regular_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[31]: The item 'extended_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[32]: The item 'checkrad' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[44]: The item 'proxy_requests' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: main {
Tue Nov 22 21:31:19 2022 : Debug: name = "radiusd"
Tue Nov 22 21:31:19 2022 : Debug: prefix = "/usr"
Tue Nov 22 21:31:19 2022 : Debug: localstatedir = "/var"
Tue Nov 22 21:31:19 2022 : Debug: sbindir = "/usr/sbin"
Tue Nov 22 21:31:19 2022 : Debug: logdir = "/var/log/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: run_dir = "/var/run/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: libdir = "/usr/lib/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: radacctdir = "/var/log/freeradius/radacct"
Tue Nov 22 21:31:19 2022 : Debug: hostname_lookups = no
Tue Nov 22 21:31:19 2022 : Debug: max_request_time = 30
Tue Nov 22 21:31:19 2022 : Debug: cleanup_delay = 5
Tue Nov 22 21:31:19 2022 : Debug: max_requests = 1024
Tue Nov 22 21:31:19 2022 : Debug: postauth_client_lost = no
Tue Nov 22 21:31:19 2022 : Debug: pidfile = "/var/run/freeradius/freeradius.pid"
Tue Nov 22 21:31:19 2022 : Debug: checkrad = "/usr/sbin/checkrad"
Tue Nov 22 21:31:19 2022 : Debug: debug_level = 0
Tue Nov 22 21:31:19 2022 : Debug: proxy_requests = yes
Tue Nov 22 21:31:19 2022 : Debug: log {
Tue Nov 22 21:31:19 2022 : Debug: stripped_names = no
Tue Nov 22 21:31:19 2022 : Debug: auth = no
Tue Nov 22 21:31:19 2022 : Debug: auth_badpass = no
Tue Nov 22 21:31:19 2022 : Debug: auth_goodpass = no
Tue Nov 22 21:31:19 2022 : Debug: msg_denied = "You are already logged in - access denied"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: resources {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: security {
Tue Nov 22 21:31:19 2022 : Debug: max_attributes = 200
Tue Nov 22 21:31:19 2022 : Debug: reject_delay = 1.000000
Tue Nov 22 21:31:19 2022 : Debug: status_server = no
Tue Nov 22 21:31:19 2022 : Debug: allow_vulnerable_openssl = "no"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[2]: The item 'ignore_case' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[4]: The item 'sysconfdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[9]: The item 'log_file' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[10]: The item 'log_destination' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[12]: The item 'confdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[30]: The item 'regular_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[31]: The item 'extended_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Realms and Home Servers ####
Tue Nov 22 21:31:19 2022 : Debug: proxy server {
Tue Nov 22 21:31:19 2022 : Debug: retry_delay = 5
Tue Nov 22 21:31:19 2022 : Debug: retry_count = 3
Tue Nov 22 21:31:19 2022 : Debug: default_fallback = yes
Tue Nov 22 21:31:19 2022 : Debug: dead_time = 120
Tue Nov 22 21:31:19 2022 : Debug: wake_all_if_all_dead = no
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/proxy.conf[2]: The item 'synchronous' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/proxy.conf[7]: The item 'post_proxy_authorize' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: home_server home1.example.com {
Tue Nov 22 21:31:19 2022 : Debug: nonblock = no
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = 139.5.241.16
Tue Nov 22 21:31:19 2022 : Debug: port = 1813
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: secret = "secret"
Tue Nov 22 21:31:19 2022 : Debug: response_window = 20.000000
Tue Nov 22 21:31:19 2022 : Debug: response_timeouts = 1
Tue Nov 22 21:31:19 2022 : Debug: max_outstanding = 65536
Tue Nov 22 21:31:19 2022 : Debug: zombie_period = 40
Tue Nov 22 21:31:19 2022 : Debug: status_check = "request"
Tue Nov 22 21:31:19 2022 : Debug: ping_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: check_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: check_timeout = 4
Tue Nov 22 21:31:19 2022 : Debug: num_answers_to_alive = 3
Tue Nov 22 21:31:19 2022 : Debug: revive_interval = 120
Tue Nov 22 21:31:19 2022 : Debug: username = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: limit {
Tue Nov 22 21:31:19 2022 : Debug: max_connections = 16
Tue Nov 22 21:31:19 2022 : Debug: max_requests = 0
Tue Nov 22 21:31:19 2022 : Debug: lifetime = 0
Tue Nov 22 21:31:19 2022 : Debug: idle_timeout = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: coa {
Tue Nov 22 21:31:19 2022 : Debug: irt = 2
Tue Nov 22 21:31:19 2022 : Debug: mrt = 16
Tue Nov 22 21:31:19 2022 : Debug: mrc = 5
Tue Nov 22 21:31:19 2022 : Debug: mrd = 30
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: recv_coa {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: realm LOCAL {
Tue Nov 22 21:31:19 2022 : Debug: authhost = LOCAL
Tue Nov 22 21:31:19 2022 : Debug: accthost = LOCAL
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: home_server_pool acct_pool.example.com {
Tue Nov 22 21:31:19 2022 : Debug: type = fail-over
Tue Nov 22 21:31:19 2022 : Debug: virtual_server = home.example.com
Tue Nov 22 21:31:19 2022 : Debug: home_server = home1.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: realm acct_realm.example.com {
Tue Nov 22 21:31:19 2022 : Debug: acct_pool = acct_pool.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Clients ####
Tue Nov 22 21:31:19 2022 : Debug: client 127.0.0.1 {
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: require_message_authenticator = no
Tue Nov 22 21:31:19 2022 : Debug: secret = "secret"
Tue Nov 22 21:31:19 2022 : Debug: limit {
Tue Nov 22 21:31:19 2022 : Debug: max_connections = 16
Tue Nov 22 21:31:19 2022 : Debug: lifetime = 0
Tue Nov 22 21:31:19 2022 : Debug: idle_timeout = 30
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Adding client 10.10.10.1/32 (10.10.10.1) to prefix tree 32
Tue Nov 22 21:31:19 2022 : Info: Debugger not attached
Tue Nov 22 21:31:19 2022 : Debug: # Creating Post-Proxy-Type = Fail-Accounting
Tue Nov 22 21:31:19 2022 : Debug: # Creating Post-Proxy-Type = Fail-Authentication
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Instantiating modules ####
Tue Nov 22 21:31:19 2022 : Debug: modules {
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_detail with path: /usr/lib/freeradius/rlm_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_detail, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_detail
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail detail.example.com {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-%Y%m%d:%H:%G"
Tue Nov 22 21:31:19 2022 : Debug: header = "%t"
Tue Nov 22 21:31:19 2022 : Debug: permissions = 384
Tue Nov 22 21:31:19 2022 : Debug: locking = no
Tue Nov 22 21:31:19 2022 : Debug: escape_filenames = no
Tue Nov 22 21:31:19 2022 : Debug: log_packet_header = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: detail {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/detail-%Y%m%d"
Tue Nov 22 21:31:19 2022 : Debug: header = "%t"
Tue Nov 22 21:31:19 2022 : Debug: permissions = 384
Tue Nov 22 21:31:19 2022 : Debug: locking = no
Tue Nov 22 21:31:19 2022 : Debug: escape_filenames = no
Tue Nov 22 21:31:19 2022 : Debug: log_packet_header = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_preprocess with path: /usr/lib/freeradius/rlm_preprocess.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_preprocess, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_preprocess
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: preprocess {
Tue Nov 22 21:31:19 2022 : Debug: huntgroups = "/etc/freeradius/huntgroups"
Tue Nov 22 21:31:19 2022 : Debug: hints = "/etc/freeradius/hints"
Tue Nov 22 21:31:19 2022 : Debug: with_ascend_hack = no
Tue Nov 22 21:31:19 2022 : Debug: ascend_channels_per_line = 23
Tue Nov 22 21:31:19 2022 : Debug: with_ntdomain_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_specialix_jetstream_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_cisco_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_alvarion_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_pap with path: /usr/lib/freeradius/rlm_pap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_pap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_pap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: pap {
Tue Nov 22 21:31:19 2022 : Debug: normalise = yes
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/pap[21]: The item 'auto_header' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_chap with path: /usr/lib/freeradius/rlm_chap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_chap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_chap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "chap" from file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_exec with path: /usr/lib/freeradius/rlm_exec.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_exec, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_exec
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "exec" from file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: exec {
Tue Nov 22 21:31:19 2022 : Debug: wait = no
Tue Nov 22 21:31:19 2022 : Debug: input_pairs = "request"
Tue Nov 22 21:31:19 2022 : Debug: shell_escape = yes
Tue Nov 22 21:31:19 2022 : Debug: timeout = 10
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/exec[28]: The item 'output' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_expr with path: /usr/lib/freeradius/rlm_expr.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_expr, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_expr
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "expr" from file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: expr {
Tue Nov 22 21:31:19 2022 : Debug: safe_characters = "@abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_: /"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: instantiate {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/huntgroups
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/hints
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: } # modules
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Virtual Servers ####
Tue Nov 22 21:31:19 2022 : Debug: server { # from file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[7]: The authenticate section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authenticate {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type PAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type CHAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # authenticate
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[1]: The authorize section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authorize {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: } # authorize
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[18]: The preacct section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: preacct {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: } # preacct
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[23]: The accounting section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server
Tue Nov 22 21:31:19 2022 : Debug: server acct_detail.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server acct_detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: server home.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: update {
Tue Nov 22 21:31:19 2022 : Debug: &control:Proxy-To-Realm := 'acct_realm.example.com'
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: post-proxy {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Accounting for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Authentication for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # post-proxy
Tue Nov 22 21:31:19 2022 : Debug: } # server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: Created signal pipe. Read end FD 5, write end FD 6
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Opening IP addresses and Ports ####
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct with path: /usr/lib/freeradius/proto_acct.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct failed: /usr/lib/freeradius/proto_acct.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_acct.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = *
Tue Nov 22 21:31:19 2022 : Debug: port = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail with path: /usr/lib/freeradius/proto_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail failed: /usr/lib/freeradius/proto_detail.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_detail.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "detail"
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-*:*"
Tue Nov 22 21:31:19 2022 : Debug: load_factor = 10
Tue Nov 22 21:31:19 2022 : Debug: poll_interval = 1
Tue Nov 22 21:31:19 2022 : Debug: retry_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: one_shot = no
Tue Nov 22 21:31:19 2022 : Debug: track = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Listening on acct address * port 1813
Tue Nov 22 21:31:19 2022 : Debug: Listening on detail file /var/log/freeradius/radacct/detail.example.com/detail-*:* as server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - User-Name = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Identifier = "demo-bng"
Tue Nov 22 21:31:19 2022 : Info: Ready to process requests
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-IP-Address = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Id = "vlan100"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Type = Virtual
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Service-Type = Framed-User
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-Protocol = PPP
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Calling-Station-Id = "f6:cb:f6:40:1f:a0"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Called-Station-Id = "1a:27:b5:27:bb:53"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Class = 0x6375693d746573745f6267627261732c73753d3934323033392c736b3d6969786c6c7a776f3333386f2c626b69643d32323534303038392c626b7269643d332c6d763d3532393135323231333836302c6d743d3630353235392c676d743d3630343830302c73703d312c73723d2c61633d312c75693d3934323033392c6363693d3138382c63633d47656e6572616c2c61733d312c636f3d2c64623d3130323430302c75623d3130323430302c63626b69643d3232353430303930
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Status-Type = Start
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Authentic = RADIUS
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Id = "1628808cd06d3273"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Time = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-IP-Address = 10.0.0.5
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Event-Timestamp = "Nov 22 2022 21:21:45 IST"
Segmentation fault
```
### Relevant log output from client utilities
A Windows PPPoE client attempts to connect.
### Backtrace from LLDB or GDB
```shell
#0 0x00007ffff6bb1230 in open64 () from /lib64/libc.so.6
No symbol table info available.
#1 0x0000000000449243 in detail_poll ()
No symbol table info available.
#2 0x000000000044a278 in detail_handler_thread ()
No symbol table info available.
#3 0x00007ffff6b45026 in ?? () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007ffff6bc0d60 in clone () from /lib64/libc.so.6
No symbol table info available.
```
|
1.0
|
Segmentation fault Freeradius 3.2.1 Robust proxy - ### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
When trying to send an accounting packet to the robust proxy, the RADIUS server crashes with a segmentation fault.
### Log output from the FreeRADIUS daemon
```shell
freeradius -Xxxxxx
Tue Nov 22 21:31:19 2022 : Debug: Server was built with:
Tue Nov 22 21:31:19 2022 : Debug: accounting : yes
Tue Nov 22 21:31:19 2022 : Debug: authentication : yes
Tue Nov 22 21:31:19 2022 : Debug: ascend-binary-attributes : yes
Tue Nov 22 21:31:19 2022 : Debug: coa : yes
Tue Nov 22 21:31:19 2022 : Debug: recv-coa-from-home-server : yes
Tue Nov 22 21:31:19 2022 : Debug: control-socket : yes
Tue Nov 22 21:31:19 2022 : Debug: detail : yes
Tue Nov 22 21:31:19 2022 : Debug: dhcp : yes
Tue Nov 22 21:31:19 2022 : Debug: dynamic-clients : yes
Tue Nov 22 21:31:19 2022 : Debug: osfc2 : no
Tue Nov 22 21:31:19 2022 : Debug: proxy : yes
Tue Nov 22 21:31:19 2022 : Debug: regex-pcre : yes
Tue Nov 22 21:31:19 2022 : Debug: regex-posix : no
Tue Nov 22 21:31:19 2022 : Debug: regex-posix-extended : no
Tue Nov 22 21:31:19 2022 : Debug: session-management : yes
Tue Nov 22 21:31:19 2022 : Debug: stats : yes
Tue Nov 22 21:31:19 2022 : Debug: systemd : no
Tue Nov 22 21:31:19 2022 : Debug: tcp : yes
Tue Nov 22 21:31:19 2022 : Debug: threads : yes
Tue Nov 22 21:31:19 2022 : Debug: tls : yes
Tue Nov 22 21:31:19 2022 : Debug: unlang : yes
Tue Nov 22 21:31:19 2022 : Debug: vmps : yes
Tue Nov 22 21:31:19 2022 : Debug: developer : no
Tue Nov 22 21:31:19 2022 : Debug: Server core libs:
Tue Nov 22 21:31:19 2022 : Debug: freeradius-server : 3.2.2
Tue Nov 22 21:31:19 2022 : Debug: talloc : 2.3.*
Tue Nov 22 21:31:19 2022 : Debug: ssl : 3.0.0g dev
Tue Nov 22 21:31:19 2022 : Debug: pcre : 8.45 2021-06-15
Tue Nov 22 21:31:19 2022 : Debug: Endianness:
Tue Nov 22 21:31:19 2022 : Debug: little
Tue Nov 22 21:31:19 2022 : Debug: Compilation flags:
Tue Nov 22 21:31:19 2022 : Debug: cppflags : -pipe -O3 -Wall -march=skylake -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector -mno-stackrealign -flto=4 -I/build/orionos/s64/include -I/build/orionos/s64/usr/include
Tue Nov 22 21:31:19 2022 : Debug: cflags : -I. -Isrc -include src/freeradius-devel/autoconf.h -include src/freeradius-devel/build.h -include src/freeradius-devel/features.h -include src/freeradius-devel/radpaths.h -fno-strict-aliasing -Wno-date-time -pipe -O3 -Wall -march=skylake -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector -mno-stackrealign -flto=4 -I/build/orionos/s64/include -I/build/orionos/s64/usr/include -Wall -std=c99 -D_GNU_SOURCE -D_REENTRANT -D_POSIX_PTHREAD_SEMANTICS -DOPENSSL_NO_KRB5 -DNDEBUG -DIS_MODULE=1
Tue Nov 22 21:31:19 2022 : Debug: ldflags : -L/build/orionos/s64/lib -L/build/orionos/s64/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: libs : -lcrypto -lssl -ltalloc -latomic -lpcre -lcap -lresolv -ldl -lpthread -lz -lmariadb -lp11-kit -lhogweed -lgmp -lidn -lidn2 -lffi -lgnutls -lnettle -lmysqlclient -lreadline -lssl -lcrypto -lncurses -ltinfo -lltdl -liconv -lexpat -lpcre -lbz2 -llzma -lxml2 -lnl-3 -lnl-genl-3 -lnl-route-3 -ltinfo -ltirpc -lssl -lcrypto -lboost_regex -lboost_serialization -lboost_wserialization -lboost_system -lcap -lpcap -lreadline
Tue Nov 22 21:31:19 2022 : Debug:
Tue Nov 22 21:31:19 2022 : Info: FreeRADIUS Version 3.2.2
Tue Nov 22 21:31:19 2022 : Info: Copyright (C) 1999-2022 The FreeRADIUS server project and contributors
Tue Nov 22 21:31:19 2022 : Info: There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Tue Nov 22 21:31:19 2022 : Info: PARTICULAR PURPOSE
Tue Nov 22 21:31:19 2022 : Info: You may redistribute copies of FreeRADIUS under the terms of the
Tue Nov 22 21:31:19 2022 : Info: GNU General Public License
Tue Nov 22 21:31:19 2022 : Info: For more information about these matters, see the file named COPYRIGHT
Tue Nov 22 21:31:19 2022 : Info: Starting - reading configuration files ...
Tue Nov 22 21:31:19 2022 : Debug: including dictionary file /etc/freeradius/dictionary
Tue Nov 22 21:31:19 2022 : Debug: including dictionary file /etc/freeradius/dictionary
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/proxy.conf
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/clients.conf
Tue Nov 22 21:31:19 2022 : Debug: including files in directory /etc/freeradius/modules/
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: including files in directory /etc/freeradius/sites-enabled/
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/sites-enabled/default
Tue Nov 22 21:31:19 2022 : Debug: main {
Tue Nov 22 21:31:19 2022 : Debug: security {
Tue Nov 22 21:31:19 2022 : Debug: user = "freerad"
Tue Nov 22 21:31:19 2022 : Debug: group = "freerad"
Tue Nov 22 21:31:19 2022 : Debug: allow_core_dumps = no
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[38]: The item 'max_attributes' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[39]: The item 'reject_delay' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[40]: The item 'status_server' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[41]: The item 'allow_vulnerable_openssl' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: name = "radiusd"
Tue Nov 22 21:31:19 2022 : Debug: prefix = "/usr"
Tue Nov 22 21:31:19 2022 : Debug: localstatedir = "/var"
Tue Nov 22 21:31:19 2022 : Debug: logdir = "/var/log/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: run_dir = "/var/run/freeradius"
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[2]: The item 'ignore_case' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[4]: The item 'sysconfdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[9]: The item 'log_file' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[10]: The item 'log_destination' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[12]: The item 'confdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[14]: The item 'libdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[15]: The item 'pidfile' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[16]: The item 'max_request_time' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[17]: The item 'cleanup_delay' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[18]: The item 'max_requests' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[29]: The item 'hostname_lookups' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[30]: The item 'regular_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[31]: The item 'extended_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[32]: The item 'checkrad' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[44]: The item 'proxy_requests' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: main {
Tue Nov 22 21:31:19 2022 : Debug: name = "radiusd"
Tue Nov 22 21:31:19 2022 : Debug: prefix = "/usr"
Tue Nov 22 21:31:19 2022 : Debug: localstatedir = "/var"
Tue Nov 22 21:31:19 2022 : Debug: sbindir = "/usr/sbin"
Tue Nov 22 21:31:19 2022 : Debug: logdir = "/var/log/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: run_dir = "/var/run/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: libdir = "/usr/lib/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: radacctdir = "/var/log/freeradius/radacct"
Tue Nov 22 21:31:19 2022 : Debug: hostname_lookups = no
Tue Nov 22 21:31:19 2022 : Debug: max_request_time = 30
Tue Nov 22 21:31:19 2022 : Debug: cleanup_delay = 5
Tue Nov 22 21:31:19 2022 : Debug: max_requests = 1024
Tue Nov 22 21:31:19 2022 : Debug: postauth_client_lost = no
Tue Nov 22 21:31:19 2022 : Debug: pidfile = "/var/run/freeradius/freeradius.pid"
Tue Nov 22 21:31:19 2022 : Debug: checkrad = "/usr/sbin/checkrad"
Tue Nov 22 21:31:19 2022 : Debug: debug_level = 0
Tue Nov 22 21:31:19 2022 : Debug: proxy_requests = yes
Tue Nov 22 21:31:19 2022 : Debug: log {
Tue Nov 22 21:31:19 2022 : Debug: stripped_names = no
Tue Nov 22 21:31:19 2022 : Debug: auth = no
Tue Nov 22 21:31:19 2022 : Debug: auth_badpass = no
Tue Nov 22 21:31:19 2022 : Debug: auth_goodpass = no
Tue Nov 22 21:31:19 2022 : Debug: msg_denied = "You are already logged in - access denied"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: resources {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: security {
Tue Nov 22 21:31:19 2022 : Debug: max_attributes = 200
Tue Nov 22 21:31:19 2022 : Debug: reject_delay = 1.000000
Tue Nov 22 21:31:19 2022 : Debug: status_server = no
Tue Nov 22 21:31:19 2022 : Debug: allow_vulnerable_openssl = "no"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[2]: The item 'ignore_case' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[4]: The item 'sysconfdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[9]: The item 'log_file' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[10]: The item 'log_destination' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[12]: The item 'confdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[30]: The item 'regular_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[31]: The item 'extended_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Realms and Home Servers ####
Tue Nov 22 21:31:19 2022 : Debug: proxy server {
Tue Nov 22 21:31:19 2022 : Debug: retry_delay = 5
Tue Nov 22 21:31:19 2022 : Debug: retry_count = 3
Tue Nov 22 21:31:19 2022 : Debug: default_fallback = yes
Tue Nov 22 21:31:19 2022 : Debug: dead_time = 120
Tue Nov 22 21:31:19 2022 : Debug: wake_all_if_all_dead = no
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/proxy.conf[2]: The item 'synchronous' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/proxy.conf[7]: The item 'post_proxy_authorize' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: home_server home1.example.com {
Tue Nov 22 21:31:19 2022 : Debug: nonblock = no
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = 139.5.241.16
Tue Nov 22 21:31:19 2022 : Debug: port = 1813
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: secret = "secret"
Tue Nov 22 21:31:19 2022 : Debug: response_window = 20.000000
Tue Nov 22 21:31:19 2022 : Debug: response_timeouts = 1
Tue Nov 22 21:31:19 2022 : Debug: max_outstanding = 65536
Tue Nov 22 21:31:19 2022 : Debug: zombie_period = 40
Tue Nov 22 21:31:19 2022 : Debug: status_check = "request"
Tue Nov 22 21:31:19 2022 : Debug: ping_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: check_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: check_timeout = 4
Tue Nov 22 21:31:19 2022 : Debug: num_answers_to_alive = 3
Tue Nov 22 21:31:19 2022 : Debug: revive_interval = 120
Tue Nov 22 21:31:19 2022 : Debug: username = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: limit {
Tue Nov 22 21:31:19 2022 : Debug: max_connections = 16
Tue Nov 22 21:31:19 2022 : Debug: max_requests = 0
Tue Nov 22 21:31:19 2022 : Debug: lifetime = 0
Tue Nov 22 21:31:19 2022 : Debug: idle_timeout = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: coa {
Tue Nov 22 21:31:19 2022 : Debug: irt = 2
Tue Nov 22 21:31:19 2022 : Debug: mrt = 16
Tue Nov 22 21:31:19 2022 : Debug: mrc = 5
Tue Nov 22 21:31:19 2022 : Debug: mrd = 30
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: recv_coa {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: realm LOCAL {
Tue Nov 22 21:31:19 2022 : Debug: authhost = LOCAL
Tue Nov 22 21:31:19 2022 : Debug: accthost = LOCAL
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: home_server_pool acct_pool.example.com {
Tue Nov 22 21:31:19 2022 : Debug: type = fail-over
Tue Nov 22 21:31:19 2022 : Debug: virtual_server = home.example.com
Tue Nov 22 21:31:19 2022 : Debug: home_server = home1.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: realm acct_realm.example.com {
Tue Nov 22 21:31:19 2022 : Debug: acct_pool = acct_pool.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Clients ####
Tue Nov 22 21:31:19 2022 : Debug: client 127.0.0.1 {
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: require_message_authenticator = no
Tue Nov 22 21:31:19 2022 : Debug: secret = "secret"
Tue Nov 22 21:31:19 2022 : Debug: limit {
Tue Nov 22 21:31:19 2022 : Debug: max_connections = 16
Tue Nov 22 21:31:19 2022 : Debug: lifetime = 0
Tue Nov 22 21:31:19 2022 : Debug: idle_timeout = 30
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Adding client 10.10.10.1/32 (10.10.10.1) to prefix tree 32
Tue Nov 22 21:31:19 2022 : Info: Debugger not attached
Tue Nov 22 21:31:19 2022 : Debug: # Creating Post-Proxy-Type = Fail-Accounting
Tue Nov 22 21:31:19 2022 : Debug: # Creating Post-Proxy-Type = Fail-Authentication
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Instantiating modules ####
Tue Nov 22 21:31:19 2022 : Debug: modules {
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_detail with path: /usr/lib/freeradius/rlm_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_detail, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_detail
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail detail.example.com {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-%Y%m%d:%H:%G"
Tue Nov 22 21:31:19 2022 : Debug: header = "%t"
Tue Nov 22 21:31:19 2022 : Debug: permissions = 384
Tue Nov 22 21:31:19 2022 : Debug: locking = no
Tue Nov 22 21:31:19 2022 : Debug: escape_filenames = no
Tue Nov 22 21:31:19 2022 : Debug: log_packet_header = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: detail {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/detail-%Y%m%d"
Tue Nov 22 21:31:19 2022 : Debug: header = "%t"
Tue Nov 22 21:31:19 2022 : Debug: permissions = 384
Tue Nov 22 21:31:19 2022 : Debug: locking = no
Tue Nov 22 21:31:19 2022 : Debug: escape_filenames = no
Tue Nov 22 21:31:19 2022 : Debug: log_packet_header = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_preprocess with path: /usr/lib/freeradius/rlm_preprocess.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_preprocess, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_preprocess
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: preprocess {
Tue Nov 22 21:31:19 2022 : Debug: huntgroups = "/etc/freeradius/huntgroups"
Tue Nov 22 21:31:19 2022 : Debug: hints = "/etc/freeradius/hints"
Tue Nov 22 21:31:19 2022 : Debug: with_ascend_hack = no
Tue Nov 22 21:31:19 2022 : Debug: ascend_channels_per_line = 23
Tue Nov 22 21:31:19 2022 : Debug: with_ntdomain_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_specialix_jetstream_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_cisco_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_alvarion_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_pap with path: /usr/lib/freeradius/rlm_pap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_pap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_pap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: pap {
Tue Nov 22 21:31:19 2022 : Debug: normalise = yes
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/pap[21]: The item 'auto_header' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_chap with path: /usr/lib/freeradius/rlm_chap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_chap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_chap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "chap" from file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_exec with path: /usr/lib/freeradius/rlm_exec.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_exec, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_exec
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "exec" from file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: exec {
Tue Nov 22 21:31:19 2022 : Debug: wait = no
Tue Nov 22 21:31:19 2022 : Debug: input_pairs = "request"
Tue Nov 22 21:31:19 2022 : Debug: shell_escape = yes
Tue Nov 22 21:31:19 2022 : Debug: timeout = 10
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/exec[28]: The item 'output' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_expr with path: /usr/lib/freeradius/rlm_expr.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_expr, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_expr
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "expr" from file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: expr {
Tue Nov 22 21:31:19 2022 : Debug: safe_characters = "@abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_: /"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: instantiate {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/huntgroups
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/hints
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: } # modules
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Virtual Servers ####
Tue Nov 22 21:31:19 2022 : Debug: server { # from file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[7]: The authenticate section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authenticate {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type PAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type CHAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # authenticate
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[1]: The authorize section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authorize {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: } # authorize
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[18]: The preacct section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: preacct {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: } # preacct
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[23]: The accounting section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server
Tue Nov 22 21:31:19 2022 : Debug: server acct_detail.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server acct_detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: server home.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: update {
Tue Nov 22 21:31:19 2022 : Debug: &control:Proxy-To-Realm := 'acct_realm.example.com'
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: post-proxy {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Accounting for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Authentication for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # post-proxy
Tue Nov 22 21:31:19 2022 : Debug: } # server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: Created signal pipe. Read end FD 5, write end FD 6
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Opening IP addresses and Ports ####
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct with path: /usr/lib/freeradius/proto_acct.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct failed: /usr/lib/freeradius/proto_acct.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_acct.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = *
Tue Nov 22 21:31:19 2022 : Debug: port = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail with path: /usr/lib/freeradius/proto_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail failed: /usr/lib/freeradius/proto_detail.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_detail.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "detail"
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-*:*"
Tue Nov 22 21:31:19 2022 : Debug: load_factor = 10
Tue Nov 22 21:31:19 2022 : Debug: poll_interval = 1
Tue Nov 22 21:31:19 2022 : Debug: retry_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: one_shot = no
Tue Nov 22 21:31:19 2022 : Debug: track = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Listening on acct address * port 1813
Tue Nov 22 21:31:19 2022 : Debug: Listening on detail file /var/log/freeradius/radacct/detail.example.com/detail-*:* as server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - User-Name = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Identifier = "demo-bng"
Tue Nov 22 21:31:19 2022 : Info: Ready to process requests
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-IP-Address = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Id = "vlan100"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Type = Virtual
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Service-Type = Framed-User
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-Protocol = PPP
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Calling-Station-Id = "f6:cb:f6:40:1f:a0"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Called-Station-Id = "1a:27:b5:27:bb:53"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Class = 0x6375693d746573745f6267627261732c73753d3934323033392c736b3d6969786c6c7a776f3333386f2c626b69643d32323534303038392c626b7269643d332c6d763d3532393135323231333836302c6d743d3630353235392c676d743d3630343830302c73703d312c73723d2c61633d312c75693d3934323033392c6363693d3138382c63633d47656e6572616c2c61733d312c636f3d2c64623d3130323430302c75623d3130323430302c63626b69643d3232353430303930
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Status-Type = Start
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Authentic = RADIUS
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Id = "1628808cd06d3273"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Time = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-IP-Address = 10.0.0.5
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Event-Timestamp = "Nov 22 2022 21:21:45 IST"
Segmentation fault
```
### Relevant log output from client utilities
A Windows PPPoE client tries to connect.
### Backtrace from LLDB or GDB
```shell
#0 0x00007ffff6bb1230 in open64 () from /lib64/libc.so.6
No symbol table info available.
#1 0x0000000000449243 in detail_poll ()
No symbol table info available.
#2 0x000000000044a278 in detail_handler_thread ()
No symbol table info available.
#3 0x00007ffff6b45026 in ?? () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007ffff6bc0d60 in clone () from /lib64/libc.so.6
No symbol table info available.
```
|
defect
|
freeradius freeradius conf the item confdir is defined but is unused by the configuration tue nov warning etc freeradius freeradius conf the item regular expressions is defined but is unused by the configuration tue nov warning etc freeradius freeradius conf the item extended expressions is defined but is unused by the configuration tue nov debug tue nov debug freeradius loading realms and home servers tue nov debug proxy server tue nov debug retry delay tue nov debug retry count tue nov debug default fallback yes tue nov debug dead time tue nov debug wake all if all dead no tue nov warning etc freeradius proxy conf the item synchronous is defined but is unused by the configuration tue nov warning etc freeradius proxy conf the item post proxy authorize is defined but is unused by the configuration tue nov debug tue nov debug home server example com tue nov debug nonblock no tue nov debug ipaddr tue nov debug port tue nov debug type acct tue nov debug secret secret tue nov debug response window tue nov debug response timeouts tue nov debug max outstanding tue nov debug zombie period tue nov debug status check request tue nov debug ping interval tue nov debug check interval tue nov debug check timeout tue nov debug num answers to alive tue nov debug revive interval tue nov debug username test bgbras tue nov debug limit tue nov debug max connections tue nov debug max requests tue nov debug lifetime tue nov debug idle timeout tue nov debug tue nov debug coa tue nov debug irt tue nov debug mrt tue nov debug mrc tue nov debug mrd tue nov debug tue nov debug recv coa tue nov debug tue nov debug tue nov debug realm local tue nov debug authhost local tue nov debug accthost local tue nov debug tue nov debug home server pool acct pool example com tue nov debug type fail over tue nov debug virtual server home example com tue nov debug home server example com tue nov debug tue nov debug realm acct realm example com tue nov debug acct pool acct pool example com tue nov debug tue 
nov debug freeradius loading clients tue nov debug client tue nov debug ipaddr tue nov debug require message authenticator no tue nov debug secret secret tue nov debug limit tue nov debug max connections tue nov debug lifetime tue nov debug idle timeout tue nov debug tue nov debug tue nov debug adding client to prefix tree tue nov info debugger not attached tue nov debug creating post proxy type fail accounting tue nov debug creating post proxy type fail authentication tue nov debug freeradius instantiating modules tue nov debug modules tue nov debug loading rlm detail with path usr lib freeradius rlm detail so tue nov debug loaded rlm detail checking if it s valid tue nov debug loaded module rlm detail tue nov debug loading module detail example com from file etc freeradius modules detail example com tue nov debug detail detail example com tue nov debug filename var log freeradius radacct detail example com detail y m d h g tue nov debug header t tue nov debug permissions tue nov debug locking no tue nov debug escape filenames no tue nov debug log packet header no tue nov debug tue nov debug loading module detail from file etc freeradius modules detail tue nov debug detail tue nov debug filename var log freeradius radacct packet src ip address packet src address detail y m d tue nov debug header t tue nov debug permissions tue nov debug locking no tue nov debug escape filenames no tue nov debug log packet header no tue nov debug tue nov debug loading rlm preprocess with path usr lib freeradius rlm preprocess so tue nov debug loaded rlm preprocess checking if it s valid tue nov debug loaded module rlm preprocess tue nov debug loading module preprocess from file etc freeradius modules preprocess tue nov debug preprocess tue nov debug huntgroups etc freeradius huntgroups tue nov debug hints etc freeradius hints tue nov debug with ascend hack no tue nov debug ascend channels per line tue nov debug with ntdomain hack no tue nov debug with specialix jetstream hack no 
tue nov debug with cisco vsa hack no tue nov debug with alvarion vsa hack no tue nov debug tue nov debug loading rlm pap with path usr lib freeradius rlm pap so tue nov debug loaded rlm pap checking if it s valid tue nov debug loaded module rlm pap tue nov debug loading module pap from file etc freeradius modules pap tue nov debug pap tue nov debug normalise yes tue nov warning etc freeradius modules pap the item auto header is defined but is unused by the configuration tue nov debug tue nov debug loading rlm chap with path usr lib freeradius rlm chap so tue nov debug loaded rlm chap checking if it s valid tue nov debug loaded module rlm chap tue nov debug loading module chap from file etc freeradius modules chap tue nov debug loading rlm exec with path usr lib freeradius rlm exec so tue nov debug loaded rlm exec checking if it s valid tue nov debug loaded module rlm exec tue nov debug loading module exec from file etc freeradius modules exec tue nov debug exec tue nov debug wait no tue nov debug input pairs request tue nov debug shell escape yes tue nov debug timeout tue nov warning etc freeradius modules exec the item output is defined but is unused by the configuration tue nov debug tue nov debug loading rlm expr with path usr lib freeradius rlm expr so tue nov debug loaded rlm expr checking if it s valid tue nov debug loaded module rlm expr tue nov debug loading module expr from file etc freeradius modules expr tue nov debug expr tue nov debug safe characters tue nov debug tue nov debug instantiate tue nov debug tue nov debug instantiating module detail example com from file etc freeradius modules detail example com tue nov debug instantiating module detail from file etc freeradius modules detail tue nov debug instantiating module preprocess from file etc freeradius modules preprocess tue nov debug reading pairlist file etc freeradius huntgroups tue nov debug reading pairlist file etc freeradius hints tue nov debug instantiating module pap from file etc 
freeradius modules pap tue nov debug modules tue nov debug freeradius loading virtual servers tue nov debug server from file etc freeradius freeradius conf tue nov error etc freeradius sites enabled default the authenticate section should be inside of a server block tue nov debug authenticate tue nov debug compiling auth type pap for attr auth type tue nov debug group tue nov debug pap tue nov debug tue nov debug compiling auth type chap for attr auth type tue nov debug group tue nov debug chap tue nov debug tue nov debug authenticate tue nov error etc freeradius sites enabled default the authorize section should be inside of a server block tue nov debug authorize tue nov debug preprocess tue nov debug chap tue nov debug pap tue nov debug authorize tue nov error etc freeradius sites enabled default the preacct section should be inside of a server block tue nov debug preacct tue nov debug preprocess tue nov debug preacct tue nov error etc freeradius sites enabled default the accounting section should be inside of a server block tue nov debug accounting tue nov debug detail example com tue nov debug accounting tue nov debug server tue nov debug server acct detail example com from file etc freeradius sites enabled robust proxy accounting tue nov debug accounting tue nov debug detail example com tue nov debug accounting tue nov debug server acct detail example com tue nov debug server home example com from file etc freeradius sites enabled robust proxy accounting tue nov debug accounting tue nov debug update tue nov debug control proxy to realm acct realm example com tue nov debug tue nov debug accounting tue nov debug post proxy tue nov debug compiling post proxy type fail accounting for attr post proxy type tue nov debug group tue nov debug detail example com tue nov debug tue nov debug compiling post proxy type fail authentication for attr post proxy type tue nov debug group tue nov debug tue nov debug post proxy tue nov debug server home example com tue nov debug 
created signal pipe read end fd write end fd tue nov debug freeradius opening ip addresses and ports tue nov debug loading proto acct with path usr lib freeradius proto acct so tue nov debug loading proto acct failed usr lib freeradius proto acct so cannot open shared object file no such file or directory no such file or directory tue nov debug loading library using linker search path s tue nov debug defaults lib usr lib tue nov debug failed with error proto acct so cannot open shared object file no such file or directory tue nov debug listen tue nov debug type acct tue nov debug ipaddr tue nov debug port tue nov debug tue nov debug loading proto detail with path usr lib freeradius proto detail so tue nov debug loading proto detail failed usr lib freeradius proto detail so cannot open shared object file no such file or directory no such file or directory tue nov debug loading library using linker search path s tue nov debug defaults lib usr lib tue nov debug failed with error proto detail so cannot open shared object file no such file or directory tue nov debug listen tue nov debug type detail tue nov debug listen tue nov debug filename var log freeradius radacct detail example com detail tue nov debug load factor tue nov debug poll interval tue nov debug retry interval tue nov debug one shot no tue nov debug track no tue nov debug tue nov debug tue nov debug listening on acct address port tue nov debug listening on detail file var log freeradius radacct detail example com detail as server home example com tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line user name test bgbras tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line nas identifier demo bng tue nov info ready to process requests tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line nas ip address tue nov debug detail var log freeradius radacct detail 
example com detail trying to read vp from line nas port tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line nas port id tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line nas port type virtual tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line service type framed user tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line framed protocol ppp tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line calling station id cb tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line called station id bb tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line class tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct status type start tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct authentic radius tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct session id tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct session time tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct input octets tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct output octets tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct input packets tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct output packets tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line acct input gigawords tue nov 
debug detail var log freeradius radacct detail example com detail trying to read vp from line acct output gigawords tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line framed ip address tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line event timestamp nov ist segmentation fault relevant log output from client utilities windows pppoe client try to connect backtrace from lldb or gdb shell in from libc so no symbol table info available in detail poll no symbol table info available in detail handler thread no symbol table info available in from libc so no symbol table info available in clone from libc so no symbol table info available
| 1
|
193,784
| 6,888,106,879
|
IssuesEvent
|
2017-11-22 03:32:55
|
rnleach/sonde
|
https://api.github.com/repos/rnleach/sonde
|
closed
|
Use tags instead of CSS for text view styling.
|
bug High Priority
|
CSS is not consistent across platforms/versions of GTK+, it doesn't work on windows.
|
1.0
|
Use tags instead of CSS for text view styling. - CSS is not consistent across platforms/versions of GTK+, it doesn't work on windows.
|
non_defect
|
use tags instead of css for text view styling css is not consistent across platforms versions of gtk it doesn t work on windows
| 0
|
297,086
| 9,160,363,055
|
IssuesEvent
|
2019-03-01 07:06:34
|
projectacrn/acrn-hypervisor
|
https://api.github.com/repos/projectacrn/acrn-hypervisor
|
closed
|
AaaG kernel watchdog reset hit during stability testing- backtrace: io_schedule+0x16/0x40(apl_sdc_stable)
|
priority: P2-High status: implemented type: bug
|
We AaaG watchdog reset hit during stress AaaG warm reset or AaaG create/destroy:
<3>[ 246.925567] INFO: task init:1 blocked for more than 120 seconds.
<3>[ 246.932987] Tainted: G U W 4.19.8-quilt-2e5dc0ac-00023-g0bbd9b5f57cc #1
<3>[ 246.945872] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6>[ 246.953581] init D 0 1 0 0x00000000
<4>[ 246.953587] Call Trace:
<4>[ 246.953616] __schedule+0x2a1/0x890
<4>[ 246.953621] ? bit_wait+0x60/0x60
<4>[ 246.953623] schedule+0x36/0x90
<4>[ 246.953642] io_schedule+0x16/0x40
<4>[ 246.953645] bit_wait_io+0x11/0x60
<4>[ 246.953647] __wait_on_bit+0x4c/0x90
<4>[ 246.953650] out_of_line_wait_on_bit+0x90/0xb0
<4>[ 246.953653] ? init_wait_var_entry+0x50/0x50
<4>[ 246.953658] __wait_on_buffer+0x40/0x50
<4>[ 246.953661] __ext4_get_inode_loc+0x1b5/0x430
<4>[ 246.953664] ext4_iget+0x92/0xb90
<4>[ 246.953667] ext4_iget_normal+0x2f/0x40
Not sure whether it's related with storage emulation, so create ACRN bug to track it.
|
1.0
|
AaaG kernel watchdog reset hit during stability testing- backtrace: io_schedule+0x16/0x40(apl_sdc_stable) - We AaaG watchdog reset hit during stress AaaG warm reset or AaaG create/destroy:
<3>[ 246.925567] INFO: task init:1 blocked for more than 120 seconds.
<3>[ 246.932987] Tainted: G U W 4.19.8-quilt-2e5dc0ac-00023-g0bbd9b5f57cc #1
<3>[ 246.945872] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<6>[ 246.953581] init D 0 1 0 0x00000000
<4>[ 246.953587] Call Trace:
<4>[ 246.953616] __schedule+0x2a1/0x890
<4>[ 246.953621] ? bit_wait+0x60/0x60
<4>[ 246.953623] schedule+0x36/0x90
<4>[ 246.953642] io_schedule+0x16/0x40
<4>[ 246.953645] bit_wait_io+0x11/0x60
<4>[ 246.953647] __wait_on_bit+0x4c/0x90
<4>[ 246.953650] out_of_line_wait_on_bit+0x90/0xb0
<4>[ 246.953653] ? init_wait_var_entry+0x50/0x50
<4>[ 246.953658] __wait_on_buffer+0x40/0x50
<4>[ 246.953661] __ext4_get_inode_loc+0x1b5/0x430
<4>[ 246.953664] ext4_iget+0x92/0xb90
<4>[ 246.953667] ext4_iget_normal+0x2f/0x40
Not sure whether it's related with storage emulation, so create ACRN bug to track it.
|
non_defect
|
aaag kernel watchdog reset hit during stability testing backtrace io schedule apl sdc stable we aaag watchdog reset hit during stress aaag warm reset or aaag create destroy info task init blocked for more than seconds tainted g u w quilt echo proc sys kernel hung task timeout secs disables this message init d call trace schedule bit wait schedule io schedule bit wait io wait on bit out of line wait on bit init wait var entry wait on buffer get inode loc iget iget normal not sure whether it s related with storage emulation so create acrn bug to track it
| 0
|
6,440
| 2,846,762,101
|
IssuesEvent
|
2015-05-29 13:33:38
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
closed
|
web app capable meta data
|
Module - Publications Status - Test Topic - Frontend Type - New Feature
|
The main mobile browsers have implemented (or they are going to) a lot of feature for web apps development.
For example:
- Chrome on Android: now, you can add a web site to the home screen. Through an app manifest, you can specify the size of the app (fullscreen | with address bar | medium), the app icons, the start page and you can use application cache, localStorage, FileSystem and all Chrome features.
- Safari on iOS: you can add your sites to the home, using custom icons and splashscreens. It supports application cache and localStorage.
- IE 11 on Windows 8: you can "pin" the site in your home, using custom "tiles" (tiny | square | wide | large) and a custom feed for notifications (!!). It supports application cache and localStorage.
What about to add a section in the BEdita areas module in order to handle all this metadata/links?
@stefanorosanelli @batopa @didoda @qwerg @xho
|
1.0
|
web app capable meta data - The main mobile browsers have implemented (or they are going to) a lot of feature for web apps development.
For example:
- Chrome on Android: now, you can add a web site to the home screen. Through an app manifest, you can specify the size of the app (fullscreen | with address bar | medium), the app icons, the start page and you can use application cache, localStorage, FileSystem and all Chrome features.
- Safari on iOS: you can add your sites to the home, using custom icons and splashscreens. It supports application cache and localStorage.
- IE 11 on Windows 8: you can "pin" the site in your home, using custom "tiles" (tiny | square | wide | large) and a custom feed for notifications (!!). It supports application cache and localStorage.
What about to add a section in the BEdita areas module in order to handle all this metadata/links?
@stefanorosanelli @batopa @didoda @qwerg @xho
|
non_defect
|
web app capable meta data the main mobile browsers have implemented or they are going to a lot of feature for web apps development for example chrome on android now you can add a web site to the home screen through an app manifest you can specify the size of the app fullscreen with address bar medium the app icons the start page and you can use application cache localstorage filesystem and all chrome features safari on ios you can add your sites to the home using custom icons and splashscreens it supports application cache and localstorage ie on windows you can pin the site in your home using custom tiles tiny square wide large and a custom feed for notifications it supports application cache and localstorage what about to add a section in the bedita areas module in order to handle all this metadata links stefanorosanelli batopa didoda qwerg xho
| 0
|
251,068
| 21,414,657,410
|
IssuesEvent
|
2022-04-22 09:40:59
|
antcamgil/Acme-Toolkits
|
https://api.github.com/repos/antcamgil/Acme-Toolkits
|
closed
|
Task-073/T: Show inventor's patronages
|
D03 testing
|
Test that system show their patronages, including the profile of the corresponding patron.
|
1.0
|
Task-073/T: Show inventor's patronages - Test that system show their patronages, including the profile of the corresponding patron.
|
non_defect
|
task t show inventor s patronages test that system show their patronages including the profile of the corresponding patron
| 0
|
170,000
| 26,889,408,568
|
IssuesEvent
|
2023-02-06 07:36:12
|
starplanter93/The_Garden_of_Musicsheet
|
https://api.github.com/repos/starplanter93/The_Garden_of_Musicsheet
|
closed
|
Feat: MainSongSection organism 작성
|
Feat Design
|
## Description
MainSongSection organism 작성
## Todo
- [x] MainSongSection organism 컴포넌트 구현
- [x] MainSongSection organism 스토리북 등록
## ETC
.
|
1.0
|
Feat: MainSongSection organism 작성 - ## Description
MainSongSection organism 작성
## Todo
- [x] MainSongSection organism 컴포넌트 구현
- [x] MainSongSection organism 스토리북 등록
## ETC
.
|
non_defect
|
feat mainsongsection organism 작성 description mainsongsection organism 작성 todo mainsongsection organism 컴포넌트 구현 mainsongsection organism 스토리북 등록 etc
| 0
|
4,211
| 2,610,089,312
|
IssuesEvent
|
2015-02-26 18:27:02
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
深圳痘痘的祛除
|
auto-migrated Priority-Medium Type-Defect
|
```
深圳痘痘的祛除【深圳韩方科颜全国热线400-869-1818,24小时QQ4
008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方��
�—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科�
��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康
祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治��
�粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘�
��
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:34
|
1.0
|
深圳痘痘的祛除 - ```
深圳痘痘的祛除【深圳韩方科颜全国热线400-869-1818,24小时QQ4
008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方��
�—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科�
��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康
祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治��
�粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘�
��
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:34
|
defect
|
深圳痘痘的祛除 深圳痘痘的祛除【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方�� �—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科� ��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康 祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治�� �粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘� �� original issue reported on code google com by szft com on may at
| 1
|
94,369
| 3,924,961,184
|
IssuesEvent
|
2016-04-22 17:05:28
|
byaka/flaskJSONRPCServer
|
https://api.github.com/repos/byaka/flaskJSONRPCServer
|
opened
|
Добавить поддержку long() в пропатченный json-backend
|
enhancement High-priority
|
Патчинг при помощи experimentalPackage подменяет json-backend на ujson.
К сожалению ujson не поддерживает long() и поддержка не предвидется (подробнее esnme/ultrajson#99).
Однако можно добавить поддержку через experimental.asyncJSON_dumps(), у него нет проблем с поддержкой типов. Если сделать отлов ошибок у ujson, можно сделать автоматическое переключение при неудаче.
|
1.0
|
Добавить поддержку long() в пропатченный json-backend - Патчинг при помощи experimentalPackage подменяет json-backend на ujson.
К сожалению ujson не поддерживает long() и поддержка не предвидется (подробнее esnme/ultrajson#99).
Однако можно добавить поддержку через experimental.asyncJSON_dumps(), у него нет проблем с поддержкой типов. Если сделать отлов ошибок у ujson, можно сделать автоматическое переключение при неудаче.
|
non_defect
|
добавить поддержку long в пропатченный json backend патчинг при помощи experimentalpackage подменяет json backend на ujson к сожалению ujson не поддерживает long и поддержка не предвидется подробнее esnme ultrajson однако можно добавить поддержку через experimental asyncjson dumps у него нет проблем с поддержкой типов если сделать отлов ошибок у ujson можно сделать автоматическое переключение при неудаче
| 0
|
45,344
| 12,733,191,999
|
IssuesEvent
|
2020-06-25 11:51:01
|
naev/naev
|
https://api.github.com/repos/naev/naev
|
closed
|
Adding assets with Unidiff seems to sometimes stop other (virtual) assets from working.
|
Priority-High Type-Defect
|
This is something currently witnessed in the "FLF_base" unidiff. The Sigur system by default has three assets: "Virtual Sindbad", "Virtual Empire Unpresence", and "Virtual Soromid Unpresence". (That said, this bug happened when it only had the latter two as well; it's actually been around for years, I checked pretty far back with git bisect.) These are all virtual assets: Virtual Sindbad increases FLF presence, and Virtual Empire Unpresence and Virtual Soromid Unpresence remove Empire and Soromid presence, respectively. This works as normal when the game is started.
However, when the FLF_base diff is applied, which adds the "Sindbad" asset (a station) to the Sigur system, the Virtual Empire Unpresence asset stops working, leading to Empire ships appearing in Sigur. The virtual assets still technically exist; when the "flf_dead" diff is applied on top of this, it removes both the Virtual Empire Unpresence and the Virtual Soromid Unpresence assets, and this doesn't cause any warnings.
Thus far I haven't witnessed this in any other system, but that would largely be because there aren't any other systems where this bug would cause noticeable results. The only other places where assets are added are in the "Fury_Station" diff and the "Thurion_found" diff; in both cases, a new asset is added into a generally empty system with no virtual assets and no important real assets.
|
1.0
|
Adding assets with Unidiff seems to sometimes stop other (virtual) assets from working. - This is something currently witnessed in the "FLF_base" unidiff. The Sigur system by default has three assets: "Virtual Sindbad", "Virtual Empire Unpresence", and "Virtual Soromid Unpresence". (That said, this bug happened when it only had the latter two as well; it's actually been around for years, I checked pretty far back with git bisect.) These are all virtual assets: Virtual Sindbad increases FLF presence, and Virtual Empire Unpresence and Virtual Soromid Unpresence remove Empire and Soromid presence, respectively. This works as normal when the game is started.
However, when the FLF_base diff is applied, which adds the "Sindbad" asset (a station) to the Sigur system, the Virtual Empire Unpresence asset stops working, leading to Empire ships appearing in Sigur. The virtual assets still technically exist; when the "flf_dead" diff is applied on top of this, it removes both the Virtual Empire Unpresence and the Virtual Soromid Unpresence assets, and this doesn't cause any warnings.
Thus far I haven't witnessed this in any other system, but that would largely be because there aren't any other systems where this bug would cause noticeable results. The only other places where assets are added are in the "Fury_Station" diff and the "Thurion_found" diff; in both cases, a new asset is added into a generally empty system with no virtual assets and no important real assets.
|
defect
|
adding assets with unidiff seems to sometimes stop other virtual assets from working this is something currently witnessed in the flf base unidiff the sigur system by default has three assets virtual sindbad virtual empire unpresence and virtual soromid unpresence that said this bug happened when it only had the latter two as well it s actually been around for years i checked pretty far back with git bisect these are all virtual assets virtual sindbad increases flf presence and virtual empire unpresence and virtual soromid unpresence remove empire and soromid presence respectively this works as normal when the game is started however when the flf base diff is applied which adds the sindbad asset a station to the sigur system the virtual empire unpresence asset stops working leading to empire ships appearing in sigur the virtual assets still technically exist when the flf dead diff is applied on top of this it removes both the virtual empire unpresence and the virtual soromid unpresence assets and this doesn t cause any warnings thus far i haven t witnessed this in any other system but that would largely be because there aren t any other systems where this bug would cause noticeable results the only other places where assets are added are in the fury station diff and the thurion found diff in both cases a new asset is added into a generally empty system with no virtual assets and no important real assets
| 1
|
187,368
| 14,427,589,693
|
IssuesEvent
|
2020-12-06 05:00:40
|
kalexmills/github-vet-tests-dec2020
|
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
|
closed
|
oracle/oci-volume-provisioner: vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go; 3 LoC
|
fresh test tiny vendored
|
Found a possible issue in [oracle/oci-volume-provisioner](https://www.github.com/oracle/oci-volume-provisioner) at [vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go](https://github.com/oracle/oci-volume-provisioner/blob/43d0de110dcb59c5e1a71adb2958063344517c79/vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go#L70-L72)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to e at line 71 may start a goroutine
[Click here to see the code in its original context.](https://github.com/oracle/oci-volume-provisioner/blob/43d0de110dcb59c5e1a71adb2958063344517c79/vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go#L70-L72)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, e := range events {
enc.Encode(&e)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 43d0de110dcb59c5e1a71adb2958063344517c79
|
1.0
|
oracle/oci-volume-provisioner: vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go; 3 LoC -
Found a possible issue in [oracle/oci-volume-provisioner](https://www.github.com/oracle/oci-volume-provisioner) at [vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go](https://github.com/oracle/oci-volume-provisioner/blob/43d0de110dcb59c5e1a71adb2958063344517c79/vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go#L70-L72)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to e at line 71 may start a goroutine
[Click here to see the code in its original context.](https://github.com/oracle/oci-volume-provisioner/blob/43d0de110dcb59c5e1a71adb2958063344517c79/vendor/k8s.io/kubernetes/pkg/kubectl/genericclioptions/resource/builder_test.go#L70-L72)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, e := range events {
enc.Encode(&e)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 43d0de110dcb59c5e1a71adb2958063344517c79
|
non_defect
|
oracle oci volume provisioner vendor io kubernetes pkg kubectl genericclioptions resource builder test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to e at line may start a goroutine click here to show the line s of go which triggered the analyzer go for e range events enc encode e leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
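The analyzer finding above concerns Go's classic range-variable capture: `&e` refers to a single loop variable that is reused on every iteration, so a goroutine or retained pointer sees whichever value it holds later, not the value at that iteration. Python's late-binding closures exhibit the same pitfall; a minimal sketch (the function names here are illustrative, not from the record):

```python
# Each lambda closes over the *variable* `item`, not its value at that
# iteration -- the same reuse that makes `&e` risky in the Go snippet.
def make_getters_buggy(items):
    return [lambda: item for item in items]

def make_getters_fixed(items):
    # Binding a default argument snapshots the value per iteration.
    return [lambda item=item: item for item in items]

print([g() for g in make_getters_buggy([1, 2, 3])])  # [3, 3, 3]
print([g() for g in make_getters_fixed([1, 2, 3])])  # [1, 2, 3]
```

In the Go snippet itself, the conventional mitigation is a per-iteration copy (`e := e`) before taking the address, which is presumably why reviewers are asked to classify the instance as bug or mitigated.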
|
110,616
| 4,435,524,479
|
IssuesEvent
|
2016-08-18 08:54:26
|
architecture-building-systems/CEAforArcGIS
|
https://api.github.com/repos/architecture-building-systems/CEAforArcGIS
|
opened
|
Write Paper on comparing to BESTEST benchmark
|
Priority 1
|
Also create reference-case using these buildings.
|
1.0
|
Write Paper on comparing to BESTEST benchmark - Also create reference-case using these buildings.
|
non_defect
|
write paper on comparing to bestest benchmark also create reference case using these buildings
| 0
|
53,851
| 13,262,388,566
|
IssuesEvent
|
2020-08-20 21:41:49
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
[simulation] geant4 compiled in multi-threaded mode (Trac #2196)
|
Migrated from Trac combo simulation defect
|
@amedina noticed that on the forthcoming cvmfs version, py3-v4, geant4 was compiled in multi-threaded mode (which I think is the default now?). This breaks several parts of simulation that interface with geant4 (the error is related to thread-local storage).
@olivas has said he wants to fix our code to be MT compatible, so this ticket is to track that effort.
Here is the general page for porting old code to the MT model:
https://twiki.cern.ch/twiki/bin/view/Geant4/QuickMigrationGuideForGeant4V10
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2196">https://code.icecube.wisc.edu/projects/icecube/ticket/2196</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:23",
"_ts": "1550067323910946",
"description": "@amedina noticed that on the forthcoming cvmfs version, py3-v4, geant4 was compiled in multi-threaded mode (which I think is the default now?). This breaks several parts of simulation that interface with geant4 (the error is related to thread-local storage).\n\n@olivas has said he wants to fix our code to be MT compatible, so this ticket is to track that effort.\n\nHere is the general page for porting old code to the MT model:\nhttps://twiki.cern.ch/twiki/bin/view/Geant4/QuickMigrationGuideForGeant4V10",
"reporter": "david.schultz",
"cc": "amedina",
"resolution": "fixed",
"time": "2018-10-09T01:32:34",
"component": "combo simulation",
"summary": "[simulation] geant4 compiled in multi-threaded mode",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[simulation] geant4 compiled in multi-threaded mode (Trac #2196) - @amedina noticed that on the forthcoming cvmfs version, py3-v4, geant4 was compiled in multi-threaded mode (which I think is the default now?). This breaks several parts of simulation that interface with geant4 (the error is related to thread-local storage).
@olivas has said he wants to fix our code to be MT compatible, so this ticket is to track that effort.
Here is the general page for porting old code to the MT model:
https://twiki.cern.ch/twiki/bin/view/Geant4/QuickMigrationGuideForGeant4V10
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2196">https://code.icecube.wisc.edu/projects/icecube/ticket/2196</a>, reported by david.schultzand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:23",
"_ts": "1550067323910946",
"description": "@amedina noticed that on the forthcoming cvmfs version, py3-v4, geant4 was compiled in multi-threaded mode (which I think is the default now?). This breaks several parts of simulation that interface with geant4 (the error is related to thread-local storage).\n\n@olivas has said he wants to fix our code to be MT compatible, so this ticket is to track that effort.\n\nHere is the general page for porting old code to the MT model:\nhttps://twiki.cern.ch/twiki/bin/view/Geant4/QuickMigrationGuideForGeant4V10",
"reporter": "david.schultz",
"cc": "amedina",
"resolution": "fixed",
"time": "2018-10-09T01:32:34",
"component": "combo simulation",
"summary": "[simulation] geant4 compiled in multi-threaded mode",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
defect
|
compiled in multi threaded mode trac amedina noticed that on the forthcoming cvmfs version was compiled in multi threaded mode which i think is the default now this breaks several parts of simulation that interface with the error is related to thread local storage olivas has said he wants to fix our code to be mt compatible so this ticket is to track that effort here is the general page for porting old code to the mt model migrated from json status closed changetime ts description amedina noticed that on the forthcoming cvmfs version was compiled in multi threaded mode which i think is the default now this breaks several parts of simulation that interface with the error is related to thread local storage n n olivas has said he wants to fix our code to be mt compatible so this ticket is to track that effort n nhere is the general page for porting old code to the mt model n reporter david schultz cc amedina resolution fixed time component combo simulation summary compiled in multi threaded mode priority normal keywords milestone owner olivas type defect
| 1
|
80,065
| 29,954,610,628
|
IssuesEvent
|
2023-06-23 06:19:15
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
Datatable: Issue with scrollable + rowexpander
|
:lady_beetle: defect :bangbang: needs-triage
|
### Describe the bug
While using a scrollable datatable with a rowexpander, the rowexpansion is not visible if there are only a few items in the datatable available.
I made a reproducer:
https://github.com/kneringerjohann/primefaces-test/tree/scrollable-datatable-rowexpander
Open page, Click on Rowexpander, no output is seen, scroll down, output in rowexpander is seen.


### Reproducer
_No response_
### Expected behavior
I would expect to see the datatable in the rowexpander, but it's not visible
### PrimeFaces edition
Community
### PrimeFaces version
12.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
4.0.2
### Java version
17
### Browser(s)
112.0.5615.137
|
1.0
|
Datatable: Issue with scrollable + rowexpander - ### Describe the bug
While using a scrollable datatable with a rowexpander, the rowexpansion is not visible if there are only a few items in the datatable available.
I made a reproducer:
https://github.com/kneringerjohann/primefaces-test/tree/scrollable-datatable-rowexpander
Open page, Click on Rowexpander, no output is seen, scroll down, output in rowexpander is seen.


### Reproducer
_No response_
### Expected behavior
I would expect to see the datatable in the rowexpander, but it's not visible
### PrimeFaces edition
Community
### PrimeFaces version
12.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
4.0.2
### Java version
17
### Browser(s)
112.0.5615.137
|
defect
|
datatable issue with scrollable rowexpander describe the bug while using a scrollable datatable with a rowexpander the rowexpansion is not visible if there are only a few items in the datatable availble i did made a reproducer open page click on rowexpander no output is seen scroll down output in rowexpander is seen reproducer no response expected behavior i would expect to see the datatable in rowexpander but its not visible primefaces edition community primefaces version theme no response jsf implementation mojarra jsf version java version browser s
| 1
|
470,012
| 13,529,660,697
|
IssuesEvent
|
2020-09-15 18:39:22
|
open-wa/wa-automate-nodejs
|
https://api.github.com/repos/open-wa/wa-automate-nodejs
|
closed
|
BROKEN METHODS: 31477
|
Auto PRIORITY
|
Broken methods detected. Details below.
<details>
<summary>data</summary>
```javascript
{
"WA_VERSION": "2.2037.6",
"PAGE_UA": "WhatsApp/2.2029.4 Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36",
"WA_AUTOMATE_VERSION": "2.0.12",
"BROWSER_VERSION": "HeadlessChrome/85.0.4183.83",
"SESSION_ID": "55279992755278",
"BROKEN_METHODS": [
"Store.Participants.removeParticipants",
"Store.Participants.addParticipants",
"Store.Participants.promoteParticipants",
"Store.Participants.demoteParticipants"
],
"occurances": 4,
"lastReported": "Tue Sep 15 2020 11:40:32 GMT+0000 (Coordinated Universal Time)"
}
```
</details>
|
1.0
|
BROKEN METHODS: 31477 - Broken methods detected. Details below.
<details>
<summary>data</summary>
```javascript
{
"WA_VERSION": "2.2037.6",
"PAGE_UA": "WhatsApp/2.2029.4 Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36",
"WA_AUTOMATE_VERSION": "2.0.12",
"BROWSER_VERSION": "HeadlessChrome/85.0.4183.83",
"SESSION_ID": "55279992755278",
"BROKEN_METHODS": [
"Store.Participants.removeParticipants",
"Store.Participants.addParticipants",
"Store.Participants.promoteParticipants",
"Store.Participants.demoteParticipants"
],
"occurances": 4,
"lastReported": "Tue Sep 15 2020 11:40:32 GMT+0000 (Coordinated Universal Time)"
}
```
</details>
|
non_defect
|
broken methods broken methods detected details below data javascript wa version page ua whatsapp mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari wa automate version browser version headlesschrome session id broken methods store participants removeparticipants store participants addparticipants store participants promoteparticipants store participants demoteparticipants occurances lastreported tue sep gmt coordinated universal time
| 0
|
75,994
| 26,197,531,591
|
IssuesEvent
|
2023-01-03 14:44:22
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Backwards incompatibility: in new version, field.cast(String.class) in a query with MYSQL dialect results in casting to char
|
T: Defect C: Functionality P: Medium E: All Editions
|
### Expected behavior
`field.cast(String.class)` used to render as `cast(field as varchar)` in versions 3.12-3.13.
### Actual behavior
`field.cast(String.class)` is now being rendered as `cast(field as char)` in 3.16.10
### Steps to reproduce the problem
```java
System.out.println(DSL.using("jdbc:h2:mem:ginmon;MODE=MySQL;DB_CLOSE_DELAY=-1;TRACE_LEVEL_FILE=0", MYSQL)
.selectFrom(TABLE1.FIELD1.cast(String.class)).getSQL(ParamType.INLINED));
// this prints out:
// select cast(`schema`.`table1`.`field1` as char)
```
### jOOQ Version
3.16.10
### Database product and version
H2 2.1.214 in MySQL compatibility mode
### Java Version
openjdk version "11.0.17" 2022-10-18 LTS
### OS Version
macOS 12.6
### JDBC driver name and version (include name if unofficial driver)
com.h2database:h2:2.1.214
|
1.0
|
Backwards incompatibility: in new version, field.cast(String.class) in a query with MYSQL dialect results in casting to char - ### Expected behavior
`field.cast(String.class)` used to render as `cast(field as varchar)` in versions 3.12-3.13.
### Actual behavior
`field.cast(String.class)` is now being rendered as `cast(field as char)` in 3.16.10
### Steps to reproduce the problem
```java
System.out.println(DSL.using("jdbc:h2:mem:ginmon;MODE=MySQL;DB_CLOSE_DELAY=-1;TRACE_LEVEL_FILE=0", MYSQL)
.selectFrom(TABLE1.FIELD1.cast(String.class)).getSQL(ParamType.INLINED));
// this prints out:
// select cast(`schema`.`table1`.`field1` as char)
```
### jOOQ Version
3.16.10
### Database product and version
H2 2.1.214 in MySQL compatibility mode
### Java Version
openjdk version "11.0.17" 2022-10-18 LTS
### OS Version
macOS 12.6
### JDBC driver name and version (include name if unofficial driver)
com.h2database:h2:2.1.214
|
defect
|
backwards incompatibility in new version field cast string class in a query with mysql dialect results in casting to char expected behavior field cast string class used to render as cast field as varchar in versions actual behavior field cast string class is now being rendered as cast field as char in steps to reproduce the problem java system out println dsl using jdbc mem ginmon mode mysql db close delay trace level file mysql selectfrom cast string class getsql paramtype inlined this prints out select cast schema as char jooq version database product and version in mysql compatibility mode java version openjdk version lts os version macos jdbc driver name and version include name if unofficial driver com
| 1
|
1,206
| 2,601,758,784
|
IssuesEvent
|
2015-02-24 00:34:10
|
chrsmith/bwapi
|
https://api.github.com/repos/chrsmith/bwapi
|
closed
|
getTarget changes location
|
auto-migrated Component-Logic Milestone-Release Priority-High Type-Defect Usability
|
```
It was reported by Dowal on IRC.
getTarget supposedly changes positions frequently and does not always represent
the order target location.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 26 Nov 2010 at 5:54
|
1.0
|
getTarget changes location - ```
It was reported by Dowal on IRC.
getTarget supposedly changes positions frequently and does not always represent
the order target location.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 26 Nov 2010 at 5:54
|
defect
|
gettarget changes location it was reported by dowal on irc gettarget supposedly changes positions frequently and does not always represent the order target location original issue reported on code google com by aheinerm on nov at
| 1
|
82,223
| 32,069,533,371
|
IssuesEvent
|
2023-09-25 07:00:56
|
SAP/abap-cleaner
|
https://api.github.com/repos/SAP/abap-cleaner
|
closed
|
Mini bug--Convert lower case?
|
no defect
|
I set the variant to lower case, but it does not seem to work on field symbols.

I think it's a mini bug.
For now I just press Shift+F1 and then Ctrl+4 to clean the ABAP code.
|
1.0
|
Mini bug--Convert lower case? - I set the variant to lower case, but it does not seem to work on field symbols.

I think it's a mini bug.
For now I just press Shift+F1 and then Ctrl+4 to clean the ABAP code.
|
defect
|
mini bug convert lower case i set variant to lower case but it sound not work on field symbol i think it s mini bug now i just put shift and then ctrl to clean abap code
| 1
|
25,536
| 4,375,494,610
|
IssuesEvent
|
2016-08-05 00:03:08
|
FreeMedForms/google-code-archive
|
https://api.github.com/repos/FreeMedForms/google-code-archive
|
closed
|
New form doesn't show up in Preferences until clean reinstall of FMF
|
auto-migrated Milestone-Release0.9.0 OpSys-All Priority-Critical Type-Defect
|
```
What steps will reproduce the problem?
0. Environment is freshly compiled from master (06/26/2014) FMF 0.9.1 on Debian
7 64 bits in debug mode
1.compile FMF in debug_mode
2.use previously created /home/username/.freemedforms &
/home/username/freemedforms folders
3.create a new completeform in /global_resources/forms/completeforms by
copy/pasting an existing functional complete form, only changing the author's
name and form name to be able to distinguish the new form the others
4. start FMF ./freemedforms_debug
5. answer yes when asked to detect new forms
6. Try to use the newly created form: Configuration/Preferences/Forms/Selector
7. The newly created form is not showing up in the list
What is the expected output? New form showing up in the list
What do you see instead? Nothing.
Workaround: reinstall FMF as another user in another /home or delete
.freemedforms & freemedforms and reinstall FMF, then the new form is showing up.
Could this bug explain why user forms put in
/home/username/freemedforms/Documents/forms/completeforms don't show up either?
I didn't test it yet but I will.
```
Original issue reported on code.google.com by `contact@medecinelibre.com` on 29 Jun 2014 at 6:13
|
1.0
|
New form doesn't show up in Preferences until clean reinstall of FMF - ```
What steps will reproduce the problem?
0. Environment is freshly compiled from master (06/26/2014) FMF 0.9.1 on Debian
7 64 bits in debug mode
1.compile FMF in debug_mode
2.use previously created /home/username/.freemedforms &
/home/username/freemedforms folders
3.create a new completeform in /global_resources/forms/completeforms by
copy/pasting an existing functional complete form, only changing the author's
name and form name to be able to distinguish the new form the others
4. start FMF ./freemedforms_debug
5. answer yes when asked to detect new forms
6. Try to use the newly created form: Configuration/Preferences/Forms/Selector
7. The newly created form is not showing up in the list
What is the expected output? New form showing up in the list
What do you see instead? Nothing.
Workaround: reinstall FMF as another user in another /home or delete
.freemedforms & freemedforms and reinstall FMF, then the new form is showing up.
Could this bug explain why user forms put in
/home/username/freemedforms/Documents/forms/completeforms don't show up either?
I didn't test it yet but I will.
```
Original issue reported on code.google.com by `contact@medecinelibre.com` on 29 Jun 2014 at 6:13
|
defect
|
new form doesn t show up in preferences until clean reinstall of fmf what steps will reproduce the problem environment is freshly compiled from master fmf on debian bits in debug mode compile fmf in debug mode use previously created home username freemedforms home username freemedforms folders create a new completeform in global resources forms completeforms by copy pasting an existing functional complete form only changing the author s name and form name to be able to distinguish the new form the others start fmf freemedforms debug answer yes when asked to detect new forms try to use the newly created form configuration preferences forms selector the newly created form is not showing up in the list what is the expected output new form showing up in the list what do you see instead nothing workaround reinstall fmf as another user in another home or delete freemedforms freemedforms and reinstall fmf then the new form is showing up could this bug explain why user forms put in home username freemedforms documents forms completeforms don t show up either i didn t test it yet but i will original issue reported on code google com by contact medecinelibre com on jun at
| 1
|
165,887
| 14,014,511,485
|
IssuesEvent
|
2020-10-29 12:02:02
|
cemac/forest-barc
|
https://api.github.com/repos/cemac/forest-barc
|
opened
|
Create basic User documentation
|
documentation
|
Create at least a basic set of user documentation for the icons included in the UI.
This should be uploaded to the read the docs site and linked to from the UI help icon.
|
1.0
|
Create basic User documentation - Create at least a basic set of user documentation for the icons included in the UI.
This should be uploaded to the read the docs site and linked to from the UI help icon.
|
non_defect
|
create basic user documentation create at least a basic set of user documentation for the icons included in the ui this should be uploaded to the read the docs site and linked to from the ui help icon
| 0
|
72,667
| 24,227,416,748
|
IssuesEvent
|
2022-09-26 15:22:49
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
PM Investigate who should be able to publish and archive CAPs
|
Defect Drupal engineering ⭐️ Facilities Needs refining Users and permissions Vet Center ⭐️ User support
|
## Describe the defect
Currently, Vet Center CAP editors with the roles "Content creator - Vet Center" and "Content Editor" do **not** have permission to either publish CAPs or archive them. We need to first understand the desired business process to see if those permissions for either role need to be adjusted. If they do, then we need to adjust them.
Workflow for publishing content is here https://prod.cms.va.gov/help/vet-centers/what-is-the-process-for-publishing-my-vet-centers-pages
Workflow specific to CAPs is here https://prod.cms.va.gov/help/vet-centers/how-to-add-change-or-remove-a-community-access-point
## Questions?
1. Should a user with the role of "Content Editor" be able to **publish** VC CAP's?
2. Should a user with the role of "Content Editor" be able to **archive** VC CAP's?
3. Should a user with the role of "Content creator - Vet Center" be able to **publish** VC CAP's?
2. Should a user with the role of "Content creator - Vet Center" be able to **archive** VC CAP's?
3. Should a user with the role of "Content publisher" be able to **publish** VC CAP's?
2. Should a user with the role of "Content publisher" be able to **archive** VC CAP's?
## AC / Expected behavior
- [x] Make AC's match the answers to the ^^ questions. Either adjusting or validating the perms match the quesitons.
- [x] Make any adjustments to https://prod.cms.va.gov/help/vet-centers/what-is-the-process-for-publishing-my-vet-centers-pages
- [x] Make any adjustments to https://prod.cms.va.gov/help/vet-centers/how-to-add-change-or-remove-a-community-access-point
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [x] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
1.0
|
PM Investigate who should be able to publish and archive CAPs - ## Describe the defect
Currently, Vet Center CAP editors with the roles "Content creator - Vet Center" and "Content Editor" do **not** have permission to either publish CAPs or archive them. We need to first understand the desired business process to see if those permissions for either role need to be adjusted. If they do, then we need to adjust them.
Workflow for publishing content is here https://prod.cms.va.gov/help/vet-centers/what-is-the-process-for-publishing-my-vet-centers-pages
Workflow specific to CAPs is here https://prod.cms.va.gov/help/vet-centers/how-to-add-change-or-remove-a-community-access-point
## Questions?
1. Should a user with the role of "Content Editor" be able to **publish** VC CAP's?
2. Should a user with the role of "Content Editor" be able to **archive** VC CAP's?
3. Should a user with the role of "Content creator - Vet Center" be able to **publish** VC CAP's?
2. Should a user with the role of "Content creator - Vet Center" be able to **archive** VC CAP's?
3. Should a user with the role of "Content publisher" be able to **publish** VC CAP's?
2. Should a user with the role of "Content publisher" be able to **archive** VC CAP's?
## AC / Expected behavior
- [x] Make AC's match the answers to the ^^ questions. Either adjusting or validating the perms match the quesitons.
- [x] Make any adjustments to https://prod.cms.va.gov/help/vet-centers/what-is-the-process-for-publishing-my-vet-centers-pages
- [x] Make any adjustments to https://prod.cms.va.gov/help/vet-centers/how-to-add-change-or-remove-a-community-access-point
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [x] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
defect
|
pm investigate who should be able to publish and archive caps describe the defect currently vet center cap editors have roles of content creator vet center and content editor do not have permission to either publish caps or archive them we need to first understand the desired business process to see if those permissions for either role need to be adjusted if they do then we need to adjust them workflow for publishing content is here workflow specific to caps is here questions should a user with the role of content editor be able to publish vc cap s should a user with the role of content editor be able to archive vc cap s should a user with the role of content creator vet center be able to publish vc cap s should a user with the role of content creator vet center be able to archive vc cap s should a user with the role of content publisher be able to publish vc cap s should a user with the role of content publisher be able to archive vc cap s ac expected behavior make ac s match the answers to the questions either adjusting or validating the perms match the quesitons make any adjustments to make any adjustments to cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
| 1
|
56,991
| 6,535,932,864
|
IssuesEvent
|
2017-08-31 16:10:51
|
vmware/vic
|
https://api.github.com/repos/vmware/vic
|
opened
|
Fix Bridge Network leaks
|
component/test component/vic-machine
|
**Details:**
There are multiple VIC-Machine tests cases that are leaking bridge networks due to
improper VCH names and the lack of manual deletion of networks.
**Acceptance Criteria:**
The below tests cases don't leak bridge networks.
Here's a list of notes from @anchal-agrawal
```text
From build https://ci.vcna.io/vmware/vic/12949
6-03: Delete VCH and verify -> 9470 - more info needed
6-03: Attach Disks and Delete VCH -> 5943 and 9762 also leaks a VCH
6-04 basic timeout -> 4721 -> use force?
6-07: Public network - invalid -> 3948 -> manual delete
6-07: Management network - invalid -> 6402 -> manual delete
6-09: Verify inspect output for a full TLS VCH —> 1846 regression
redundant Set Test VCH Name
Trailing slash works as expected -> needs a new VCH name
6-07: Bridge network - reused port group needs new VCH names
6-07: Container network - space in network name invalid - needs new VCH names
6-13: Create VCH - invalid keys - needs VCH names
6-13: Create VCH - reuse keys - needs VCH names
```
|
1.0
|
Fix Bridge Network leaks - **Details:**
There are multiple VIC-Machine test cases that are leaking bridge networks due to
improper VCH names and the lack of manual deletion of networks.
**Acceptance Criteria:**
The below tests cases don't leak bridge networks.
Here's a list of notes from @anchal-agrawal
```text
From build https://ci.vcna.io/vmware/vic/12949
6-03: Delete VCH and verify -> 9470 - more info needed
6-03: Attach Disks and Delete VCH -> 5943 and 9762 also leaks a VCH
6-04 basic timeout -> 4721 -> use force?
6-07: Public network - invalid -> 3948 -> manual delete
6-07: Management network - invalid -> 6402 -> manual delete
6-09: Verify inspect output for a full TLS VCH —> 1846 regression
redundant Set Test VCH Name
Trailing slash works as expected -> needs a new VCH name
6-07: Bridge network - reused port group needs new VCH names
6-07: Container network - space in network name invalid - needs new VCH names
6-13: Create VCH - invalid keys - needs VCH names
6-13: Create VCH - reuse keys - needs VCH names
```
|
non_defect
|
fix bridge network leaks details there are multiple vic machine tests cases that are leaking bridge networks due to improper vch names and the lack of manual deletion of networks acceptance criteria the below tests cases don t leak bridge networks here s a list of notes from anchal agrawal text from build delete vch and verify more info needed attach disks and delete vch and also leaks a vch basic timeout use force public network invalid manual delete management network invalid manual delete verify inspect output for a full tls vch — regression redundant set test vch name trailing slash works as expected needs a new vch name bridge network reused port group needs new vch names container network space in network name invalid needs new vch names create vch invalid keys needs vch names create vch reuse keys needs vch names
| 0
|
28,412
| 5,254,651,266
|
IssuesEvent
|
2017-02-02 13:34:47
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
Popup menu positioning on window resize
|
defect
|
When the popup menu is visible and the window is resized, the container needs to be repositioned to reflect the changes.
|
1.0
|
Popup menu positioning on window resize - When the popup menu is visible and the window is resized, the container needs to be repositioned to reflect the changes.
|
defect
|
popup menu positioning on window resize when popup menu visible and window is resizing container needs to be repositioned due to changes
| 1
|
17,189
| 2,982,371,522
|
IssuesEvent
|
2015-07-17 10:39:35
|
testing-cabal/mock
|
https://api.github.com/repos/testing-cabal/mock
|
closed
|
Default value for __len__ on MagicMock type in documentation
|
auto-migrated Priority-Medium Type-Defect
|
```
In the documentation here:
http://www.voidspace.org.uk/python/mock/magicmock.html#mock.NonCallableMagicMock
The default return value for __len__ for a MagicMock is stated incorrectly as
1, it is actually 0, as is confirmed in the code example that immediately
follows in the documentation.
__contains__ : False
>>__len__ : 1
__iter__ : iter([])
```
Original issue reported on code.google.com by `stca...@gmail.com` on 27 Jul 2013 at 5:48
|
1.0
|
Default value for __len__ on MagicMock type in documentation - ```
In the documentation here:
http://www.voidspace.org.uk/python/mock/magicmock.html#mock.NonCallableMagicMock
The default return value for __len__ for a MagicMock is stated incorrectly as
1, it is actually 0, as is confirmed in the code example that immediately
follows in the documentation.
__contains__ : False
>>__len__ : 1
__iter__ : iter([])
```
Original issue reported on code.google.com by `stca...@gmail.com` on 27 Jul 2013 at 5:48
|
defect
|
default value for len on magicmock type in documentation in the documentation here the default return value for len for a magicmock is stated incorrectly as it is actually as is confirmed in the code example that immediately follows in the documentation contains false len iter iter original issue reported on code google com by stca gmail com on jul at
| 1
|
45,773
| 13,132,612,461
|
IssuesEvent
|
2020-08-06 19:15:18
|
RG4421/LunchLearningApp
|
https://api.github.com/repos/RG4421/LunchLearningApp
|
opened
|
CVE-2016-10540 (High) detected in minimatch-0.3.0.tgz
|
security vulnerability
|
## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-0.3.0.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/LunchLearningApp/LunchAndLearnWebUI/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/LunchLearningApp/LunchAndLearnWebUI/node_modules/jasmine/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- protractor-4.0.14.tgz (Root Library)
- jasmine-2.4.1.tgz
- glob-3.2.11.tgz
- :x: **minimatch-0.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/LunchLearningApp/commit/dabd2522ce385cf3e53e00b24a0b6eb174caab1c">dabd2522ce385cf3e53e00b24a0b6eb174caab1c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimatch","packageVersion":"0.3.0","isTransitiveDependency":true,"dependencyTree":"protractor:4.0.14;jasmine:2.4.1;glob:3.2.11;minimatch:0.3.0","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2016-10540","vulnerabilityDetails":"Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-10540 (High) detected in minimatch-0.3.0.tgz - ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-0.3.0.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.3.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/LunchLearningApp/LunchAndLearnWebUI/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/LunchLearningApp/LunchAndLearnWebUI/node_modules/jasmine/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- protractor-4.0.14.tgz (Root Library)
- jasmine-2.4.1.tgz
- glob-3.2.11.tgz
- :x: **minimatch-0.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/LunchLearningApp/commit/dabd2522ce385cf3e53e00b24a0b6eb174caab1c">dabd2522ce385cf3e53e00b24a0b6eb174caab1c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimatch","packageVersion":"0.3.0","isTransitiveDependency":true,"dependencyTree":"protractor:4.0.14;jasmine:2.4.1;glob:3.2.11;minimatch:0.3.0","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2016-10540","vulnerabilityDetails":"Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in minimatch tgz cve high severity vulnerability vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file tmp ws scm lunchlearningapp lunchandlearnwebui package json path to vulnerable library tmp ws scm lunchlearningapp lunchandlearnwebui node modules jasmine node modules minimatch package json dependency hierarchy protractor tgz root library jasmine tgz glob tgz x minimatch tgz vulnerable library found in head commit a href vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution update to version or later isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter vulnerabilityurl
| 0
|
77,533
| 27,044,398,013
|
IssuesEvent
|
2023-02-13 08:42:45
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
zpool trim service fails on pools that contain trimmable cache devices
|
Type: Defect
|
### System information
<!-- add version after "|" character -->
Type | Version/Name
Linux | 37
Distribution Name | Fedora
Distribution Version | 37
Kernel Version | 6.1.10-200.fc37.x86_64
Architecture | x86_64
OpenZFS Version |`master` branch
### Describe the problem you're observing
I've enabled weekly trimming of my pool. The pool contains rotational devices as data bearing devices, and one SSD as cache device. All of them are encrypted with LUKS, with the dm option `discard` so the SSDs are trimmable. `blkdiscard` of the LUKS device works perfectly.
Nevertheless, attempting a trim through the unit (`/sbin/zpool trim -w backups` is basically what it resolves to) fails with log:
```
Feb 13 00:48:29 milena.dragonfear systemd[1]: Started zfs-trim@backups.service - zpool trim on backups.
Feb 13 00:48:29 milena.dragonfear sh[29079]: cannot trim: no devices in pool support trim operations
Feb 13 00:48:29 milena.dragonfear systemd[1]: zfs-trim@backups.service: Main process exited, code=exited, status=255/EXCEPTION
Feb 13 00:48:29 milena.dragonfear systemd[1]: zfs-trim@backups.service: Failed with result 'exit-code'.
```
I would expect at least to see the SSD cache device trimmed.
|
1.0
|
zpool trim service fails on pools that contain trimmable cache devices - ### System information
<!-- add version after "|" character -->
Type | Version/Name
Linux | 37
Distribution Name | Fedora
Distribution Version | 37
Kernel Version | 6.1.10-200.fc37.x86_64
Architecture | x86_64
OpenZFS Version |`master` branch
### Describe the problem you're observing
I've enabled weekly trimming of my pool. The pool contains rotational devices as data bearing devices, and one SSD as cache device. All of them are encrypted with LUKS, with the dm option `discard` so the SSDs are trimmable. `blkdiscard` of the LUKS device works perfectly.
Nevertheless, attempting a trim through the unit (`/sbin/zpool trim -w backups` is basically what it resolves to) fails with log:
```
Feb 13 00:48:29 milena.dragonfear systemd[1]: Started zfs-trim@backups.service - zpool trim on backups.
Feb 13 00:48:29 milena.dragonfear sh[29079]: cannot trim: no devices in pool support trim operations
Feb 13 00:48:29 milena.dragonfear systemd[1]: zfs-trim@backups.service: Main process exited, code=exited, status=255/EXCEPTION
Feb 13 00:48:29 milena.dragonfear systemd[1]: zfs-trim@backups.service: Failed with result 'exit-code'.
```
I would expect at least to see the SSD cache device trimmed.
|
defect
|
zpool trim service fails on pools that contain trimmable cache devices system information type version name linux distribution name fedora distribution version kernel version architecture openzfs version master branch describe the problem you re observing i ve enabled weekly trimming of my pool the pool contains rotational devices as data bearing devices and one ssd as cache device all of them are encrypted with luks with the dm option discard so the ssds are trimmable blkdiscard of the luks device works perfectly nevertheless attempting a trim through the unit sbin zpool trim w backups is basically what it resolves to fails with log feb milena dragonfear systemd started zfs trim backups service zpool trim on backups feb milena dragonfear sh cannot trim no devices in pool support trim operations feb milena dragonfear systemd zfs trim backups service main process exited code exited status exception feb milena dragonfear systemd zfs trim backups service failed with result exit code i would expect at least to see the ssd cache device trimmed
| 1
|
39,247
| 9,345,032,909
|
IssuesEvent
|
2019-03-30 03:18:07
|
Automattic/wp-calypso
|
https://api.github.com/repos/Automattic/wp-calypso
|
opened
|
Featured image drop zone text hard to read
|
Color Schemes [Pri] Normal [Type] Defect [Type] Question
|
Featured image drop zone background color is accent, which was introduced here https://github.com/Automattic/wp-calypso/issues/29936 👍🏽
The text and icon should be white though for better reading. It's hard to read at the moment.
<img width="1091" alt="Screenshot 2019-03-30 at 08 44 09" src="https://user-images.githubusercontent.com/18581859/55270696-23425f80-52c8-11e9-9794-13433220c6ee.png">
@Automattic/color-theming, thoughts on this, please?
|
1.0
|
Featured image drop zone text hard to read - Featured image drop zone background color is accent, which was introduced here https://github.com/Automattic/wp-calypso/issues/29936 👍🏽
The text and icon should be white though for better reading. It's hard to read at the moment.
<img width="1091" alt="Screenshot 2019-03-30 at 08 44 09" src="https://user-images.githubusercontent.com/18581859/55270696-23425f80-52c8-11e9-9794-13433220c6ee.png">
@Automattic/color-theming, thoughts on this, please?
|
defect
|
featured image drop zone text hard to read featured image drop zone background color is accent which was introduced here 👍🏽 the text and icon should be white though for better reading it s hard to read at the moment img width alt screenshot at src automattic color theming thoughts on this please
| 1
|
61,066
| 17,023,593,142
|
IssuesEvent
|
2021-07-03 02:49:29
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
No tiles / wrong tiles / crash if the date line (180 lon) is inside the viewport
|
Component: merkaartor Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 7.52pm, Thursday, 20th May 2010]**
Zoom out or scroll until you have the date line inside the viewport: No more tiles.
If the lat/lon grid is activated, Merkaartor crashes. A workaround for this crash
is available on gitorious merkaartor/dantje.
|
1.0
|
No tiles / wrong tiles / crash if the date line (180 lon) is inside the viewport - **[Submitted to the original trac issue database at 7.52pm, Thursday, 20th May 2010]**
Zoom out or scroll until you have the date line inside the viewport: No more tiles.
If the lat/lon grid is activated, Merkaartor crashes. A workaround for this crash
is available on gitorious merkaartor/dantje.
|
defect
|
no tiles wrong tiles crash if the date line lon is inside the viewport zoom out or scroll until you have the date line inside the viewport no more tiles if the lat lon grid is activated merkaartor crashes a workaround for this crash is available on gitorious merkaartor dantje
| 1
|
58,187
| 14,320,393,873
|
IssuesEvent
|
2020-11-26 00:16:21
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
SPL/ZFS RPM KMOD build issue with custom kernel on Fedora 29
|
Component: Packaging Status: Stale Type: Building
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please search our issue tracker *before* making a new issue.
If you cannot find a similar issue, then create a new issue.
https://github.com/zfsonlinux/zfs/issues
*IMPORTANT* - This issue tracker is for *bugs* and *issues* only.
Please search the wiki and the mailing list archives before asking
questions on the mailing list.
https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Fedora
Distribution Version | 29
Linux Kernel | 4.19.23_1.fc29.xen+-1 (built from upstream source)
Architecture | x86_64
ZFS Version | 0.7.12-1
SPL Version | 0.7.12-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
I am trying to build SPL and ZFS kmods for Fedora 29. I am building them against a custom kernel I built from upstream source (kernel.org). When I run `/.configure --with-linux=/usr/src/kernels/4.19.23-1.fc29.xen+/ --with-linux-obj=/usr/src/kernels/4.19.23-1.fc29.xen+/` the configuration completes with not errors.
When I run the `make rpm-kmod` however I receive an error stating the following;
```
Installing spl-kmod-0.7.12-1.fc29.src.rpm
error: Failed build dependencies:
kernel-devel-uname-r = 4.19.23-1.fc29.xen+ is needed by spl-kmod-0.7.12-1.fc29.x86_64
make[1]: *** [Makefile:1098: rpm-common] Error 1
make[1]: Leaving directory '/home/ian/src/spl'
make: *** [Makefile:1049: rpm-kmod] Error 2
```
However, if I comment out these lines in `spl/scripts/kmodtool` the build completes with no errors.
```
Requires: kernel-uname-r = ${kernel_uname_r} (line 168)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 169)
Requires: kernel-devel-uname-r = ${kernel_uname_r} (line 282)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 283)
```
I receive the same error with a few more build dependancy failures for the zfs module.
```
Installing zfs-kmod-0.7.12-1.fc29.src.rpm
error: Failed build dependencies:
kernel-devel-uname-r = 4.19.23-1.fc29.xen+ is needed by zfs-kmod-0.7.12-1.fc29.x86_64
kmod-spl-devel = 0.7.12 is needed by zfs-kmod-0.7.12-1.fc29.x86_64
kmod-spl-devel-uname-r = 4.19.23-1.fc29.xen+ is needed by zfs-kmod-0.7.12-1.fc29.x86_64
kmod-spl-uname-r = 4.19.23-1.fc29.xen+ is needed by zfs-kmod-0.7.12-1.fc29.x86_64
make[1]: *** [Makefile:1227: rpm-common] Error 1
make[1]: Leaving directory '/home/ian/src/zfs'
make: *** [Makefile:1178: rpm-kmod] Error 2
```
Same thing for `zfs/scripts/kmodtool` the build completes with no errors
```
Requires: kernel-uname-r = ${kernel_uname_r} (line 168)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 169)
Requires: kernel-devel-uname-r = ${kernel_uname_r} (line 282)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 283)
Requires: kmod-${kmodname}-${kernel_uname_r} >= %{?epoch:%{epoch}:}%{version}-%{release} (line 314)
```
It should be noted, I am not actually running on this kernel while I am compiling. I have only installed the bare essentials to run a kmod build for another system which I do not want the build tools on.
### Describe how to reproduce the problem
If I use a modified `scripts/kmodtool` the build completes. If I restore the original `scripts/kmodtool` the builds begin to fail again.
I had also found a previous open ticket referring to the exact same problem on a different system.
https://github.com/zfsonlinux/zfs/issues/2046
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
|
1.0
|
SPL/ZFS RPM KMOD build issue with custom kernel on Fedora 29 - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please search our issue tracker *before* making a new issue.
If you cannot find a similar issue, then create a new issue.
https://github.com/zfsonlinux/zfs/issues
*IMPORTANT* - This issue tracker is for *bugs* and *issues* only.
Please search the wiki and the mailing list archives before asking
questions on the mailing list.
https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Fedora
Distribution Version | 29
Linux Kernel | 4.19.23_1.fc29.xen+-1 (built from upstream source)
Architecture | x86_64
ZFS Version | 0.7.12-1
SPL Version | 0.7.12-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
I am trying to build SPL and ZFS kmods for Fedora 29. I am building them against a custom kernel I built from upstream source (kernel.org). When I run `/.configure --with-linux=/usr/src/kernels/4.19.23-1.fc29.xen+/ --with-linux-obj=/usr/src/kernels/4.19.23-1.fc29.xen+/` the configuration completes with not errors.
When I run the `make rpm-kmod` however I receive an error stating the following;
```
Installing spl-kmod-0.7.12-1.fc29.src.rpm
error: Failed build dependencies:
kernel-devel-uname-r = 4.19.23-1.fc29.xen+ is needed by spl-kmod-0.7.12-1.fc29.x86_64
make[1]: *** [Makefile:1098: rpm-common] Error 1
make[1]: Leaving directory '/home/ian/src/spl'
make: *** [Makefile:1049: rpm-kmod] Error 2
```
However, if I comment out these lines in `spl/scripts/kmodtool` the build completes with no errors.
```
Requires: kernel-uname-r = ${kernel_uname_r} (line 168)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 169)
Requires: kernel-devel-uname-r = ${kernel_uname_r} (line 282)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 283)
```
I receive the same error with a few more build dependancy failures for the zfs module.
```
Installing zfs-kmod-0.7.12-1.fc29.src.rpm
error: Failed build dependencies:
kernel-devel-uname-r = 4.19.23-1.fc29.xen+ is needed by zfs-kmod-0.7.12-1.fc29.x86_64
kmod-spl-devel = 0.7.12 is needed by zfs-kmod-0.7.12-1.fc29.x86_64
kmod-spl-devel-uname-r = 4.19.23-1.fc29.xen+ is needed by zfs-kmod-0.7.12-1.fc29.x86_64
kmod-spl-uname-r = 4.19.23-1.fc29.xen+ is needed by zfs-kmod-0.7.12-1.fc29.x86_64
make[1]: *** [Makefile:1227: rpm-common] Error 1
make[1]: Leaving directory '/home/ian/src/zfs'
make: *** [Makefile:1178: rpm-kmod] Error 2
```
Same thing for `zfs/scripts/kmodtool` the build completes with no errors
```
Requires: kernel-uname-r = ${kernel_uname_r} (line 168)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 169)
Requires: kernel-devel-uname-r = ${kernel_uname_r} (line 282)
BuildRequires: kernel-devel-uname-r = ${kernel_uname_r} (line 283)
Requires: kmod-${kmodname}-${kernel_uname_r} >= %{?epoch:%{epoch}:}%{version}-%{release} (line 314)
```
It should be noted, I am not actually running on this kernel while I am compiling. I have only installed the bare essentials to run a kmod build for another system which I do not want the build tools on.
### Describe how to reproduce the problem
If I use a modified `scripts/kmodtool` the build completes. If I restore the original `scripts/kmodtool` the builds begin to fail again.
I had also found a previous open ticket referring to the exact same problem on a different system.
https://github.com/zfsonlinux/zfs/issues/2046
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
|
non_defect
|
spl zfs rpm kmod build issue with custom kernel on fedora thank you for reporting an issue important please search our issue tracker before making a new issue if you cannot find a similar issue then create a new issue important this issue tracker is for bugs and issues only please search the wiki and the mailing list archives before asking questions on the mailing list please fill in as much of the template as possible system information type version name distribution name fedora distribution version linux kernel xen built from upstream source architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing i am trying to build spl and zfs kmods for fedora i am building them against a custom kernel i built from upstream source kernel org when i run configure with linux usr src kernels xen with linux obj usr src kernels xen the configuration completes with not errors when i run the make rpm kmod however i receive an error stating the following installing spl kmod src rpm error failed build dependencies kernel devel uname r xen is needed by spl kmod make error make leaving directory home ian src spl make error however if i comment out these lines in spl scripts kmodtool the build completes with no errors requires kernel uname r kernel uname r line buildrequires kernel devel uname r kernel uname r line requires kernel devel uname r kernel uname r line buildrequires kernel devel uname r kernel uname r line i receive the same error with a few more build dependancy failures for the zfs module installing zfs kmod src rpm error failed build dependencies kernel devel uname r xen is needed by zfs kmod kmod spl devel is needed by zfs kmod kmod spl devel uname r xen is needed by zfs kmod kmod spl uname r xen is needed by zfs kmod make error make leaving directory home ian src zfs make error same thing for zfs scripts kmodtool the build completes with no errors requires kernel uname r kernel uname r line buildrequires kernel devel uname r kernel uname r line requires kernel devel uname r kernel uname r line buildrequires kernel devel uname r kernel uname r line requires kmod kmodname kernel uname r epoch epoch version release line it should be noted i am not actually running on this kernel while i am compiling i have only installed the bare essentials to run a kmod build for another system which i do not want the build tools on describe how to reproduce the problem if i use a modified scripts kmodtool the build completes if i restore the original scripts kmodtool the builds begin to fail again i had also found a previous open ticket referring to the exact same problem on a different system include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
| 0
|
70,222
| 23,058,944,842
|
IssuesEvent
|
2022-07-25 08:10:38
|
pymc-devs/pymc
|
https://api.github.com/repos/pymc-devs/pymc
|
closed
|
Sampling in numba mode fails with scalar RVs
|
defects aesara-related
|
```python
import aesara
aesara.config.mode = "NUMBA"
import pymc as pm
with pm.Model():
x = pm.Normal("x") # does not work
#x = pm.Normal("x", shape=1) # works
pm.sample(chains=1, cores=1)
```
Trace:
```
"""
Traceback (most recent call last):
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/parallel_sampling.py", line 129, in run
self._start_loop()
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/parallel_sampling.py", line 182, in _start_loop
point, stats = self._compute_point()
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/parallel_sampling.py", line 207, in _compute_point
point, stats = self._step_method.step(self._point)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/arraystep.py", line 286, in step
return super().step(point)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/arraystep.py", line 208, in step
step_res = self.astep(q)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/hmc/base_hmc.py", line 156, in astep
start = self.integrator.compute_state(q0, p0)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/hmc/integration.py", line 47, in compute_state
logp, dlogp = self._logp_dlogp_func(q)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/model.py", line 411, in __call__
grads_raveled = DictToArrayBijection.map(
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/blocking.py", line 61, in map
vars_info = tuple((v, k, v.shape, v.dtype) for k, v in var_dict.items())
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/blocking.py", line 61, in <genexpr>
vars_info = tuple((v, k, v.shape, v.dtype) for k, v in var_dict.items())
AttributeError: 'float' object has no attribute 'shape'
```
|
1.0
|
Sampling in numba mode fails with scalar RVs - ```python
import aesara
aesara.config.mode = "NUMBA"
import pymc as pm
with pm.Model():
x = pm.Normal("x") # does not work
#x = pm.Normal("x", shape=1) # works
pm.sample(chains=1, cores=1)
```
Trace:
```
"""
Traceback (most recent call last):
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/parallel_sampling.py", line 129, in run
self._start_loop()
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/parallel_sampling.py", line 182, in _start_loop
point, stats = self._compute_point()
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/parallel_sampling.py", line 207, in _compute_point
point, stats = self._step_method.step(self._point)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/arraystep.py", line 286, in step
return super().step(point)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/arraystep.py", line 208, in step
step_res = self.astep(q)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/hmc/base_hmc.py", line 156, in astep
start = self.integrator.compute_state(q0, p0)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/step_methods/hmc/integration.py", line 47, in compute_state
logp, dlogp = self._logp_dlogp_func(q)
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/model.py", line 411, in __call__
grads_raveled = DictToArrayBijection.map(
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/blocking.py", line 61, in map
vars_info = tuple((v, k, v.shape, v.dtype) for k, v in var_dict.items())
File "/Users/twiecki/miniforge3/envs/pymc4/lib/python3.10/site-packages/pymc/blocking.py", line 61, in <genexpr>
vars_info = tuple((v, k, v.shape, v.dtype) for k, v in var_dict.items())
AttributeError: 'float' object has no attribute 'shape'
```
|
defect
|
sampling in numba mode fails with scalar rvs python import aesara aesara config mode numba import pymc as pm with pm model x pm normal x does not work x pm normal x shape works pm sample chains cores trace traceback most recent call last file users twiecki envs lib site packages pymc parallel sampling py line in run self start loop file users twiecki envs lib site packages pymc parallel sampling py line in start loop point stats self compute point file users twiecki envs lib site packages pymc parallel sampling py line in compute point point stats self step method step self point file users twiecki envs lib site packages pymc step methods arraystep py line in step return super step point file users twiecki envs lib site packages pymc step methods arraystep py line in step step res self astep q file users twiecki envs lib site packages pymc step methods hmc base hmc py line in astep start self integrator compute state file users twiecki envs lib site packages pymc step methods hmc integration py line in compute state logp dlogp self logp dlogp func q file users twiecki envs lib site packages pymc model py line in call grads raveled dicttoarraybijection map file users twiecki envs lib site packages pymc blocking py line in map vars info tuple v k v shape v dtype for k v in var dict items file users twiecki envs lib site packages pymc blocking py line in vars info tuple v k v shape v dtype for k v in var dict items attributeerror float object has no attribute shape
| 1
|
32,630
| 6,877,941,326
|
IssuesEvent
|
2017-11-20 10:02:53
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
closed
|
Default file and folder permissions
|
defect
|
I have set new default file and folder permissions, but Contao does not seem to respect these chmod settings.
To reproduce:
* add the new file and folder permissions to the `localconfig.php` configuration file:
```php
$GLOBALS['TL_CONFIG']['defaultFileChmod'] = 0664;
$GLOBALS['TL_CONFIG']['defaultFolderChmod'] = 0775;
```
* switch to the Contao back end
* <kbd>Layout</kbd> > <kbd>Templates</kbd> > Create a new template `foo` in the target folder
* <kbd>System</kbd> > <kbd>File manager</kbd> > Create a new folder `bar`
* check the file and folder permissions for `foo` and `bar`, respectively
|
1.0
|
Default file and folder permissions - I have set new default file and folder permissions, but Contao does not seem to respect these chmod settings.
To reproduce:
* add the new file and folder permissions to the `localconfig.php` configuration file:
```php
$GLOBALS['TL_CONFIG']['defaultFileChmod'] = 0664;
$GLOBALS['TL_CONFIG']['defaultFolderChmod'] = 0775;
```
* switch to the Contao back end
* <kbd>Layout</kbd> > <kbd>Templates</kbd> > Create a new template `foo` in the target folder
* <kbd>System</kbd> > <kbd>File manager</kbd> > Create a new folder `bar`
* check the file and folder permissions for `foo` and `bar`, respectively
|
defect
|
default file and folder permissions i have set new default file and folder permissions but contao does not seem to respect these chmod settings to reproduce add the new file and folder permissions to the localconfig php configuration file php globals globals switch to the contao back end layout templates create a new template foo in the target folder system file manager create a new folder bar check the file and folder permissions for foo and bar respectively
| 1
|
588,820
| 17,672,447,352
|
IssuesEvent
|
2021-08-23 08:10:23
|
apluslms/a-plus
|
https://api.github.com/repos/apluslms/a-plus
|
closed
|
Add roles to enrollments
|
area: LTI area: pseudonymization priority: medium area: API type: refactoring effort: weeks experience: moderate requester: internal area: end-of-course
|
Currently, the Course model has a many-to-many field to UserProfile for setting the teachers of the course and the CourseInstance model has a many-to-many field to UserProfile for setting the assistants.
Separating students from the course staff has been problematic. For example, the course points page mixes assistants with the students: https://github.com/apluslms/a-plus/issues/540
There are two approaches to this problem.
1. We don't change the database schema and try to filter teachers and assistants out from the course points page and probably the participants page too.
2. We change the Enrollment model so that it includes a role of the user: student, assistant or teacher. It would make sense to enroll all users in the course, but separate their roles. Currently, teachers should not enroll so that they are not included in the course points page. Enrollment is also needed for storing the pseudonymized name of the user that may be used with, e.g., LTI connections. The old teacher and assistants fields in the Course and CourseInstance fields would be removed.
This approach enables adding new roles in the future, for example, a role for teaching support personnel that are not part of the course staff, but they need to access the course data without having full admin access to the system.
I wrote this issue in favor of alternative 2, but before we commit to it, we should plan ahead. How much would old code have to be changed for alternative 2? At least some user API endpoints are affected, as well as the course points page and the course participants page. All references to assistants, teachers, and enrollments should be checked.
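A rough sketch of what alternative 2 could look like — plain Python rather than the actual Django models, and the `Role` values and field names are assumptions, not the A+ schema:

```python
# Hypothetical shape of alternative 2: every participant is enrolled,
# and a role field distinguishes students from course staff.
from enum import IntEnum

class Role(IntEnum):
    STUDENT = 1
    ASSISTANT = 2
    TEACHER = 3     # new roles (e.g. teaching support staff) could be added later

class Enrollment:
    def __init__(self, user, course_instance, role=Role.STUDENT, anon_name=""):
        self.user = user
        self.course_instance = course_instance
        self.role = role
        self.anon_name = anon_name  # pseudonymized name, e.g. for LTI launches

def students_only(enrollments):
    # the course points page would filter by role instead of excluding
    # users listed in the old teachers/assistants many-to-many fields
    return [e for e in enrollments if e.role == Role.STUDENT]
```

The point of the sketch: once roles live on the enrollment, pages like course points filter on one field instead of cross-referencing the old `Course.teachers` and `CourseInstance.assistants` relations.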
|
1.0
|
Add roles to enrollments - Currently, the Course model has a many-to-many field to UserProfile for setting the teachers of the course and the CourseInstance model has a many-to-many field to UserProfile for setting the assistants.
Separating students from the course staff has been problematic. For example, the course points page mixes assistants with the students: https://github.com/apluslms/a-plus/issues/540
There are two approaches to this problem.
1. We don't change the database schema and try to filter teachers and assistants out from the course points page and probably the participants page too.
2. We change the Enrollment model so that it includes a role of the user: student, assistant or teacher. It would make sense to enroll all users in the course, but separate their roles. Currently, teachers should not enroll so that they are not included in the course points page. Enrollment is also needed for storing the pseudonymized name of the user that may be used with, e.g., LTI connections. The old teacher and assistants fields in the Course and CourseInstance fields would be removed.
This approach enables adding new roles in the future, for example, a role for teaching support personnel that are not part of the course staff, but they need to access the course data without having full admin access to the system.
I wrote this issue in favor of alternative 2, but before we commit to it, we should plan ahead. How much would old code have to be changed for alternative 2? At least some user API endpoints are affected, as well as the course points page and the course participants page. All references to assistants, teachers, and enrollments should be checked.
|
non_defect
|
add roles to enrollments currently the course model has a many to many field to userprofile for setting the teachers of the course and the courseinstance model has a many to many field to userprofile for setting the assistants separating students from the course staff has been problematic for example the course points page mixes assistants with the students there are two approaches to this problem we don t change the database schema and try to filter teachers and assistants out from the course points page and probably the participants page too we change the enrollment model so that it includes a role of the user student assistant or teacher it would make sense to enroll all users in the course but separate their roles currently teachers should not enroll so that they are not included in the course points page enrollment is also needed for storing the pseudonymized name of the user that may be used with e g lti connections the old teacher and assistants fields in the course and courseinstance fields would be removed this approach enables adding new roles in the future for example a role for teaching support personnel that are not part of the course staff but they need to access the course data without having full admin access to the system i wrote this issue in favor of alternative but before we commit to it we should plan ahead how much would old code have to be changed for alternative at least some user api endpoints are affected as well as the course points page and the course participants page all references to assistants teachers and enrollments should be checked
| 0
|
210,679
| 23,768,684,625
|
IssuesEvent
|
2022-09-01 14:38:58
|
alieint/aspnetcore-2.1.24
|
https://api.github.com/repos/alieint/aspnetcore-2.1.24
|
opened
|
microsoft.aspnetcore.1.1.3.nupkg: 2 vulnerabilities (highest severity is: 7.5)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.aspnetcore.1.1.3.nupkg</b></p></summary>
<p>Microsoft.AspNetCore</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /ages/microsoft.aspnetcore/1.1.3/microsoft.aspnetcore.1.1.3.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d">ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-0808](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-0808) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | detected in multiple dependencies | Transitive | N/A | ❌ |
| [CVE-2017-11770](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11770) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | microsoft.aspnetcore.1.1.3.nupkg | Direct | 1.0.8;1.1.5;2.0.3 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-0808</summary>
### Vulnerable Libraries - <b>microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</b>, <b>microsoft.aspnetcore.hosting.1.1.3.nupkg</b></p>
<p>
### <b>microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</b></p>
<p>ASP.NET Core components for working with the IIS AspNetCoreModule.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.aspnetcore.server.iisintegration/1.1.3/microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.aspnetcore.1.1.3.nupkg (Root Library)
- :x: **microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg** (Vulnerable Library)
### <b>microsoft.aspnetcore.hosting.1.1.3.nupkg</b></p>
<p>ASP.NET Core hosting infrastructure and startup logic for web applications.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.hosting.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.hosting.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.aspnetcore.hosting/1.1.3/microsoft.aspnetcore.hosting.1.1.3.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.aspnetcore.1.1.3.nupkg (Root Library)
- microsoft.aspnetcore.server.kestrel.1.1.3.nupkg
- :x: **microsoft.aspnetcore.hosting.1.1.3.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d">ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ASP.NET Core 1.0, 1.1, and 2.0 allow an elevation of privilege vulnerability due to how ASP.NET web applications handle web requests, aka "ASP.NET Core Elevation Of Privilege Vulnerability". This CVE is unique from CVE-2018-0784.
<p>Publish Date: 2018-03-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-0808>CVE-2018-0808</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2018-0808">https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2018-0808</a></p>
<p>Release Date: 2018-03-14</p>
<p>Fix Resolution: Microsoft.AspNetCore.Server.IISIntegration - 2.1.0, Microsoft.AspNetCore.Hosting - 2.1.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-11770</summary>
### Vulnerable Library - <b>microsoft.aspnetcore.1.1.3.nupkg</b></p>
<p>Microsoft.AspNetCore</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /ages/microsoft.aspnetcore/1.1.3/microsoft.aspnetcore.1.1.3.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **microsoft.aspnetcore.1.1.3.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d">ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
.NET Core 1.0, 1.1, and 2.0 allow an unauthenticated attacker to remotely cause a denial of service attack against a .NET Core web application by improperly parsing certificate data. A denial of service vulnerability exists when .NET Core improperly handles parsing certificate data, aka ".NET CORE Denial Of Service Vulnerability".
<p>Publish Date: 2017-11-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11770>CVE-2017-11770</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11770</a></p>
<p>Release Date: 2017-11-15</p>
<p>Fix Resolution: 1.0.8;1.1.5;2.0.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
True
|
microsoft.aspnetcore.1.1.3.nupkg: 2 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.aspnetcore.1.1.3.nupkg</b></p></summary>
<p>Microsoft.AspNetCore</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /ages/microsoft.aspnetcore/1.1.3/microsoft.aspnetcore.1.1.3.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d">ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2018-0808](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-0808) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | detected in multiple dependencies | Transitive | N/A | ❌ |
| [CVE-2017-11770](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11770) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | microsoft.aspnetcore.1.1.3.nupkg | Direct | 1.0.8;1.1.5;2.0.3 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-0808</summary>
### Vulnerable Libraries - <b>microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</b>, <b>microsoft.aspnetcore.hosting.1.1.3.nupkg</b></p>
<p>
### <b>microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</b></p>
<p>ASP.NET Core components for working with the IIS AspNetCoreModule.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.aspnetcore.server.iisintegration/1.1.3/microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.aspnetcore.1.1.3.nupkg (Root Library)
- :x: **microsoft.aspnetcore.server.iisintegration.1.1.3.nupkg** (Vulnerable Library)
### <b>microsoft.aspnetcore.hosting.1.1.3.nupkg</b></p>
<p>ASP.NET Core hosting infrastructure and startup logic for web applications.</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.hosting.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.hosting.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/microsoft.aspnetcore.hosting/1.1.3/microsoft.aspnetcore.hosting.1.1.3.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.aspnetcore.1.1.3.nupkg (Root Library)
- microsoft.aspnetcore.server.kestrel.1.1.3.nupkg
- :x: **microsoft.aspnetcore.hosting.1.1.3.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d">ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
ASP.NET Core 1.0, 1.1, and 2.0 allow an elevation of privilege vulnerability due to how ASP.NET web applications handle web requests, aka "ASP.NET Core Elevation Of Privilege Vulnerability". This CVE is unique from CVE-2018-0784.
<p>Publish Date: 2018-03-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-0808>CVE-2018-0808</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2018-0808">https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2018-0808</a></p>
<p>Release Date: 2018-03-14</p>
<p>Fix Resolution: Microsoft.AspNetCore.Server.IISIntegration - 2.1.0, Microsoft.AspNetCore.Hosting - 2.1.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-11770</summary>
### Vulnerable Library - <b>microsoft.aspnetcore.1.1.3.nupkg</b></p>
<p>Microsoft.AspNetCore</p>
<p>Library home page: <a href="https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg">https://api.nuget.org/packages/microsoft.aspnetcore.1.1.3.nupkg</a></p>
<p>Path to dependency file: /src/AzureIntegration/test/Microsoft.AspNetCore.AzureAppServices.FunctionalTests/Assets/Legacy.1.1.3.mvc.csproj</p>
<p>Path to vulnerable library: /ages/microsoft.aspnetcore/1.1.3/microsoft.aspnetcore.1.1.3.nupkg</p>
<p>
Dependency Hierarchy:
- :x: **microsoft.aspnetcore.1.1.3.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alieint/aspnetcore-2.1.24/commit/ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d">ebe7c6237e41b2d6f63ad16df6b50fa372ea6b4d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
.NET Core 1.0, 1.1, and 2.0 allow an unauthenticated attacker to remotely cause a denial of service attack against a .NET Core web application by improperly parsing certificate data. A denial of service vulnerability exists when .NET Core improperly handles parsing certificate data, aka ".NET CORE Denial Of Service Vulnerability".
<p>Publish Date: 2017-11-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-11770>CVE-2017-11770</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-11770</a></p>
<p>Release Date: 2017-11-15</p>
<p>Fix Resolution: 1.0.8;1.1.5;2.0.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
|
non_defect
|
microsoft aspnetcore nupkg vulnerabilities highest severity is vulnerable library microsoft aspnetcore nupkg microsoft aspnetcore library home page a href path to dependency file src azureintegration test microsoft aspnetcore azureappservices functionaltests assets legacy mvc csproj path to vulnerable library ages microsoft aspnetcore microsoft aspnetcore nupkg found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high detected in multiple dependencies transitive n a high microsoft aspnetcore nupkg direct details cve vulnerable libraries microsoft aspnetcore server iisintegration nupkg microsoft aspnetcore hosting nupkg microsoft aspnetcore server iisintegration nupkg asp net core components for working with the iis aspnetcoremodule library home page a href path to dependency file src azureintegration test microsoft aspnetcore azureappservices functionaltests assets legacy mvc csproj path to vulnerable library home wss scanner nuget packages microsoft aspnetcore server iisintegration microsoft aspnetcore server iisintegration nupkg dependency hierarchy microsoft aspnetcore nupkg root library x microsoft aspnetcore server iisintegration nupkg vulnerable library microsoft aspnetcore hosting nupkg asp net core hosting infrastructure and startup logic for web applications library home page a href path to dependency file src azureintegration test microsoft aspnetcore azureappservices functionaltests assets legacy mvc csproj path to vulnerable library home wss scanner nuget packages microsoft aspnetcore hosting microsoft aspnetcore hosting nupkg dependency hierarchy microsoft aspnetcore nupkg root library microsoft aspnetcore server kestrel nupkg x microsoft aspnetcore hosting nupkg vulnerable library found in head commit a href found in base branch main vulnerability details asp net core and allow an elevation of privilege vulnerability due to how asp net web applications handle web requests aka asp net core 
elevation of privilege vulnerability this cve is unique from cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft aspnetcore server iisintegration microsoft aspnetcore hosting step up your open source security game with mend cve vulnerable library microsoft aspnetcore nupkg microsoft aspnetcore library home page a href path to dependency file src azureintegration test microsoft aspnetcore azureappservices functionaltests assets legacy mvc csproj path to vulnerable library ages microsoft aspnetcore microsoft aspnetcore nupkg dependency hierarchy x microsoft aspnetcore nupkg vulnerable library found in head commit a href found in base branch main vulnerability details net core and allow an unauthenticated attacker to remotely cause a denial of service attack against a net core web application by improperly parsing certificate data a denial of service vulnerability exists when net core improperly handles parsing certificate data aka net core denial of service vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
57,549
| 15,839,186,843
|
IssuesEvent
|
2021-04-07 00:13:55
|
dkfans/keeperfx
|
https://api.github.com/repos/dkfans/keeperfx
|
closed
|
NG+ 102: Lighting trap kills imp in hand when Warlock is dead
|
Priority-Low Type-Defect
|
If a creature runs into a lightning trap and I hover my hand with an imp in it directly over the trap, the imp in my hand gets killed too.
I can then drop the dead imp anywhere. Its icon becomes the chicken icon until you drop him.
Discovered in a Bonus level (hellhound) in the NG+ campaign

|
1.0
|
NG+ 102: Lighting trap kills imp in hand when Warlock is dead - If a creature runs into a lightning trap and I hover my hand with an imp in it directly over the trap, the imp in my hand gets killed too.
I can then drop the dead imp anywhere. Its icon becomes the chicken icon until you drop him.
Discovered in a Bonus level (hellhound) in the NG+ campaign

|
defect
|
ng lighting trap kills imp in hand when warlock is dead if a creature runs into a lightning trap and i hover my hand with an imp in it directly over the trap the imp in my hand gets killed too i can then drop the dead imp anywhere its icon becomes the chicken icon until you drop him discovered in a bonus level hellhound in the ng campaign
| 1
|
31,673
| 6,583,671,528
|
IssuesEvent
|
2017-09-13 07:07:59
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
DataTable with Virtual Scroll flickers
|
defect
|
<!--
- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING.
- IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours.
-->
**I'm submitting a ...** (check one with "x")
```
[X] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
http://plnkr.co/edit/V3JY3MASoc1Dh2pm5IJO?p=preview
https://www.primefaces.org/primeng/#/datatable/scroll
Related to this support request:
https://forum.primefaces.org/viewtopic.php?f=35&t=51675&sid=997449c59e643d44aed79a1461037caf
**Current behavior**
When using the table with virtual scroll, if you scroll down to where the lazy load gets fired the table will continually fire the lazy load event. This causes the table to flicker with data loading and never actually complete loading the data until you scroll further down or back up to before the lazy load event fired. This appears to only be an issue when the viewport is showing part of the original loaded data and part of the new loaded data.
**Expected behavior**
The expected behavior is for the data to load into the table without continually firing the lazy load event.
**Minimal reproduction of the problem with instructions**
You can clearly see the issue on the example virtual scroll table in the demo. If you scroll the second table to the point in which lazy load event occurs the table will flicker and never complete loading.
**What is the motivation / use case for changing the behavior?**
Table loading is not working properly.
**Please tell us about your environment:**
Windows 2010, Visual Studio Code, node.js, npm
* **Angular version:** 2.0.X
4.0 +
* **PrimeNG version:** 2.0.X
Checked this issue with 4.0.3, 4.1.0-rc2, 4.1.0-rc3, 4.1.0 and it seems to manifest in all of them.
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Chrome 59
<!-- All browsers where this could be reproduced -->
* **Language:** [all]
|
1.0
|
DataTable with Virtual Scroll flickers - <!--
- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING.
- IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours.
-->
**I'm submitting a ...** (check one with "x")
```
[X] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
http://plnkr.co/edit/V3JY3MASoc1Dh2pm5IJO?p=preview
https://www.primefaces.org/primeng/#/datatable/scroll
Related to this support request:
https://forum.primefaces.org/viewtopic.php?f=35&t=51675&sid=997449c59e643d44aed79a1461037caf
**Current behavior**
When using the table with virtual scroll, if you scroll down to where the lazy load gets fired the table will continually fire the lazy load event. This causes the table to flicker with data loading and never actually complete loading the data until you scroll further down or back up to before the lazy load event fired. This appears to only be an issue when the viewport is showing part of the original loaded data and part of the new loaded data.
**Expected behavior**
The expected behavior is for the data to load into the table without continually firing the lazy load event.
**Minimal reproduction of the problem with instructions**
You can clearly see the issue on the example virtual scroll table in the demo. If you scroll the second table to the point in which lazy load event occurs the table will flicker and never complete loading.
**What is the motivation / use case for changing the behavior?**
Table loading is not working properly.
**Please tell us about your environment:**
Windows 2010, Visual Studio Code, node.js, npm
* **Angular version:** 2.0.X
4.0 +
* **PrimeNG version:** 2.0.X
Checked this issue with 4.0.3, 4.1.0-rc2, 4.1.0-rc3, 4.1.0 and it seems to manifest in all of them.
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Chrome 59
<!-- All browsers where this could be reproduced -->
* **Language:** [all]
|
defect
|
datatable with virtual scroll flickers if you don t fill out the following information we might close your issue without investigating if you d like to secure our response you may consider primeng pro support where support is provided within hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports related to this support request current behavior when using the table with virtual scroll if you scroll down to where the lazy load gets fired the table will continually fire the lazy load event this causes the table to flicker with data loading and never actually complete loading the data until you scroll further down or back up to before the lazy load event fired this appears to only be an issue when the viewport is showing part of the original loaded data and part of the new loaded data expected behavior the expected behavior is for the the data to load to the table without continually firing the lazy load event minimal reproduction of the problem with instructions you can clearly see the issue on the example virtual scroll table in the demo if you scroll the second table to the point in which lazy load event occurs the table will flicker and never complete loading what is the motivation use case for changing the behavior table loading is not working properly please tell us about your environment windows visual studio code node js npm angular version x primeng version x checked this issue with and it seems to manifest in all of them browser all chrome xx firefox xx ie xx safari xx mobile chrome xx android x x web chrome language
| 1
|
45,419
| 12,797,837,521
|
IssuesEvent
|
2020-07-02 13:00:40
|
Hippocampome-Org/php
|
https://api.github.com/repos/Hippocampome-Org/php
|
opened
|
DWW: [synaptome] Number of significant digits
|
defect
|
In the hover-over statistics, the mean value has the correct number of significant digits (4), but the standard deviation value, the lower range value, and the upper range value have too many digits displayed. They should also be limited to 4 significant digits.
|
1.0
|
DWW: [synaptome] Number of significant digits - In the hover-over statistics, the mean value has the correct number of significant digits (4), but the standard deviation value, the lower range value, and the upper range value have too many digits displayed. They should also be limited to 4 significant digits.
|
defect
|
dww number of significant digits in the hover over statistics the mean value has the correct number of significant digits but the standard deviation value the lower range value and the upper range value have too many digits displayed they should also be limited to significant digits
| 1
|
29,276
| 5,632,225,273
|
IssuesEvent
|
2017-04-05 16:06:35
|
BOINC/boinc
|
https://api.github.com/repos/BOINC/boinc
|
closed
|
BOINC-wide teams has several bugs
|
C: Server - Other P: Major T: Defect
|
**Reported by Saenger on 4 Apr 37833731 04:26 UTC**
1.: No multi-founder options:
We (SETI.Germany, not one of the smallest teams) have 28 founders. We like it this way. But our team was automatically founded on the team list in Berkeley, so, afaik, we will be affected by this. Since most projects have not implemented this "feature" I can't yet say how.
2.: No project specific team description possible:
We have in our team descriptions a (bigger) common part for all projects, and some special part for just this project where it's good for. This has to be possible in the future, so an opt-out of the description update is necessary.
3.: Teams were created with copy'n'paste from Seti with unwanted consequences:
Our founder probably never was asked about this "feature", he was not seen on our boards for quite a while, and he is since even longer not any more our team page admin. That job has changed twice since he founded S.G @Classic. So we don't have access to this team account. The actual team leaders need to be contacted by those who set up this accounts.
4.: Team leaders have to be notified about new founded teams by mail from new projects
I've had another ticket about this issue with the mail addresses, but have to concede that you generally don't hold security and spam protection that high as I do. So if the mail is imported by some new project, the founder hast to be made aware of this asap, otherwise someone can probably hijack the team by "initiate transfer".
Discussion of this in the Forum: http://boinc.berkeley.edu/dev/forum_thread.php?id=2234
Migrated-From: http://boinc.berkeley.edu/trac/ticket/455
|
1.0
|
BOINC-wide teams has several bugs - **Reported by Saenger on 4 Apr 37833731 04:26 UTC**
1.: No multi-founder options:
We (SETI.Germany, not one of the smallest teams) have 28 founders. We like it this way. But our team was automatically founded on the team list in Berkeley, so, afaik, we will be affected by this. Since most projects have not implemented this "feature" I can't yet say how.
2.: No project specific team description possible:
We have in our team descriptions a (bigger) common part for all projects, and some special part for just this project where it's good for. This has to be possible in the future, so an opt-out of the description update is necessary.
3.: Teams were created with copy'n'paste from Seti with unwanted consequences:
Our founder probably never was asked about this "feature", he was not seen on our boards for quite a while, and he is since even longer not any more our team page admin. That job has changed twice since he founded S.G @Classic. So we don't have access to this team account. The actual team leaders need to be contacted by those who set up this accounts.
4.: Team leaders have to be notified about new founded teams by mail from new projects
I've had another ticket about this issue with the mail addresses, but have to concede that you generally don't hold security and spam protection that high as I do. So if the mail is imported by some new project, the founder hast to be made aware of this asap, otherwise someone can probably hijack the team by "initiate transfer".
Discussion of this in the Forum: http://boinc.berkeley.edu/dev/forum_thread.php?id=2234
Migrated-From: http://boinc.berkeley.edu/trac/ticket/455
|
defect
|
boinc wide teams has several bugs reported by saenger on apr utc no multi founder options we seti germany not one of the smallest teams have founders we like it this way but our team was automatically founded on the team list in berkeley so afaik we will be affected by this since most projects have not implemented this feature i can t yet say how no project specific team description possible we have in our team descriptions a bigger common part for all projects and some special part for just this project where it s good for this has to be possible in the future so an opt out of the description update is necessary teams were created with copy n paste from seti with unwanted consequences our founder probably never was asked about this feature he was not seen on our boards for quite a while and he is since even longer not any more our team page admin that job has changed twice since he founded s g classic so we don t have access to this team account the actual team leaders need to be contacted by those who set up this accounts team leaders have to be notified about new founded teams by mail from new projects i ve had another ticket about this issue with the mail addresses but have to concede that you generally don t hold security and spam protection that high as i do so if the mail is imported by some new project the founder hast to be made aware of this asap otherwise someone can probably hijack the team by initiate transfer discussion of this in the forum migrated from
| 1
|
139,940
| 11,299,688,107
|
IssuesEvent
|
2020-01-17 11:49:22
|
radareorg/radare2
|
https://api.github.com/repos/radareorg/radare2
|
closed
|
Segmentation fault on af on a big function.
|
RAnal crash has-test
|
# Work environment
Questions | Answers
-- | --
OS/arch/bits (mandatory) | openSUSE Tumbleweed x86_64
File format of the file you reverse (mandatory) | ELF
Architecture/bits of the file (mandatory) | x64
r2 -v full output, not truncated (mandatory) | radare2 4.1.0-git 23310 @ linux-x86-64 git.4.0.0-132-g9c08b9e4c commit: 9c08b9e4c02ec49c3c789760504c8ae4461a7c20 build: 2019-11-26__08:22:54
# Expected behavious
The command `af bigfunction 4198793` should analyse my function.
# Actual behaviour
`
/home/myuser/Softwares/radare2/env.sh: line 65: 4681 Segmentation fault (core dumped) R2_ENV_IS_SET=1 R2_LIBR_PLUGINS=${pfx}/lib/radare2 PATH=$pfx/bin:${PATH} LD_LIBRARY_PATH=$pfx/lib:$LD_LIBRARY_PATH DYLD_LIBRARY_PATH=$pfx/lib:$DYLD_LIBRARY_PATH PKG_CONFIG_PATH=$pfx/lib/pkgconfig:$PKG_CONFIG_PATH "${1}" "${2}"
`
# Steps to reproduce the behaviour
I've been stucked at :
```bash
r2 ./binary
e anal.depth =0x1400
af bigfunction 4198793
```
It does not give me the full function if I try with e anal.depth=0x1300 for example.
# Additional Logs, screenshots, source-code, configuration dump, ...
The binary is to big for github so I've put it [here](http://challenge01.root-me.org/cracking/ch34/ch34.xz).
|
1.0
|
Segmentation fault on af on a big function. - # Work environment
Questions | Answers
-- | --
OS/arch/bits (mandatory) | openSUSE Tumbleweed x86_64
File format of the file you reverse (mandatory) | ELF
Architecture/bits of the file (mandatory) | x64
r2 -v full output, not truncated (mandatory) | radare2 4.1.0-git 23310 @ linux-x86-64 git.4.0.0-132-g9c08b9e4c commit: 9c08b9e4c02ec49c3c789760504c8ae4461a7c20 build: 2019-11-26__08:22:54
# Expected behavious
The command `af bigfunction 4198793` should analyse my function.
# Actual behaviour
`
/home/myuser/Softwares/radare2/env.sh: line 65: 4681 Segmentation fault (core dumped) R2_ENV_IS_SET=1 R2_LIBR_PLUGINS=${pfx}/lib/radare2 PATH=$pfx/bin:${PATH} LD_LIBRARY_PATH=$pfx/lib:$LD_LIBRARY_PATH DYLD_LIBRARY_PATH=$pfx/lib:$DYLD_LIBRARY_PATH PKG_CONFIG_PATH=$pfx/lib/pkgconfig:$PKG_CONFIG_PATH "${1}" "${2}"
`
# Steps to reproduce the behaviour
I've been stucked at :
```bash
r2 ./binary
e anal.depth =0x1400
af bigfunction 4198793
```
It does not give me the full function if I try with e anal.depth=0x1300 for example.
# Additional Logs, screenshots, source-code, configuration dump, ...
The binary is to big for github so I've put it [here](http://challenge01.root-me.org/cracking/ch34/ch34.xz).
|
non_defect
|
segmentation fault on af on a big function work environment questions answers os arch bits mandatory opensuse tumbleweed file format of the file you reverse mandatory elf architecture bits of the file mandatory v full output not truncated mandatory git linux git commit build expected behavious the command af bigfunction should analyse my function actual behaviour home myuser softwares env sh line segmentation fault core dumped env is set libr plugins pfx lib path pfx bin path ld library path pfx lib ld library path dyld library path pfx lib dyld library path pkg config path pfx lib pkgconfig pkg config path steps to reproduce the behaviour i ve been stucked at bash binary e anal depth af bigfunction it does not give me the full function if i try with e anal depth for example additional logs screenshots source code configuration dump the binary is to big for github so i ve put it
| 0
|
135,823
| 11,019,192,255
|
IssuesEvent
|
2019-12-05 12:08:05
|
psychopy/psychopy
|
https://api.github.com/repos/psychopy/psychopy
|
closed
|
pyglet testing on Travis
|
tests
|
Currently we test all supported Py3 versions with both pyglet 1.3 and 1.4. I'm wondering if we could reduce the number of pyglet 1.3 tests, e.g. only test on Python 3.6 (which is what's included in the standalone build) to save CI resources? WDYT?
|
1.0
|
pyglet testing on Travis - Currently we test all supported Py3 versions with both pyglet 1.3 and 1.4. I'm wondering if we could reduce the number of pyglet 1.3 tests, e.g. only test on Python 3.6 (which is what's included in the standalone build) to save CI resources? WDYT?
|
non_defect
|
pyglet testing on travis currently we test all supported versions with both pyglet and i m wondering if we could reduce the number of pyglet tests e g only test on python which is what s included in the standalone build to save ci resources wdyt
| 0
|
69,574
| 22,536,169,460
|
IssuesEvent
|
2022-06-25 08:45:26
|
cakephp/bake
|
https://api.github.com/repos/cakephp/bake
|
opened
|
Pending task for Cake 5
|
defect
|
### Description
- [ ] Cleanup code / files related to tasks and shells.
### Bake Version
x
### PHP Version
_No response_
|
1.0
|
Pending task for Cake 5 - ### Description
- [ ] Cleanup code / files related to tasks and shells.
### Bake Version
x
### PHP Version
_No response_
|
defect
|
pending task for cake description cleanup code files related to tasks and shells bake version x php version no response
| 1
|
46,708
| 13,055,962,369
|
IssuesEvent
|
2020-07-30 03:14:52
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[simprod] includes projects which are not in combo (Trac #1743)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1743
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "This causes warnings in sphinx\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:23: WARNING: autodoc: failed to import module u'icecube.simprod.util.corsika_binary_stager'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/corsika_binary_stager.py\", line 13, in <module>\n class CorsikaBinaryStager(CorsikaBinary):\nNameError: name 'CorsikaBinary' is not defined\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:47: WARNING: autodoc: failed to import module u'icecube.simprod.util.gaussSpreadDOMeff'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/gaussSpreadDOMeff.py\", line 6, in <module>\n from iceprod.modules import ipmodule\nImportError: No module named iceprod.modules\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:55: WARNING: autodoc: failed to import module u'icecube.simprod.util.modifyevent'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/modifyevent.py\", line 14, in <module>\n from iceprod.modules import ipmodule\nImportError: No module named iceprod.modules\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:63: WARNING: autodoc: failed to import module 
u'icecube.simprod.util.splitter'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/splitter.py\", line 5, in <module>\n from iceprod.modules import ipmodule\nImportError: No module named iceprod.modules\n}}}",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "combo simulation",
"summary": "[simprod] includes projects which are not in combo",
"priority": "minor",
"keywords": "documentation",
"time": "2016-06-10T14:16:51",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
|
1.0
|
[simprod] includes projects which are not in combo (Trac #1743) - Migrated from https://code.icecube.wisc.edu/ticket/1743
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "This causes warnings in sphinx\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:23: WARNING: autodoc: failed to import module u'icecube.simprod.util.corsika_binary_stager'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/corsika_binary_stager.py\", line 13, in <module>\n class CorsikaBinaryStager(CorsikaBinary):\nNameError: name 'CorsikaBinary' is not defined\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:47: WARNING: autodoc: failed to import module u'icecube.simprod.util.gaussSpreadDOMeff'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/gaussSpreadDOMeff.py\", line 6, in <module>\n from iceprod.modules import ipmodule\nImportError: No module named iceprod.modules\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:55: WARNING: autodoc: failed to import module u'icecube.simprod.util.modifyevent'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/modifyevent.py\", line 14, in <module>\n from iceprod.modules import ipmodule\nImportError: No module named iceprod.modules\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.simprod.util.rst:63: WARNING: autodoc: failed to import module 
u'icecube.simprod.util.splitter'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/simprod/util/splitter.py\", line 5, in <module>\n from iceprod.modules import ipmodule\nImportError: No module named iceprod.modules\n}}}",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "combo simulation",
"summary": "[simprod] includes projects which are not in combo",
"priority": "minor",
"keywords": "documentation",
"time": "2016-06-10T14:16:51",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
|
defect
|
includes projects which are not in combo trac migrated from json status closed changetime description this causes warnings in sphinx n n users kmeagher icecube combo release sphinx build source python icecube simprod util rst warning autodoc failed to import module u icecube simprod util corsika binary stager the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube simprod util corsika binary stager py line in n class corsikabinarystager corsikabinary nnameerror name corsikabinary is not defined n users kmeagher icecube combo release sphinx build source python icecube simprod util rst warning autodoc failed to import module u icecube simprod util gaussspreaddomeff the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube simprod util gaussspreaddomeff py line in n from iceprod modules import ipmodule nimporterror no module named iceprod modules n users kmeagher icecube combo release sphinx build source python icecube simprod util rst warning autodoc failed to import module u icecube simprod util modifyevent the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube simprod util modifyevent py line in n from iceprod modules import ipmodule nimporterror no module named iceprod modules n users kmeagher icecube combo release sphinx build source python icecube simprod util rst warning autodoc failed to import module u icecube simprod util splitter the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in 
import object n file users kmeagher icecube combo release lib icecube simprod util splitter py line in n from iceprod modules import ipmodule nimporterror no module named iceprod modules n reporter kjmeagher cc resolution fixed ts component combo simulation summary includes projects which are not in combo priority minor keywords documentation time milestone owner david schultz type defect
| 1
|
267,189
| 23,287,124,962
|
IssuesEvent
|
2022-08-05 17:47:57
|
nucleus-security/Test-repo
|
https://api.github.com/repos/nucleus-security/Test-repo
|
opened
|
Nucleus - [Critical] - Security Issue - CVE-2016-1000027
|
Test
|
Source: SONATYPE
Finding Description: Pivotal Spring Framework through 5.3.16 suffers from a potential remote code execution (RCE) issue if used for Java deserialization of untrusted data. Depending on how the library is implemented within a product, this issue may or not occur, and authentication may be required. NOTE: the vendor's position is that untrusted data is not an intended use case. The product's behavior will not be changed because some users rely on deserialization of trusted data.
Explanation: The <code>org.springframework:spring-web</code> package is vulnerable to deserialization of untrusted data leading to Remote Code Execution (RCE). The <code>readRemoteInvocation()</code> method in <code>HttpInvokerServiceExporter.class</code> does not properly verify or restrict untrusted objects prior to deserializing them. An attacker can exploit this vulnerability by sending malicious requests containing crafted objects, which when deserialized, execute arbitrary code on the vulnerable system.
<em>NOTE:</em> This vulnerability is related to a previously reported deserialization vulnerability (CVE-2011-2894) within the package, impacting a different class.
Detection: The application is vulnerable by using this component under specific scenarios as listed out in the <a href="https://www.tenable.com/security/research/tra-2016-20">advisory</a>.
CVE Link: <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000027" target="_blank">http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000027</a>
Target(s): Asset name: sandbox-application Path:org.springframework : spring-web : 5.2.2.RELEASE
Asset name: webgoat Path:org.springframework : spring-web : 5.3.1
Solution: There is no non-vulnerable upgrade path for this component/package. We recommend investigating alternative components or a potential mitigating control.
A warning has been provided in the official <a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/remoting/httpinvoker/HttpInvokerServiceExporter.html">Javadocs</a> of the <code>HttpInvokerServiceExporter</code> class and support for several serialization-based remoting technologies including this class has been deprecated from 5.3.0 onwards:
>WARNING: Be aware of vulnerabilities due to unsafe Java deserialization: Manipulated input streams could lead to unwanted code execution on the server during the deserialization step. As a consequence, do not expose HTTP invoker endpoints to untrusted clients but rather just between your own services. In general, we strongly recommend any other message format (e.g. JSON) instead.
The developer's general advice also states:
>Do not use Java serialization for external endpoints, in particular not for unauthorized ones. HTTP invoker is not a well-kept secret (or an "oversight") but rather the typical case of how a Spring application would expose serialization endpoints to begin with... he has a point that we should make this case all across our documentation, including the javadoc. But I don't really see a CVE case here, just a documentation improvement.
>
>Pivotal will enhance their documentation for the 4.2.6 and 3.2.17 releases.
Reference: <a href="https://www.tenable.com/security/research/tra-2016-20">https://www.tenable.com/security/research/tra-2016-20</a>
References:
CVSS Base Score:7.5
CVSS Vector:CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
CVSS3 Base Score:9.8
CVSS3 Sonatype:9.8
Severity: Critical
Date Discovered: 2022-08-03 21:25:22
Nucleus Notification Rules Triggered: r2
Project Name: 4288-1
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/168000005/Q1ZFLTIwMTYtMTAwMDAyNw--/U09OQVRZUEU-/VnVsbg--/false/MTY4MDAwMDA1/c3VtbWFyeQ--/false
|
1.0
|
Nucleus - [Critical] - Security Issue - CVE-2016-1000027 - Source: SONATYPE
Finding Description: Pivotal Spring Framework through 5.3.16 suffers from a potential remote code execution (RCE) issue if used for Java deserialization of untrusted data. Depending on how the library is implemented within a product, this issue may or not occur, and authentication may be required. NOTE: the vendor's position is that untrusted data is not an intended use case. The product's behavior will not be changed because some users rely on deserialization of trusted data.
Explanation: The <code>org.springframework:spring-web</code> package is vulnerable to deserialization of untrusted data leading to Remote Code Execution (RCE). The <code>readRemoteInvocation()</code> method in <code>HttpInvokerServiceExporter.class</code> does not properly verify or restrict untrusted objects prior to deserializing them. An attacker can exploit this vulnerability by sending malicious requests containing crafted objects, which when deserialized, execute arbitrary code on the vulnerable system.
<em>NOTE:</em> This vulnerability is related to a previously reported deserialization vulnerability (CVE-2011-2894) within the package, impacting a different class.
Detection: The application is vulnerable by using this component under specific scenarios as listed out in the <a href="https://www.tenable.com/security/research/tra-2016-20">advisory</a>.
CVE Link: <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000027" target="_blank">http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000027</a>
Target(s): Asset name: sandbox-application Path:org.springframework : spring-web : 5.2.2.RELEASE
Asset name: webgoat Path:org.springframework : spring-web : 5.3.1
Solution: There is no non-vulnerable upgrade path for this component/package. We recommend investigating alternative components or a potential mitigating control.
A warning has been provided in the official <a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/remoting/httpinvoker/HttpInvokerServiceExporter.html">Javadocs</a> of the <code>HttpInvokerServiceExporter</code> class and support for several serialization-based remoting technologies including this class has been deprecated from 5.3.0 onwards:
>WARNING: Be aware of vulnerabilities due to unsafe Java deserialization: Manipulated input streams could lead to unwanted code execution on the server during the deserialization step. As a consequence, do not expose HTTP invoker endpoints to untrusted clients but rather just between your own services. In general, we strongly recommend any other message format (e.g. JSON) instead.
The developer's general advice also states:
>Do not use Java serialization for external endpoints, in particular not for unauthorized ones. HTTP invoker is not a well-kept secret (or an "oversight") but rather the typical case of how a Spring application would expose serialization endpoints to begin with... he has a point that we should make this case all across our documentation, including the javadoc. But I don't really see a CVE case here, just a documentation improvement.
>
>Pivotal will enhance their documentation for the 4.2.6 and 3.2.17 releases.
Reference: <a href="https://www.tenable.com/security/research/tra-2016-20">https://www.tenable.com/security/research/tra-2016-20</a>
References:
CVSS Base Score:7.5
CVSS Vector:CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
CVSS3 Base Score:9.8
CVSS3 Sonatype:9.8
Severity: Critical
Date Discovered: 2022-08-03 21:25:22
Nucleus Notification Rules Triggered: r2
Project Name: 4288-1
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/168000005/Q1ZFLTIwMTYtMTAwMDAyNw--/U09OQVRZUEU-/VnVsbg--/false/MTY4MDAwMDA1/c3VtbWFyeQ--/false
|
non_defect
|
nucleus security issue cve source sonatype finding description pivotal spring framework through suffers from a potential remote code execution rce issue if used for java deserialization of untrusted data depending on how the library is implemented within a product this issue may or not occur and authentication may be required note the vendor s position is that untrusted data is not an intended use case the product s behavior will not be changed because some users rely on deserialization of trusted data explanation the org springframework spring web package is vulnerable to deserialization of untrusted data leading to remote code execution rce the readremoteinvocation method in httpinvokerserviceexporter class does not properly verify or restrict untrusted objects prior to deserializing them an attacker can exploit this vulnerability by sending malicious requests containing crafted objects which when deserialized execute arbitrary code on the vulnerable system note this vulnerability is related to a previously reported deserialization vulnerability cve within the package impacting a different class detection the application is vulnerable by using this component under specific scenarios as listed out in the a href cve link target s asset name sandbox application path org springframework spring web release asset name webgoat path org springframework spring web solution there is no non vulnerable upgrade path for this component package we recommend investigating alternative components or a potential mitigating control a warning has been provided in the official httpinvokerserviceexporter class and support for several serialization based remoting technologies including this class has been deprecated from onwards gt warning be aware of vulnerabilities due to unsafe java deserialization manipulated input streams could lead to unwanted code execution on the server during the deserialization step as a consequence do not expose http invoker endpoints to untrusted clients but 
rather just between your own services in general we strongly recommend any other message format e g json instead the developer s general advice also states gt do not use java serialization for external endpoints in particular not for unauthorized ones http invoker is not a well kept secret or an oversight but rather the typical case of how a spring application would expose serialization endpoints to begin with he has a point that we should make this case all across our documentation including the javadoc but i don t really see a cve case here just a documentation improvement gt gt pivotal will enhance their documentation for the and releases reference a href references cvss base score cvss vector cvss av n ac l pr n ui n s u c h i h a h base score sonatype severity critical date discovered nucleus notification rules triggered project name please see nucleus for more information on these vulnerabilities
| 0
|
256,627
| 19,430,565,320
|
IssuesEvent
|
2021-12-21 11:25:59
|
SoftwareAG/cumulocity-flexy-integration
|
https://api.github.com/repos/SoftwareAG/cumulocity-flexy-integration
|
opened
|
Clarify Management tenant
|
Documentation
|
I needs to be clarified which Cumulocity Management Tenant are we allowed to show.
|
1.0
|
Clarify Management tenant - I needs to be clarified which Cumulocity Management Tenant are we allowed to show.
|
non_defect
|
clarify management tenant i needs to be clarified which cumulocity management tenant are we allowed to show
| 0
|
5,837
| 2,610,216,419
|
IssuesEvent
|
2015-02-26 19:08:58
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
physxextensions.dll
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Альбин Громов'''
Привет всем не подскажите где можно найти
.physxextensions.dll. как то выкладывали уже
'''Василько Агафонов'''
Вот держи линк http://bit.ly/1hbkgtq
'''Анвар Шестаков'''
Просит ввести номер мобилы!Не опасно ли это?
'''Авксентий Кузнецов'''
Не это не влияет на баланс
'''Викентий Крылов'''
Не это не влияет на баланс
Информация о файле: physxextensions.dll
Загружен: В этом месяце
Скачан раз: 222
Рейтинг: 229
Средняя скорость скачивания: 857
Похожих файлов: 12
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 7:45
|
1.0
|
physxextensions.dll - ```
'''Альбин Громов'''
Привет всем не подскажите где можно найти
.physxextensions.dll. как то выкладывали уже
'''Василько Агафонов'''
Вот держи линк http://bit.ly/1hbkgtq
'''Анвар Шестаков'''
Просит ввести номер мобилы!Не опасно ли это?
'''Авксентий Кузнецов'''
Не это не влияет на баланс
'''Викентий Крылов'''
Не это не влияет на баланс
Информация о файле: physxextensions.dll
Загружен: В этом месяце
Скачан раз: 222
Рейтинг: 229
Средняя скорость скачивания: 857
Похожих файлов: 12
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 7:45
|
defect
|
physxextensions dll альбин громов привет всем не подскажите где можно найти physxextensions dll как то выкладывали уже василько агафонов вот держи линк анвар шестаков просит ввести номер мобилы не опасно ли это авксентий кузнецов не это не влияет на баланс викентий крылов не это не влияет на баланс информация о файле physxextensions dll загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
| 1
|
25,556
| 4,383,349,771
|
IssuesEvent
|
2016-08-07 13:34:52
|
menatwork/semantic_html5
|
https://api.github.com/repos/menatwork/semantic_html5
|
closed
|
Nur Gästen zeigen
|
Defect
|
Habe ein kleines Problem. Aktiviere ich "Nur Gästen zeigen" wird sich die Einstellung nur auf das öffnende Tag aber nicht auf das schließende Tag aus.
|
1.0
|
Nur Gästen zeigen - Habe ein kleines Problem. Aktiviere ich "Nur Gästen zeigen" wird sich die Einstellung nur auf das öffnende Tag aber nicht auf das schließende Tag aus.
|
defect
|
nur gästen zeigen habe ein kleines problem aktiviere ich nur gästen zeigen wird sich die einstellung nur auf das öffnende tag aber nicht auf das schließende tag aus
| 1
|
308,462
| 26,609,016,390
|
IssuesEvent
|
2023-01-23 21:59:02
|
dotnet/source-build
|
https://api.github.com/repos/dotnet/source-build
|
closed
|
re-enable dotnet-watch tests
|
area-ci-testing
|
dotnet-watch is broken in .NET 8 - see https://github.com/dotnet/sdk/issues/29609 for details.
The dotnet-watch tests have been disabled because of this.
|
1.0
|
re-enable dotnet-watch tests - dotnet-watch is broken in .NET 8 - see https://github.com/dotnet/sdk/issues/29609 for details.
The dotnet-watch tests have been disabled because of this.
|
non_defect
|
re enable dotnet watch tests dotnet watch is broken in net see for details the dotnet watch tests have been disabled because of this
| 0
|
161,679
| 12,558,937,293
|
IssuesEvent
|
2020-06-07 17:24:21
|
CICE-Consortium/CICE
|
https://api.github.com/repos/CICE-Consortium/CICE
|
opened
|
ice_transport_remap seg faults
|
Priority: Medium Software Engineering Testing
|
#460 includes what we believe is a compiler bug workaround in ice_transport_remap, but more analysis needs to be done. As highlighted in #460
> ice_transport_remap seems to have persistent seg fault issues, but they appear in different places; there's a comment about one of the omp directives seg faulting, and in the past, I've had to unroll a loop in the transport (I no longer remember which one) in order for optimization to not create a seg fault. Is there a particular (set of) variable(s) that need to be allocated, or is it really all of them? Is there a reason to not allocate here all the time? Would it help to move to a vector version of the transport (e.g. the new unstructured-grid code in MPAS)?
We need to test on other machines with the intel20 compiler, understand the problem better (whether a coding issue or simply a coding vulnerability), and try to figure out a more robust solution.
|
1.0
|
ice_transport_remap seg faults - #460 includes what we believe is a compiler bug workaround in ice_transport_remap, but more analysis needs to be done. As highlighted in #460
> ice_transport_remap seems to have persistent seg fault issues, but they appear in different places; there's a comment about one of the omp directives seg faulting, and in the past, I've had to unroll a loop in the transport (I no longer remember which one) in order for optimization to not create a seg fault. Is there a particular (set of) variable(s) that need to be allocated, or is it really all of them? Is there a reason to not allocate here all the time? Would it help to move to a vector version of the transport (e.g. the new unstructured-grid code in MPAS)?
We need to test on other machines with the intel20 compiler, understand the problem better (whether a coding issue or simply a coding vulnerability), and try to figure out a more robust solution.
|
non_defect
|
ice transport remap seg faults includes what we believe is a compiler bug workaround in ice transport remap but more analysis needs to be done as highlighted in ice transport remap seems to have persistent seg fault issues but they appear in different places there s a comment about one of the omp directives seg faulting and in the past i ve had to unroll a loop in the transport i no longer remember which one in order for optimization to not create a seg fault is there a particular set of variable s that need to be allocated or is it really all of them is there a reason to not allocate here all the time would it help to move to a vector version of the transport e g the new unstructured grid code in mpas we need to test on other machines with the compiler understand the problem better whether a coding issue or simply a coding vulnerability and try to figure out a more robust solution
| 0
|
40,176
| 9,884,360,863
|
IssuesEvent
|
2019-06-24 21:54:33
|
openanthem/nimbus-core
|
https://api.github.com/repos/openanthem/nimbus-core
|
closed
|
Mandatory fields validation is skipped when completing fields non-sequentially
|
Defect Open needs-more-detail
|
Mandatory fields validation is skipped when completing fields non-sequentially
|
1.0
|
Mandatory fields validation is skipped when completing fields non-sequentially - Mandatory fields validation is skipped when completing fields non-sequentially
|
defect
|
mandatory fields validation is skipped when completing fields non sequentially mandatory fields validation is skipped when completing fields non sequentially
| 1
|
28,789
| 5,367,299,311
|
IssuesEvent
|
2017-02-22 03:30:57
|
TNGSB/eWallet
|
https://api.github.com/repos/TNGSB/eWallet
|
opened
|
e-wallet_Add Voucher Campaign 22022017
|
Defect - High (Sev-2)
|
There is no checking for Uploading Voucher.
Scenario: User set 200 Maximum Set per Campaign when creating Campaign, but only generate 100 Voucher.
Issue: The upload was success although the value is not same and the campaign's status will always pending.
|
1.0
|
e-wallet_Add Voucher Campaign 22022017 - There is no checking for Uploading Voucher.
Scenario: User set 200 Maximum Set per Campaign when creating Campaign, but only generate 100 Voucher.
Issue: The upload was success although the value is not same and the campaign's status will always pending.
|
defect
|
e wallet add voucher campaign there is no checking for uploading voucher scenario user set maximum set per campaign when creating campaign but only generate voucher issue the upload was success although the value is not same and the campaign s status will always pending
| 1
|
55,957
| 14,860,754,268
|
IssuesEvent
|
2021-01-18 21:12:02
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
images going into chigger gif animation are not sorted
|
C: Chigger T: defect
|
## Bug Description
Gif creation in `chigger.utils.anim` uses `glob` for image search but results from `glob` are [not guaranteed to be sorted](https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered). As a result, gif frames can be rendered out of order.
## Steps to Reproduce
Create a gif from an image stack and see for yourself.
## Impact
Will help with visualization.
|
1.0
|
images going into chigger gif animation are not sorted - ## Bug Description
Gif creation in `chigger.utils.anim` uses `glob` for image search but results from `glob` are [not guaranteed to be sorted](https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered). As a result, gif frames can be rendered out of order.
## Steps to Reproduce
Create a gif from an image stack and see for yourself.
## Impact
Will help with visualization.
|
defect
|
images going into chigger gif animation are not sorted bug description gif creation in chigger utils anim uses glob for image search but results from glob are as a result gif frames can be rendered out of order steps to reproduce create a gif from an image stack and see for yourself impact will help with visualization
| 1
|
352,471
| 32,071,688,888
|
IssuesEvent
|
2023-09-25 08:26:16
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
eks-operator k8s 1.27 chart support
|
[zube]: To Test area/charts team/highlander
|
This issue is to implement the chart changes & perform the testing required to upgrade to k8s 1.27. Please perform below steps.
- [ ] Check whether the chart has any references to the below deprecated features of k8s 1.27. You can find information about deprecated APIs here on [Kubernetes Deprecation guide](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27).
**storage.k8s.io/v1beta1** API version of **CSIStorageCapacity**
- [ ] Update kube-version annotation in Chart.yaml. catalog.cattle.io/kube-version: '>= 1.23.0-0 < 1.28.0-0'
Note: If this is an upstream chart and if the upstream chart doesn't support k8s 1.27 then do not update the kube-version annotation or update the k8s version accordingly.
- [ ] Perform below tests
- Fresh install/uninstall/upgrade/downgrade of the charts on K8s 1.27 provisioned on Rancher. Test the chart functionality.
(Please refer to the https://github.com/rancher/rancher/issues/41395 to know what testing was done for k8s 1.26 upgrade for your chart)
- Provision rancher on k8s 1.26 or lesser
- Install chart on this cluster
- Upgrade this cluster in rancher to k8s 1.27
- Make sure the existing chart and the app is functional after cluster upgrade
## PR's:
- [x] https://github.com/rancher/charts/pull/2889
- [x] https://github.com/rancher/charts/pull/2892
- [x] https://github.com/rancher/rancher/pull/42879
|
1.0
|
eks-operator k8s 1.27 chart support - This issue is to implement the chart changes & perform the testing required to upgrade to k8s 1.27. Please perform below steps.
- [ ] Check whether the chart has any references to the below deprecated features of k8s 1.27. You can find information about deprecated APIs here on [Kubernetes Deprecation guide](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27).
**storage.k8s.io/v1beta1** API version of **CSIStorageCapacity**
- [ ] Update kube-version annotation in Chart.yaml. catalog.cattle.io/kube-version: '>= 1.23.0-0 < 1.28.0-0'
Note: If this is an upstream chart and if the upstream chart doesn't support k8s 1.27 then do not update the kube-version annotation or update the k8s version accordingly.
- [ ] Perform below tests
- Fresh install/uninstall/upgrade/downgrade of the charts on K8s 1.27 provisioned on Rancher. Test the chart functionality.
(Please refer to the https://github.com/rancher/rancher/issues/41395 to know what testing was done for k8s 1.26 upgrade for your chart)
- Provision rancher on k8s 1.26 or lesser
- Install chart on this cluster
- Upgrade this cluster in rancher to k8s 1.27
- Make sure the existing chart and the app is functional after cluster upgrade
## PR's:
- [x] https://github.com/rancher/charts/pull/2889
- [x] https://github.com/rancher/charts/pull/2892
- [x] https://github.com/rancher/rancher/pull/42879
|
non_defect
|
eks operator chart support this issue is to implement the chart changes perform the testing required to upgrade to please perform below steps check whether the chart has any references to the below deprecated features of you can find information about deprecated apis here on storage io api version of csistoragecapacity update kube version annotation in chart yaml catalog cattle io kube version note if this is an upstream chart and if the upstream chart doesn t support then do not update the kube version annotation or update the version accordingly perform below tests fresh install uninstall upgrade downgrade of the charts on provisioned on rancher test the chart functionality please refer to the to know what testing was done for upgrade for your chart provision rancher on or lesser install chart on this cluster upgrade this cluster in rancher to make sure the existing chart and the app is functional after cluster upgrade pr s
| 0
|
292,174
| 21,955,117,999
|
IssuesEvent
|
2022-05-24 11:23:34
|
r5py/r5py
|
https://api.github.com/repos/r5py/r5py
|
closed
|
Make it clearer how to set memory limits, make it easier to set them.
|
documentation enhancement
|
> 1. Setting memory
> Setting Java memory `TransportNetwork(java_params=['-Xmx2G'])`. What's the best approach to allows users allocate emory to Java? In `r5r`, users need to set one time this at the start of the R session, before loading the r5r library. I believe there should be better ways to do this in Python.
This is splitting up issue #25 into smaller pieces (re: https://github.com/r5py/r5py/issues/25#issuecomment-1122901350 )
|
1.0
|
Make it clearer how to set memory limits, make it easier to set them. - > 1. Setting memory
> Setting Java memory `TransportNetwork(java_params=['-Xmx2G'])`. What's the best approach to allows users allocate emory to Java? In `r5r`, users need to set one time this at the start of the R session, before loading the r5r library. I believe there should be better ways to do this in Python.
This is splitting up issue #25 into smaller pieces (re: https://github.com/r5py/r5py/issues/25#issuecomment-1122901350 )
|
non_defect
|
make it clearer how to set memory limits make it easier to set them setting memory setting java memory transportnetwork java params what s the best approach to allows users allocate emory to java in users need to set one time this at the start of the r session before loading the library i believe there should be better ways to do this in python this is splitting up issue into smaller pieces re
| 0
|
92,175
| 18,785,609,111
|
IssuesEvent
|
2021-11-08 11:48:22
|
hashicorp/terraform-ls
|
https://api.github.com/repos/hashicorp/terraform-ls
|
opened
|
Report mismatching types of variables in tfvars
|
enhancement textDocument/codeAction textDocument/publishDiagnostics
|
### Use-cases
Users with more complex modules and many variables may not always notice mismatching types within variable files.
For example, say we have the following variable declaration
```hcl
variable "example" {
type = list(string)
}
```
and then a `terraform.tfvars` file entry:
```hcl
example = "foobar"
```
User may not immediately spot the mismatching type here, if they have many variables and variables of more complex types.
### Attempted Solutions
Manual inspection or `terraform validate`.
### Proposal
Report mismatching types of variables in any `*.tfvars` and `*.tfvars.json` via `textDocument/publishDiagnostics`.
|
1.0
|
Report mismatching types of variables in tfvars - ### Use-cases
Users with more complex modules and many variables may not always notice mismatching types within variable files.
For example, say we have the following variable declaration
```hcl
variable "example" {
type = list(string)
}
```
and then a `terraform.tfvars` file entry:
```hcl
example = "foobar"
```
User may not immediately spot the mismatching type here, if they have many variables and variables of more complex types.
### Attempted Solutions
Manual inspection or `terraform validate`.
### Proposal
Report mismatching types of variables in any `*.tfvars` and `*.tfvars.json` via `textDocument/publishDiagnostics`.
|
non_defect
|
report mismatching types of variables in tfvars use cases users with more complex modules and many variables may not always notice mismatching types within variable files for example say we have the following variable declaration hcl variable example type list string and then a terraform tfvars file entry hcl example foobar user may not immediately spot the mismatching type here if they have many variables and variables of more complex types attempted solutions manual inspection or terraform validate proposal report mismatching types of variables in any tfvars and tfvars json via textdocument publishdiagnostics
| 0
|
597,340
| 18,161,700,714
|
IssuesEvent
|
2021-09-27 10:20:10
|
AY2122S1-CS2103T-T09-4/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-T09-4/tp
|
opened
|
As a user, I can use the application offline
|
priority.High type.Story
|
... so that I do not need an internet connection to view my contacts
|
1.0
|
As a user, I can use the application offline - ... so that I do not need an internet connection to view my contacts
|
non_defect
|
as a user i can use the application offline so that i do not need an internet connection to view my contacts
| 0
|