| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 5 – 112 |
| repo_url | string | length 34 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 757 |
| labels | string | length 4 – 664 |
| body | string | length 3 – 261k |
| index | string | 10 classes |
| text_combine | string | length 96 – 261k |
| label | string | 2 classes (defect, non_defect) |
| text | string | length 96 – 232k |
| binary_label | int64 | 0 – 1 |
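The schema above pairs a two-class string label (`defect` / `non_defect`) with an integer `binary_label` (1 / 0), and `text_combine`/`text` are derived from `title` and `body`. As a minimal sketch of how the columns relate — using a tiny hand-built stand-in for two of the records below, not the real export — the labels can be cross-checked like this:

```python
import pandas as pd

# A two-row stand-in mimicking the schema above (values abbreviated from
# the first two records shown below; this is illustrative, not the dataset).
df = pd.DataFrame({
    "id": [26_792_798_936.0, 14_434_932_377.0],
    "type": ["IssuesEvent", "IssuesEvent"],
    "created_at": ["2023-02-01 09:44:56", "2020-12-07 07:58:10"],
    "repo": ["MarcusWolschon/osmeditor4android", "cockroachdb/cockroach"],
    "action": ["closed", "opened"],
    "label": ["defect", "non_defect"],
    "binary_label": [1, 0],
})

# binary_label appears to mirror label: 1 for defect, 0 for non_defect.
assert (df["binary_label"] == (df["label"] == "defect").astype(int)).all()

# Class counts for the two-way label.
counts = df["label"].value_counts().to_dict()
print(counts)
```

The same `value_counts()` check on the full export would reveal the actual class balance between defect and non-defect issues.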
77,133
| 26,792,798,936
|
IssuesEvent
|
2023-02-01 09:44:56
|
MarcusWolschon/osmeditor4android
|
https://api.github.com/repos/MarcusWolschon/osmeditor4android
|
closed
|
No playback when adding layer from GPX file
|
Defect Minor
|
Today I tested using
- Add layer from GPX file, then
- Start playback.
I found no playback ensued.

All I know is I had to remember to push Stop Playback later otherwise I feared that something was going on that I could not see, using up CPU cycles.
I was using tracks made by
https://play.google.com/store/apps/details?id=net.osmtracker , with plenty of waypoints in them.
|
1.0
|
No playback when adding layer from GPX file - Today I tested using
- Add layer from GPX file, then
- Start playback.
I found no playback ensued.

All I know is I had to remember to push Stop Playback later otherwise I feared that something was going on that I could not see, using up CPU cycles.
I was using tracks made by
https://play.google.com/store/apps/details?id=net.osmtracker , with plenty of waypoints in them.
|
defect
|
no playback when adding layer from gpx file today i tested using add layer from gpx file then start playback i found no playback ensued all i know is i had to remember to push stop playback later otherwise i feared that something was going on that i could not see using up cpu cycles i was using tracks made by with plenty of waypoints in them
| 1
|
187,950
| 14,434,932,377
|
IssuesEvent
|
2020-12-07 07:58:10
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: disk-stalled/log=false,data=false failed
|
C-test-failure O-roachtest O-robot branch-release-20.2 release-blocker
|
[(roachtest).disk-stalled/log=false,data=false failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2500126&tab=buildLog) on [release-20.2@c15afb605a18772689caf46c3dd74c4aca33badd](https://github.com/cockroachdb/cockroach/commits/c15afb605a18772689caf46c3dd74c4aca33badd):
```
| | ```
| | set -exuo pipefail;
| | thrift_dir="/opt/thrift"
| |
| | if [ ! -f "/usr/bin/thrift" ]; then
| | sudo apt-get update;
| | sudo apt-get install -qy automake bison flex g++ git libboost-all-dev libevent-dev libssl-dev libtool make pkg-config python-setuptools libglib2.0-dev
| |
| | sudo mkdir -p "${thrift_dir}"
| | sudo chmod 777 "${thrift_dir}"
| | cd "${thrift_dir}"
| | curl "https://downloads.apache.org/thrift/0.13.0/thrift-0.13.0.tar.gz" | sudo tar xvz --strip-components 1
| | sudo ./configure --prefix=/usr
| | sudo make -j$(nproc)
| | sudo make install
| | (cd "${thrift_dir}/lib/py" && sudo python setup.py install)
| | fi
| |
| | charybde_dir="/opt/charybdefs"
| | nemesis_path="${charybde_dir}/charybdefs-nemesis"
| |
| | if [ ! -f "${nemesis_path}" ]; then
| | sudo apt-get install -qy build-essential cmake libfuse-dev fuse
| | sudo rm -rf "${charybde_dir}" "${nemesis_path}" /usr/local/bin/charybdefs{,-nemesis}
| | sudo mkdir -p "${charybde_dir}"
| | sudo chmod 777 "${charybde_dir}"
| | # TODO(bilal): Change URL back to scylladb/charybdefs once https://github.com/scylladb/charybdefs/pull/21 is merged.
| | git clone --depth 1 "https://github.com/itsbilal/charybdefs.git" "${charybde_dir}"
| |
| | cd "${charybde_dir}"
| | thrift -r --gen cpp server.thrift
| | cmake CMakeLists.txt
| | make -j$(nproc)
| |
| | sudo modprobe fuse
| | sudo ln -s "${charybde_dir}/charybdefs" /usr/local/bin/charybdefs
| | cat > "${nemesis_path}" <<EOF
| | #!/bin/bash
| | cd /opt/charybdefs/cookbook
| | ./recipes "\$@"
| | EOF
| | chmod +x "${nemesis_path}"
| | sudo ln -s "${nemesis_path}" /usr/local/bin/charybdefs-nemesis
| | fi
| |
| | ```
| Wraps: (3) exit status 2
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
Wraps: (2) exit status 20
Error types: (1) *main.withCommandDetails (2) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/disk-stalled/log=false,data=false](https://teamcity.cockroachdb.com/viewLog.html?buildId=2500126&tab=artifacts#/disk-stalled/log=false,data=false)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Adisk-stalled%2Flog%3Dfalse%2Cdata%3Dfalse.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: disk-stalled/log=false,data=false failed - [(roachtest).disk-stalled/log=false,data=false failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2500126&tab=buildLog) on [release-20.2@c15afb605a18772689caf46c3dd74c4aca33badd](https://github.com/cockroachdb/cockroach/commits/c15afb605a18772689caf46c3dd74c4aca33badd):
```
| | ```
| | set -exuo pipefail;
| | thrift_dir="/opt/thrift"
| |
| | if [ ! -f "/usr/bin/thrift" ]; then
| | sudo apt-get update;
| | sudo apt-get install -qy automake bison flex g++ git libboost-all-dev libevent-dev libssl-dev libtool make pkg-config python-setuptools libglib2.0-dev
| |
| | sudo mkdir -p "${thrift_dir}"
| | sudo chmod 777 "${thrift_dir}"
| | cd "${thrift_dir}"
| | curl "https://downloads.apache.org/thrift/0.13.0/thrift-0.13.0.tar.gz" | sudo tar xvz --strip-components 1
| | sudo ./configure --prefix=/usr
| | sudo make -j$(nproc)
| | sudo make install
| | (cd "${thrift_dir}/lib/py" && sudo python setup.py install)
| | fi
| |
| | charybde_dir="/opt/charybdefs"
| | nemesis_path="${charybde_dir}/charybdefs-nemesis"
| |
| | if [ ! -f "${nemesis_path}" ]; then
| | sudo apt-get install -qy build-essential cmake libfuse-dev fuse
| | sudo rm -rf "${charybde_dir}" "${nemesis_path}" /usr/local/bin/charybdefs{,-nemesis}
| | sudo mkdir -p "${charybde_dir}"
| | sudo chmod 777 "${charybde_dir}"
| | # TODO(bilal): Change URL back to scylladb/charybdefs once https://github.com/scylladb/charybdefs/pull/21 is merged.
| | git clone --depth 1 "https://github.com/itsbilal/charybdefs.git" "${charybde_dir}"
| |
| | cd "${charybde_dir}"
| | thrift -r --gen cpp server.thrift
| | cmake CMakeLists.txt
| | make -j$(nproc)
| |
| | sudo modprobe fuse
| | sudo ln -s "${charybde_dir}/charybdefs" /usr/local/bin/charybdefs
| | cat > "${nemesis_path}" <<EOF
| | #!/bin/bash
| | cd /opt/charybdefs/cookbook
| | ./recipes "\$@"
| | EOF
| | chmod +x "${nemesis_path}"
| | sudo ln -s "${nemesis_path}" /usr/local/bin/charybdefs-nemesis
| | fi
| |
| | ```
| Wraps: (3) exit status 2
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
Wraps: (2) exit status 20
Error types: (1) *main.withCommandDetails (2) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/disk-stalled/log=false,data=false](https://teamcity.cockroachdb.com/viewLog.html?buildId=2500126&tab=artifacts#/disk-stalled/log=false,data=false)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Adisk-stalled%2Flog%3Dfalse%2Cdata%3Dfalse.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_defect
|
roachtest disk stalled log false data false failed on set exuo pipefail thrift dir opt thrift if then sudo apt get update sudo apt get install qy automake bison flex g git libboost all dev libevent dev libssl dev libtool make pkg config python setuptools dev sudo mkdir p thrift dir sudo chmod thrift dir cd thrift dir curl sudo tar xvz strip components sudo configure prefix usr sudo make j nproc sudo make install cd thrift dir lib py sudo python setup py install fi charybde dir opt charybdefs nemesis path charybde dir charybdefs nemesis if then sudo apt get install qy build essential cmake libfuse dev fuse sudo rm rf charybde dir nemesis path usr local bin charybdefs nemesis sudo mkdir p charybde dir sudo chmod charybde dir todo bilal change url back to scylladb charybdefs once is merged git clone depth charybde dir cd charybde dir thrift r gen cpp server thrift cmake cmakelists txt make j nproc sudo modprobe fuse sudo ln s charybde dir charybdefs usr local bin charybdefs cat nemesis path eof bin bash cd opt charybdefs cookbook recipes eof chmod x nemesis path sudo ln s nemesis path usr local bin charybdefs nemesis fi wraps exit status error types errors cmd hintdetail withdetail exec exiterror wraps exit status error types main withcommanddetails exec exiterror more artifacts powered by
| 0
|
14,241
| 2,795,455,128
|
IssuesEvent
|
2015-05-11 22:07:27
|
revelc/formatter-maven-plugin
|
https://api.github.com/repos/revelc/formatter-maven-plugin
|
closed
|
plugin ignores line wrapping settings
|
auto-migrated Priority-Medium Type-Defect
|
```
Expected output:
private static EntityManagerFactory emf = Persistence.createEntityManagerFactory("myHappyService", Collections.emptyMap());
Actual output:
private static EntityManagerFactory emf = Persistence
.createEntityManagerFactory("myHappyService",
Collections.emptyMap());
Steps to reproduce:
1. have a java file with more than 80 characters in one line
2. run maven-java-formatter-plugin
Plugin version: 0.4
Maven version: 3.1.1
Java version: 1.7.51
Eclipse version: jee, 4.3.1
OS: ubuntu linux
Additional details:
$ grep 140 ~/CodeFormat.xml
<setting id="org.eclipse.jdt.core.formatter.lineSplit" value="140"/>
<setting id="org.eclipse.jdt.core.formatter.comment.line_length" value="140"/>
When I use the format file within eclipse, it see the expected, non wrapped
output, but the maven-plugin seems to ignore the wrap settings. I've tried 120,
140, 200 and even 400, nothing happens. I tried number lower than 80 and that
did not work as well. So, my guess is, the line wrapping is completely ignored.
```
Original issue reported on code.google.com by `matthias.coy@gmail.com` on 14 Feb 2014 at 5:23
|
1.0
|
plugin ignores line wrapping settings - ```
Expected output:
private static EntityManagerFactory emf = Persistence.createEntityManagerFactory("myHappyService", Collections.emptyMap());
Actual output:
private static EntityManagerFactory emf = Persistence
.createEntityManagerFactory("myHappyService",
Collections.emptyMap());
Steps to reproduce:
1. have a java file with more than 80 characters in one line
2. run maven-java-formatter-plugin
Plugin version: 0.4
Maven version: 3.1.1
Java version: 1.7.51
Eclipse version: jee, 4.3.1
OS: ubuntu linux
Additional details:
$ grep 140 ~/CodeFormat.xml
<setting id="org.eclipse.jdt.core.formatter.lineSplit" value="140"/>
<setting id="org.eclipse.jdt.core.formatter.comment.line_length" value="140"/>
When I use the format file within eclipse, it see the expected, non wrapped
output, but the maven-plugin seems to ignore the wrap settings. I've tried 120,
140, 200 and even 400, nothing happens. I tried number lower than 80 and that
did not work as well. So, my guess is, the line wrapping is completely ignored.
```
Original issue reported on code.google.com by `matthias.coy@gmail.com` on 14 Feb 2014 at 5:23
|
defect
|
plugin ignores line wrapping settings expected output private static entitymanagerfactory emf persistence createentitymanagerfactory myhappyservice collections emptymap actual output private static entitymanagerfactory emf persistence createentitymanagerfactory myhappyservice collections emptymap steps to reproduce have a java file with more than characters in one line run maven java formatter plugin plugin version maven version java version eclipse version jee os ubuntu linux additional details grep codeformat xml when i use the format file within eclipse it see the expected non wrapped output but the maven plugin seems to ignore the wrap settings i ve tried and even nothing happens i tried number lower than and that did not work as well so my guess is the line wrapping is completely ignored original issue reported on code google com by matthias coy gmail com on feb at
| 1
|
56,617
| 15,222,546,771
|
IssuesEvent
|
2021-02-18 00:32:01
|
naev/naev
|
https://api.github.com/repos/naev/naev
|
closed
|
Gprof lists planet_exists as the most expensive function in the game
|
Priority-Medium Type-Defect
|
```
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
14.63 1.34 1.34 449585 0.00 0.00 planet_exists
13.65 2.59 1.25 1911404 0.00 0.00 pilot_update
...
```
Am I reading this wrong, or is this seriously claiming `planet_exists()` accounts for 14.6 percent of game's CPU time? This seems rather absurd. (It should be a very insignificant component).
It's called in various places, including (perhaps significantly) the definition of `planet_isKnown()`. I'm not really sure why it's used there. But it is definitely not desirable for that to invoke a O(n) function that performs up to `planet_nstack` string comparisons.
|
1.0
|
Gprof lists planet_exists as the most expensive function in the game - ```
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
14.63 1.34 1.34 449585 0.00 0.00 planet_exists
13.65 2.59 1.25 1911404 0.00 0.00 pilot_update
...
```
Am I reading this wrong, or is this seriously claiming `planet_exists()` accounts for 14.6 percent of game's CPU time? This seems rather absurd. (It should be a very insignificant component).
It's called in various places, including (perhaps significantly) the definition of `planet_isKnown()`. I'm not really sure why it's used there. But it is definitely not desirable for that to invoke a O(n) function that performs up to `planet_nstack` string comparisons.
|
defect
|
gprof lists planet exists as the most expensive function in the game flat profile each sample counts as seconds cumulative self self total time seconds seconds calls ms call ms call name planet exists pilot update am i reading this wrong or is this seriously claiming planet exists accounts for percent of game s cpu time this seems rather absurd it should be a very insignificant component it s called in various places including perhaps significantly the definition of planet isknown i m not really sure why it s used there but it is definitely not desirable for that to invoke a o n function that performs up to planet nstack string comparisons
| 1
|
742,727
| 25,867,292,055
|
IssuesEvent
|
2022-12-13 22:07:59
|
solgenomics/sgn
|
https://api.github.com/repos/solgenomics/sgn
|
closed
|
Genotyping "shift" issue when using "." in vcf upload/download
|
Priority: Critical Type: Bug
|
Expected Behavior <!-- Describe the desired or expected behavour here. -->
--------------------------------------------------------------------------
As reported by Jeffrey Endelman:
To clarify, the problem is not that we are missing markers. The markers are there, but Breedbase removed the two fields with “.” when exporting the data, shifting all of the data on that row. Here is a toy example:
INPUT file
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample1
chr01 100 m1 A B . . . GT:AD 0/1.:44,88
chr01 110 m2 A B . . . GT:AD 1/1:0,81
. . m3 A B . . . GT:AD 1/1:2,72
OUTPUT
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample1
chr01 100 m1 A B . . . GT:AD 0/1.:44,88
chr01 110 m2 A B . . . GT:AD 1/1:0,81
m3 A B . . . GT:AD 1/1:2,72
For Bugs:
---------
### Environment
<!-- Where did you encounter the error. -->
#### Steps to Reproduce
<!-- Provide an example, or an unambiguous set of steps to reproduce -->
<!-- this bug. Include code to reproduce, if relevant. -->
|
1.0
|
Genotyping "shift" issue when using "." in vcf upload/download - Expected Behavior <!-- Describe the desired or expected behavour here. -->
--------------------------------------------------------------------------
As reported by Jeffrey Endelman:
To clarify, the problem is not that we are missing markers. The markers are there, but Breedbase removed the two fields with “.” when exporting the data, shifting all of the data on that row. Here is a toy example:
INPUT file
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample1
chr01 100 m1 A B . . . GT:AD 0/1.:44,88
chr01 110 m2 A B . . . GT:AD 1/1:0,81
. . m3 A B . . . GT:AD 1/1:2,72
OUTPUT
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample1
chr01 100 m1 A B . . . GT:AD 0/1.:44,88
chr01 110 m2 A B . . . GT:AD 1/1:0,81
m3 A B . . . GT:AD 1/1:2,72
For Bugs:
---------
### Environment
<!-- Where did you encounter the error. -->
#### Steps to Reproduce
<!-- Provide an example, or an unambiguous set of steps to reproduce -->
<!-- this bug. Include code to reproduce, if relevant. -->
|
non_defect
|
genotyping shift issue when using in vcf upload download expected behavior as reported by jeffrey endelman to clarify the problem is not that we are missing markers the markers are there but breedbase removed the two fields with “ ” when exporting the data shifting all of the data on that row here is a toy example input file chrom pos id ref alt qual filter info format a b gt ad a b gt ad a b gt ad output chrom pos id ref alt qual filter info format a b gt ad a b gt ad a b gt ad for bugs environment steps to reproduce
| 0
|
65,026
| 19,028,716,349
|
IssuesEvent
|
2021-11-24 08:19:58
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: Implementation of Bessel Functions in Scipy
|
defect
|
### Describe your issue.
According to the documentation in scipy, the spherical bessel function (https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.spherical_jn.html#r1a410864550e-3)
is implemented using a recursion relation provided in the documentation:
(https://dlmf.nist.gov/10.51#E1)
However, there are two issues I seem to have with this:
1) The documentation says that for arguments bigger than the order, reference [1] is used and for small arguments
reference [2] is used. However, when rearranging reference [2], I get _the same equation_ as in reference [1], which seems
very suspicious to me. Can someone explain this?
2) When using the relation given in the reference, it only seems to hold for arguments bigger than the order
but not for arguments smaller than the order, even though the documentation specifies that for small arguments
the same relation is used.
_PLEASE_ clarify this. It is a very important and urgent issue for me.
### Reproducing Code Example
```python
#Implementation using the derivative definition for the spherical bessel functions
def besselrecursion(n,x):
if n >= 2:
result = (2*(n-1)+1)/x * besselrecursion(n-1,x) - besselrecursion(n-2,x)
elif n == 1:
result = np.sin(x)/x**2 - np.cos(x)/x
elif n == 0:
result = np.sin(x)/x
return result
#Alternative definition of the recurrence relation using the implemented jn directly
'''
def besselrecursion(n,x):
if n >= 2:
result = (2*(n-1)+1)/x * besselrecursion(n-1,x) - besselrecursion(n-2,x)
elif n == 1:
result = spherical_jn(1,x)
elif n == 0:
result = spherical_jn(0,x)
return result
'''
#already with order 30 and argument 12 the results vary greatly
z = 12
n = 30
print(besselrecursion(n,z))
print(spherical_jn(n,z))
```
### Error message
```shell
No Error message
```
### SciPy/NumPy/Python version information
1.7.1 1.19.5 sys.version_info(major=3, minor=8, micro=11, releaselevel='final', serial=0)
|
1.0
|
BUG: Implementation of Bessel Functions in Scipy - ### Describe your issue.
According to the documentation in scipy, the spherical bessel function (https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.spherical_jn.html#r1a410864550e-3)
is implemented using a recursion relation provided in the documentation:
(https://dlmf.nist.gov/10.51#E1)
However, there are two issues I seem to have with this:
1) The documentation says that for arguments bigger than the order, reference [1] is used and for small arguments
reference [2] is used. However, when rearranging reference [2], I get _the same equation_ as in reference [1], which seems
very suspicious to me. Can someone explain this?
2) When using the relation given in the reference, it only seems to hold for arguments bigger than the order
but not for arguments smaller than the order, even though the documentation specifies that for small arguments
the same relation is used.
_PLEASE_ clarify this. It is a very important and urgent issue for me.
### Reproducing Code Example
```python
#Implementation using the derivative definition for the spherical bessel functions
def besselrecursion(n,x):
if n >= 2:
result = (2*(n-1)+1)/x * besselrecursion(n-1,x) - besselrecursion(n-2,x)
elif n == 1:
result = np.sin(x)/x**2 - np.cos(x)/x
elif n == 0:
result = np.sin(x)/x
return result
#Alternative definition of the recurrence relation using the implemented jn directly
'''
def besselrecursion(n,x):
if n >= 2:
result = (2*(n-1)+1)/x * besselrecursion(n-1,x) - besselrecursion(n-2,x)
elif n == 1:
result = spherical_jn(1,x)
elif n == 0:
result = spherical_jn(0,x)
return result
'''
#already with order 30 and argument 12 the results vary greatly
z = 12
n = 30
print(besselrecursion(n,z))
print(spherical_jn(n,z))
```
### Error message
```shell
No Error message
```
### SciPy/NumPy/Python version information
1.7.1 1.19.5 sys.version_info(major=3, minor=8, micro=11, releaselevel='final', serial=0)
|
defect
|
bug implementation of bessel functions in scipy describe your issue according to the documentation in scipy the spherical bessel function is implemented using a recursion relation provided in the documentation however there are two issues i seem to have with this the documentation says that for arguments bigger than the order reference is used and for small arguments reference is used however when rearranging reference i get the same equation as in reference which seems very suspicious to me can someone explain this when using the relation given in the reference it only seems to hold for arguments bigger than the order but not for arguments smaller than the order even though the documentation specifies that for small arguments the same relation is used please clarify this it is a very important and urgent issue for me reproducing code example python implementation using the derivative definition for the spherical bessel functions def besselrecursion n x if n result n x besselrecursion n x besselrecursion n x elif n result np sin x x np cos x x elif n result np sin x x return result alternative definition of the recurrence relation using the implemented jn directly def besselrecursion n x if n result n x besselrecursion n x besselrecursion n x elif n result spherical jn x elif n result spherical jn x return result already with order and argument the results vary greatly z n print besselrecursion n z print spherical jn n z error message shell no error message scipy numpy python version information sys version info major minor micro releaselevel final serial
| 1
|
515,217
| 14,952,509,243
|
IssuesEvent
|
2021-01-26 15:37:11
|
airshipit/treasuremap
|
https://api.github.com/repos/airshipit/treasuremap
|
closed
|
Create Elasticsearch-Data composite in airshipctl/treasuremap
|
priority/critical
|
Description :
As a developer I need the ability to to create the Elasticsearch-Data composite in airshipctl/treasuremap
This will include instance of the Elasticsearch function, configured to deploy a set of nodes configured with the Data role. These nodes connect to the Elasticsearch-ingest nodes to form the Elasticsearch cluster
Acceptance Criteria :
- An instance of the Elasticsearch-Data composite using Elasticsearch function, configured to deploy a set of nodes configured with the Data role.
|
1.0
|
Create Elasticsearch-Data composite in airshipctl/treasuremap - Description :
As a developer I need the ability to to create the Elasticsearch-Data composite in airshipctl/treasuremap
This will include instance of the Elasticsearch function, configured to deploy a set of nodes configured with the Data role. These nodes connect to the Elasticsearch-ingest nodes to form the Elasticsearch cluster
Acceptance Criteria :
- An instance of the Elasticsearch-Data composite using Elasticsearch function, configured to deploy a set of nodes configured with the Data role.
|
non_defect
|
create elasticsearch data composite in airshipctl treasuremap description as a developer i need the ability to to create the elasticsearch data composite in airshipctl treasuremap this will include instance of the elasticsearch function configured to deploy a set of nodes configured with the data role these nodes connect to the elasticsearch ingest nodes to form the elasticsearch cluster acceptance criteria an instance of the elasticsearch data composite using elasticsearch function configured to deploy a set of nodes configured with the data role
| 0
|
89,561
| 10,604,060,366
|
IssuesEvent
|
2019-10-10 17:20:19
|
ggerganov/diff-challenge
|
https://api.github.com/repos/ggerganov/diff-challenge
|
opened
|
Automatic PR merging on successful submission
|
documentation
|
Making this repo to automatically test the submitted PRs and potentially merge them if they satisfy the challenge requirements was kind of interesting to me so here is a quick summary:
1. Got a cheap [Linode](https://www.linode.com) server
2. Installed [Jenkins](https://jenkins.io) on it
3. In Jenkins - installed the [GitHub Pull Request Builder plugin](https://wiki.jenkins.io/display/JENKINS/GitHub+pull+request+builder+plugin)
4. Created a Jenkins job to run on each pull request and execute the following scripts:
```bash
# require pull-request has single commit
#
if [ "$(git rev-list --count origin/master..HEAD)" != "1" ] ; then
echo "Error: PR must have a single commit"
exit 1
fi
# run the following command in a docker container
#
# $ bash x.sh > diff
#
/home/run.sh
# require non-null output
#
if [ ! -s diff ] ; then
echo "Error: null output"
exit 2
fi
patch -f x.sh < diff
# require output is valid diff
#
if [ ! $? -eq 0 ] ; then
echo "Error: produced patch is invalid"
exit 3
fi
# require the patch reproduced origin/master -- x.sh
#
if [ "$(git diff origin/master -- x.sh)" != "" ] ; then
echo "Error: the patch does not reproduce the original x.sh"
exit 4
fi
# all checks passed - merge the pull-request !
#
/home/merge.sh ${ghprbPullId}
exit 0
```
5. The `/home/run.sh` script runs the submitted `x.sh` script inside a [Docker](https://www.docker.com) container in order to prevent from people running arbitrary code on my server:
```bash
#!/bin/bash
docker create --name sandbox -t ubuntu
docker start sandbox
docker cp ./x.sh sandbox:/x.sh
docker exec sandbox sh ./x.sh > diff
docker stop sandbox
docker rm sandbox
```
6. The `/home/merge.sh` script performs the actual PR merge using the PR number provided by the Jenkins plugin `${ghprbPullId}`:
```bash
#!/bin/bash
GITHUB_TOKEN=XXXXXXXXXXXSECRETXXXXXXXXXXXXXXXXXX
a=$(curl \
-XPUT \
-H "Authorization: token $GITHUB_TOKEN" \
https://api.github.com/repos/ggerganov/diff-challenge/pulls/$1/merge 2>/dev/null | grep merged)
if [ "$a" == "" ] ; then
echo "Merge of PR $1 failed!"
exit 1
fi
echo "Merge of PR $1 successfull!"
exit 0
```
|
1.0
|
Automatic PR merging on successful submission - Making this repo to automatically test the submitted PRs and potentially merge them if they satisfy the challenge requirements was kind of interesting to me so here is a quick summary:
1. Got a cheap [Linode](https://www.linode.com) server
2. Installed [Jenkins](https://jenkins.io) on it
3. In Jenkins - installed the [GitHub Pull Request Builder plugin](https://wiki.jenkins.io/display/JENKINS/GitHub+pull+request+builder+plugin)
4. Created a Jenkins job to run on each pull request and execute the following scripts:
```bash
# require pull-request has single commit
#
if [ "$(git rev-list --count origin/master..HEAD)" != "1" ] ; then
echo "Error: PR must have a single commit"
exit 1
fi
# run the following command in a docker container
#
# $ bash x.sh > diff
#
/home/run.sh
# require non-null output
#
if [ ! -s diff ] ; then
echo "Error: null output"
exit 2
fi
patch -f x.sh < diff
# require output is valid diff
#
if [ ! $? -eq 0 ] ; then
echo "Error: produced patch is invalid"
exit 3
fi
# require the patch reproduced origin/master -- x.sh
#
if [ "$(git diff origin/master -- x.sh)" != "" ] ; then
echo "Error: the patch does not reproduce the original x.sh"
exit 4
fi
# all checks passed - merge the pull-request !
#
/home/merge.sh ${ghprbPullId}
exit 0
```
5. The `/home/run.sh` script runs the submitted `x.sh` script inside a [Docker](https://www.docker.com) container in order to prevent from people running arbitrary code on my server:
```bash
#!/bin/bash
docker create --name sandbox -t ubuntu
docker start sandbox
docker cp ./x.sh sandbox:/x.sh
docker exec sandbox sh ./x.sh > diff
docker stop sandbox
docker rm sandbox
```
6. The `/home/merge.sh` script performs the actual PR merge using the PR number provided by the Jenkins plugin `${ghprbPullId}`:
```bash
#!/bin/bash
GITHUB_TOKEN=XXXXXXXXXXXSECRETXXXXXXXXXXXXXXXXXX
a=$(curl \
-XPUT \
-H "Authorization: token $GITHUB_TOKEN" \
https://api.github.com/repos/ggerganov/diff-challenge/pulls/$1/merge 2>/dev/null | grep merged)
if [ "$a" == "" ] ; then
echo "Merge of PR $1 failed!"
exit 1
fi
echo "Merge of PR $1 successfull!"
exit 0
```
|
non_defect
|
automatic pr merging on successful submission making this repo to automatically test the submitted prs and potentially merge them if they satisfy the challenge requirements was kind of interesting to me so here is a quick summary got a cheap server installed on it in jenkins installed the created a jenkins job to run on each pull request and execute the following scripts bash require pull request has single commit if then echo error pr must have a single commit exit fi run the following command in a docker container bash x sh diff home run sh require non null output if then echo error null output exit fi patch f x sh diff require output is valid diff if then echo error produced patch is invalid exit fi require the patch reproduced origin master x sh if then echo error the patch does not reproduce the original x sh exit fi all checks passed merge the pull request home merge sh ghprbpullid exit the home run sh script runs the submitted x sh script inside a container in order to prevent from people running arbitrary code on my server bash bin bash docker create name sandbox t ubuntu docker start sandbox docker cp x sh sandbox x sh docker exec sandbox sh x sh diff docker stop sandbox docker rm sandbox the home merge sh script performs the actual pr merge using the pr number provided by the jenkins plugin ghprbpullid bash bin bash github token xxxxxxxxxxxsecretxxxxxxxxxxxxxxxxxx a curl xput h authorization token github token dev null grep merged if then echo merge of pr failed exit fi echo merge of pr successfull exit
| 0
|
391,865
| 11,579,602,269
|
IssuesEvent
|
2020-02-21 18:17:23
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.morele.net - Unable to apply filters
|
browser-focus-geckoview engine-gecko priority-important severity-important
|
<!-- @browser: Firefox Mobile 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/48723 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.morele.net/komputery/monitory-i-akcesoria/monitory-komputerowe-523/
**Browser / Version**: Firefox Mobile 71.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: filtering doesnt load
**Steps to Reproduce**:
Unblocking trackers fixes
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.morele.net - Unable to apply filters - <!-- @browser: Firefox Mobile 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/48723 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.morele.net/komputery/monitory-i-akcesoria/monitory-komputerowe-523/
**Browser / Version**: Firefox Mobile 71.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: filtering doesn't load
**Steps to Reproduce**:
Unblocking trackers fixes
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
unable to apply filters url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description filtering doesnt load steps to reproduce unblocking trackers fixes browser configuration none from with ❤️
| 0
|
146,372
| 13,180,046,259
|
IssuesEvent
|
2020-08-12 12:06:40
|
simphony/osp-core
|
https://api.github.com/repos/simphony/osp-core
|
closed
|
Update the documentation of YAML ontologies, especially the documentation
|
documentation simple fix
|
In GitLab by @urbanmatthias on Jan 16, 2020, 10:07
|
1.0
|
Update the documentation of YAML ontologies, especially the documentation - In GitLab by @urbanmatthias on Jan 16, 2020, 10:07
|
non_defect
|
update the documentation of yaml ontologies especially the documentation in gitlab by urbanmatthias on jan
| 0
|
76,144
| 26,264,017,091
|
IssuesEvent
|
2023-01-06 10:42:03
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: SciPy requires OpenBLAS even when building against a different library
|
defect
|
### Describe your issue.
I tried to build SciPy against Arm Performance Libraries (ArmPL); however, the build fails because of missing OpenBLAS.
For context, I was able to build NumPy against ArmPL using the same `site.cfg`.
### Reproducing Code Example
```python
I used the following `site.cfg` file:
# ArmPL
# -----
[openblas]
libraries = armpl_lp64
library_dirs = /mnt/data/opt/lib
include_dirs = /mnt/data/opt/include
runtime_library_dirs = /mnt/data/opt/lib
```
Build with `pip install .`
```
### Error message
```shell
Processing /mnt/data/andreac/scipy-1.9.3
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [48 lines of output]
The Meson build system
Version: 1.0.0
Source dir: /mnt/data/andreac/scipy-1.9.3
Build dir: /mnt/data/andreac/scipy-1.9.3/.mesonpy-_dtofmia/build
Build type: native build
Project name: SciPy
Project version: 1.9.3
C compiler for the host machine: cc (gcc 7.5.0 "cc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0")
C linker for the host machine: cc ld.bfd 2.30
C++ compiler for the host machine: c++ (gcc 7.5.0 "c++ (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0")
C++ linker for the host machine: c++ ld.bfd 2.30
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: YES
Compiler for C supports arguments -Wno-unused-but-set-variable: YES (cached)
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
Fortran compiler for the host machine: gfortran (gcc 7.5.0 "GNU Fortran (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0")
Fortran linker for the host machine: gfortran ld.bfd 2.30
Compiler for Fortran supports arguments -Wno-conversion: YES
Program cython found: YES (/tmp/pip-build-env-34ij7s89/overlay/bin/cython)
Program pythran found: YES (/tmp/pip-build-env-34ij7s89/overlay/bin/pythran)
Program cp found: YES (/bin/cp)
Program python found: YES (/mnt/data/opt/bin/python3.9)
Found pkg-config: /usr/bin/pkg-config (0.29.1)
Run-time dependency threads found: YES
Library npymath found: YES
Library npyrandom found: YES
Found CMake: /usr/bin/cmake (3.10.2)
DEPRECATION: CMake support for versions <3.17 is deprecated since Meson 0.62.0.
|
| However, Meson was only able to find CMake 3.10.2.
|
| Support for all CMake versions below 3.17.0 will be removed once
| newer CMake versions are more widely adopted. If you encounter
| any errors please try upgrading CMake to a newer version first.
WARNING: CMake Toolchain: Failed to determine CMake compilers state
Run-time dependency openblas found: NO (tried pkgconfig and cmake)
Run-time dependency openblas found: NO (tried pkgconfig and cmake)
../../scipy/meson.build:131:0: ERROR: Dependency "OpenBLAS" not found, tried pkgconfig and cmake
A full log can be found at /mnt/data/andreac/scipy-1.9.3/.mesonpy-_dtofmia/build/meson-logs/meson-log.txt
+ meson setup --prefix=/mnt/data/opt /mnt/data/andreac/scipy-1.9.3 /mnt/data/andreac/scipy-1.9.3/.mesonpy-_dtofmia/build --native-file=/mnt/data/andreac/scipy-1.9.3/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### SciPy/NumPy/Python version information
``` ------ os.name='posix' ------ sys.platform='linux' ------ sys.version: 3.9.15 (main, Jan 5 2023, 05:16:31) [GCC 7.5.0] ------ sys.prefix: /mnt/data/opt ------ sys.path=':/mnt/data/opt/lib/python39.zip:/mnt/data/opt/lib/python3.9:/mnt/data/opt/lib/python3.9/lib-dynload:/mnt/data/opt/lib/python3.9/site-packages' ------ Found new numpy version '1.24.1' in /mnt/data/opt/lib/python3.9/site-packages/numpy/__init__.py Found f2py2e version '1.24.1' in /mnt/data/opt/lib/python3.9/site-packages/numpy/f2py/f2py2e.py error: module 'numpy.distutils' has no attribute '__version__' ------ Importing numpy.distutils.fcompiler ... ok ------ Checking availability of supported Fortran compilers: Gnu95FCompiler instance properties: archiver = ['/usr/bin/gfortran', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/bin/gfortran', '-Wall', '-g', '-ffixed-form', '- fno-second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = ['/usr/bin/gfortran', '-Wall', '-g', '-fno-second- underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_fix = ['/usr/bin/gfortran', '-Wall', '-g', '-ffixed-form', '- fno-second-underscore', '-Wall', '-g', '-fno-second- underscore', '-fPIC', '-O3', '-funroll-loops'] libraries = ['gfortran'] library_dirs = ['/usr/lib/gcc/aarch64-linux-gnu/7', '/usr/lib/gcc/aarch64-linux-gnu/7'] linker_exe = ['/usr/bin/gfortran', '-Wall', '-Wall'] linker_so = ['/usr/bin/gfortran', '-Wall', '-g', '-Wall', '-g', '- shared'] object_switch = '-o ' ranlib = ['/usr/bin/gfortran'] version = LooseVersion ('7') version_cmd = ['/usr/bin/gfortran', '-dumpversion'] Fortran compilers found: --fcompiler=gnu95 GNU Fortran 95 compiler (7) Compilers available for this platform, but not found: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=arm Arm Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=fujitsu Fujitsu Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu GNU Fortran 77 compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps 
--fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for 64-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=nagfor NAG Fortran Compiler --fcompiler=nv NVIDIA HPC SDK --fcompiler=pathf95 PathScale Fortran Compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler Compilers not available on this platform: --fcompiler=flang Portland Group Fortran LLVM Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=none Fake Fortran compiler --fcompiler=sun Sun or Forte Fortran 95 Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: CPUInfoBase__get_nbits getNCPUs is_64bit ------ ```
|
1.0
|
BUG: SciPy requires OpenBLAS even when building against a different library - ### Describe your issue.
I tried to build SciPy against Arm Performance Libraries (ArmPL); however, the build fails because of missing OpenBLAS.
For context, I was able to build NumPy against ArmPL using the same `site.cfg`.
### Reproducing Code Example
```python
I used the following `site.cfg` file:
# ArmPL
# -----
[openblas]
libraries = armpl_lp64
library_dirs = /mnt/data/opt/lib
include_dirs = /mnt/data/opt/include
runtime_library_dirs = /mnt/data/opt/lib
```
Build with `pip install .`
```
### Error message
```shell
Processing /mnt/data/andreac/scipy-1.9.3
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [48 lines of output]
The Meson build system
Version: 1.0.0
Source dir: /mnt/data/andreac/scipy-1.9.3
Build dir: /mnt/data/andreac/scipy-1.9.3/.mesonpy-_dtofmia/build
Build type: native build
Project name: SciPy
Project version: 1.9.3
C compiler for the host machine: cc (gcc 7.5.0 "cc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0")
C linker for the host machine: cc ld.bfd 2.30
C++ compiler for the host machine: c++ (gcc 7.5.0 "c++ (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0")
C++ linker for the host machine: c++ ld.bfd 2.30
Host machine cpu family: aarch64
Host machine cpu: aarch64
Compiler for C supports arguments -Wno-unused-but-set-variable: YES
Compiler for C supports arguments -Wno-unused-but-set-variable: YES (cached)
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Compiler for C supports arguments -Wno-incompatible-pointer-types: YES
Library m found: YES
Fortran compiler for the host machine: gfortran (gcc 7.5.0 "GNU Fortran (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0")
Fortran linker for the host machine: gfortran ld.bfd 2.30
Compiler for Fortran supports arguments -Wno-conversion: YES
Program cython found: YES (/tmp/pip-build-env-34ij7s89/overlay/bin/cython)
Program pythran found: YES (/tmp/pip-build-env-34ij7s89/overlay/bin/pythran)
Program cp found: YES (/bin/cp)
Program python found: YES (/mnt/data/opt/bin/python3.9)
Found pkg-config: /usr/bin/pkg-config (0.29.1)
Run-time dependency threads found: YES
Library npymath found: YES
Library npyrandom found: YES
Found CMake: /usr/bin/cmake (3.10.2)
DEPRECATION: CMake support for versions <3.17 is deprecated since Meson 0.62.0.
|
| However, Meson was only able to find CMake 3.10.2.
|
| Support for all CMake versions below 3.17.0 will be removed once
| newer CMake versions are more widely adopted. If you encounter
| any errors please try upgrading CMake to a newer version first.
WARNING: CMake Toolchain: Failed to determine CMake compilers state
Run-time dependency openblas found: NO (tried pkgconfig and cmake)
Run-time dependency openblas found: NO (tried pkgconfig and cmake)
../../scipy/meson.build:131:0: ERROR: Dependency "OpenBLAS" not found, tried pkgconfig and cmake
A full log can be found at /mnt/data/andreac/scipy-1.9.3/.mesonpy-_dtofmia/build/meson-logs/meson-log.txt
+ meson setup --prefix=/mnt/data/opt /mnt/data/andreac/scipy-1.9.3 /mnt/data/andreac/scipy-1.9.3/.mesonpy-_dtofmia/build --native-file=/mnt/data/andreac/scipy-1.9.3/.mesonpy-native-file.ini -Ddebug=false -Doptimization=2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### SciPy/NumPy/Python version information
``` ------ os.name='posix' ------ sys.platform='linux' ------ sys.version: 3.9.15 (main, Jan 5 2023, 05:16:31) [GCC 7.5.0] ------ sys.prefix: /mnt/data/opt ------ sys.path=':/mnt/data/opt/lib/python39.zip:/mnt/data/opt/lib/python3.9:/mnt/data/opt/lib/python3.9/lib-dynload:/mnt/data/opt/lib/python3.9/site-packages' ------ Found new numpy version '1.24.1' in /mnt/data/opt/lib/python3.9/site-packages/numpy/__init__.py Found f2py2e version '1.24.1' in /mnt/data/opt/lib/python3.9/site-packages/numpy/f2py/f2py2e.py error: module 'numpy.distutils' has no attribute '__version__' ------ Importing numpy.distutils.fcompiler ... ok ------ Checking availability of supported Fortran compilers: Gnu95FCompiler instance properties: archiver = ['/usr/bin/gfortran', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/bin/gfortran', '-Wall', '-g', '-ffixed-form', '- fno-second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = ['/usr/bin/gfortran', '-Wall', '-g', '-fno-second- underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_fix = ['/usr/bin/gfortran', '-Wall', '-g', '-ffixed-form', '- fno-second-underscore', '-Wall', '-g', '-fno-second- underscore', '-fPIC', '-O3', '-funroll-loops'] libraries = ['gfortran'] library_dirs = ['/usr/lib/gcc/aarch64-linux-gnu/7', '/usr/lib/gcc/aarch64-linux-gnu/7'] linker_exe = ['/usr/bin/gfortran', '-Wall', '-Wall'] linker_so = ['/usr/bin/gfortran', '-Wall', '-g', '-Wall', '-g', '- shared'] object_switch = '-o ' ranlib = ['/usr/bin/gfortran'] version = LooseVersion ('7') version_cmd = ['/usr/bin/gfortran', '-dumpversion'] Fortran compilers found: --fcompiler=gnu95 GNU Fortran 95 compiler (7) Compilers available for this platform, but not found: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=arm Arm Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=fujitsu Fujitsu Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu GNU Fortran 77 compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps 
--fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for 64-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=nagfor NAG Fortran Compiler --fcompiler=nv NVIDIA HPC SDK --fcompiler=pathf95 PathScale Fortran Compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler Compilers not available on this platform: --fcompiler=flang Portland Group Fortran LLVM Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=none Fake Fortran compiler --fcompiler=sun Sun or Forte Fortran 95 Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: CPUInfoBase__get_nbits getNCPUs is_64bit ------ ```
|
defect
|
bug scipy requires openblas even when building against a different library describe your issue i tried to build scipy against arm performance libraries armpl however the build fails because of missing openblas for context i was able to build numpy against armpl using the same site cfg reproducing code example python i used the following site cfg file armpl libraries armpl library dirs mnt data opt lib include dirs mnt data opt include runtime library dirs mnt data opt lib build with pip install error message shell processing mnt data andreac scipy installing build dependencies done getting requirements to build wheel done installing backend dependencies done preparing metadata pyproject toml error error subprocess exited with error × preparing metadata pyproject toml did not run successfully │ exit code ╰─ the meson build system version source dir mnt data andreac scipy build dir mnt data andreac scipy mesonpy dtofmia build build type native build project name scipy project version c compiler for the host machine cc gcc cc ubuntu linaro c linker for the host machine cc ld bfd c compiler for the host machine c gcc c ubuntu linaro c linker for the host machine c ld bfd host machine cpu family host machine cpu compiler for c supports arguments wno unused but set variable yes compiler for c supports arguments wno unused but set variable yes cached compiler for c supports arguments wno unused function yes compiler for c supports arguments wno conversion yes compiler for c supports arguments wno misleading indentation yes compiler for c supports arguments wno incompatible pointer types yes library m found yes fortran compiler for the host machine gfortran gcc gnu fortran ubuntu linaro fortran linker for the host machine gfortran ld bfd compiler for fortran supports arguments wno conversion yes program cython found yes tmp pip build env overlay bin cython program pythran found yes tmp pip build env overlay bin pythran program cp found yes bin cp program python found yes 
mnt data opt bin found pkg config usr bin pkg config run time dependency threads found yes library npymath found yes library npyrandom found yes found cmake usr bin cmake deprecation cmake support for versions is deprecated since meson however meson was only able to find cmake support for all cmake versions below will be removed once newer cmake versions are more widely adopted if you encounter any errors please try upgrading cmake to a newer version first warning cmake toolchain failed to determine cmake compilers state run time dependency openblas found no tried pkgconfig and cmake run time dependency openblas found no tried pkgconfig and cmake scipy meson build error dependency openblas not found tried pkgconfig and cmake a full log can be found at mnt data andreac scipy mesonpy dtofmia build meson logs meson log txt meson setup prefix mnt data opt mnt data andreac scipy mnt data andreac scipy mesonpy dtofmia build native file mnt data andreac scipy mesonpy native file ini ddebug false doptimization note this error originates from a subprocess and is likely not a problem with pip error metadata generation failed × encountered error while generating package metadata ╰─ see above for output note this is an issue with the package mentioned above not pip hint see above for details scipy numpy python version information os name posix sys platform linux sys version main jan sys prefix mnt data opt sys path mnt data opt lib zip mnt data opt lib mnt data opt lib lib dynload mnt data opt lib site packages found new numpy version in mnt data opt lib site packages numpy init py found version in mnt data opt lib site packages numpy py error module numpy distutils has no attribute version importing numpy distutils fcompiler ok checking availability of supported fortran compilers instance properties archiver compile switch c compiler compiler compiler fix libraries library dirs linker exe linker so object switch o ranlib version looseversion version cmd fortran compilers 
found fcompiler gnu fortran compiler compilers available for this platform but not found fcompiler absoft absoft corp fortran compiler fcompiler arm arm compiler fcompiler compaq compaq fortran compiler fcompiler fujitsu fujitsu fortran compiler fcompiler fortran compiler fcompiler gnu gnu fortran compiler fcompiler intel intel fortran compiler for bit apps fcompiler intele intel fortran compiler for itanium apps fcompiler intelem intel fortran compiler for bit apps fcompiler lahey lahey fujitsu fortran compiler fcompiler nag nagware fortran compiler fcompiler nagfor nag fortran compiler fcompiler nv nvidia hpc sdk fcompiler pathscale fortran compiler fcompiler pg portland group fortran compiler fcompiler vast pacific sierra research fortran compiler compilers not available on this platform fcompiler flang portland group fortran llvm compiler fcompiler hpux hp fortran compiler fcompiler ibm ibm xl fortran compiler fcompiler intelev intel visual fortran compiler for itanium apps fcompiler intelv intel visual fortran compiler for bit apps fcompiler intelvem intel visual fortran compiler for bit apps fcompiler mips mipspro fortran compiler fcompiler none fake fortran compiler fcompiler sun sun or forte fortran compiler for compiler details run config fc verbose setup command importing numpy distutils cpuinfo ok cpu information cpuinfobase get nbits getncpus is
| 1
|
71,893
| 23,843,870,427
|
IssuesEvent
|
2022-09-06 12:37:34
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
opened
|
dnsdist: 'Fake' entries inserted into the ring buffers on timeouts have wrong flags
|
defect dnsdist backport to dnsdist-1.7.x
|
- Program: dnsdist <!-- delete the ones that do not apply -->
- Issue type: Bug report
### Short description
When we detect a timeout while waiting for a UDP response from a backend, we insert a 'fake' entry into the ring buffers as a placeholder. This is useful to be able to look for which queries are causing a timeout using `grepq()`, among other things.
When we do that, we set the correct ID so that the response matches the query but we do not set the rest of the flags, which means `RD` does not show up in `grepq()`. We should fix that.
|
1.0
|
dnsdist: 'Fake' entries inserted into the ring buffers on timeouts have wrong flags -
- Program: dnsdist <!-- delete the ones that do not apply -->
- Issue type: Bug report
### Short description
When we detect a timeout while waiting for a UDP response from a backend, we insert a 'fake' entry into the ring buffers as a placeholder. This is useful to be able to look for which queries are causing a timeout using `grepq()`, among other things.
When we do that, we set the correct ID so that the response matches the query but we do not set the rest of the flags, which means `RD` does not show up in `grepq()`. We should fix that.
|
defect
|
dnsdist fake entries inserted into the ring buffers on timeouts have wrong flags program dnsdist issue type bug report short description when we detect a timeout while waiting for a udp response from a backend we insert a fake entry into the ring buffers as a placeholder this is useful to be able to look for which queries are causing a timeout using grepq among other things when we do that we set the correct id so that the response matches the query but we do not set the rest of the flags which means rd does not show up in grepq we should fix that
| 1
|
436,113
| 30,538,023,268
|
IssuesEvent
|
2023-07-19 18:55:48
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
GitHub Access Changes
|
platform-content-team documentation-support pw-footer-feedback
|
### Description
It has been shared with me that some teams have trouble using our GitHub access information when teams are moving into the VA.gov space for the first time. These teams do not have a program manager who has access to GitHub so they cannot create a request for access because those managers don't have a GH account either. We need additional information that details what to do when our step 1 isn't possible. The GitHub Handbook provides the best information for this case, so I have linked it below.
The GitHub handbook is the single source of truth on this, so it may be best just to link there.
### Relevant URLs
[Platform Website link
](https://depo-platform-documentation.scrollhelp.site/getting-started/request-access-to-tools#Requestaccesstotools-GitHub)[GitHub Handbook
](https://department-of-veterans-affairs.github.io/github-handbook/guides/onboarding/getting-access)
### Which type of team are you on? (Platform team, VFS team, or Leadership)
VA Leadership
|
1.0
|
GitHub Access Changes - ### Description
It has been shared with me that some teams have trouble using our GitHub access information when teams are moving into the VA.gov space for the first time. These teams do not have a program manager who has access to GitHub so they cannot create a request for access because those managers don't have a GH account either. We need additional information that details what to do when our step 1 isn't possible. The GitHub Handbook provides the best information for this case, so I have linked it below.
The GitHub handbook is the single source of truth on this, so it may be best just to link there.
### Relevant URLs
[Platform Website link
](https://depo-platform-documentation.scrollhelp.site/getting-started/request-access-to-tools#Requestaccesstotools-GitHub)[GitHub Handbook
](https://department-of-veterans-affairs.github.io/github-handbook/guides/onboarding/getting-access)
### Which type of team are you on? (Platform team, VFS team, or Leadership)
VA Leadership
|
non_defect
|
github access changes description it has been shared with me that some teams have trouble using our github access information when teams are moving into the va gov space for the first time these teams do not have a program manager who has access to github so they cannot create a request for access because those managers don t have a gh account either we need additional information that details what to do when our step isn t possible the github handbook provides the best information for this case so i have linked it below the github handbook is the single source of truth on this so it may be best just to link there relevant urls platform website link handbook which type of team are you on platform team vfs team or leadership va leadership
| 0
|
24,026
| 3,900,273,384
|
IssuesEvent
|
2016-04-18 04:32:15
|
catmaid/CATMAID
|
https://api.github.com/repos/catmaid/CATMAID
|
closed
|
Volume manager creates box selection layer, but doesn't remove it
|
status: done type: defect
|
This happens when an existing volume is edited and then the type is changed (which shouldn't be possible in the first place).
|
1.0
|
Volume manager creates box selection layer, but doesn't remove it - This happens when an existing volume is edited and then the type is changed (which shouldn't be possible in the first place).
|
defect
|
volume manager creates box selection layer but doesn t remove it this happens when an existing volume is edited and then the type is changed which shouldn t be possible in the first place
| 1
|
4,690
| 2,610,141,050
|
IssuesEvent
|
2015-02-26 18:44:28
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Players need manual/documentation/ so MOSTLY an EDITABLE wiki!
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Log-in into Hedgewars.org
2. Click wiki
3. Found you can do nothing about it.
What is the expected output? What do you see instead?
Talk page and edit button.
What version of the product are you using? On what operating system?
Current ALL
Please provide any additional information below.
We need more players stay~
```
-----
Original issue reported on code.google.com by `lilil...@gmail.com` on 29 Nov 2010 at 11:09
|
1.0
|
Players need manual/documentation/ so MOSTLY an EDITABLE wiki! - ```
What steps will reproduce the problem?
1. Log-in into Hedgewars.org
2. Click wiki
3. Found you can do nothing about it.
What is the expected output? What do you see instead?
Talk page and edit button.
What version of the product are you using? On what operating system?
Current ALL
Please provide any additional information below.
We need more players stay~
```
-----
Original issue reported on code.google.com by `lilil...@gmail.com` on 29 Nov 2010 at 11:09
|
defect
|
players need manual documentation so mostly an editable wiki what steps will reproduce the problem log in into hedgewars org click wiki found you can do nothing about it what is the expected output what do you see instead talk page and edit button what version of the product are you using on what operating system current all please provide any additional information below we need more players stay original issue reported on code google com by lilil gmail com on nov at
| 1
|
406,153
| 11,887,455,723
|
IssuesEvent
|
2020-03-28 01:55:30
|
Thorium-Sim/thorium
|
https://api.github.com/repos/Thorium-Sim/thorium
|
opened
|
Crew Positions
|
priority/medium type/feature
|
### Requested By: Lissa Hadfield
### Priority: Medium
### Version: 2.8.0
As we are creating things for the new CMSC ships, we want a wider range of Damage Control/Security Officer positions (we want specialists instead of "Security" lol). But when we add a Security Guard/Damage Control officer and change their job it removes them from that list. Can we either 1) have a way to add job position titles to Damage Control and Security or 2) have little check boxes in the creation step of a crew member where we can check if they belong to Sec or Damage Control so that those bridge officers can see and use them? (We can't really put in our crew roster until we can do this? Is that okay?)
|
1.0
|
Crew Positions - ### Requested By: Lissa Hadfield
### Priority: Medium
### Version: 2.8.0
As we are creating things for the new CMSC ships, we want a wider range of Damage Control/Security Officer positions (we want specialists instead of "Security" lol). But when we add a Security Guard/Damage Control officer and change their job it removes them from that list. Can we either 1) have a way to add job position titles to Damage Control and Security or 2) have little check boxes in the creation step of a crew member where we can check if they belong to Sec or Damage Control so that those bridge officers can see and use them? (We can't really put in our crew roster until we can do this? Is that okay?)
|
non_defect
|
crew positions requested by lissa hadfield priority medium version as we are creating things for the new cmsc ships we want a wider range of damage control security officer positions we want specialists instead of security lol but when we add a security guard damage control officer and change their job it removes them from that list can we either have a way to add job position titles to damage control and security or have little check boxes in the creation step of a crew member where we can check if they belong to sec or damage control so that those bridge officers can see and use them we can t really put in our crew roster until we can do this is that okay
| 0
|
275,056
| 20,904,931,919
|
IssuesEvent
|
2022-03-24 00:22:24
|
RainwayApp/node-clangffi
|
https://api.github.com/repos/RainwayApp/node-clangffi
|
closed
|
Move to absolute urls in README
|
bug documentation good first issue
|
Now that this repo is public, we should move to absolute URLs for links in the `README.md` files, so that they work from npm as well.
|
1.0
|
Move to absolute urls in README - Now that this repo is public, we should move to absolute URLs for links in the `README.md` files, so that they work from npm as well.
|
non_defect
|
move to absolute urls in readme now that this repo is public we should move to absolute urls for links in the readme md files so that they work from npm as well
| 0
|
43,781
| 11,845,138,589
|
IssuesEvent
|
2020-03-24 07:42:44
|
line/armeria
|
https://api.github.com/repos/line/armeria
|
opened
|
DNS resolution timeout may take longer
|
defect
|
Let's say that `/etc/resolve.conf` contains:
```
a.svc.cluster.local svc.cluster.local cluster.local
```
If `queryTimeoutMillis` is 5 seconds and the client tries to find the address of `a.com`, it sends DNS queries sequentially as follows:
```
a.com.a.svc.cluster.local. // 5 seconds timeout.
a.com.svc.cluster.local. // 5 seconds timeout.
a.com.cluster.local. // 5 seconds timeout.
a.com. // 5 seconds timeout.
```
The timeout is for the individual query not for whole DNS resolution, so it could take more than 5 seconds.
To fix this, we can:
1. Provide a way to set the whole timeout for DNS resolution, or
2. Apply `queryTimeoutMillis` to whole DNS resolution.
I prefer to just do 2) because a user does not care about the individual queries, but just wants to get the result.
If there's a demand for 1) later, I will implement it.
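Option 2) can be sketched roughly like this (hypothetical `query` callback and names, not the actual Armeria implementation): each search-domain candidate is tried in order, but every query only gets whatever is left of one shared budget.

```python
import time

def resolve_with_deadline(hostname, search_domains, query, timeout_s=5.0):
    """Try each search-domain candidate sequentially under one overall deadline.

    `query(name, timeout=...)` is assumed to return an address string or None;
    the per-query timeout is the time remaining in the shared budget.
    """
    deadline = time.monotonic() + timeout_s
    candidates = [f"{hostname}.{d}." for d in search_domains] + [f"{hostname}."]
    for name in candidates:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("DNS resolution timed out")
        result = query(name, timeout=remaining)
        if result is not None:
            return result
    return None
```

Because each query receives only the remaining budget, the total resolution time never exceeds `timeout_s`, no matter how many search domains `/etc/resolv.conf` lists.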
|
1.0
|
DNS resolution timeout may take longer - Let's say that `/etc/resolve.conf` contains:
```
a.svc.cluster.local svc.cluster.local cluster.local
```
If `queryTimeoutMillis` is 5 seconds and the client tries to find the address of `a.com`, it sends DNS queries sequentially as follows:
```
a.com.a.svc.cluster.local. // 5 seconds timeout.
a.com.svc.cluster.local. // 5 seconds timeout.
a.com.cluster.local. // 5 seconds timeout.
a.com. // 5 seconds timeout.
```
The timeout is for the individual query not for whole DNS resolution, so it could take more than 5 seconds.
To fix this, we can:
1. Provide a way to set the whole timeout for DNS resolution, or
2. Apply `queryTimeoutMillis` to whole DNS resolution.
I prefer to just do 2) because a user does not care about the individual queries, but just wants to get the result.
If there's a demand for 1) later, I will implement it.
|
defect
|
dns resolution timeout may take longer let s say that etc resolve conf contains a svc cluster local svc cluster local cluster local if querytimeoutmillis is seconds and the client tries to find the address of a com it sends dns queries sequentially as follows a com a svc cluster local seconds timeout a com svc cluster local seconds timeout a com cluster local seconds timeout a com seconds timeout the timeout is for the individual query not for whole dns resolution so it could take more than seconds to fix this we can provide a way to set the whole timeout for dns resolution or apply querytimeoutmillis to whole dns resolution i prefer to just do because a user does not care about the individual queries but just wants to get the result if there s a demand for later i will implement it
| 1
|
7,472
| 2,610,387,959
|
IssuesEvent
|
2015-02-26 20:05:31
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Patch for /share/hedgewars/Data/Locale/missions_it.txt
|
auto-migrated Type-Defect
|
```
Fixed error in line 11 of file missions_it.txt
```
-----
Original issue reported on code.google.com by `chipho...@yahoo.it` on 21 Dec 2011 at 5:38
Attachments:
* [missions_it.txt.patch](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-340/comment-0/missions_it.txt.patch)
|
1.0
|
Patch for /share/hedgewars/Data/Locale/missions_it.txt - ```
Fixed error in line 11 of file missions_it.txt
```
-----
Original issue reported on code.google.com by `chipho...@yahoo.it` on 21 Dec 2011 at 5:38
Attachments:
* [missions_it.txt.patch](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-340/comment-0/missions_it.txt.patch)
|
defect
|
patch for share hedgewars data locale missions it txt fixed error in line of file missions it txt original issue reported on code google com by chipho yahoo it on dec at attachments
| 1
|
161,146
| 13,806,941,496
|
IssuesEvent
|
2020-10-11 19:48:36
|
Bloceducare/LeaderBoard
|
https://api.github.com/repos/Bloceducare/LeaderBoard
|
opened
|
Setup repo and write up documentation
|
documentation
|
Setup repository with all required integrations for testing, deployment and issue tracking. Also write up some documentation
|
1.0
|
Setup repo and write up documentation - Setup repository with all required integrations for testing, deployment and issue tracking. Also write up some documentation
|
non_defect
|
setup repo and write up documentation setup repository with all required integrations for testing deployment and issue tracking also write up some documentation
| 0
|
269,029
| 28,959,972,356
|
IssuesEvent
|
2023-05-10 01:04:45
|
dpteam/RK3188_TABLET
|
https://api.github.com/repos/dpteam/RK3188_TABLET
|
reopened
|
CVE-2013-7271 (Medium) detected in multiple libraries
|
Mend: dependency security vulnerability
|
## CVE-2013-7271 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>randomv3.0.66</b>, <b>linuxv3.0.70</b>, <b>linuxv3.0</b>, <b>linuxv3.0</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The x25_recvmsg function in net/x25/af_x25.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call.
<p>Publish Date: 2014-01-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-7271>CVE-2013-7271</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7271">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7271</a></p>
<p>Release Date: 2014-01-06</p>
<p>Fix Resolution: v3.13-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2013-7271 (Medium) detected in multiple libraries - ## CVE-2013-7271 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>randomv3.0.66</b>, <b>linuxv3.0.70</b>, <b>linuxv3.0</b>, <b>linuxv3.0</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The x25_recvmsg function in net/x25/af_x25.c in the Linux kernel before 3.12.4 updates a certain length value without ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call.
<p>Publish Date: 2014-01-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-7271>CVE-2013-7271</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7271">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-7271</a></p>
<p>Release Date: 2014-01-06</p>
<p>Fix Resolution: v3.13-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries vulnerability details the recvmsg function in net af c in the linux kernel before updates a certain length value without ensuring that an associated data structure has been initialized which allows local users to obtain sensitive information from kernel memory via a recvfrom recvmmsg or recvmsg system call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
79,838
| 29,351,335,758
|
IssuesEvent
|
2023-05-27 00:54:50
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: `rv_discrete` fails when support is unbounded below
|
defect
|
### Describe your issue.
I am trying to create a discrete distribution on the integers (both positive and negative). The documentation states that `rv_discrete` takes an "optional" lower bound parameter `a`. It does not, however, seem to allow for an unbounded distribution.
The `rv_discrete` class should probably support unbounded distributions. I recognize this might be a big project, so in the meantime the documentation should be updated to reflect that
1. The `a` parameter isn't really **optional** (an optional in Python allows for the value `None`). Rather, it has a set default.
2. `a` must be bounded
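Until unbounded supports land, a rough workaround sketch (plain Python, hypothetical helper names, not part of the scipy API) is to truncate an unbounded pmf to a finite window and renormalize, then feed the result to `rv_discrete(values=...)`:

```python
def truncate_pmf(pmf, lo, hi):
    """Truncate a pmf to the integer window [lo, hi] and renormalize to sum 1."""
    ks = list(range(lo, hi + 1))
    weights = [pmf(k) for k in ks]
    total = sum(weights)
    if total <= 0:
        raise ValueError("pmf has no mass on the requested window")
    return ks, [w / total for w in weights]

def mirrored_geom_pmf(k, p=0.5):
    """Geometric-style pmf mirrored onto the non-positive integers: p*(1-p)**(-k)."""
    return p * (1 - p) ** (-k) if k <= 0 else 0.0
```

The truncation error is the mass outside `[lo, hi]`, which for a geometric tail shrinks exponentially as the window grows, so a modest window is usually enough for `expect`-style calculations.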
### Reproducing Code Example
```python
from typing import Any
import scipy
import numpy as np
class RV(scipy.stats.rv_discrete):
def __init__(self) -> None:
# NOTE: np.nan, None, -np.inf, and float("-inf") all fail
super().__init__(a=np.nan, b=0)
def _pmf(self, k: np.ndarray[int], *_: Any) -> np.ndarray[float]:
return scipy.stats.geom.pmf(k, 0.5)
RV().expect(lb=-5, ub=5)
```
### Error message
```shell
Depends on what you pass for `a`, but generally a `ValueError`
```
### SciPy/NumPy/Python version and system information
```shell
1.10.1 1.24.3 sys.version_info(major=3, minor=11, micro=3, releaselevel='final', serial=0)
Build Dependencies:
blas:
detection method: cmake
found: true
include directory: unknown
lib directory: unknown
name: OpenBLAS
openblas configuration: unknown
pc file directory: unknown
version: 0.3.18
lapack:
detection method: cmake
found: true
include directory: unknown
lib directory: unknown
name: OpenBLAS
openblas configuration: unknown
pc file directory: unknown
version: 0.3.18
Compilers:
c:
commands: cc
linker: ld64
name: clang
version: 13.1.6
c++:
commands: c++
linker: ld64
name: clang
version: 13.1.6
cython:
commands: cython
linker: cython
name: cython
version: 0.29.33
fortran:
commands: gfortran
linker: ld64
name: gcc
version: 12.1.0
pythran:
include directory: /private/var/folders/_f/lyvxf0v13gs7984d7sf7j83c0000gn/T/pip-build-env-c1_tgrdc/overlay/lib/python3.11/site-packages/pythran
version: 0.12.1
Machine Information:
build:
cpu: aarch64
endian: little
family: aarch64
system: darwin
cross-compiled: false
host:
cpu: aarch64
endian: little
family: aarch64
system: darwin
Python Information:
path: /private/var/folders/_f/lyvxf0v13gs7984d7sf7j83c0000gn/T/cibw-run-s3_k_ke5/cp311-macosx_arm64/build/venv/bin/python
version: '3.11'
```
|
1.0
|
BUG: `rv_discrete` fails when support is unbounded below - ### Describe your issue.
I am trying to create a discrete distribution on the integers (both positive and negative). The documentation states that `rv_discrete` takes an "optional" lower bound parameter `a`. It does not, however, seem to allow for an unbounded distribution.
The `rv_discrete` class should probably support unbounded distributions. I recognize this might be a big project, so in the meantime the documentation should be updated to reflect that
1. The `a` parameter isn't really **optional** (an optional in Python allows for the value `None`). Rather, it has a set default.
2. `a` must be bounded
### Reproducing Code Example
```python
from typing import Any
import scipy
import numpy as np
class RV(scipy.stats.rv_discrete):
def __init__(self) -> None:
# NOTE: np.nan, None, -np.inf, and float("-inf") all fail
super().__init__(a=np.nan, b=0)
def _pmf(self, k: np.ndarray[int], *_: Any) -> np.ndarray[float]:
return scipy.stats.geom.pmf(k, 0.5)
RV().expect(lb=-5, ub=5)
```
### Error message
```shell
Depends on what you pass for `a`, but generally a `ValueError`
```
### SciPy/NumPy/Python version and system information
```shell
1.10.1 1.24.3 sys.version_info(major=3, minor=11, micro=3, releaselevel='final', serial=0)
Build Dependencies:
blas:
detection method: cmake
found: true
include directory: unknown
lib directory: unknown
name: OpenBLAS
openblas configuration: unknown
pc file directory: unknown
version: 0.3.18
lapack:
detection method: cmake
found: true
include directory: unknown
lib directory: unknown
name: OpenBLAS
openblas configuration: unknown
pc file directory: unknown
version: 0.3.18
Compilers:
c:
commands: cc
linker: ld64
name: clang
version: 13.1.6
c++:
commands: c++
linker: ld64
name: clang
version: 13.1.6
cython:
commands: cython
linker: cython
name: cython
version: 0.29.33
fortran:
commands: gfortran
linker: ld64
name: gcc
version: 12.1.0
pythran:
include directory: /private/var/folders/_f/lyvxf0v13gs7984d7sf7j83c0000gn/T/pip-build-env-c1_tgrdc/overlay/lib/python3.11/site-packages/pythran
version: 0.12.1
Machine Information:
build:
cpu: aarch64
endian: little
family: aarch64
system: darwin
cross-compiled: false
host:
cpu: aarch64
endian: little
family: aarch64
system: darwin
Python Information:
path: /private/var/folders/_f/lyvxf0v13gs7984d7sf7j83c0000gn/T/cibw-run-s3_k_ke5/cp311-macosx_arm64/build/venv/bin/python
version: '3.11'
```
|
defect
|
bug rv discrete fails when support is unbounded below describe your issue i am trying to create a discrete distribution on the integers both positive and negative the documentation states that rv discrete takes an optional lower bound parameter a it does not however seem to allow for an unbounded distribution the rv discrete class should probably support unbounded distributions i recognize this might be a big project so in the meantime the documentation should be updated to reflect that the a parameter isn t really optional an optional in python allows for the value none rather it has a set default a must be bounded reproducing code example python from typing import any import scipy import numpy as np class rv scipy stats rv discrete def init self none note np nan none np inf and float inf all fail super init a np nan b def pmf self k np ndarray any np ndarray return scipy stats geom pmf k rv expect lb ub error message shell depends on what you pass for a but generally a valueerror scipy numpy python version and system information shell sys version info major minor micro releaselevel final serial build dependencies blas detection method cmake found true include directory unknown lib directory unknown name openblas openblas configuration unknown pc file directory unknown version lapack detection method cmake found true include directory unknown lib directory unknown name openblas openblas configuration unknown pc file directory unknown version compilers c commands cc linker name clang version c commands c linker name clang version cython commands cython linker cython name cython version fortran commands gfortran linker name gcc version pythran include directory private var folders f t pip build env tgrdc overlay lib site packages pythran version machine information build cpu endian little family system darwin cross compiled false host cpu endian little family system darwin python information path private var folders f t cibw run k macosx build venv bin python version
| 1
|
400,866
| 27,303,967,137
|
IssuesEvent
|
2023-02-24 06:08:47
|
purpleclay/gitz
|
https://api.github.com/repos/purpleclay/gitz
|
closed
|
[Docs]: include posthog support for capturing documentation analytics
|
documentation
|
### Describe your edit
Include posthog support within the existing documentation to capture analytics.
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
[Docs]: include posthog support for capturing documentation analytics - ### Describe your edit
Include posthog support within the existing documentation to capture analytics.
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
non_defect
|
include posthog support for capturing documentation analytics describe your edit include posthog support within the existing documentation to capture analytics code of conduct i agree to follow this project s code of conduct
| 0
|
289,045
| 31,931,118,903
|
IssuesEvent
|
2023-09-19 07:29:57
|
Trinadh465/linux-4.1.15_CVE-2023-4128
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
|
opened
|
CVE-2019-19052 (High) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2019-19052 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/gs_usb.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/gs_usb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the gs_can_open() function in drivers/net/can/usb/gs_usb.c in the Linux kernel before 5.3.11 allows attackers to cause a denial of service (memory consumption) by triggering usb_submit_urb() failures, aka CID-fb5be6a7b486.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19052>CVE-2019-19052</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19052">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19052</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.4-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-19052 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2019-19052 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/gs_usb.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/gs_usb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the gs_can_open() function in drivers/net/can/usb/gs_usb.c in the Linux kernel before 5.3.11 allows attackers to cause a denial of service (memory consumption) by triggering usb_submit_urb() failures, aka CID-fb5be6a7b486.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19052>CVE-2019-19052</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19052">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19052</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.4-rc7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch main vulnerable source files drivers net can usb gs usb c drivers net can usb gs usb c vulnerability details a memory leak in the gs can open function in drivers net can usb gs usb c in the linux kernel before allows attackers to cause a denial of service memory consumption by triggering usb submit urb failures aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
212,375
| 23,882,164,781
|
IssuesEvent
|
2022-09-08 02:58:06
|
MatBenfield/news
|
https://api.github.com/repos/MatBenfield/news
|
closed
|
[SecurityWeek] Israeli Defence Minister's Cleaner Sentenced for Spying Attempt
|
SecurityWeek Stale
|
**A man employed as a cleaner in Israeli Defence Minister Benny Gantz's home was sentenced to three years' prison for attempting to spy for Iran-linked hackers, the justice ministry said Tuesday.**
[read more](https://www.securityweek.com/israeli-defence-ministers-cleaner-sentenced-spying-attempt)
<https://www.securityweek.com/israeli-defence-ministers-cleaner-sentenced-spying-attempt>
|
True
|
[SecurityWeek] Israeli Defence Minister's Cleaner Sentenced for Spying Attempt -
**A man employed as a cleaner in Israeli Defence Minister Benny Gantz's home was sentenced to three years' prison for attempting to spy for Iran-linked hackers, the justice ministry said Tuesday.**
[read more](https://www.securityweek.com/israeli-defence-ministers-cleaner-sentenced-spying-attempt)
<https://www.securityweek.com/israeli-defence-ministers-cleaner-sentenced-spying-attempt>
|
non_defect
|
israeli defence minister s cleaner sentenced for spying attempt a man employed as a cleaner in israeli defence minister benny gantz s home was sentenced to three years prison for attempting to spy for iran linked hackers the justice ministry said tuesday
| 0
|
294,433
| 22,151,058,672
|
IssuesEvent
|
2022-06-03 16:50:17
|
jenkinsci/packaging
|
https://api.github.com/repos/jenkinsci/packaging
|
closed
|
Current documentation does not work for me despite using Docker
|
documentation
|
### Describe your use-case which is not covered by existing documentation.
Hi there 👋
I have started to modify the [documentation](https://github.com/gounthar/packaging/tree/documentation-update) in the hope of submitting a PR shortly.
I am using `docker` as it is the [only supported platform](https://github.com/jenkinsci/packaging/issues/314#issuecomment-1145088927).
Following the existing documentation, after setting the `WAR` and `JENKINS_URL` environment variables <details><summary>(not necessary if I read correctly the `docker-compose.yaml`</summary>
https://github.com/jenkinsci/packaging/blob/416f402a884be8d33d4b665d149daada5e6b361b/docker-compose.yaml#L18
</details>
, I launch `./prep.sh`. Unfortunately, `./prep.sh` does not work for me inside this container:
```bash
jenkins@420db984a78b:/srv/releases/jenkins$ ./prep.sh
: invalid option
```
`make setup` does not work either with the same kind of error:
```bash
make setup
bash -ex -c 'for f in */setup.sh; do $f; done'
+ for f in */setup.sh
+ deb/setup.sh
: invalid option
make: *** [Makefile:24: setup] Error 1
```
`bash -x prep.sh` outputs:
```bash
+ set -o $'pipefail\r'
: invalid option name pipefail
++ dirname prep.sh
+ cd $'.\r'
prep.sh: line 3: cd: $'.\r': No such file or directory
+ $'\r'
prep.sh: line 4: $'\r': command not found
+ $'\r'
prep.sh: line 10: $'\r': command not found
prep.sh: line 21: syntax error: unexpected end of file
```
Apparently, `pipefail` is a _bashism_, but we're using `bash`, aren't we? Furthermore, are these files _Windows_ encoded?
I would have liked to debug by modifying the script files, but there is no editor bundled in the image (as far as I know).
Would anyone have a hint to get me out of the rut?
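The `$'\r'` fragments in the trace are the classic symptom of CRLF (Windows) line endings in the scripts; a minimal plain-Python check-and-strip sketch (hypothetical helpers, not part of this repo):

```python
def has_crlf(data: bytes) -> bool:
    """Return True if the content contains Windows CRLF line endings."""
    return b"\r\n" in data

def to_unix(data: bytes) -> bytes:
    """Convert CRLF (and any stray lone CR) line endings to plain LF."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
```

From the shell, `dos2unix prep.sh` or `sed -i 's/\r$//' prep.sh` does the same thing, assuming those tools are available in the container.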
Here is the complete set of environment variables, just in case:
```bash
BASH=/usr/bin/bash
BASHOPTS=checkwinsize:cmdhist:complete_fullquote:expand_aliases:extquote:force_fignore:globasciiranges:histappend:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=([0]="0")
BASH_ARGV=()
BASH_CMDS=()
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="5" [1]="1" [2]="16" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
BASH_VERSION='5.1.16(1)-release'
BRAND=/srv/releases/jenkins/branding/jenkins.mk
BRANDING_DIR=/srv/releases/jenkins/branding
BUILDENV=/srv/releases/jenkins/env/test.mk
COLUMNS=230
DEBIAN_FRONTEND=noninteractive
DIRSTACK=()
EUID=1000
GPG_FILE=/srv/releases/jenkins/credentials/sandbox.gpg
GPG_KEYNAME=551F7B8423F34E8A1E9EEE1B3DD66EC9C147FCD1
GPG_PASSPHRASE=s3cr3t
GPG_PASSPHRASE_FILE=/srv/releases/jenkins/credentials/test.gpg.password.txt
GROUPS=()
HISTCONTROL=ignoreboth
HISTFILE=/home/jenkins/.bash_history
HISTFILESIZE=2000
HISTSIZE=1000
HOME=/home/jenkins
HOSTNAME=420db984a78b
HOSTTYPE=x86_64
IFS=$' \t\n'
JAVA_HOME=/opt/jdk-8
JENKINS_URL=http://192.168.0.144:8080/
LANG=C.UTF-8
LINES=66
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'
MACHTYPE=x86_64-pc-linux-gnu
MAILCHECK=60
MSI=/srv/releases/jenkins/jenkins.msi
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/opt/jdk-8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PIPESTATUS=([0]="0")
PPID=0
PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS2='> '
PS4='+ '
PWD=/srv/releases/jenkins
RELEASELINE=-experimental
SHELL=/bin/sh
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
SHLVL=1
TERM=xterm
TZ=UTC
UID=1000
USER=jenkins
WAR=/srv/releases/jenkins/jenkins.war
```
Thanks.
### Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.
# Native package script for Jenkins
This repository contains scripts for packaging `jenkins.war` into various platform-specific native packages.
The following platforms are currently supported:
* Windows MSI: `msi/`
* RedHat/CentOS RPM: `rpm/`
* Debian/Ubuntu DEB: `deb/`
* OpenSUSE RPM: `suse/`
# Pre-requisites
Running the main package script requires a Ubuntu Linux [18.04](https://releases.ubuntu.com/18.04/) environment (see [JENKINS-27744](https://issues.jenkins-ci.org/browse/JENKINS-27744).)
Remark:
A docker image is available to run the following scripts. It's the only supported environment.
[](https://hub.docker.com/r/jenkinsciinfra/packaging)
Run `docker-compose run --rm packaging bash` to get a shell in the official Docker image for this repository.
<details><summary>Workaround</summary>
<p>
If your machine doesn't run this mandatory version of Ubuntu Linux, you can use these scripts within a VM or with the provided Docker image.
The use of the Docker image is <a href="https://github.com/jenkinsci/packaging/issues/314#issuecomment-1145088927">not compulsory</a> but any other environment is not supported.
Run `make setup` to install (most of the) necessary tools. Alternatively you can manually install the following onto a base install of Ubuntu:
* make
* unzip
* devscripts
* debhelper
* rpm
* expect
* createrepo
* ruby
* net-sftp (`gem install net-sftp`)
* maven
* java
</p>
</details>
You will also need a Jenkins instance with [dist-fork plugin](https://wiki.jenkins-ci.org/display/JENKINS/DistFork+Plugin)
installed. The URL of this Jenkins can be fed into `make` via the `JENKINS_URL` variable.
If you want to package for Windows, this Jenkins controller needs to have a Windows build agent with [WiX Toolset](http://wixtoolset.org/) (currently 3.5), msbuild, [cygwin](https://www.cygwin.com/) and .NET 2.0. This build agent is used to build MSI packages, which
can only be built on Windows.
You'll also need:
* a `jenkins.war` file that you are packaging, which comes from the [release process](https://www.jenkins.io/download/).
The location of this file is set via the `WAR` variable.
* a `jenkins.msi` file if you are packaging for Windows, which also comes from the [release process](https://www.jenkins.io/download/thank-you-downloading-windows-installer).
<details><summary>Download and reference `jenkins.war`</summary><p>
```bash
curl https://get.jenkins.io/war-stable/2.332.3/jenkins.war --output jenkins.war
# or jv download
export WAR=/srv/releases/jenkins/jenkins.war
# if you want to build for windows
curl https://get.jenkins.io/windows-stable/2.332.3/jenkins.msi --output jenkins.msi
export MSI=/srv/releases/jenkins/jenkins.msi
```
</p></details>
# Generating packages
Run `./prep.sh` to perform the preparatory actions of downloading the WAR and importing the GPG key.
Run `make package` to build all the native packages.
At minimum, you have to specify the `WAR` variable that points to the war file to be packaged and a branding file (for licensing and package descriptions).
You will probably need to pass in the build environment and credentials.
For example:
```shell
make package BRAND=./branding/jenkins.mk BUILDENV=./env/test.mk CREDENTIAL=./credentials/test.mk
```
Packages will be placed into `target/` directory.
See the definition of the `package` goal for how to build individual packages selectively.
# Running functional tests
The functional tests require Python 3 and Docker.
Having built the packages as described above, run the functional tests with:
```shell
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
molecule test
deactivate
```
# Publishing packages
This repository contains scripts for copying packages over to a remote web server to publish them.
Run `make publish` to publish all native packages.
See the definition of the `publish` goal for publishing individual packages.
## Running local tests
These tests install packages from a web server where they are published. So if you want to
run tests prior to publishing them, you need to create a temporary web server that you can mess up.
The default branding & environment (`branding/test.mk` and `env/test.mk`) are designed to support
this scenario. To make local testing work, you also need an `/etc/hosts` entry that maps
the `test.pkg.jenkins.io` hostname to `127.0.0.1`, and your computer has to be running an SSH server that
lets you log in as yourself.
Once you have verified the above prerequisites, open another terminal and run `make test.local.setup`.
This will run a Docker container that acts as your throw-away package web server. When done, press Ctrl+C
to kill it.
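The `/etc/hosts` mapping described above can be added with a short, idempotent snippet. This is a sketch, not part of the packaging scripts: `HOSTS_FILE` is parameterised (an assumption of mine) so you can try it on a scratch copy before pointing it at the real `/etc/hosts` with sudo.

```shell
# Append the local-test hostname mapping unless it is already present.
# Set HOSTS_FILE to a scratch copy to try it out; use /etc/hosts (as root)
# for the real setup.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
grep -q 'test\.pkg\.jenkins\.io' "$HOSTS_FILE" \
  || echo '127.0.0.1 test.pkg.jenkins.io' >> "$HOSTS_FILE"
```

Because of the `grep -q` guard, re-running the snippet leaves a single entry in place.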
# Branding
`branding/` directory contains `*.mk` files that control the branding of the generated packages.
It also includes text files which are used for large, branded text blocks (licenses and descriptions).
Specify the branding file via the `BRAND` variable.
You can create your own branding definition to customize the package generation process.
See [branding readme](branding/README.md) for more details. In the rest of the packaging script files,
these branding parameters are referenced via `@@NAME@@` and get substituted by `bin/branding.py`.
To emit a literal string such as @@VALUE@@, prefix it with an additional two @@ symbols: @@@@VALUE@@.
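The `@@NAME@@` substitution and its `@@@@` escape can be illustrated with a small sed pipeline. This is a hypothetical sketch — the real substitution is performed by `bin/branding.py`, and the `PRODUCTNAME`/`jenkins` values below are made up for illustration:

```shell
# Render a template: @@PRODUCTNAME@@ is replaced, @@@@PRODUCTNAME@@ stays literal.
# The escaped form is parked in a placeholder first so the real token
# substitution cannot touch it, then restored as the literal @@PRODUCTNAME@@.
printf 'name=@@PRODUCTNAME@@ literal=@@@@PRODUCTNAME@@\n' \
  | sed -e 's/@@@@PRODUCTNAME@@/__LITERAL__/g' \
        -e 's/@@PRODUCTNAME@@/jenkins/g' \
        -e 's/__LITERAL__/@@PRODUCTNAME@@/g'
# -> name=jenkins literal=@@PRODUCTNAME@@
```

Parking the escaped form first is what makes the `@@@@` prefix work: by the time the real token is substituted, the literal occurrence is no longer visible to the pattern.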
# Environment
`env/` directory contains `*.mk` files that control the environment into which
you publish packages. Specify the environment file via the `BUILDENV` variable.
You can create your own environment definition to customize the package generation process.
See [environment readme](env/README.md) for more details.
# Credentials
`credentials/` directory contains `test.mk` file that controls the locations of code-signing keys,
their passwords, and certificates. Specify the credentials file via the `CREDENTIAL` variable.
For production use, you need to create your own credentials file. See [credentials readme](credentials/README.md)
for more details.
# TODO (mostly note to myself)
* Split resource templates to enable customization
|
1.0
|
Current documentation does not work for me despite using Docker - ### Describe your use-case which is not covered by existing documentation.
Hi there 👋
I have started to modify the [documentation](https://github.com/gounthar/packaging/tree/documentation-update) in the hope of submitting a PR shortly.
I am using `docker` as it is the [only supported platform](https://github.com/jenkinsci/packaging/issues/314#issuecomment-1145088927).
Following the existing documentation, after setting the `WAR` and `JENKINS_URL` environment variables <details><summary>(not necessary if I read the `docker-compose.yaml` correctly)</summary>
https://github.com/jenkinsci/packaging/blob/416f402a884be8d33d4b665d149daada5e6b361b/docker-compose.yaml#L18
</details>
, I launch `./prep.sh`. Unfortunately, `./prep.sh` does not work for me inside this container:
```bash
jenkins@420db984a78b:/srv/releases/jenkins$ ./prep.sh
: invalid option
```
`make setup` does not work either with the same kind of error:
```bash
make setup
bash -ex -c 'for f in */setup.sh; do $f; done'
+ for f in */setup.sh
+ deb/setup.sh
: invalid option
make: *** [Makefile:24: setup] Error 1
```
`bash -x prep.sh` outputs:
```bash
+ set -o $'pipefail\r'
: invalid option name pipefail
++ dirname prep.sh
+ cd $'.\r'
prep.sh: line 3: cd: $'.\r': No such file or directory
+ $'\r'
prep.sh: line 4: $'\r': command not found
+ $'\r'
prep.sh: line 10: $'\r': command not found
prep.sh: line 21: syntax error: unexpected end of file
```
Apparently, `pipefail` is a _bashism_, but we're using `bash`, aren't we? Furthermore, are these files _Windows_ encoded?
I would have liked to debug by modifying the script files, but there is no editor bundled in the image (as far as I know).
Would anyone have a hint to get me out of the rut?
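The `$'\r'` fragments in the trace above are the classic symptom of CRLF (Windows) line endings: bash sees the option name as `pipefail\r`, hence `invalid option name`. A hedged way to confirm and fix this — the file name here is illustrative, and `dos2unix` does the same job where available:

```shell
# CRLF check: a script saved with Windows line endings carries a stray
# carriage return on every line, which bash reports as $'\r' errors.
printf 'set -o pipefail\r\n' > crlf-demo.sh    # simulate a CRLF-encoded script
grep -c $'\r' crlf-demo.sh                     # prints 1: CR present
sed -i 's/\r$//' crlf-demo.sh                  # strip carriage returns in place
grep -c $'\r' crlf-demo.sh || true             # prints 0: now LF-only
```

If the repository was cloned on Windows, it is also worth checking `git config core.autocrlf`, which can rewrite LF endings to CRLF on checkout.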
Here is the complete set of environment variables, just in case:
```bash
BASH=/usr/bin/bash
BASHOPTS=checkwinsize:cmdhist:complete_fullquote:expand_aliases:extquote:force_fignore:globasciiranges:histappend:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=([0]="0")
BASH_ARGV=()
BASH_CMDS=()
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="5" [1]="1" [2]="16" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
BASH_VERSION='5.1.16(1)-release'
BRAND=/srv/releases/jenkins/branding/jenkins.mk
BRANDING_DIR=/srv/releases/jenkins/branding
BUILDENV=/srv/releases/jenkins/env/test.mk
COLUMNS=230
DEBIAN_FRONTEND=noninteractive
DIRSTACK=()
EUID=1000
GPG_FILE=/srv/releases/jenkins/credentials/sandbox.gpg
GPG_KEYNAME=551F7B8423F34E8A1E9EEE1B3DD66EC9C147FCD1
GPG_PASSPHRASE=s3cr3t
GPG_PASSPHRASE_FILE=/srv/releases/jenkins/credentials/test.gpg.password.txt
GROUPS=()
HISTCONTROL=ignoreboth
HISTFILE=/home/jenkins/.bash_history
HISTFILESIZE=2000
HISTSIZE=1000
HOME=/home/jenkins
HOSTNAME=420db984a78b
HOSTTYPE=x86_64
IFS=$' \t\n'
JAVA_HOME=/opt/jdk-8
JENKINS_URL=http://192.168.0.144:8080/
LANG=C.UTF-8
LINES=66
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'
MACHTYPE=x86_64-pc-linux-gnu
MAILCHECK=60
MSI=/srv/releases/jenkins/jenkins.msi
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/opt/jdk-8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PIPESTATUS=([0]="0")
PPID=0
PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS2='> '
PS4='+ '
PWD=/srv/releases/jenkins
RELEASELINE=-experimental
SHELL=/bin/sh
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
SHLVL=1
TERM=xterm
TZ=UTC
UID=1000
USER=jenkins
WAR=/srv/releases/jenkins/jenkins.war
```
Thanks.
### Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.
# Native package script for Jenkins
This repository contains scripts for packaging `jenkins.war` into various platform-specific native packages.
The following platforms are currently supported:
* Windows MSI: `msi/`
* RedHat/CentOS RPM: `rpm/`
* Debian/Ubuntu DEB: `deb/`
* OpenSUSE RPM: `suse/`
# Pre-requisites
Running the main package script requires an Ubuntu Linux [18.04](https://releases.ubuntu.com/18.04/) environment (see [JENKINS-27744](https://issues.jenkins-ci.org/browse/JENKINS-27744)).
Remark:
A docker image is available to run the following scripts. It's the only supported environment.
[](https://hub.docker.com/r/jenkinsciinfra/packaging)
Run `docker-compose run --rm packaging bash` to get a shell in the official Docker image for this repository.
<details><summary>Workaround</summary>
<p>
If your machine doesn't run this mandatory version of Ubuntu Linux, you can use these scripts within a VM or with the provided Docker image.
The use of the Docker image is <a href="https://github.com/jenkinsci/packaging/issues/314#issuecomment-1145088927">not compulsory</a> but any other environment is not supported.
Run `make setup` to install (most of the) necessary tools. Alternatively you can manually install the following onto a base install of Ubuntu:
* make
* unzip
* devscripts
* debhelper
* rpm
* expect
* createrepo
* ruby
* net-sftp (`gem install net-sftp`)
* maven
* java
</p>
</details>
You will also need a Jenkins instance with [dist-fork plugin](https://wiki.jenkins-ci.org/display/JENKINS/DistFork+Plugin)
installed. The URL of this Jenkins can be fed into `make` via the `JENKINS_URL` variable.
If you want to package for Windows, this Jenkins controller needs to have a Windows build agent with [WiX Toolset](http://wixtoolset.org/) (currently 3.5), msbuild, [cygwin](https://www.cygwin.com/) and .NET 2.0. This build agent is used to build MSI packages, which
can only be built on Windows.
You'll also need:
* a `jenkins.war` file that you are packaging, which comes from the [release process](https://www.jenkins.io/download/).
The location of this file is set via the `WAR` variable.
* a `jenkins.msi` file if you are packaging for Windows, which also comes from the [release process](https://www.jenkins.io/download/thank-you-downloading-windows-installer).
<details><summary>Download and reference `jenkins.war`</summary><p>
```bash
curl https://get.jenkins.io/war-stable/2.332.3/jenkins.war --output jenkins.war
# or jv download
export WAR=/srv/releases/jenkins/jenkins.war
# if you want to build for windows
curl https://get.jenkins.io/windows-stable/2.332.3/jenkins.msi --output jenkins.msi
export MSI=/srv/releases/jenkins/jenkins.msi
```
</p></details>
# Generating packages
Run `./prep.sh` to perform the preparatory actions of downloading the WAR and importing the GPG key.
Run `make package` to build all the native packages.
At minimum, you have to specify the `WAR` variable that points to the war file to be packaged and a branding file (for licensing and package descriptions).
You will probably need to pass in the build environment and credentials.
For example:
```shell
make package BRAND=./branding/jenkins.mk BUILDENV=./env/test.mk CREDENTIAL=./credentials/test.mk
```
Packages will be placed into `target/` directory.
See the definition of the `package` goal for how to build individual packages selectively.
# Running functional tests
The functional tests require Python 3 and Docker.
Having built the packages as described above, run the functional tests with:
```shell
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
molecule test
deactivate
```
# Publishing packages
This repository contains scripts for copying packages over to a remote web server to publish them.
Run `make publish` to publish all native packages.
See the definition of the `publish` goal for publishing individual packages.
## Running local tests
These tests install packages from a web server where they are published. So if you want to
run tests prior to publishing them, you need to create a temporary web server that you can mess up.
The default branding & environment (`branding/test.mk` and `env/test.mk`) are designed to support
this scenario. To make local testing work, you also need an `/etc/hosts` entry that maps
the `test.pkg.jenkins.io` hostname to `127.0.0.1`, and your computer has to be running an SSH server that
lets you log in as yourself.
Once you have verified the above prerequisites, open another terminal and run `make test.local.setup`.
This will run a Docker container that acts as your throw-away package web server. When done, press Ctrl+C
to kill it.
# Branding
`branding/` directory contains `*.mk` files that control the branding of the generated packages.
It also includes text files which are used for large, branded text blocks (licenses and descriptions).
Specify the branding file via the `BRAND` variable.
You can create your own branding definition to customize the package generation process.
See [branding readme](branding/README.md) for more details. In the rest of the packaging script files,
these branding parameters are referenced via `@@NAME@@` and get substituted by `bin/branding.py`.
To emit a literal string such as @@VALUE@@, prefix it with an additional two @@ symbols: @@@@VALUE@@.
# Environment
`env/` directory contains `*.mk` files that control the environment into which
you publish packages. Specify the environment file via the `BUILDENV` variable.
You can create your own environment definition to customize the package generation process.
See [environment readme](env/README.md) for more details.
# Credentials
`credentials/` directory contains `test.mk` file that controls the locations of code-signing keys,
their passwords, and certificates. Specify the credentials file via the `CREDENTIAL` variable.
For production use, you need to create your own credentials file. See [credentials readme](credentials/README.md)
for more details.
# TODO (mostly note to myself)
* Split resource templates to enable customization
|
non_defect
|
current documentation does not work for me despite using docker describe your use case which is not covered by existing documentation hi there 👋 i have started to modify the in the hope of submitting a pr shortly i am using docker as it is the following the existing documentation after setting the war and jenkins url environment variables not necessary if i read correctly the docker compose yaml i launch prep sh unfortunately prep sh does not work for me inside this container bash jenkins srv releases jenkins prep sh invalid option make setup does not work either with the same kind of error bash make setup bash ex c for f in setup sh do f done for f in setup sh deb setup sh invalid option make error bash x prep sh outputs bash set o pipefail r invalid option name pipefail dirname prep sh cd r prep sh line cd r no such file or directory r prep sh line r command not found r prep sh line r command not found prep sh line syntax error unexpected end of file apparently pipefail is a bashism but we re using bash aren t we furthermore are these files windows encoded i would have like to debug by modifying the script files but there is no editor bundled in the image as far as i know would anyone have a hint to get me out of the rut here is the complete set of environment variables just in case bash bash usr bin bash bashopts checkwinsize cmdhist complete fullquote expand aliases extquote force fignore globasciiranges histappend hostcomplete interactive comments progcomp promptvars sourcepath bash aliases bash argc bash argv bash cmds bash lineno bash source bash versinfo release pc linux gnu bash version release brand srv releases jenkins branding jenkins mk branding dir srv releases jenkins branding buildenv srv releases jenkins env test mk columns debian frontend noninteractive dirstack euid gpg file srv releases jenkins credentials sandbox gpg gpg keyname gpg passphrase gpg passphrase file srv releases jenkins credentials test gpg password txt groups histcontrol 
ignoreboth histfile home jenkins bash history histfilesize histsize home home jenkins hostname hosttype ifs t n java home opt jdk jenkins url lang c utf lines ls colors rs di ln mh pi so do bd cd or mi su sg ca tw ow st ex tar tgz arc arj taz lha lzh lzma tlz txz tzo zip z dz gz lrz lz lzo xz zst tzst bz tbz tz deb rpm jar war ear sar rar alz ace zoo cpio rz cab wim swm dwm esd jpg jpeg mjpg mjpeg gif bmp pbm pgm ppm tga xbm xpm tif tiff png svg svgz mng pcx mov mpg mpeg mkv webm webp ogm vob qt nuv wmv asf rm rmvb flc avi fli flv gl dl xcf xwd yuv cgm emf ogv ogx aac au flac mid midi mka mpc ogg ra wav oga opus spx xspf machtype pc linux gnu mailcheck msi srv releases jenkins jenkins msi opterr optind ostype linux gnu path opt jdk bin usr local sbin usr local bin usr sbin usr bin sbin bin pipestatus ppid u h w a debian chroot debian chroot u h w pwd srv releases jenkins releaseline experimental shell bin sh shellopts braceexpand emacs hashall histexpand history interactive comments monitor shlvl term xterm tz utc uid user jenkins war srv releases jenkins jenkins war thanks reference any relevant documentation other materials or issues pull requests that can be used for inspiration native package script for jenkins this repository contains scripts for packaging jenkins war into various platform specific native packages the following platforms are currently supported windows msi msi redhat centos rpm rpm debian ubuntu deb deb opensuse rpm suse pre requisites running the main package script requires a ubuntu linux environment see remark a docker image is available to run the following scripts it s the only supported environment run docker compose run rm packaging bash to get a shell in the official docker image for this repository workaround if your machine doesn t run this mandatory version of ubuntu linux you can use these scripts within a vm or with the provided docker image the use of the docker image is but any other environment is not supported run make setup 
to install most of the necessary tools alternatively you can manually install the following onto a base install of ubuntu make unzip devscripts debhelper rpm expect createrepo ruby net sftp gem install net sftp maven java you will also need a jenkins instance with installed the url of this jenkins can be fed into make via the jenkins url variable if you want to package for windows this jenkins controller needs to have a windows build agent that has currently msbuild and net this build agent is used to build msi packages which can be only built on windows you ll also need a jenkins war file that you are packaging which comes from the the location of this file is set via the war variable a jenkins msi file if you are packaging for windows which also comes from the download and reference jenkins war bash curl output jenkins war or jv download export war srv releases jenkins jenkins war if you want to build for windows curl output jenkins msi export msi srv releases jenkins jenkins msi generating packages run prep sh to perform the preparatory actions of downloading the war and importing the gpg key run make package to build all the native packages at minimum you have to specify the war variable that points to the war file to be packaged and a branding file for licensing and package descriptions you will probably need to pass in the build environment and credentials for example shell make package brand branding jenkins mk buildenv env test mk credential credentials test mk packages will be placed into target directory see the definition of the package goal for how to build individual packages selectively running functional tests the functional tests require python and docker having built the packages as described above run the functional tests with shell m venv venv source venv bin activate pip install r requirements txt molecule test deactivate publishing packages this repository contains scripts for copying packages over to a remote web server to publish them run 
make publish to publish all native packages see the definition of the publish goal for individual package publishment running local tests these tests install packages from a web server where they are published so if you want to run tests prior to publishing them you need to create a temporary web server that you can mess up the default branding environment branding test mk and env test mk are designed to support this scenario to make local testing work you also need to have etc hosts entry that maps test pkg jenkins io hostname to and your computer has to be running ssh that lets you login as you once you verified the above prerequisites open another terminal and run make test local setup this will run a docker container that acts as your throw away package web server when done ctrl c to kill it branding branding directory contains mk files that control the branding of the generated packages it also include text files which are used for large branded text blocks license and descriptions specify the branding file via the brand variable you can create your own branding definition to customize the package generation process see branding readme md for more details in the rest of the packaging script files these branding parameters are referenced via name and get substituted by bin branding py to escape a string normally like value add an additional two symbols as a prefix value environment env directory contains mk files that control the environment into which you publish packages specify the environment file via the buildenv variable you can create your own environment definition to customize the package generation process see env readme md for more details credentials credentials directory contains test mk file that controls the locations of code signing keys their passwords and certificates specify the credentials file via the credential variable for production use you need to create your own credentials file see credentials readme md for more details todo mostly 
note to myself split resource templates to enable customization
| 0
|
224,597
| 7,471,938,912
|
IssuesEvent
|
2018-04-03 10:55:39
|
ballerina-lang/composer
|
https://api.github.com/repos/ballerina-lang/composer
|
closed
|
Errors when opening routing service sample
|
0.94-pre-release Imported Priority/Highest Severity/Major component/Composer
|
Following error was observed.
```
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [json]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [string, TypeCastError]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [string, TypeCastError]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [ballerina.net.http:Response]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [ballerina.net.http:Response]
```
|
1.0
|
Errors when opening routing service sample - Following error was observed.
```
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [json]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [string, TypeCastError]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [string, TypeCastError]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [ballerina.net.http:Response]
Oct 12, 2017 3:34:45 PM org.ballerinalang.composer.service.workspace.rest.datamodel.BLangFileRestService generateJSON
SEVERE: [ballerina.net.http:Response]
```
|
non_defect
|
errors when opening routing service sample following error was observed oct pm org ballerinalang composer service workspace rest datamodel blangfilerestservice generatejson severe oct pm org ballerinalang composer service workspace rest datamodel blangfilerestservice generatejson severe oct pm org ballerinalang composer service workspace rest datamodel blangfilerestservice generatejson severe oct pm org ballerinalang composer service workspace rest datamodel blangfilerestservice generatejson severe oct pm org ballerinalang composer service workspace rest datamodel blangfilerestservice generatejson severe
| 0
|
77,508
| 27,027,956,226
|
IssuesEvent
|
2023-02-11 21:05:43
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
MAINT: optimize.shgo: returns incorrect solution to Rosenbrock problem
|
defect scipy.optimize
|
<!--
Thank you for taking the time to file a bug report.
Please fill in the fields below, deleting the sections that
don't apply to your issue. You can view the final output
by clicking the preview button above.
Note: This is a comment, and won't appear in the output.
-->
My issue is about optimize.shgo. It gives an unexpected TypeError.
When running the following
from scipy.optimize import rosen, rosen_der, rosen_hess
bounds = [(0,1.6), (0, 1.6), (0, 1.4), (0, 1.4), (0, 1.4)]
result = scipy.optimize.shgo(rosen, bounds, options={'jac':rosen_der,'hess':rosen_hess})
I get
TypeError: _minimize_slsqp() got multiple values for argument 'jac'
I believe that `jac` is correctly specified here (see [this documentation][1]). Did I make a mistake or is there a bug here?
[1]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo
#### Scipy/Numpy/Python version information:
```
1.6.0 1.20.0 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0)
```
|
1.0
|
MAINT: optimize.shgo: returns incorrect solution to Rosenbrock problem - <!--
Thank you for taking the time to file a bug report.
Please fill in the fields below, deleting the sections that
don't apply to your issue. You can view the final output
by clicking the preview button above.
Note: This is a comment, and won't appear in the output.
-->
My issue is about optimize.shgo. It gives an unexpected TypeError.
When running the following
from scipy.optimize import rosen, rosen_der, rosen_hess
bounds = [(0,1.6), (0, 1.6), (0, 1.4), (0, 1.4), (0, 1.4)]
result = scipy.optimize.shgo(rosen, bounds, options={'jac':rosen_der,'hess':rosen_hess})
I get
TypeError: _minimize_slsqp() got multiple values for argument 'jac'
I believe that `jac` is correctly specified here (see [this documentation][1]). Did I make a mistake or is there a bug here?
[1]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo
#### Scipy/Numpy/Python version information:
```
1.6.0 1.20.0 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0)
```
|
defect
|
maint optimize shgo returns incorrect solution to rosenbrock problem thank you for taking the time to file a bug report please fill in the fields below deleting the sections that don t apply to your issue you can view the final output by clicking the preview button above note this is a comment and won t appear in the output my issue is about optimize shgo it gives an unexpected typeerror when running the following from scipy optimize import rosen rosen der rosen hess bounds result scipy optimize shgo rosen bounds options jac rosen der hess rosen hess i get typeerror minimize slsqp got multiple values for argument jac i believe that jac is correctly specified here see did i make a mistake or is there a bug here scipy numpy python version information sys version info major minor micro releaselevel final serial
| 1
|
34,115
| 7,346,691,360
|
IssuesEvent
|
2018-03-07 21:36:20
|
prettydiff/prettydiff
|
https://api.github.com/repos/prettydiff/prettydiff
|
closed
|
Beautification erase code when it meets several Twig `if` statements in the same line
|
Defect Not started Parsing
|
# Description
Like title says, beautification erase code in lines that contains several Twig `if` statements.
Seems like, from second `endif` tag beautification deletes everything after it, including the second `endif` tag.
# Input Before Beautification
This is what the code looked like before:
```
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
```
# Expected Output
The beautified code should have looked like this:
```
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
```
# Actual Output
The beautified code actually looked like this:
```
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow">foobar</a>
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow">foobar</a>
```
Thank you!
Pablo.
|
1.0
|
Beautification erase code when it meets several Twig `if` statements in the same line - # Description
Like title says, beautification erase code in lines that contains several Twig `if` statements.
Seems like, from second `endif` tag beautification deletes everything after it, including the second `endif` tag.
# Input Before Beautification
This is what the code looked like before:
```
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
```
# Expected Output
The beautified code should have looked like this:
```
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>
```
# Actual Output
The beautified code actually looked like this:
```
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow">foobar</a>
<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow">foobar</a>
```
Thank you!
Pablo.
|
defect
|
beautification erase code when it meets several twig if statements in the same line description like title says beautification erase code in lines that contains several twig if statements seems like from second endif tag beautification deletes everything after it including the second endif tag input before beautification this is what the code looked like before foobar foobar expected output the beautified code should have looked like this foobar foobar actual output the beautified code actually looked like this foobar foobar thank you pablo
| 1
|
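The erase-everything-after-the-second-`endif` behavior reported in the record above is consistent with a greedy match running from the first `{% if %}` to the *last* `{% endif %}`. This sketch illustrates that failure mode only (it is not prettydiff's actual parser, which is token-based) by contrasting greedy and lazy regex matching on the reporter's input:

```python
import re

# Illustration of the suspected failure mode, not prettydiff's real code:
# a greedy match swallows the markup between consecutive if-blocks.
line = '<a {% if foo %} target="_blank" {% endif %} {% if bar %} rel="nofollow" {% endif %}>foobar</a>'

greedy = re.sub(r"\{% if .* endif %\}", "[BLOCK]", line)   # spans to the LAST endif
lazy = re.sub(r"\{% if .*? endif %\}", "[BLOCK]", line)    # stops at each nearest endif

print(greedy)  # <a [BLOCK]>foobar</a>
print(lazy)    # <a [BLOCK] [BLOCK]>foobar</a>
```

The greedy variant collapses both if-blocks (and the attribute between them) into a single span, matching the "deletes everything from the second `endif`" symptom; the lazy variant keeps each block separate.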
53,140
| 13,261,006,577
|
IssuesEvent
|
2020-08-20 19:12:50
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
tableio - doxygen errors, missing class members (Trac #799)
|
Migrated from Trac combo core defect
|
```text
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:19: warning: no matching class member found for
void convert::I3Trigger::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:40: warning: no matching class member found for
void convert::I3Trigger::FillSingleRow(const booked_type &trigger, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:56: warning: no matching class member found for
void convert::I3DOMLaunch::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:80: warning: no matching class member found for
void convert::I3DOMLaunch::FillSingleRow(const booked_type &dl, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:116: warning: no matching class member found for
void convert::I3RecoHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:122: warning: no matching class member found for
void convert::I3RecoHit::FillSingleRow(const booked_type &hit, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:129: warning: no matching class member found for
void convert::I3MCHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:141: warning: no matching class member found for
void convert::I3MCHit::FillSingleRow(const booked_type &hit, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:150: warning: no matching class member found for
void convert::I3RecoPulse::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:157: warning: no matching class member found for
void convert::I3RecoPulse::FillSingleRow(const booked_type &pulse, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:165: warning: no matching class member found for
void convert::double_pair::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:171: warning: no matching class member found for
void convert::double_pair::FillSingleRow(const booked_type &item, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:178: warning: no matching class member found for
void convert::I3FlasherInfo::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:194: warning: no matching class member found for
void convert::I3FlasherInfo::FillSingleRow(const booked_type &flasherinfo, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:215: warning: no matching class member found for
void convert::OMKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:221: warning: no matching class member found for
void convert::OMKey::FillSingleRow(const booked_type &key, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:228: warning: no matching class member found for
void convert::TankKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:234: warning: no matching class member found for
void convert::TankKey::FillSingleRow(const booked_type &key, I3TableRowPtr row)
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/799">https://code.icecube.wisc.edu/projects/icecube/ticket/799</a>, reported by nega</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-19T20:55:50",
"_ts": "1416430550952394",
"description": "{{{\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:19: warning: no matching class member found for \n void convert::I3Trigger::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:40: warning: no matching class member found for \n void convert::I3Trigger::FillSingleRow(const booked_type &trigger, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:56: warning: no matching class member found for \n void convert::I3DOMLaunch::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:80: warning: no matching class member found for \n void convert::I3DOMLaunch::FillSingleRow(const booked_type &dl, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:116: warning: no matching class member found for \n void convert::I3RecoHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:122: warning: no matching class member found for \n void convert::I3RecoHit::FillSingleRow(const booked_type &hit, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:129: warning: no matching class member found for \n void convert::I3MCHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:141: warning: no matching class member found for \n void convert::I3MCHit::FillSingleRow(const booked_type &hit, 
I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:150: warning: no matching class member found for \n void convert::I3RecoPulse::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:157: warning: no matching class member found for \n void convert::I3RecoPulse::FillSingleRow(const booked_type &pulse, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:165: warning: no matching class member found for \n void convert::double_pair::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:171: warning: no matching class member found for \n void convert::double_pair::FillSingleRow(const booked_type &item, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:178: warning: no matching class member found for \n void convert::I3FlasherInfo::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:194: warning: no matching class member found for \n void convert::I3FlasherInfo::FillSingleRow(const booked_type &flasherinfo, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:215: warning: no matching class member found for \n void convert::OMKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:221: warning: no matching class member found for \n void convert::OMKey::FillSingleRow(const 
booked_type &key, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:228: warning: no matching class member found for \n void convert::TankKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:234: warning: no matching class member found for \n void convert::TankKey::FillSingleRow(const booked_type &key, I3TableRowPtr row)\n\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2014-11-06T19:55:31",
"component": "combo core",
"summary": "tableio - doxygen errors, missing class members",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
tableio - doxygen errors, missing class members (Trac #799) -
```text
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:19: warning: no matching class member found for
void convert::I3Trigger::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:40: warning: no matching class member found for
void convert::I3Trigger::FillSingleRow(const booked_type &trigger, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:56: warning: no matching class member found for
void convert::I3DOMLaunch::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:80: warning: no matching class member found for
void convert::I3DOMLaunch::FillSingleRow(const booked_type &dl, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:116: warning: no matching class member found for
void convert::I3RecoHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:122: warning: no matching class member found for
void convert::I3RecoHit::FillSingleRow(const booked_type &hit, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:129: warning: no matching class member found for
void convert::I3MCHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:141: warning: no matching class member found for
void convert::I3MCHit::FillSingleRow(const booked_type &hit, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:150: warning: no matching class member found for
void convert::I3RecoPulse::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:157: warning: no matching class member found for
void convert::I3RecoPulse::FillSingleRow(const booked_type &pulse, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:165: warning: no matching class member found for
void convert::double_pair::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:171: warning: no matching class member found for
void convert::double_pair::FillSingleRow(const booked_type &item, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:178: warning: no matching class member found for
void convert::I3FlasherInfo::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:194: warning: no matching class member found for
void convert::I3FlasherInfo::FillSingleRow(const booked_type &flasherinfo, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:215: warning: no matching class member found for
void convert::OMKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:221: warning: no matching class member found for
void convert::OMKey::FillSingleRow(const booked_type &key, I3TableRowPtr row)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:228: warning: no matching class member found for
void convert::TankKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)
/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:234: warning: no matching class member found for
void convert::TankKey::FillSingleRow(const booked_type &key, I3TableRowPtr row)
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/799">https://code.icecube.wisc.edu/projects/icecube/ticket/799</a>, reported by nega</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-19T20:55:50",
"_ts": "1416430550952394",
"description": "{{{\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:19: warning: no matching class member found for \n void convert::I3Trigger::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:40: warning: no matching class member found for \n void convert::I3Trigger::FillSingleRow(const booked_type &trigger, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:56: warning: no matching class member found for \n void convert::I3DOMLaunch::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:80: warning: no matching class member found for \n void convert::I3DOMLaunch::FillSingleRow(const booked_type &dl, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:116: warning: no matching class member found for \n void convert::I3RecoHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:122: warning: no matching class member found for \n void convert::I3RecoHit::FillSingleRow(const booked_type &hit, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:129: warning: no matching class member found for \n void convert::I3MCHit::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:141: warning: no matching class member found for \n void convert::I3MCHit::FillSingleRow(const booked_type &hit, 
I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:150: warning: no matching class member found for \n void convert::I3RecoPulse::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:157: warning: no matching class member found for \n void convert::I3RecoPulse::FillSingleRow(const booked_type &pulse, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:165: warning: no matching class member found for \n void convert::double_pair::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:171: warning: no matching class member found for \n void convert::double_pair::FillSingleRow(const booked_type &item, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:178: warning: no matching class member found for \n void convert::I3FlasherInfo::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:194: warning: no matching class member found for \n void convert::I3FlasherInfo::FillSingleRow(const booked_type &flasherinfo, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:215: warning: no matching class member found for \n void convert::OMKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:221: warning: no matching class member found for \n void convert::OMKey::FillSingleRow(const 
booked_type &key, I3TableRowPtr row)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:228: warning: no matching class member found for \n void convert::TankKey::AddFields(I3TableRowDescriptionPtr desc, const booked_type &)\n\n/data/i3home/nega/i3/offline-software/src/tableio/private/tableio/converter/dataclasses_container_convert.cxx:234: warning: no matching class member found for \n void convert::TankKey::FillSingleRow(const booked_type &key, I3TableRowPtr row)\n\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"time": "2014-11-06T19:55:31",
"component": "combo core",
"summary": "tableio - doxygen errors, missing class members",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
tableio doxygen errors missing class members trac text data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert fillsinglerow const booked type trigger row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert fillsinglerow const booked type dl row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert fillsinglerow const booked type hit row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert fillsinglerow const booked type hit row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert fillsinglerow const booked type pulse row data nega offline 
software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert double pair addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert double pair fillsinglerow const booked type item row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert fillsinglerow const booked type flasherinfo row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert omkey addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert omkey fillsinglerow const booked type key row data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert tankkey addfields desc const booked type data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for void convert tankkey fillsinglerow const booked type key row migrated from json status closed changetime ts description n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert 
fillsinglerow const booked type trigger row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert fillsinglerow const booked type dl row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert fillsinglerow const booked type hit row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert fillsinglerow const booked type hit row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert fillsinglerow const booked type pulse row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert double pair addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert double 
pair fillsinglerow const booked type item row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert fillsinglerow const booked type flasherinfo row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert omkey addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert omkey fillsinglerow const booked type key row n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert tankkey addfields desc const booked type n n data nega offline software src tableio private tableio converter dataclasses container convert cxx warning no matching class member found for n void convert tankkey fillsinglerow const booked type key row n n reporter nega cc resolution fixed time component combo core summary tableio doxygen errors missing class members priority normal keywords milestone owner type defect
| 1
|
86,225
| 15,755,442,409
|
IssuesEvent
|
2021-03-31 01:47:19
|
ervin210/LIVE-ROOM-1485590319891
|
https://api.github.com/repos/ervin210/LIVE-ROOM-1485590319891
|
opened
|
CVE-2021-21298 (Medium) detected in runtime-0.20.8.tgz
|
security vulnerability
|
## CVE-2021-21298 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>runtime-0.20.8.tgz</b></p></summary>
<p>@node-red/runtime ====================</p>
<p>Library home page: <a href="https://registry.npmjs.org/@node-red/runtime/-/runtime-0.20.8.tgz">https://registry.npmjs.org/@node-red/runtime/-/runtime-0.20.8.tgz</a></p>
<p>Path to dependency file: LIVE-ROOM-1485590319891/package.json</p>
<p>Path to vulnerable library: LIVE-ROOM-1485590319891/node_modules/@node-red/runtime/package.json</p>
<p>
Dependency Hierarchy:
- node-red-0.20.8.tgz (Root Library)
- :x: **runtime-0.20.8.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Node-RED is a low-code programming environment for event-driven applications built using Node.js. Node-RED 1.2.7 and earlier has a vulnerability that allows arbitrary path traversal via the Projects API. If the Projects feature is enabled, a user with `projects.read` permission is able to access any file via the Projects API. The issue has been patched in Node-RED 1.2.8. The vulnerability applies only to the Projects feature, which is not enabled by default in Node-RED. The primary workaround is to not give untrusted users read access to the Node-RED editor.
<p>Publish Date: 2021-02-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21298>CVE-2021-21298</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/node-red/node-red/security/advisories/GHSA-m33v-338h-4v9f">https://github.com/node-red/node-red/security/advisories/GHSA-m33v-338h-4v9f</a></p>
<p>Release Date: 2021-02-26</p>
<p>Fix Resolution: 1.2.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-21298 (Medium) detected in runtime-0.20.8.tgz - ## CVE-2021-21298 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>runtime-0.20.8.tgz</b></p></summary>
<p>@node-red/runtime ====================</p>
<p>Library home page: <a href="https://registry.npmjs.org/@node-red/runtime/-/runtime-0.20.8.tgz">https://registry.npmjs.org/@node-red/runtime/-/runtime-0.20.8.tgz</a></p>
<p>Path to dependency file: LIVE-ROOM-1485590319891/package.json</p>
<p>Path to vulnerable library: LIVE-ROOM-1485590319891/node_modules/@node-red/runtime/package.json</p>
<p>
Dependency Hierarchy:
- node-red-0.20.8.tgz (Root Library)
- :x: **runtime-0.20.8.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Node-Red is a low-code programming for event-driven applications built using nodejs. Node-RED 1.2.7 and earlier has a vulnerability which allows arbitrary path traversal via the Projects API. If the Projects feature is enabled, a user with `projects.read` permission is able to access any file via the Projects API. The issue has been patched in Node-RED 1.2.8. The vulnerability applies only to the Projects feature, which is not enabled by default in Node-RED. The primary workaround is to not give untrusted users read access to the Node-RED editor.
<p>Publish Date: 2021-02-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21298>CVE-2021-21298</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/node-red/node-red/security/advisories/GHSA-m33v-338h-4v9f">https://github.com/node-red/node-red/security/advisories/GHSA-m33v-338h-4v9f</a></p>
<p>Release Date: 2021-02-26</p>
<p>Fix Resolution: 1.2.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in runtime tgz cve medium severity vulnerability vulnerable library runtime tgz node red runtime library home page a href path to dependency file live room package json path to vulnerable library live room node modules node red runtime package json dependency hierarchy node red tgz root library x runtime tgz vulnerable library found in base branch master vulnerability details node red is a low code programming for event driven applications built using nodejs node red and earlier has a vulnerability which allows arbitrary path traversal via the projects api if the projects feature is enabled a user with projects read permission is able to access any file via the projects api the issue has been patched in node red the vulnerability applies only to the projects feature which is not enabled by default in node red the primary workaround is not give untrusted users read access to the node red editor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
220,030
| 24,548,548,878
|
IssuesEvent
|
2022-10-12 10:42:11
|
Vonage/vonage-python-code-snippets
|
https://api.github.com/repos/Vonage/vonage-python-code-snippets
|
closed
|
PyYAML-5.1.tar.gz: 3 vulnerabilities (highest severity is: 9.8) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p></summary>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-1747](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1747) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | PyYAML-5.1.tar.gz | Direct | pyyaml - 5.3.1 | ✅ |
| [CVE-2020-14343](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | PyYAML-5.1.tar.gz | Direct | PyYAML - 5.4 | ✅ |
| [CVE-2019-20477](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20477) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | PyYAML-5.1.tar.gz | Direct | 5.2 | ✅ |
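All three remediations in the table are plain version upgrades; in a pip-based project the fix reduces to a single constraint in requirements.txt (5.4 also covers CVE-2020-14343, the incomplete-fix follow-up to CVE-2020-1747):

```
PyYAML>=5.4
```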
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-1747</summary>
### Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was discovered in the PyYAML library in versions before 5.3.1, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor.
<p>Publish Date: 2020-03-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1747>CVE-2020-1747</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6757-jp84-gxfx">https://github.com/advisories/GHSA-6757-jp84-gxfx</a></p>
<p>Release Date: 2020-03-24</p>
<p>Fix Resolution: pyyaml - 5.3.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-14343</summary>
### Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747.
<p>Publish Date: 2021-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343>CVE-2020-14343</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343</a></p>
<p>Release Date: 2021-02-09</p>
<p>Fix Resolution: PyYAML - 5.4</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-20477</summary>
### Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
PyYAML 5.1 through 5.1.2 has insufficient restrictions on the load and load_all functions because of a class deserialization issue, e.g., Popen is a class in the subprocess module. NOTE: this issue exists because of an incomplete fix for CVE-2017-18342.
<p>Publish Date: 2020-02-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20477>CVE-2019-20477</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20477">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20477</a></p>
<p>Release Date: 2020-02-19</p>
<p>Fix Resolution: 5.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
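All three PyYAML advisories reduce to the same rule: never feed untrusted input to a loader that can construct arbitrary Python objects. A minimal sketch of the safe pattern (assumes PyYAML >= 5.1 is installed; `parse_config` is an illustrative name, not part of PyYAML):

```python
import yaml

def parse_config(text: str):
    """Parse untrusted YAML with the restricted SafeLoader.

    yaml.safe_load only constructs plain data types (dict, list, str,
    numbers, ...), so a payload tagged !!python/object/new is rejected
    with a ConstructorError instead of being executed.
    """
    return yaml.safe_load(text)
```

`yaml.load(text, Loader=yaml.SafeLoader)` is equivalent; the dangerous spellings on the affected versions are `full_load`, `unsafe_load`, and `load` with `FullLoader`/`UnsafeLoader`.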
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
PyYAML-5.1.tar.gz: 3 vulnerabilities (highest severity is: 9.8) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p></summary>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-1747](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1747) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | PyYAML-5.1.tar.gz | Direct | pyyaml - 5.3.1 | ✅ |
| [CVE-2020-14343](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | PyYAML-5.1.tar.gz | Direct | PyYAML - 5.4 | ✅ |
| [CVE-2019-20477](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20477) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | PyYAML-5.1.tar.gz | Direct | 5.2 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-1747</summary>
### Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was discovered in the PyYAML library in versions before 5.3.1, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor.
<p>Publish Date: 2020-03-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1747>CVE-2020-1747</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6757-jp84-gxfx">https://github.com/advisories/GHSA-6757-jp84-gxfx</a></p>
<p>Release Date: 2020-03-24</p>
<p>Fix Resolution: pyyaml - 5.3.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-14343</summary>
### Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747.
<p>Publish Date: 2021-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343>CVE-2020-14343</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343</a></p>
<p>Release Date: 2021-02-09</p>
<p>Fix Resolution: PyYAML - 5.4</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-20477</summary>
### Vulnerable Library - <b>PyYAML-5.1.tar.gz</b></p>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz">https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Vonage/vonage-python-code-snippets/commit/78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c">78eeaa256d0c11e23f05f72c3b2ab90bcbb6083c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
PyYAML 5.1 through 5.1.2 has insufficient restrictions on the load and load_all functions because of a class deserialization issue, e.g., Popen is a class in the subprocess module. NOTE: this issue exists because of an incomplete fix for CVE-2017-18342.
<p>Publish Date: 2020-02-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20477>CVE-2019-20477</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20477">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20477</a></p>
<p>Release Date: 2020-02-19</p>
<p>Fix Resolution: 5.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_defect
|
pyyaml tar gz vulnerabilities highest severity is autoclosed vulnerable library pyyaml tar gz yaml parser and emitter for python library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high pyyaml tar gz direct pyyaml high pyyaml tar gz direct pyyaml high pyyaml tar gz direct details cve vulnerable library pyyaml tar gz yaml parser and emitter for python library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt dependency hierarchy x pyyaml tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was discovered in the pyyaml library in versions before where it is susceptible to arbitrary code execution when it processes untrusted yaml files through the full load method or with the fullloader loader applications that use the library to process untrusted input may be vulnerable to this flaw an attacker could use this flaw to execute arbitrary code on the system by abusing the python object new constructor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pyyaml rescue worker helmet automatic remediation is available for this issue cve vulnerable library pyyaml tar gz yaml parser and emitter for python library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt dependency hierarchy x pyyaml tar gz vulnerable library found in head commit a href found 
in base branch master vulnerability details a vulnerability was discovered in the pyyaml library in versions before where it is susceptible to arbitrary code execution when it processes untrusted yaml files through the full load method or with the fullloader loader applications that use the library to process untrusted input may be vulnerable to this flaw this flaw allows an attacker to execute arbitrary code on the system by abusing the python object new constructor this flaw is due to an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pyyaml rescue worker helmet automatic remediation is available for this issue cve vulnerable library pyyaml tar gz yaml parser and emitter for python library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt dependency hierarchy x pyyaml tar gz vulnerable library found in head commit a href found in base branch master vulnerability details pyyaml through has insufficient restrictions on the load and load all functions because of a class deserialization issue e g popen is a class in the subprocess module note this issue exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic 
remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
| 0
|
21,985
| 3,587,538,262
|
IssuesEvent
|
2016-01-30 11:14:48
|
ariya/phantomjs
|
https://api.github.com/repos/ariya/phantomjs
|
closed
|
phantomjs crashed on Lion 10.7.5, .dmp attached
|
old.Priority-Medium old.Status-New old.Type-Defect
|
_**[hora...@gmail.com](http://code.google.com/u/106972801875058279253/) commented:**_
> <b>Which version of PhantomJS are you using? Tip: run 'phantomjs --version'.</b>
> <b>What steps will reproduce the problem?</b>
1. /usr/local/bin/phantomjs iphonereserve.coffee "firstname" "lastname" "name@mail.com" "X123456(7)" "MD297ZP/A" "R485"
> 2. phantomJS has crashed. Please file a bug report at https://code.google.com/p/phantomjs/issues/entry and attach the crash dump file: /tmp/603D7C51-6E9F-4C84-8E82-BA8EE05046B2.dmp
> <b>3.</b>
> What is the expected output? What do you see instead? crashed
>
> Which operating system are you using? OSX Lion 10.7.5
>
> Did you use binary PhantomJS or did you compile it from source? brew install phantomjs
>
> <b>Please provide any additional information below.</b>
> $ /usr/local/bin/phantomjs iphonereserve.coffee "firstname" "lastname" "name@mail.com" "X123456(7)" "MD297ZP/A" "R485"
>
> - iphonereserve.coffee script attached
> - dmp file attached
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #857](http://code.google.com/p/phantomjs/issues/detail?id=857).
:star2: **2** people had starred this issue at the time of migration.
|
1.0
|
phantomjs crashed on Lion 10.7.5, .dmp attached - _**[hora...@gmail.com](http://code.google.com/u/106972801875058279253/) commented:**_
> <b>Which version of PhantomJS are you using? Tip: run 'phantomjs --version'.</b>
> <b>What steps will reproduce the problem?</b>
1. /usr/local/bin/phantomjs iphonereserve.coffee "firstname" "lastname" "name@mail.com" "X123456(7)" "MD297ZP/A" "R485"
> 2. phantomJS has crashed. Please file a bug report at https://code.google.com/p/phantomjs/issues/entry and attach the crash dump file: /tmp/603D7C51-6E9F-4C84-8E82-BA8EE05046B2.dmp
> <b>3.</b>
> What is the expected output? What do you see instead? crashed
>
> Which operating system are you using? OSX Lion 10.7.5
>
> Did you use binary PhantomJS or did you compile it from source? brew install phantomjs
>
> <b>Please provide any additional information below.</b>
> $ /usr/local/bin/phantomjs iphonereserve.coffee "firstname" "lastname" "name@mail.com" "X123456(7)" "MD297ZP/A" "R485"
>
> - iphonereserve.coffee script attached
> - dmp file attached
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #857](http://code.google.com/p/phantomjs/issues/detail?id=857).
:star2: **2** people had starred this issue at the time of migration.
|
defect
|
phantomjs crashed o lion dmp attached commented which version of phantomjs are you using tip run phantomjs version what steps will reproduce the problem usr local bin phantomjs iphonereserve coffee quot firstname quot quot lastname quot quot name mail com quot quot quot quot a quot quot quot phantomjs has crashed please file a bug report at and attach the crash dump file tmp dmp what is the expected output what do you see instead crashed which operating system are you using osx lion did you use binary phantomjs or did you compile it from source brew install phantomjs please provide any additional information below usr local bin phantomjs iphonereserve coffee quot firstname quot quot lastname quot quot name mail com quot quot quot quot a quot quot quot iphonereserve coffee script attached dmp file attached disclaimer this issue was migrated on from the project s former issue tracker on google code nbsp people had starred this issue at the time of migration
| 1
|
295,392
| 25,472,605,286
|
IssuesEvent
|
2022-11-25 11:31:36
|
peviitor-ro/ui-js
|
https://api.github.com/repos/peviitor-ro/ui-js
|
closed
|
The briefcase icon width is 20px
|
bug TestQuality
|
## Precondition
URL: https://beta.peviitor.ro/
Device: Samsung Galaxy S21 Ultra
Browser: Chrome
Platform: Android 12
## Steps to Reproduce:
### Step 1 <span style="color:#58b880"> **[Pass]** </span>
Open URL in browser
#### Expected Result
Website is loaded without any issues
### Step 2 <span style="color:#ff5538"> **[Fail]** </span>
Inspect width for briefcase icon on "Alătură-te" button
#### Expected Result
Width should be 16.67px
|
1.0
|
The briefcase icon width is 20px - ## Precondition
URL: https://beta.peviitor.ro/
Device: Samsung Galaxy S21 Ultra
Browser: Chrome
Platform: Android 12
## Steps to Reproduce:
### Step 1 <span style="color:#58b880"> **[Pass]** </span>
Open URL in browser
#### Expected Result
Website is loaded without any issues
### Step 2 <span style="color:#ff5538"> **[Fail]** </span>
Inspect width for briefcase icon on "Alătură-te" button
#### Expected Result
Width should be 16.67px
|
non_defect
|
the briefcase icon width is precondition url device samsung galaxy ultra browser chrome platform android steps to reproduce step open url in browser expected result website is loaded without any issues step inspect witdh for briefcase icon on quot alătură te quot button expected result witdh should be
| 0
|
291,929
| 25,185,936,028
|
IssuesEvent
|
2022-11-11 17:58:11
|
microsoft/playwright
|
https://api.github.com/repos/microsoft/playwright
|
closed
|
[Question] Specific test retries
|
feature-test-runner v1.28
|
Hi!
Is there any way I could set the maximum number of retries to a specific test function?
My case is the following: in my test, I create an object via a POST request and then I want to verify that the response body of a GET request contains some info about it. The matter is that our backend will return no info within 2-4 minutes after the object is created, so I need to send one request after another, retrying this test, to ultimately reach the point when the response contains the info I need. At the same time, I don't want my other tests / test files to be retried X times as well. If they fail, they fail.
Is it possible to make such an adjustment to one test only? Thank you very much!
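Beyond runner-level retries, the eventual-consistency part of this scenario (poll a GET until the backend has caught up) can be handled inside the one test that needs it, leaving the global retry count untouched. A generic sketch in Python (helper name and timings are illustrative, not a Playwright API):

```python
import time

def poll_until(check, timeout=300.0, interval=10.0):
    """Call `check` until it returns a truthy value or `timeout` elapses.

    Keeps the retrying local to one assertion, so the rest of the suite
    can run with retries disabled and still fail fast.
    """
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = check()
        if last:
            return last
        time.sleep(interval)
    raise TimeoutError(f"condition not met after {timeout}s (last={last!r})")
```

In the test, `check` would issue the GET request and return the response body once it contains the expected object info.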
|
1.0
|
[Question] Specific test retries - Hi!
Is there any way I could set the maximum number of retries to a specific test function?
My case is the following: in my test, I create an object via a POST request and then I want to verify that the response body of a GET request contains some info about it. The matter is that our backend will return no info within 2-4 minutes after the object is created, so I need to send one request after another, retrying this test, to ultimately reach the point when the response contains the info I need. At the same time, I don't want my other tests / test files to be retried X times as well. If they fail, they fail.
Is it possible to make such an adjustment to one test only? Thank you very much!
|
non_defect
|
specific test retries hi is there any way i could set the maximum number of retries to a specific test function my case is the following in my test i create an object via a post request and then i want to verify that the response body of a get request contains some info about it the matter is that our backend will return no info within minutes after the object is created so i need to send one request after another retrying this test to ultimately reach the point when the response contains the info i need at the same time i don t want my other tests test files to be retried x times as well if they fail they fail is it possible to make such an adjustment to one test only thank you very much
| 0
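The retry-until-ready pattern described in the record above (re-issue a GET until its body contains the created object's info, bounded by a deadline, without raising the runner-wide retry count) can be sketched in plain Python. Note this is an illustrative sketch, not a Playwright API: `check` is a hypothetical stand-in for the real GET request, and the timeout and interval values are assumptions.

```python
import time

def poll_until(check, timeout_s=240.0, interval_s=5.0,
               sleep=time.sleep, clock=time.monotonic):
    """Call `check` repeatedly until it returns a truthy value or the
    deadline passes.

    Returns the first truthy result; raises TimeoutError otherwise.
    Keeping the retry loop inside a single test avoids bumping the
    test runner's global retry setting for unrelated tests.
    """
    deadline = clock() + timeout_s
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError(f"condition not met within {timeout_s}s")
        sleep(interval_s)
```

Inside the test, `check` would perform the GET and return the response only when it contains the expected info. Newer Playwright releases also expose per-group retry configuration (e.g. `test.describe.configure`), but the exact API should be verified against the installed version.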
|
94,905
| 11,940,352,116
|
IssuesEvent
|
2020-04-02 16:34:30
|
carbon-design-system/ibm-dotcom-library
|
https://api.github.com/repos/carbon-design-system/ibm-dotcom-library
|
closed
|
Profile developers are requesting adding back the "IBMid" to new IBM.com L0 mastheads (post-login)
|
Airtable Done Feature request design design: research
|
<!-- replace _{{...}}_ with your own words -->
### The problem
While redesigning the MyIBM Profile, the developers from the Profile team have noticed the new IBM.com L0/L1 no longer retain the IBMid navigation link on far right. This IBMid only appears once a user has logged in and inside MyIBM environment.
### The solution
Profile developers want to add IBMid to new IBM.com L0/L1 mastheads for post-login experiences (i.e. MyIBM environment)
### Additional information
See attachments of v18 version VS new Carbon version
<img width="1314" alt="Screen Shot 2020-03-12 at 5 55 34 PM" src="https://user-images.githubusercontent.com/33553241/76570176-23148300-648b-11ea-8194-8a9d619ef36b.png">
<img width="1228" alt="v18-masthead-l1" src="https://user-images.githubusercontent.com/33553241/76570174-227bec80-648b-11ea-9717-332073e70b0c.png">
|
2.0
|
Profile developers are requesting adding back the "IBMid" to new IBM.com L0 mastheads (post-login) - <!-- replace _{{...}}_ with your own words -->
### The problem
While redesigning the MyIBM Profile, the developers from the Profile team have noticed the new IBM.com L0/L1 no longer retain the IBMid navigation link on far right. This IBMid only appears once a user has logged in and inside MyIBM environment.
### The solution
Profile developers want to add IBMid to new IBM.com L0/L1 mastheads for post-login experiences (i.e. MyIBM environment)
### Additional information
See attachments of v18 version VS new Carbon version
<img width="1314" alt="Screen Shot 2020-03-12 at 5 55 34 PM" src="https://user-images.githubusercontent.com/33553241/76570176-23148300-648b-11ea-8194-8a9d619ef36b.png">
<img width="1228" alt="v18-masthead-l1" src="https://user-images.githubusercontent.com/33553241/76570174-227bec80-648b-11ea-9717-332073e70b0c.png">
|
non_defect
|
profile developers are requesting adding back the ibmid to new ibm com mastheads post login the problem while redesigning the myibm profile the developers from the profile team have noticed the new ibm com no longer retain the ibmid navigation link on far right this ibmid only appears once a user has logged in and inside myibm environment the solution profile developers want to add ibmid to new ibm com mastheads for post login experiences i e myibm environment additional information see attachments of version vs new carbon version img width alt screen shot at pm src img width alt masthead src
| 0
|
421,662
| 28,351,973,005
|
IssuesEvent
|
2023-04-12 03:33:55
|
oforiwaasam/converterpro
|
https://api.github.com/repos/oforiwaasam/converterpro
|
opened
|
Set up a static documentation website
|
documentation backlog
|
As demonstrated in class, setup a static website for your project. You are allowed to use any framework you like, but I recommend sticking to something tried-and-true like Sphinx. Your project’s website should have the following elements:
- How to install your library
- How to use your library
- Autodocumentation generated from your library
|
1.0
|
Set up a static documentation website - As demonstrated in class, setup a static website for your project. You are allowed to use any framework you like, but I recommend sticking to something tried-and-true like Sphinx. Your project’s website should have the following elements:
- How to install your library
- How to use your library
- Autodocumentation generated from your library
|
non_defect
|
set up a static documentation website as demonstrated in class setup a static website for your project you are allowed to use any framework you like but i recommend sticking to something tried and true like sphinx your project’s website should have the following elements how to install your library how to use your library autodocumentation generated from your library
| 0
|
7,943
| 2,611,067,995
|
IssuesEvent
|
2015-02-27 00:31:48
|
alistairreilly/andors-trail
|
https://api.github.com/repos/alistairreilly/andors-trail
|
closed
|
Physical keyboards.
|
auto-migrated Type-Defect
|
```
An option to use physical keyboard as controls. (Can be bound to any key).
Some devices don't all match up eg: HTC Vision WASD aren't as comfortable as
WASZ.
Would be used primarily as movement and quick use.
```
Original issue reported on code.google.com by `erik.the.lion@gmail.com` on 12 Oct 2011 at 11:34
|
1.0
|
Physical keyboards. - ```
An option to use physical keyboard as controls. (Can be bound to any key).
Some devices don't all match up eg: HTC Vision WASD aren't as comfortable as
WASZ.
Would be used primarily as movement and quick use.
```
Original issue reported on code.google.com by `erik.the.lion@gmail.com` on 12 Oct 2011 at 11:34
|
defect
|
physical keyboards an option to use physical keyboard as controls can be bound to any key some devices don t all match up eg htc vision wasd aren t as comfortable as wasz would be used primarily as movement and quick use original issue reported on code google com by erik the lion gmail com on oct at
| 1
|
11,155
| 2,641,231,470
|
IssuesEvent
|
2015-03-11 16:41:04
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
Interactive presentation should provide link back to the main page
|
Milestone-3 Priority-Medium Slides Type-Defect
|
Original [issue 16](https://code.google.com/p/html5rocks/issues/detail?id=16) created by chrsmith on 2010-06-22T23:51:00.000Z:
...either at the end, or below the slides all the time.
|
1.0
|
Interactive presentation should provide link back to the main page - Original [issue 16](https://code.google.com/p/html5rocks/issues/detail?id=16) created by chrsmith on 2010-06-22T23:51:00.000Z:
...either at the end, or below the slides all the time.
|
defect
|
interactive presentation should provide link back to the main page original created by chrsmith on either at the end or below the slides all the time
| 1
|
17,385
| 9,744,183,696
|
IssuesEvent
|
2019-06-03 05:59:05
|
KazDragon/terminalpp
|
https://api.github.com/repos/KazDragon/terminalpp
|
closed
|
Remove usage of boost::format when constructing terminal output
|
Improvement Performance
|
For whatever reason, boost::format is taking up some 50% of the time it takes to generate element differences (generating the element differences in total accounts for about 25% of the time of munin-acceptance)
|
True
|
Remove usage of boost::format when constructing terminal output - For whatever reason, boost::format is taking up some 50% of the time it takes to generate element differences (generating the element differences in total accounts for about 25% of the time of munin-acceptance)
|
non_defect
|
remove usage of boost format when constructing terminal output for whatever reason boost format is taking up some of the time it takes to generate element differences generating the element differences in total accounts for about of the time of munin acceptance
| 0
|
76,470
| 26,445,146,911
|
IssuesEvent
|
2023-01-16 06:21:24
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: can not launch chrome when proxy environment variables are set
|
R-awaiting answer I-defect
|
### What happened?
if the proxy is set in bash:
```bash
export http_proxy=http://127.0.0.1:1080
export https_proxy=http://127.0.0.1:1080
```
I can not launch chrome:
```python
from selenium import webdriver
browser = webdriver.Chrome()
```
raise error:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 106, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 213, in check_response
raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message:
```
### How can we reproduce the issue?
```shell
set proxy in bash:
export http_proxy=http://127.0.0.1:1080
export https_proxy=http://127.0.0.1:1080
launch chrome:
```python
from selenium import webdriver
browser = webdriver.Chrome()
```
```
### Relevant log output
```shell
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 106, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 213, in check_response
raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message:
```
### Operating System
ubuntu 22.04
### Selenium version
python 4.7.2
### What are the browser(s) and version(s) where you see this issue?
chrome 109.0.5414.74 (Official Build) (64-bit)
### What are the browser driver(s) and version(s) where you see this issue?
chromedriver 109.0.5414.74
### Are you using Selenium Grid?
no
|
1.0
|
[🐛 Bug]: can not launch chrome when proxy environment variables are set - ### What happened?
if the proxy is set in bash:
```bash
export http_proxy=http://127.0.0.1:1080
export https_proxy=http://127.0.0.1:1080
```
I can not launch chrome:
```python
from selenium import webdriver
browser = webdriver.Chrome()
```
raise error:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 106, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 213, in check_response
raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message:
```
### How can we reproduce the issue?
```shell
set proxy in bash:
export http_proxy=http://127.0.0.1:1080
export https_proxy=http://127.0.0.1:1080
launch chrome:
```python
from selenium import webdriver
browser = webdriver.Chrome()
```
```
### Relevant log output
```shell
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 106, in __init__
super().__init__(
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/home/xinyu/Downloads/spotify manipulate test/envs/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 213, in check_response
raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message:
```
### Operating System
ubuntu 22.04
### Selenium version
python 4.7.2
### What are the browser(s) and version(s) where you see this issue?
chrome 109.0.5414.74 (Official Build) (64-bit)
### What are the browser driver(s) and version(s) where you see this issue?
chromedriver 109.0.5414.74
### Are you using Selenium Grid?
no
|
defect
|
can not launch chrome when proxy environment variables are set what happened if the proxy is set in bash bash export http proxy export https proxy i can not launch chrome python from selenium import webdriver browser webdriver chrome raise error python traceback most recent call last file line in file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver chrome webdriver py line in init super init file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver chromium webdriver py line in init super init file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote webdriver py line in init self start session capabilities browser profile file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote webdriver py line in start session response self execute command new session parameters file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote webdriver py line in execute self error handler check response response file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote errorhandler py line in check response raise exception class value selenium common exceptions webdriverexception message how can we reproduce the issue shell set proxy in bash export http proxy export https proxy launch chrome python from selenium import webdriver browser webdriver chrome relevant log output shell traceback most recent call last file line in file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver chrome webdriver py line in init super init file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver chromium webdriver py line in init super init file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote webdriver py line in init self start session capabilities browser profile file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote webdriver py line in start session response self execute command new session parameters file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote webdriver py line in execute self error handler check response response file home xinyu downloads spotify manipulate test envs lib site packages selenium webdriver remote errorhandler py line in check response raise exception class value selenium common exceptions webdriverexception message operating system ubuntu selenium version python what are the browser s and version s where you see this issue chrome official build bit what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no
| 1
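The failure in the record above is consistent with the driver's local HTTP connection to chromedriver being routed through the configured proxy. A minimal sketch of one common workaround — exempting loopback addresses via `no_proxy` so the shell's proxy settings stay intact for outbound traffic — is shown below. The helper is illustrative and deliberately does not call selenium itself; whether it resolves the reported error depends on the selenium version in use.

```python
import os

def exempt_localhost(env=None):
    """Ensure localhost/127.0.0.1 bypass any configured proxy.

    Appends the loopback addresses to no_proxy/NO_PROXY so a local
    HTTP connection (such as the webdriver client talking to
    chromedriver) is not sent through the proxy. Mutates and returns
    the mapping (os.environ by default).
    """
    env = os.environ if env is None else env
    hosts = ["localhost", "127.0.0.1"]
    for key in ("no_proxy", "NO_PROXY"):
        current = [h for h in env.get(key, "").split(",") if h]
        for host in hosts:
            if host not in current:
                current.append(host)
        env[key] = ",".join(current)
    return env
```

Calling `exempt_localhost()` before constructing `webdriver.Chrome()` may avoid the error on affected versions; alternatively, unsetting `http_proxy`/`https_proxy` for the test process has the same effect at the cost of disabling the proxy entirely.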
|
66,668
| 8,037,829,696
|
IssuesEvent
|
2018-07-30 13:49:33
|
Opentrons/opentrons
|
https://api.github.com/repos/Opentrons/opentrons
|
closed
|
Tip Management: New Tip Required Error
|
WIP feature protocol designer stale
|
As a user, I would like to be told if I accidentally create a first step without a tip.
Note this could happen if your second step has 'change tip: use tip from previous step', and you delete the first step.
## Acceptance Criteria
- [ ] If first step has 'change tip: use tip from previous step', display a form-level error
## Design and Copy
- Error Title:
- Error Body:
|
1.0
|
Tip Management: New Tip Required Error - As a user, I would like to be told if I accidentally create a first step without a tip.
Note this could happen if your second step has 'change tip: use tip from previous step', and you delete the first step.
## Acceptance Criteria
- [ ] If first step has 'change tip: use tip from previous step', display a form-level error
## Design and Copy
- Error Title:
- Error Body:
|
non_defect
|
tip management new tip required error as a user i would like to be told if i accidentally create a first step without a tip note this could happen if your second step has change tip use tip from previous step and you delete the first step acceptance criteria if first step has change tip use tip from previous step display a form level error design and copy error title error body
| 0
|
653,173
| 21,574,553,970
|
IssuesEvent
|
2022-05-02 12:25:22
|
trimble-oss/website-modus-react-bootstrap.trimble.com
|
https://api.github.com/repos/trimble-oss/website-modus-react-bootstrap.trimble.com
|
closed
|
Content Tree React - Keyboard integrations
|
5 story priority:medium content-tree
|
* Refer to the screenshot for how to navigate through Tree items
* Tree Item selection:
- [ ] ** Shift + Click should select a range (see first example). Ctrl + click selects multiple individual items (see 2nd example).
- [ ] #160
|
1.0
|
Content Tree React - Keyboard integrations - * Refer to the screenshot for how to navigate through Tree items
* Tree Item selection:
- [ ] ** Shift + Click should select a range (see first example). Ctrl + click selects multiple individual items (see 2nd example).
- [ ] #160
|
non_defect
|
content tree react keyboard integrations refer to the screenshot for how to navigate through tree items tree item selection shift click should select a range see first example ctrl click selects multiple individual items see example
| 0
|
47,168
| 13,056,045,701
|
IssuesEvent
|
2020-07-30 03:29:25
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
glshovel doesn't compile with gcc v3.2.2 (Trac #88)
|
Migrated from Trac defect glshovel
|
I have tried to compile the icesim V02-00-04cand, but it stopped at the compilation of glshovel.
This is probably due to our old gcc version at Chiba (v3.2.2).
So, I need to change log(1+nhits) to something like log(static_cast<double>(1+nhits)) or so
at line 129 of render/StartTime.cxx.
Could you please change this, and put it to the next icesimV2 release?
Migrated from https://code.icecube.wisc.edu/ticket/88
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "I have tried to compile the icesim V02-00-04cand, but it stopped at the compilation of glshovel.\nThis is probably due to our old gcc version at Chiba (v3.2.2).\nSo, I need to change log(1+nhits) to something like log(static_cast<double>(1+nhits)) or so\nat line 129 of render/StartTime.cxx.\nCould you please change this, and put it to the next icesimV2 release?",
"reporter": "mase",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "glshovel",
"summary": "glshovel doesn't compile with gcc v3.2.2",
"priority": "major",
"keywords": "",
"time": "2007-08-09T07:43:43",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
1.0
|
glshovel doesn't compile with gcc v3.2.2 (Trac #88) - I have tried to compile the icesim V02-00-04cand, but it stopped at the compilation of glshovel.
This is probably due to our old gcc version at Chiba (v3.2.2).
So, I need to change log(1+nhits) to something like log(static_cast<double>(1+nhits)) or so
at line 129 of render/StartTime.cxx.
Could you please change this, and put it to the next icesimV2 release?
Migrated from https://code.icecube.wisc.edu/ticket/88
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "I have tried to compile the icesim V02-00-04cand, but it stopped at the compilation of glshovel.\nThis is probably due to our old gcc version at Chiba (v3.2.2).\nSo, I need to change log(1+nhits) to something like log(static_cast<double>(1+nhits)) or so\nat line 129 of render/StartTime.cxx.\nCould you please change this, and put it to the next icesimV2 release?",
"reporter": "mase",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "glshovel",
"summary": "glshovel doesn't compile with gcc v3.2.2",
"priority": "major",
"keywords": "",
"time": "2007-08-09T07:43:43",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
defect
|
glshovel doesn t compile with gcc trac i have tried to compile the icesim but it stopped at the compilation of glshovel this is probably due to our old gcc version at chiba so i need to change log nhits to something like log static cast nhits or so at line of render starttime cxx could you please change this and put it to the next release migrated from json status closed changetime description i have tried to compile the icesim but it stopped at the compilation of glshovel nthis is probably due to our old gcc version at chiba nso i need to change log nhits to something like log static cast nhits or so nat line of render starttime cxx ncould you please change this and put it to the next release reporter mase cc resolution fixed ts component glshovel summary glshovel doesn t compile with gcc priority major keywords time milestone owner troy type defect
| 1
|
45,471
| 12,814,902,370
|
IssuesEvent
|
2020-07-04 21:53:10
|
mestrade/k8shell
|
https://api.github.com/repos/mestrade/k8shell
|
closed
|
Dede
|
defectdojo security / info
|
*Dede*
*Severity:* Info
*Cve:*
*Product/Engagement:* k8shell / Ad Hoc Engagement
*Systems*:
*Description*:
dede
*Mitigation*:
de
*Impact*:
de
*References*:No references given
|
1.0
|
Dede - *Dede*
*Severity:* Info
*Cve:*
*Product/Engagement:* k8shell / Ad Hoc Engagement
*Systems*:
*Description*:
dede
*Mitigation*:
de
*Impact*:
de
*References*:No references given
|
defect
|
dede dede severity info cve product engagement ad hoc engagement systems description dede mitigation de impact de references no references given
| 1
|
57,846
| 16,101,985,983
|
IssuesEvent
|
2021-04-27 10:27:59
|
snowplow/snowplow-android-tracker
|
https://api.github.com/repos/snowplow/snowplow-android-tracker
|
closed
|
Fix crash on demo app on API 30
|
priority:high status:completed type:defect
|
The DemoApp crashes at the moment when using API 30. It looks like this is an OkHttp3 issue.
Bumping to 4.9 fixes the issue.
Also, StrictMode shows a couple of minor alerts related to demo app code.
|
1.0
|
Fix crash on demo app on API 30 - The DemoApp crashes at the moment when using API 30. It looks like this is an OkHttp3 issue.
Bumping to 4.9 fixes the issue.
Also, StrictMode shows a couple of minor alerts related to demo app code.
|
defect
|
fix crash on demo app on api the demoapp crashes at the moment when using api it looks like this is a bumping to fixes the issue also strictmode shows a couple of minor alerts related to demo app code
| 1
|
593,085
| 17,937,468,019
|
IssuesEvent
|
2021-09-10 17:14:02
|
kubernetes/ingress-nginx
|
https://api.github.com/repos/kubernetes/ingress-nginx
|
closed
|
The new v1.0.0 IngressClass handling logic makes a zero-downtime Ingress controller upgrade hard for the users
|
kind/bug needs-triage needs-priority
|
**NGINX Ingress controller version**: v1.0.0 vs 0.4x
**Kubernetes version** (use `kubectl version`):
1.19, 1.20, 1.21, 1.22
**Environment**:
- **Cloud provider or hardware configuration**: Not relevant, the problem is generic for all users
- **OS** (e.g. from /etc/os-release): Not relevant
- **Kernel** (e.g. `uname -a`): Not relevant
- **Install tools**: Not relevant
- **Basic cluster related info**:
- `kubectl version` 1.20, any kubectl that supports v1 Ingress
- **How was the ingress-nginx-controller installed**: Not relevant
**What happened**:
The 0.4x version of ingress-nginx-controller requires that the `ingressClass.spec.controller` has a fixed value: `k8s.io/ingress-nginx`. In order to shard Ingresses between two ingress-nginx-controller deployments there must be 2 IngressClasses in the following form:
```
metadata.Name: C
spec.Controller: "k8s.io/ingress-nginx"
metadata.Name: D
spec.Controller: "k8s.io/ingress-nginx"
```
The 0.4x version of ingress-nginx-controller uses the `metadata.Name` field of the IngressClass to identify which Ingresses it should process.
There were two ingress-nginx-controller deployments on the cluster with version 0.48.1. Controller A was configured to watch IngressClass C. Controller B was configured to watch IngressClass D.
An Ingress resource refers to an IngressClass (and thus to a processing ingress-nginx-controller instance) via its `ingress.spec.ingressClassName` field. Ingress E had `ingress.spec.ingressClassName=C`, Ingress F had `ingress.spec.ingressClassName=D` on the cluster.
As the result, Controller A processed Ingress E, and Controller B processed Ingress F. The two controllers did not process the other Ingress. This was OK, Ingresses were shared between the controllers as expected. This setup is used since 1.18.
A new v1.0.0-beta2 ingress-nginx-controller instance was deployed on the same cluster to replace Controller A in the long run. I wanted to run the old and new controllers parallel to test and to verify that the new controller works OK. But the new 1.0.0-beta2 ingress-nginx-controller processed both Ingress E and Ingress F immediately.
That is, the new v1.0.0 controller cannot differentiate Ingresses based on the existing IngressClasses and the Ingresses that refer to those IngressClasses.
**What you expected to happen**:
As a user I would like to re-use at least my existing Ingresses during an upgrade to v1.0.0. I have v1 Ingresses since 1.19, so the restriction that the new ingress controller supports only v1 Ingresses does not affect me. For this reason I expect that my existing Ingresses are OK.
But it is not possible with the new IngressClass handling logic in v1.0.0. I should create new IngressClasses for v1.0.0 on my cluster, and if I want to run the old (e.g. v0.48.1) and new (v1.0.0) controllers parallel on the same cluster I also have to duplicate my Ingresses, and configure the new ones to refer to the new IngressClasses.
The root cause is that the v1.0.0 Ingress controller uses the `ingressClass.spec.controller` field to identify the IngressClasses that it owns. And because the old IngressClasses must have the value `k8s.io/ingress-nginx` in that field, the v1.0.0 controller will process all old IngressClasses on the cluster with that controller value.
**How to reproduce it**:
- Create 2 IngressClass resources on the cluster so, that their `.spec.controller` field has the value `k8s.io/ingress-nginx`
- Deploy 2 v0.48.1 ingress-nginx-controllers on the same cluster so, that they have different `ingress-class` parameter. One deployment should refer to the first IngressClass like `--ingress-class=<ingressclass_1.metadata.name>`. The other deployment should refer to the second IngressClass like `--ingress-class=<ingressclass_2.metadata.name>`.
- Create 2 Ingresses. One of them should refer to the first IngressClass in its `.spec.IngressClassName`. The other Ingress shall refer to the other IngressClass.
- Verify that the ingress-nginx-controllers process only one of the Ingresses. Verify that the controllers process the Ingress that refers to the right IngressClass
- Deploy a new v1.0.0 ingress-nginx-controller on the cluster
- The new v1.0.0 ingress-nginx-controller processes both Ingresses
**Anything else we need to know**:
Slack discussion: https://kubernetes.slack.com/archives/CANQGM8BA/p1629105520296900
/kind bug
|
1.0
|
The new v1.0.0 IngressClass handling logic makes a zero-downtime Ingress controller upgrade hard for the users - **NGINX Ingress controller version**: v1.0.0 vs 0.4x
**Kubernetes version** (use `kubectl version`):
1.19, 1.20, 1.21, 1.22
**Environment**:
- **Cloud provider or hardware configuration**: Not relevant, the problem is generic for all users
- **OS** (e.g. from /etc/os-release): Not relevant
- **Kernel** (e.g. `uname -a`): Not relevant
- **Install tools**: Not relevant
- **Basic cluster related info**:
- `kubectl version` 1.20, any kubectl that supports v1 Ingress
- **How was the ingress-nginx-controller installed**: Not relevant
**What happened**:
The 0.4x version of ingress-nginx-controller requires that the `ingressClass.spec.controller` has a fixed value: `k8s.io/ingress-nginx`. In order to shard Ingresses between two ingress-nginx-controller deployments there must be 2 IngressClasses in the following form:
```
metadata.Name: C
spec.Controller: "k8s.io/ingress-nginx"
metadata.Name: D
spec.Controller: "k8s.io/ingress-nginx"
```
The 0.4x version of ingress-nginx-controller uses the `metadata.Name` field of the IngressClass to identify which Ingresses it should process.
There were two ingress-nginx-controller deployments on the cluster with version 0.48.1. Controller A was configured to watch IngressClass C. Controller B was configured to watch IngressClass D.
An Ingress resource refers to an IngressClass (and thus to a processing ingress-nginx-controller instance) via its `ingress.spec.ingressClassName` field. Ingress E had `ingress.spec.ingressClassName=C`, Ingress F had `ingress.spec.ingressClassName=D` on the cluster.
As a result, Controller A processed Ingress E, and Controller B processed Ingress F. Neither controller processed the other's Ingress. This was OK: Ingresses were sharded between the controllers as expected. This setup has been in use since 1.18.
A new v1.0.0-beta2 ingress-nginx-controller instance was deployed on the same cluster to replace Controller A in the long run. I wanted to run the old and new controllers in parallel to test and to verify that the new controller works OK. But the new 1.0.0-beta2 ingress-nginx-controller processed both Ingress E and Ingress F immediately.
That is, the new v1.0.0 controller cannot differentiate Ingresses based on the existing IngressClasses and the Ingresses that refer to those IngressClasses.
**What you expected to happen**:
As a user I would like to re-use at least my existing Ingresses during an upgrade to v1.0.0. I have had v1 Ingresses since 1.19, so the restriction that the new ingress controller supports only v1 Ingresses does not affect me. For this reason I expect that my existing Ingresses are OK.
But it is not possible with the new IngressClass handling logic in v1.0.0. I should create new IngressClasses for v1.0.0 on my cluster, and if I want to run the old (e.g. v0.48.1) and new (v1.0.0) controllers in parallel on the same cluster I also have to duplicate my Ingresses and configure the new ones to refer to the new IngressClasses.
The root cause is that the v1.0.0 Ingress controller uses the `ingressClass.spec.controller` field to identify the IngressClasses that it owns. And because the old IngressClasses must have the value `k8s.io/ingress-nginx` in that field, the v1.0.0 controller will process all old IngressClasses on the cluster with that controller value.
**How to reproduce it**:
- Create 2 IngressClass resources on the cluster so, that their `.spec.controller` field has the value `k8s.io/ingress-nginx`
- Deploy 2 v0.48.1 ingress-nginx-controllers on the same cluster so, that they have different `ingress-class` parameter. One deployment should refer to the first IngressClass like `--ingress-class=<ingressclass_1.metadata.name>`. The other deployment should refer to the second IngressClass like `--ingress-class=<ingressclass_2.metadata.name>`.
- Create 2 Ingresses. One of them should refer to the first IngressClass in its `.spec.IngressClassName`. The other Ingress shall refer to the other IngressClass.
- Verify that the ingress-nginx-controllers process only one of the Ingresses. Verify that the controllers process the Ingress that refers to the right IngressClass
- Deploy a new v1.0.0 ingress-nginx-controller on the cluster
- The new v1.0.0 ingress-nginx-controller processes both Ingresses
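The two IngressClasses from step 1 might look like the sketch below (the names `class-a`/`class-b` and the `--ingress-class` pairing are placeholders; the only detail that matters for reproducing the problem is that both share the same `spec.controller` value):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: class-a        # watched by the first 0.48.1 controller via --ingress-class=class-a
spec:
  controller: k8s.io/ingress-nginx   # fixed value required by 0.4x controllers
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: class-b        # watched by the second 0.48.1 controller via --ingress-class=class-b
spec:
  controller: k8s.io/ingress-nginx   # identical value -- this is what a v1.0.0 controller keys on
```

Because a v1.0.0 controller selects IngressClasses by `spec.controller` rather than by `metadata.name`, it claims both classes above at once.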
**Anything else we need to know**:
Slack discussion: https://kubernetes.slack.com/archives/CANQGM8BA/p1629105520296900
/kind bug
|
non_defect
|
the new ingressclass handling logic makes a zero downtime ingress controller upgrade hard for the users nginx ingress controller version vs kubernetes version use kubectl version environment cloud provider or hardware configuration not relevant the problem is generic for all users os e g from etc os release not relevant kernel e g uname a not relevant install tools not relevant basic cluster related info kubectl version any kubectl that supports ingress how was the ingress nginx controller installed not relevant what happened the version of ingress nginx controller require that the ingressclass spec controller has a fixed value io ingress nginx in order to shard ingresses between two ingress nginx controller deployments there must be ingressclasses in the following form metadata name c spec controller io ingress nginx metadata name d spec controller io ingress nginx the the version of ingress nginx controller uses the metadata name field of the ingressclass to identify which ingresses it should process there were two ingress nginx controller deployments on the cluster with version controller a was configured to watch ingressclass c controller b was controlled to watch ingressclass d an ingress resource refers to an ingressclass and thus to a processing ingress nginx controller instance via its ingress spec ingressclassname field ingress e had ingress spec ingressclassname c ingress f had ingress spec ingressclassname d on the cluster as the result controller a processed ingress e and controller b processed ingress f the two controllers did not process the other ingress this was ok ingresses were shared between the controllers as expected this setup is used since a new ingress nginx controller instance was deployed on the same cluster to replace controller a in the long run i wanted to run the old and new controllers parallel to test and to verify that the new controller works ok but the new ingress nginx controller processed both ingress e and ingress f immediately 
that is the new controller cannot differentiate ingresses based on the exisitng ingressclasses and the ingresses that refer to those ingressclasses what you expected to happen as a user i would like to re use at least my existing ingresses during an upgrade to i have ingresses since so the restriction that the new ingress controller supports only ingresses does not affect me for this reason i expect that my existing ingresses are ok but it is not possible with the new ingressclass handling logic in i should create new ingressclasses for on my cluster and if i want to run the old e g and new controllers parallel on the same cluster i also have to duplicate my ingresses and configure the new ones to refer to the new ingressclasses the root cause is that the ingress controller uses the ingressclass spec controller field to identify the ingressclasses that it owns and because the old ingressclasses must have the value io ingress nginx in that field the controller will process all old ingressclasses on the cluster with that controller value how to reproduce it create ingressclass resources on the cluster so that their spec controller field has the value io ingress nginx deploy ingress nginx controllers on the same cluster so that they have different ingress class parameter one deployment should refer to the first ingressclass like ingress class the other deployment should refer to the second ingressclass like ingress class create ingresses one of them should refer to the first ingressclass in its spec ingressclassname the other ingress shall refer to the other ingressclass verify that the ingress nginx controllers process only one of the ingresses verify that the controllers process the ingress that refers to the right ingressclass deploy a new ingress nginx controller on the cluster the new ingress nginx controller processes both ingresses anything else we need to know slack discussion kind bug
| 0
|
80,493
| 30,307,409,243
|
IssuesEvent
|
2023-07-10 10:25:23
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: selenium-manager webdriver doesn't support Edge browser version is 114.0.1823.67
|
R-awaiting answer I-defect
|
### What happened?
I am using Selenium version 4.10.0.0 and recently Edge was updated from version 112 to 114, and now I am getting an error -
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of Microsoft Edge WebDriver only supports Microsoft Edge version 112 Current browser version is 114.0.1823.67.
However, when I have Edge version 112 it works fine.
### How can we reproduce the issue?
```shell
from selenium import webdriver
driver = webdriver.Edge()
driver.get('https://www.google.com/')
driver.maximize_window()
```
### Relevant log output
```shell
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of Microsoft Edge WebDriver only supports Microsoft Edge version 112
Current browser version is 114.0.1823.67 with binary path C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
Stacktrace:
Backtrace:
GetHandleVerifier [0x00007FF6E358E022+60274]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E351EC82+818786]
(No symbol) [0x00007FF6E319DFAE]
(No symbol) [0x00007FF6E31CE378]
(No symbol) [0x00007FF6E31C8D24]
(No symbol) [0x00007FF6E31C440A]
(No symbol) [0x00007FF6E3207341]
(No symbol) [0x00007FF6E31FF343]
(No symbol) [0x00007FF6E31D1796]
(No symbol) [0x00007FF6E31D0975]
(No symbol) [0x00007FF6E31D1F04]
Microsoft::Applications::Events::EventProperties::SetLevel [0x00007FF6E3427167+1678103]
Microsoft::Applications::Events::EventProperties::SetLevel [0x00007FF6E32CEEBD+268397]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E345FE77+36951]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3457F85+4453]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF6E3758163+1318403]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3526D3C+851740]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3522DA4+835460]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3522EFC+835804]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3518AD1+793777]
BaseThreadInitThunk [0x00007FF8363F7614+20]
RtlUserThreadStart [0x00007FF8383426A1+33]
```
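The mismatch in the log is on the major version component (a 112 driver against a 114 browser). A minimal sketch of that compatibility check (a hypothetical helper, not part of Selenium or Selenium Manager; the driver version strings below are illustrative):

```python
def major_version(version: str) -> int:
    """Return the leading (major) component of a dotted version string."""
    return int(version.split(".")[0])


def driver_supports_browser(driver_version: str, browser_version: str) -> bool:
    """Chromium-based drivers (msedgedriver, chromedriver) must match the
    browser's major version; minor/build/patch components may differ."""
    return major_version(driver_version) == major_version(browser_version)


# The situation from the log: a 112-series driver against Edge 114.0.1823.67.
print(driver_supports_browser("112.0.1722.39", "114.0.1823.67"))  # False
print(driver_supports_browser("114.0.1823.43", "114.0.1823.67"))  # True
```

When the check is False, a Selenium Manager or driver update (or pinning the browser) is needed before a session can be created.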
### Operating System
Window10
### Selenium version
Rust
### What are the browser(s) and version(s) where you see this issue?
Edge version 114
### What are the browser driver(s) and version(s) where you see this issue?
Using Selenium Manager
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: selenium-manager webdriver doesn't support Edge browser version is 114.0.1823.67 - ### What happened?
I am using Selenium version 4.10.0.0 and recently Edge was updated from version 112 to 114, and now I am getting an error -
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of Microsoft Edge WebDriver only supports Microsoft Edge version 112 Current browser version is 114.0.1823.67.
However, when I have Edge version 112 it works fine.
### How can we reproduce the issue?
```shell
from selenium import webdriver
driver = webdriver.Edge()
driver.get('https://www.google.com/')
driver.maximize_window()
```
### Relevant log output
```shell
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of Microsoft Edge WebDriver only supports Microsoft Edge version 112
Current browser version is 114.0.1823.67 with binary path C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
Stacktrace:
Backtrace:
GetHandleVerifier [0x00007FF6E358E022+60274]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E351EC82+818786]
(No symbol) [0x00007FF6E319DFAE]
(No symbol) [0x00007FF6E31CE378]
(No symbol) [0x00007FF6E31C8D24]
(No symbol) [0x00007FF6E31C440A]
(No symbol) [0x00007FF6E3207341]
(No symbol) [0x00007FF6E31FF343]
(No symbol) [0x00007FF6E31D1796]
(No symbol) [0x00007FF6E31D0975]
(No symbol) [0x00007FF6E31D1F04]
Microsoft::Applications::Events::EventProperties::SetLevel [0x00007FF6E3427167+1678103]
Microsoft::Applications::Events::EventProperties::SetLevel [0x00007FF6E32CEEBD+268397]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E345FE77+36951]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3457F85+4453]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00007FF6E3758163+1318403]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3526D3C+851740]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3522DA4+835460]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3522EFC+835804]
Microsoft::Applications::Events::EventProperty::~EventProperty [0x00007FF6E3518AD1+793777]
BaseThreadInitThunk [0x00007FF8363F7614+20]
RtlUserThreadStart [0x00007FF8383426A1+33]
```
### Operating System
Window10
### Selenium version
Rust
### What are the browser(s) and version(s) where you see this issue?
Edge version 114
### What are the browser driver(s) and version(s) where you see this issue?
Using Selenium Manager
### Are you using Selenium Grid?
_No response_
|
defect
|
selenium manager webdriver doesn t support edge browser version is what happened i am using selenium version and recently edge version updated from to and now getting an error selenium common exceptions sessionnotcreatedexception message session not created this version of microsoft edge webdriver only supports microsoft edge version current browser version is however when i have edge version its working fine how can we reproduce the issue shell from selenium import webdriver driver webdriver edge driver get driver maximize window relevant log output shell selenium common exceptions sessionnotcreatedexception message session not created this version of microsoft edge webdriver only supports microsoft edge version current browser version is with binary path c program files microsoft edge application msedge exe stacktrace backtrace gethandleverifier microsoft applications events eventproperty eventproperty no symbol no symbol no symbol no symbol no symbol no symbol no symbol no symbol no symbol microsoft applications events eventproperties setlevel microsoft applications events eventproperties setlevel microsoft applications events eventproperty eventproperty microsoft applications events eventproperty eventproperty microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events eventproperty eventproperty microsoft applications events eventproperty eventproperty microsoft applications events eventproperty eventproperty microsoft applications events eventproperty eventproperty basethreadinitthunk rtluserthreadstart operating system selenium version rust what are the browser s and version s where you see this issue edge version what are the browser driver s and version s where you see this issue using selenium manager are you using selenium grid no response
| 1
|
7,065
| 2,610,324,942
|
IssuesEvent
|
2015-02-26 19:44:45
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Typo
|
auto-migrated Priority-Low Type-Defect
|
```
There is a problem, like someone forgot something in the Acclamator description
after "squadrons and."
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 6 Jun 2011 at 10:27
|
1.0
|
Typo - ```
There is a problem, like someone forgot something in the Acclamator description
after "squadrons and."
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 6 Jun 2011 at 10:27
|
defect
|
typo there is a problem like someone forgot something in the acclamator description after squadrons and original issue reported on code google com by gmail com on jun at
| 1
|
259,812
| 22,553,027,515
|
IssuesEvent
|
2022-06-27 07:47:59
|
meveo-org/meveo
|
https://api.github.com/repos/meveo-org/meveo
|
closed
|
crosstorage - postgres - filter based on Pagination - lowercase
|
bug test
|
When building queries with a Pagination object, the result may not be correct in one very particular case:
If we use the filter PersistenceService.SEARCH_WILDCARD_OR_IGNORE_CAS, the results are not OK when we search for specific Cyrillic characters. This is because the result of the lower-case function is not the same in Java and Postgres (upper/lower-case conversion may depend on the language).
The problem is that Meveo uses Java's lower-casing to transform the criteria, while it uses Postgres's lower() function to transform the stored data.
Example of text with the specific character U+0130 | LATIN CAPITAL LETTER I WITH DOT ABOVE:
Danışmanlık_İth
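The mismatch is easy to reproduce with Unicode lower-casing alone (Python is used here as a stand-in for the application side; the point is that U+0130 does not lower-case to a plain ASCII "i"):

```python
# U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE, as in "Danışmanlık_İth"
s = "\u0130"

# Locale-independent Unicode lower-casing maps it to "i" + U+0307
# (COMBINING DOT ABOVE) -- two code points, not the single letter "i".
lowered = s.lower()
print([hex(ord(c)) for c in lowered])  # ['0x69', '0x307']

# So a criteria string lower-cased this way will not equal data that the
# database lower-cased differently (e.g. to a plain "i", or to a dotless
# form under a Turkish locale), and LIKE/equality filters silently miss rows.
print(lowered == "i")  # False
```

This is why transforming the criteria in one runtime and the stored data in another is fragile: both sides must apply the same case-mapping rules.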
|
1.0
|
crosstorage - postgres - filter based on Pagination - lowercase - When building queries with a Pagination object, the result may not be correct in one very particular case:
If we use the filter PersistenceService.SEARCH_WILDCARD_OR_IGNORE_CAS, the results are not OK when we search for specific Cyrillic characters. This is because the result of the lower-case function is not the same in Java and Postgres (upper/lower-case conversion may depend on the language).
The problem is that Meveo uses Java's lower-casing to transform the criteria, while it uses Postgres's lower() function to transform the stored data.
Example of text with the specific character U+0130 | LATIN CAPITAL LETTER I WITH DOT ABOVE:
Danışmanlık_İth
|
non_defect
|
crosstorage postgres filter based on pagination lowercase when building queries with pagination object the result may not be correct in some very particular case if we use the filter persistenceservice search wildcard or ignore cas the results are not ok when we search specific cyrilic characters it s because the result of the lower case function is not the same in java postgres because upper lower case function may depend on the language the problem is that meveo use java s lower case to transfrom the criteria and it uses postgre s lower function to transform stored data example of text with specific u latin capital letter i with dot above danışmanlık i̇th
| 0
|
788,002
| 27,739,348,581
|
IssuesEvent
|
2023-03-15 13:24:09
|
telerik/kendo-ui-core
|
https://api.github.com/repos/telerik/kendo-ui-core
|
opened
|
The toggle function of overflow buttons in Toolbar is not triggering
|
Bug C: ToolBar SEV: Medium jQuery Priority 5
|
### Bug report
The toggle functions of buttons that are in the overflow area of the Toolbar are not triggering
**Regression introduced with R1 2023**
### Reproduction of the problem
1. Open this example - https://dojo.telerik.com/OPebOsEC/12
2. Open the browser console
3. Click on one of the overflow buttons
### Current behavior
The toggle function is not firing
### Expected/desired behavior
The toggle function should be firing
### Environment
* **Kendo UI version:** 2023.1.314
* **Browser:** [all]
|
1.0
|
The toggle function of overflow buttons in Toolbar is not triggering - ### Bug report
The toggle functions of buttons that are in the overflow area of the Toolbar are not triggering
**Regression introduced with R1 2023**
### Reproduction of the problem
1. Open this example - https://dojo.telerik.com/OPebOsEC/12
2. Open the browser console
3. Click on one of the overflow buttons
### Current behavior
The toggle function is not firing
### Expected/desired behavior
The toggle function should be firing
### Environment
* **Kendo UI version:** 2023.1.314
* **Browser:** [all]
|
non_defect
|
the toggle function of overflow buttons in toolbar is not triggering bug report the toggle functions of buttons that are in overflow in the toolbar is not triggering regression introduced with reproduction of the problem open this example open the browser console click on one of the overflow buttons current behavior the toggle function is not firing expected desired behavior the toggle function should be firing environment kendo ui version browser
| 0
|
878
| 2,594,260,033
|
IssuesEvent
|
2015-02-20 01:13:21
|
BALL-Project/ball
|
https://api.github.com/repos/BALL-Project/ball
|
closed
|
BALLView VRML exporter produces wrong extension
|
C: VIEW P: minor R: fixed T: defect
|
**Reported by odin on 13 Dec 39385592 02:13 UTC**
VRML files should have .wrl not .vrml extension
|
1.0
|
BALLView VRML exporter produces wrong extension - **Reported by odin on 13 Dec 39385592 02:13 UTC**
VRML files should have .wrl not .vrml extension
|
defect
|
ballview vrml exporter produces wrong extension reported by odin on dec utc vrml files should have wrl not vrml extension
| 1
|
47,198
| 13,056,052,013
|
IssuesEvent
|
2020-07-30 03:30:33
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
hdf5merge can't handle FilterMasks (Trac #121)
|
Migrated from Trac booking defect
|
There is something funky there about merging hdf5 generated
from experimental data.
Migrated from https://code.icecube.wisc.edu/ticket/121
```json
{
"status": "closed",
"changetime": "2011-04-14T19:17:19",
"description": "There is something funky there about merging hdf5 generated \nfrom experimental data. ",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1302808639000000",
"component": "booking",
"summary": "hdf5merge can't handle FilterMasks",
"priority": "normal",
"keywords": "",
"time": "2008-09-05T13:37:52",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
1.0
|
hdf5merge can't handle FilterMasks (Trac #121) - There is something funky there about merging hdf5 generated
from experimental data.
Migrated from https://code.icecube.wisc.edu/ticket/121
```json
{
"status": "closed",
"changetime": "2011-04-14T19:17:19",
"description": "There is something funky there about merging hdf5 generated \nfrom experimental data. ",
"reporter": "troy",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1302808639000000",
"component": "booking",
"summary": "hdf5merge can't handle FilterMasks",
"priority": "normal",
"keywords": "",
"time": "2008-09-05T13:37:52",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
|
defect
|
can t handle filtermasks trac there is something funky there about merging generated from experimental data migrated from json status closed changetime description there is something funky there about merging generated nfrom experimental data reporter troy cc resolution wont or cant fix ts component booking summary can t handle filtermasks priority normal keywords time milestone owner troy type defect
| 1
|
12,245
| 2,685,530,678
|
IssuesEvent
|
2015-03-30 02:16:15
|
IssueMigrationTest/Test5
|
https://api.github.com/repos/IssueMigrationTest/Test5
|
closed
|
cannot locate module: twisted
|
auto-migrated Priority-Medium Type-Defect
|
**Issue by Conrad.C...@gmail.com**
_7 Dec 2012 at 11:28 GMT_
_Originally opened on Google Code_
----
```
What steps will reproduce the problem?
1. Add the following to a python file.
from twisted.internet import pollreactor
pollreactor.install()
reactor = pollreactor.PollReactor()
from twisted.internet import protocol
from twisted.protocols import basic
from twisted.internet import task
from twisted.internet import reactor
What is the expected output? What do you see instead?
The expect output is not show, the expect behavour is that the module is loaded
as required for use.
What version of the product are you using? On what operating system?
Shed Skin 0.9.2
Python 2.7.3
#53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 (3.2.0-34-generic x86_64)
Please provide any additional information below.
The python script runs perfectly, but fails when attempting to compile it.
Python file starts with the following
#!/usr/bin/python
from twisted.internet import pollreactor
pollreactor.install()
reactor = pollreactor.PollReactor()
from twisted.internet import protocol
from twisted.protocols import basic
from twisted.internet import task
from twisted.internet import reactor
import re
import time
import datetime
import struct
import logging
import pg
import socket
from binascii import *
import sys
import string
```
|
1.0
|
cannot locate module: twisted - **Issue by Conrad.C...@gmail.com**
_7 Dec 2012 at 11:28 GMT_
_Originally opened on Google Code_
----
```
What steps will reproduce the problem?
1. Add the following to a python file.
from twisted.internet import pollreactor
pollreactor.install()
reactor = pollreactor.PollReactor()
from twisted.internet import protocol
from twisted.protocols import basic
from twisted.internet import task
from twisted.internet import reactor
What is the expected output? What do you see instead?
The expect output is not show, the expect behavour is that the module is loaded
as required for use.
What version of the product are you using? On what operating system?
Shed Skin 0.9.2
Python 2.7.3
#53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 (3.2.0-34-generic x86_64)
Please provide any additional information below.
The python script runs perfectly, but fails when attempting to compile it.
Python file starts with the following
#!/usr/bin/python
from twisted.internet import pollreactor
pollreactor.install()
reactor = pollreactor.PollReactor()
from twisted.internet import protocol
from twisted.protocols import basic
from twisted.internet import task
from twisted.internet import reactor
import re
import time
import datetime
import struct
import logging
import pg
import socket
from binascii import *
import sys
import string
```
|
defect
|
cannot locate module twisted issue by conrad c gmail com dec at gmt originally opened on google code what steps will reproduce the problem add the following to a python file from twisted internet import pollreactor pollreactor install reactor pollreactor pollreactor from twisted internet import protocol from twisted protocols import basic from twisted internet import task from twisted internet import reactor what is the expected output what do you see instead the expect output is not show the expect behavour is that the module is loaded as required for use what version of the product are you using on what operating system shed skin python ubuntu smp thu nov utc generic please provide any additional information below the python script runs perfectly but fails when attempting to compile it python file starts with the following usr bin python from twisted internet import pollreactor pollreactor install reactor pollreactor pollreactor from twisted internet import protocol from twisted protocols import basic from twisted internet import task from twisted internet import reactor import re import time import datetime import struct import logging import pg import socket from binascii import import sys import string
| 1
|
87,635
| 10,934,365,137
|
IssuesEvent
|
2019-11-24 11:04:01
|
MarlinFirmware/Marlin
|
https://api.github.com/repos/MarlinFirmware/Marlin
|
closed
|
[Discussion] Handling of junction speeds and jerk
|
T: Design Concept T: Development
|
This is a follow-up to a topic which was already here a long time ago, but I can't find it anymore. If someone remembers it and can find the issue, you might link it here.
During the research for strange jerk behaviour today I stumbled over the suboptimal handling of junction speeds again. In one sentence: Marlin's way of handling jerk and junction speeds leads to exceeded jerk in some daily situations.
The attached test gcode and the results in the table should clarify things a bit. The gcode does the following things:
- set the jerk speeds to the selected values
- First it does 4 G1 travel moves
- followed by 4 G1 print moves
- in the last round it does a prime move (extruder only), followed by two print moves, a retract - travel - prime combination and again a print move.
I printed the following values over serial:
- max junction speed as calculated by the jerk code inside the planner
- initial and final step rate of each block as it gets executed
- initial and final speed of each block (step rate / steps/mm), this one is calculated inside Excel by hand
I did two runs, first with X and Y jerk set to 5 and E jerk set to 1mm/s, and a second one switching the values. As you can see, as soon as we are switching between extruder-only moves and XY moves things start to get weird. Same should be true for Z-only moves and XY moves.
The problem is that Marlin connects each segment by one single junction speed, which is not always what is needed. A simple example like the one from H13 in the table: let's assume an X jerk of 1 and an E jerk of 5. Let's do a print move along X followed by a prime move. What we want is:
- Start with a jerk speed of 1
- Accelerate to cruising speed
- Decelerate to jerk speed **1**. The extruder speed in this case is nearly 0 due to the low X jerk.
- Now the prime move, start with a jerk speed of **5**
- accelerate, cruise and decelerate to jerk speed of 5 again.
The two axis X and E are not linked, also their junction speeds shouldn't be linked. What Marlin does is in this case is:
- Start with a jerk speed of 1
- Accelerate to cruising speed
- Decelerate to jerk speed **5** -> Ignoring the needed final speed of 1!
- Now the prime move, start with a jerk speed of **5**
- accelerate, cruise and decelerate to jerk speed of **1** -> Jerk value from next print move!

It becomes crucial when we replace the first print move by a Z move. Z axis often have 0.x jerk values, but the final speed would be also 5mm/s due to the following retract move.
Conclusion:
- A junction speed in terms of final speed = next start speed is only needed to give a smooth movement along a 3D path. Retracts and prime movements do not fall into this, so their speeds should be decoupled.
- Same is true for extruder speed changes due to variable-width paths: the 3D movement XYZ always has a connected speed, but the extruder speed can jump at the junctions.
- If we have to connect end and start speeds, we always have to check if all axes can follow the chosen final or start speed. We can't finish X at 5mm/s if it needs 1mm/s only because the next E move can start at 5mm/s.
- That was the easy part. Any ideas how such a jerk implementation could look like? GRBL will not help, as this is a 3D printer specific problem.
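The per-axis check asked for in the last two bullets can be sketched as follows (a simplified model, not Marlin's actual planner code: the junction speed is scaled down until no axis exceeds its own jerk limit at the block boundary):

```python
def junction_limit(exit_unit, entry_unit, nominal_speed, max_jerk):
    """Largest junction speed (mm/s) such that, for every axis, the
    instantaneous velocity change |v_entry - v_exit| stays within that
    axis's jerk limit.  *_unit are per-axis direction fractions so that
    speed * unit[i] is that axis's velocity; max_jerk is per-axis (mm/s)."""
    v = nominal_speed
    for exit_f, entry_f, jerk in zip(exit_unit, entry_unit, max_jerk):
        dv = abs(entry_f - exit_f) * v          # per-axis speed jump at v
        if dv > jerk:                           # scale v down so dv == jerk
            v *= jerk / dv
    return v


# The example from this report: an X-only print move (unit vector X=1, E~0)
# followed by an E-only prime move (X=0, E=1), X jerk 1 mm/s, E jerk 5 mm/s.
# Axes ordered (X, Y, E):
v = junction_limit((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 40.0, (1.0, 1.0, 5.0))
print(v)  # 1.0 -- limited by the X axis, as argued above
```

Under this model the junction is limited to 1 mm/s by X, which is exactly the "Decelerate to jerk speed 1" behaviour wanted in the example; decoupling the E-only move entirely would then let the prime start at 5 mm/s independently.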
Files:
[Marlin Junction speeds.zip](https://github.com/MarlinFirmware/Marlin/files/1777767/Marlin.Junction.speeds.zip)
|
1.0
|
[Discussion] Handling of junction speeds and jerk - This is a follow-up to a topic which was already here a long time ago, but I can't find it anymore. If someone remembers it and can find the issue, you might link it here.
During the research for strange jerk behaviour today I stumbled over the suboptimal handling of junction speeds again. In one sentence: Marlin's way of handling jerk and junction speeds leads to exceeded jerk in some daily situations.
The attached test gcode and the results in the table should clarify things a bit. The gcode does the following things:
- set the jerk speeds to the selected values
- First it does 4 G1 travel moves
- followed by 4 G1 print moves
- in the last round it does a prime move (extruder only), followed by two print moves, a retract - travel - prime combination and again a print move.
I printed the following values over serial:
- max junction speed as calculated by the jerk code inside the planner
- initial and final step rate of each block as it gets executed
- initial and final speed of each block (step rate / steps/mm), this one is calculated inside Excel by hand
I did two runs, first with X and Y jerk set to 5 and E jerk set to 1mm/s, and a second one switching the values. As you can see, as soon as we are switching between extruder-only moves and XY moves things start to get weird. Same should be true for Z-only moves and XY moves.
The problem is that Marlin connects each segment by one single junction speed, which is not always what is needed. A simple example like the one from H13 in the table: let's assume an X jerk of 1 and an E jerk of 5. Let's do a print move along X followed by a prime move. What we want is:
- Start with a jerk speed of 1
- Accelerate to cruising speed
- Decelerate to jerk speed **1**. The extruder speed in this case is nearly 0 due to the low X jerk.
- Now the prime move, start with a jerk speed of **5**
- accelerate, cruise and decelerate to jerk speed of 5 again.
The two axis X and E are not linked, also their junction speeds shouldn't be linked. What Marlin does is in this case is:
- Start with a jerk speed of 1
- Accelerate to cruising speed
- Decelerate to jerk speed **5** -> Ignoring the needed final speed of 1!
- Now the prime move, start with a jerk speed of **5**
- accelerate, cruise and decelerate to jerk speed of **1** -> Jerk value from next print move!

It becomes crucial when we replace the first print move by a Z move. Z axis often have 0.x jerk values, but the final speed would be also 5mm/s due to the following retract move.
Conclusion:
- A junction speed in terms of final speed = next start speed is only needed to give a smooth movement along a 3D path. Retracts and prime movements do not fall under this, so their speeds should be decoupled.
- Same is true for extruder speed changes due to variable width paths: the 3D movement XYZ has always a connected speed, but the extruder speed can jump at the junctions.
- If we have to connect end and start speeds, we always have to check if all axes can follow the chosen final or start speed. We can't finish X at 5mm/s if it needs 1mm/s only because the next E move can start at 5mm/s.
- That was the easy part. Any ideas what such a jerk implementation could look like? GRBL will not help, as this is a 3D-printer-specific problem.
Files:
[Marlin Junction speeds.zip](https://github.com/MarlinFirmware/Marlin/files/1777767/Marlin.Junction.speeds.zip)
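The per-axis rule argued for above can be sketched in a few lines (a simplified illustration only, not Marlin's actual planner code; the two-axis X/E example and all numbers come from the scenario described in this report):

```python
def junction_scale(v_exit, v_entry, jerk):
    """Return the factor by which the junction speed must be scaled so
    that no single axis jumps by more than its jerk limit.

    v_exit / v_entry: per-axis speed vectors (mm/s) at the end of the
    previous block and the start of the next block, at nominal rates.
    jerk: per-axis allowed instantaneous speed change (mm/s).
    """
    scale = 1.0
    for ve, vn, j in zip(v_exit, v_entry, jerk):
        dv = abs(vn - ve)          # speed jump this axis would see
        if dv > j:
            scale = min(scale, j / dv)
    return scale

# X print move (50 mm/s on X, 0 on E) followed by an extruder-only
# prime (25 mm/s on E), with X jerk = 1 mm/s and E jerk = 5 mm/s:
print(junction_scale((50.0, 0.0), (0.0, 25.0), (1.0, 5.0)))  # 0.02
```

Each block's exit speed and the next block's entry speed would both be scaled by this factor, so the X/E junction in the example collapses toward zero as the low X jerk demands, instead of inheriting the E jerk of 5 mm/s.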
|
non_defect
|
handling of junction speeds and jerk this is a follow up to a topic which already was here a long time ago but i can t find it anymore if someone remebers it and can find the issue you might link it here during the research for strange jerk behaviour today i stumbled around the suboptimal handling of junction speeds again in one sentence marlins way of handling jerk and junction speeds leads to exceeded jerk in some daily situations the attached test gcode and the results in the table should clarify things a bit the gcode does the following things set the jerk speeds to the selected values first it does travel moves followed by print moves in the last round it does a prime move extruder only followed by two print moves a retract travel prime combination and again a print move i printed the following values over serial max junction speed as calculated by the jerk code inside the planner initial and final step rate of each block as it gets executed initial and final speed of each block step rate steps mm this one is calculated inside excel by hand i did two runs first with x and y jerk set to and e jerk set to s and a second one switching the values as you can see as soon as we are switching between extruder only moves and xy moves things start to get wired same should be true for z only moves and xy moves the problem is that marlin connects each segment by one single junction speed which is not always what is needed a simple example like the one from in the table lets assume a x jerk of an e jerk of let s do a print move along x followed by a prime move what we want ist start with a jerk speed of accelerate to cruising speed decelerate to jerk speed the extruder speed in this case is nearly due to the low x jerk now the prime move start with a jerk speed of accelerate cruise and decelerate to jerk speed of again the two axis x and e are not linked also their junction speeds shouldn t be linked what marlin does is in this case is start with a jerk speed of accelerate 
to cruising speed decelerate to jerk speed ignoring the needed final speed of now the prime move start with a jerk speed of accelerate cruise and decelerate to jerk speed of jerk value from next print move it becomes crucial when we replace the first print move by a z move z axis often have x jerk values but the final speed would be also s due to the following retract move conclusion a junction speed in terms of final speed next start speed is only needed to give a smooth movement along a path retracts and prime movements does not fall into this so their speeds should be decoupled same is true for extruder speed changes due to variable width paths the movement xyz has always a connected speed but the extruder speed can jump at the junctions if we have to connect end and start speeds we always have to check if all axis can follow the choosen final or start speed we can t finish x at s if it needs s only because the next e move can start at s that was the easy part any ideas how such a jerk implementation could look like grbl will not help as this is a printer specific problem files
| 0
|
31,578
| 5,960,953,309
|
IssuesEvent
|
2017-05-29 15:33:27
|
webpack/webpack.js.org
|
https://api.github.com/repos/webpack/webpack.js.org
|
closed
|
Document the concept and value range of LimitChunkCountPlugin.
|
Documentation: Plugins
|
Looks like there is a nice PR opportunity with low-hanging fruit for someone in webpack/webpack#4178. I'm going to leave this issue here as a stub so that once we stamp down the desired behavior we can log the defaults, and a PR here can accompany a PR in the original issue.
|
1.0
|
Document the concept and value value range of LimitChunkCountPlugin. - Looks like there is a nice PR opportunity with low hanging fruit for someone in webpack/webpack#4178. I'm going to leave this issue here as a stub so that once we stamp down the desired behavior then we can log the defaults and a PR here can accompany a PR in the original issue
|
non_defect
|
document the concept and value value range of limitchunkcountplugin looks like there is a nice pr opportunity with low hanging fruit for someone in webpack webpack i m going to leave this issue here as a stub so that once we stamp down the desired behavior then we can log the defaults and a pr here can accompany a pr in the original issue
| 0
|
622,339
| 19,622,014,223
|
IssuesEvent
|
2022-01-07 08:13:53
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
apnews.com - see bug description
|
browser-firefox-mobile priority-normal engine-gecko
|
<!-- @browser: Firefox Mobile 95.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/97859 -->
**URL**: https://apnews.com/article/immigration-coronavirus-pandemic-novak-djokovic-sports-health-2d9a49486a7ba67981e8dd2373a2d0a1
**Browser / Version**: Firefox Mobile 95.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: popup x blocked
**Steps to Reproduce**:
because of your layout (not minimizing the address bar when the popup "sign up for APNews" shows), I cannot click on "x" and the site is totally blocked for eternity. I have to read their site on Chrome, where the layout doesn't block the "x"
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/1/6bf74cd3-169c-4941-8347-ea30e3320fe5.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211215221728</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/1/92e7a237-5dac-4b60-b77d-c2531c9c9542)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
apnews.com - see bug description - <!-- @browser: Firefox Mobile 95.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/97859 -->
**URL**: https://apnews.com/article/immigration-coronavirus-pandemic-novak-djokovic-sports-health-2d9a49486a7ba67981e8dd2373a2d0a1
**Browser / Version**: Firefox Mobile 95.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: popup x blocked
**Steps to Reproduce**:
because of your layout (not minimizing the address bar when the popup "sign up for APNews" shows), cannot click on "x" and the site is totally blocked for eternity. have to read their site on Chrome, where layout doesnt block the "x"
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/1/6bf74cd3-169c-4941-8347-ea30e3320fe5.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20211215221728</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/1/92e7a237-5dac-4b60-b77d-c2531c9c9542)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
apnews com see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description popup x blocked steps to reproduce because of your layout not minimizing the address bar when the popup sign up for apnews shows cannot click on x and the site is totally blocked for eternity have to read their site on chrome where layout doesnt block the x view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
50,144
| 13,187,346,414
|
IssuesEvent
|
2020-08-13 03:07:15
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
pnf to Icetray v3: check v3 thread safety (Trac #196)
|
Migrated from Trac defect jeb + pnf
|
Check for any non-thread-safe stuff in IceTray v3 and clean them
up for pnf support.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/196
, reported by blaufuss and owned by tschmidt_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "Check for any non-thread-safe stuff in IceTray v3 and clean them\nup for pnf support.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1416713877066511",
"component": "jeb + pnf",
"summary": "pnf to Icetray v3: check v3 thread safety",
"priority": "normal",
"keywords": "",
"time": "2010-03-01T16:57:26",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
pnf to Icetray v3: check v3 thread safety (Trac #196) - Check for any non-thread-safe stuff in IceTray v3 and clean them
up for pnf support.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/196
, reported by blaufuss and owned by tschmidt_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "Check for any non-thread-safe stuff in IceTray v3 and clean them\nup for pnf support.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1416713877066511",
"component": "jeb + pnf",
"summary": "pnf to Icetray v3: check v3 thread safety",
"priority": "normal",
"keywords": "",
"time": "2010-03-01T16:57:26",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
|
defect
|
pnf to icetray check thread safety trac check for any non thread safe stuff in icetray and clean them up for pnf support migrated from reported by blaufuss and owned by tschmidt json status closed changetime description check for any non thread safe stuff in icetray and clean them nup for pnf support reporter blaufuss cc resolution fixed ts component jeb pnf summary pnf to icetray check thread safety priority normal keywords time milestone owner tschmidt type defect
| 1
|
367,634
| 10,860,140,549
|
IssuesEvent
|
2019-11-14 08:23:01
|
Porkins97/DinoNuggetsGame
|
https://api.github.com/repos/Porkins97/DinoNuggetsGame
|
opened
|
Player Quits Scene When Button Held Down
|
Priority Medium bug
|
When playing, if the player holds down the A/X button (Xbox/Playstation) on scene load, it opens the menu and causes them to quit to the main scene
|
1.0
|
Player Quits Scene When Button Held Down - When playing, if the player holds down the A/X button (Xbox/Playstation) on scene load, it opens the menu and causes them to quit to the main scene
|
non_defect
|
player quits scene when button held down when playing if the player holds down the a x button xbox playstation on scene load it opens the menu and causes them to quit to the main scene
| 0
|
64,055
| 18,158,850,811
|
IssuesEvent
|
2021-09-27 07:12:13
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
mvn install -DskipTests failed in hazelcast@5.0 on centos8_aarch64
|
Type: Defect
|
Hello, I met a problem: mvn install -DskipTests failed in hazelcast@5.0 on centos8_aarch64
```console
bug:
[INFO] Attaching shaded artifact.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Hazelcast Root 5.1-SNAPSHOT:
[INFO]
[INFO] Hazelcast Root ..................................... SUCCESS [ 18.886 s]
[INFO] hazelcast-archunit-rules ........................... SUCCESS [ 3.720 s]
[INFO] hazelcast .......................................... SUCCESS [02:55 min]
[INFO] hazelcast-spring ................................... SUCCESS [ 10.251 s]
[INFO] hazelcast-spring-tests ............................. SUCCESS [ 4.692 s]
[INFO] hazelcast-build-utils .............................. SUCCESS [ 4.695 s]
[INFO] hazelcast-jet-extensions ........................... SUCCESS [ 0.906 s]
[INFO] hazelcast-jet-kafka ................................ SUCCESS [ 9.878 s]
[INFO] hazelcast-jet-avro ................................. SUCCESS [ 8.069 s]
[INFO] hazelcast-jet-csv .................................. SUCCESS [ 4.579 s]
[INFO] hazelcast-jet-hadoop-core .......................... SUCCESS [ 31.189 s]
[INFO] hazelcast-sql ...................................... FAILURE [ 21.016 s]
[INFO] modulepath-tests ................................... SUCCESS [ 1.906 s]
[INFO] hazelcast-jet-cdc-debezium ......................... SUCCESS [ 12.550 s]
[INFO] hazelcast-jet-cdc-mysql ............................ SUCCESS [ 5.274 s]
[INFO] hazelcast-jet-cdc-postgres ......................... SUCCESS [ 3.229 s]
[INFO] hazelcast-jet-elasticsearch-5 ...................... SUCCESS [ 30.042 s]
[INFO] hazelcast-jet-elasticsearch-6 ...................... SUCCESS [ 30.634 s]
[INFO] hazelcast-jet-elasticsearch-7 ...................... SUCCESS [ 31.550 s]
[INFO] hazelcast-jet-hadoop-dist .......................... SUCCESS [ 1.254 s]
[INFO] hazelcast-jet-files-azure .......................... SUCCESS [01:01 min]
[INFO] hazelcast-jet-files-gcs ............................ SUCCESS [01:07 min]
[INFO] hazelcast-jet-files-s3 ............................. SUCCESS [03:51 min]
[INFO] hazelcast-jet-hadoop ............................... SUCCESS [ 0.644 s]
[INFO] hazelcast-jet-hadoop-all ........................... SUCCESS [ 59.659 s]
[INFO] hazelcast-3-connector-root ......................... SUCCESS [ 0.747 s]
[INFO] hazelcast-3-connector-interface .................... SUCCESS [ 2.966 s]
[INFO] hazelcast-3-connector-impl ......................... SUCCESS [ 3.170 s]
[INFO] hazelcast-3-connector-common ....................... SUCCESS [ 1.837 s]
[INFO] hazelcast-jet-kinesis .............................. SUCCESS [ 11.427 s]
[INFO] hazelcast-jet-s3 ................................... SUCCESS [ 33.130 s]
[INFO] hazelcast-jet-grpc ................................. SUCCESS [ 18.176 s]
[INFO] hazelcast-jet-protobuf ............................. SUCCESS [ 7.291 s]
[INFO] hazelcast-jet-python ............................... SUCCESS [ 7.468 s]
[INFO] hazelcast-distribution ............................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:07 min (Wall Clock)
[INFO] Finished at: 2021-09-27T15:05:50+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hazelcast-sql: Could not resolve dependencies for project com.hazelcast:hazelcast-sql:jar:5.1-SNAPSHOT: Failed to collect dependencies at io.confluent:kafka-avro-serializer:jar:4.1.4: Failed to read artifact descriptor for io.confluent:kafka-avro-serializer:jar:4.1.4: Could not transfer artifact io.confluent:kafka-avro-serializer:pom:4.1.4 from/to confluent (https://packages.confluent.io/maven/): Transfer failed for https://packages.confluent.io/maven/io/confluent/kafka-avro-serializer/4.1.4/kafka-avro-serializer-4.1.4.pom ProxyInfo{host='172.18.100.92', userName='null', port=8080, type='http', nonProxyHosts='null'}: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :hazelcast-sql
==> Error: ProcessError: Command exited with status 1:
'/home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/maven-3.6.3-bhzkrwxzy3ujsjddmzjmfvuavg5wsuwt/bin/mvn' 'package' '-DskipTests'
10 errors found in build log:
9113 [INFO] ------------------------------------------------------------------------
9114 [INFO] BUILD FAILURE
9115 [INFO] ------------------------------------------------------------------------
9116 [INFO] Total time: 08:07 min (Wall Clock)
9117 [INFO] Finished at: 2021-09-27T15:05:50+08:00
9118 [INFO] ------------------------------------------------------------------------
>> 9119 [ERROR] Failed to execute goal on project hazelcast-sql: Could not resolve dependencies for project com.hazelcast:hazelcast-sql:jar:5.1-SNAPSHOT: Failed to collect dependencies at io.confluent:kafka
-avro-serializer:jar:4.1.4: Failed to read artifact descriptor for io.confluent:kafka-avro-serializer:jar:4.1.4: Could not transfer artifact io.confluent:kafka-avro-serializer:pom:4.1.4 from/to conf
luent (https://packages.confluent.io/maven/): Transfer failed for https://packages.confluent.io/maven/io/confluent/kafka-avro-serializer/4.1.4/kafka-avro-serializer-4.1.4.pom ProxyInfo{host='172.18.
100.92', userName='null', port=8080, type='http', nonProxyHosts='null'}: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification pat
h to requested target -> [Help 1]
>> 9120 [ERROR]
>> 9121 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>> 9122 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> 9123 [ERROR]
>> 9124 [ERROR] For more information about the errors and possible solutions, please read the following articles:
>> 9125 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
>> 9126 [ERROR]
>> 9127 [ERROR] After correcting the problems, you can resume the build with the command
>> 9128 [ERROR] mvn <args> -rf :hazelcast-sql
```
Can you tell me how to solve this problem?
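For what it's worth, the `PKIX path building failed` line points at the HTTP proxy (`172.18.100.92:8080`) intercepting the TLS connection to `packages.confluent.io`, so the JVM cannot validate the re-signed certificate. A common remedy, sketched below under the assumption that your network team can export the proxy's CA certificate (`proxy-ca.crt` and the alias are placeholder names), is to import it into the truststore of the JDK that runs Maven:

```shell
# Import the intercepting proxy's CA into the default JDK truststore
# (-cacerts targets the default truststore on Java 9+).
keytool -importcert -alias corp-proxy -file proxy-ca.crt \
        -cacerts -storepass changeit -noprompt

# Then retry the build:
mvn install -DskipTests
```

As a diagnostics-only workaround, Maven Wagon can also be told to skip certificate validation with `-Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true`, but that should not be left in place.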
|
1.0
|
mvn install -DskipTests failed in hazelcast@5.0 on centos8_aarch64 - Hello,I meet a problem:mvn install -DskipTests failed in hazelcast@5.0 on centos8_aarch64
```console
bug:
[INFO] Attaching shaded artifact.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Hazelcast Root 5.1-SNAPSHOT:
[INFO]
[INFO] Hazelcast Root ..................................... SUCCESS [ 18.886 s]
[INFO] hazelcast-archunit-rules ........................... SUCCESS [ 3.720 s]
[INFO] hazelcast .......................................... SUCCESS [02:55 min]
[INFO] hazelcast-spring ................................... SUCCESS [ 10.251 s]
[INFO] hazelcast-spring-tests ............................. SUCCESS [ 4.692 s]
[INFO] hazelcast-build-utils .............................. SUCCESS [ 4.695 s]
[INFO] hazelcast-jet-extensions ........................... SUCCESS [ 0.906 s]
[INFO] hazelcast-jet-kafka ................................ SUCCESS [ 9.878 s]
[INFO] hazelcast-jet-avro ................................. SUCCESS [ 8.069 s]
[INFO] hazelcast-jet-csv .................................. SUCCESS [ 4.579 s]
[INFO] hazelcast-jet-hadoop-core .......................... SUCCESS [ 31.189 s]
[INFO] hazelcast-sql ...................................... FAILURE [ 21.016 s]
[INFO] modulepath-tests ................................... SUCCESS [ 1.906 s]
[INFO] hazelcast-jet-cdc-debezium ......................... SUCCESS [ 12.550 s]
[INFO] hazelcast-jet-cdc-mysql ............................ SUCCESS [ 5.274 s]
[INFO] hazelcast-jet-cdc-postgres ......................... SUCCESS [ 3.229 s]
[INFO] hazelcast-jet-elasticsearch-5 ...................... SUCCESS [ 30.042 s]
[INFO] hazelcast-jet-elasticsearch-6 ...................... SUCCESS [ 30.634 s]
[INFO] hazelcast-jet-elasticsearch-7 ...................... SUCCESS [ 31.550 s]
[INFO] hazelcast-jet-hadoop-dist .......................... SUCCESS [ 1.254 s]
[INFO] hazelcast-jet-files-azure .......................... SUCCESS [01:01 min]
[INFO] hazelcast-jet-files-gcs ............................ SUCCESS [01:07 min]
[INFO] hazelcast-jet-files-s3 ............................. SUCCESS [03:51 min]
[INFO] hazelcast-jet-hadoop ............................... SUCCESS [ 0.644 s]
[INFO] hazelcast-jet-hadoop-all ........................... SUCCESS [ 59.659 s]
[INFO] hazelcast-3-connector-root ......................... SUCCESS [ 0.747 s]
[INFO] hazelcast-3-connector-interface .................... SUCCESS [ 2.966 s]
[INFO] hazelcast-3-connector-impl ......................... SUCCESS [ 3.170 s]
[INFO] hazelcast-3-connector-common ....................... SUCCESS [ 1.837 s]
[INFO] hazelcast-jet-kinesis .............................. SUCCESS [ 11.427 s]
[INFO] hazelcast-jet-s3 ................................... SUCCESS [ 33.130 s]
[INFO] hazelcast-jet-grpc ................................. SUCCESS [ 18.176 s]
[INFO] hazelcast-jet-protobuf ............................. SUCCESS [ 7.291 s]
[INFO] hazelcast-jet-python ............................... SUCCESS [ 7.468 s]
[INFO] hazelcast-distribution ............................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:07 min (Wall Clock)
[INFO] Finished at: 2021-09-27T15:05:50+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hazelcast-sql: Could not resolve dependencies for project com.hazelcast:hazelcast-sql:jar:5.1-SNAPSHOT: Failed to collect dependencies at io.confluent:kafka-avro-serializer:jar:4.1.4: Failed to read artifact descriptor for io.confluent:kafka-avro-serializer:jar:4.1.4: Could not transfer artifact io.confluent:kafka-avro-serializer:pom:4.1.4 from/to confluent (https://packages.confluent.io/maven/): Transfer failed for https://packages.confluent.io/maven/io/confluent/kafka-avro-serializer/4.1.4/kafka-avro-serializer-4.1.4.pom ProxyInfo{host='172.18.100.92', userName='null', port=8080, type='http', nonProxyHosts='null'}: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :hazelcast-sql
==> Error: ProcessError: Command exited with status 1:
'/home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/maven-3.6.3-bhzkrwxzy3ujsjddmzjmfvuavg5wsuwt/bin/mvn' 'package' '-DskipTests'
10 errors found in build log:
9113 [INFO] ------------------------------------------------------------------------
9114 [INFO] BUILD FAILURE
9115 [INFO] ------------------------------------------------------------------------
9116 [INFO] Total time: 08:07 min (Wall Clock)
9117 [INFO] Finished at: 2021-09-27T15:05:50+08:00
9118 [INFO] ------------------------------------------------------------------------
>> 9119 [ERROR] Failed to execute goal on project hazelcast-sql: Could not resolve dependencies for project com.hazelcast:hazelcast-sql:jar:5.1-SNAPSHOT: Failed to collect dependencies at io.confluent:kafka
-avro-serializer:jar:4.1.4: Failed to read artifact descriptor for io.confluent:kafka-avro-serializer:jar:4.1.4: Could not transfer artifact io.confluent:kafka-avro-serializer:pom:4.1.4 from/to conf
luent (https://packages.confluent.io/maven/): Transfer failed for https://packages.confluent.io/maven/io/confluent/kafka-avro-serializer/4.1.4/kafka-avro-serializer-4.1.4.pom ProxyInfo{host='172.18.
100.92', userName='null', port=8080, type='http', nonProxyHosts='null'}: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification pat
h to requested target -> [Help 1]
>> 9120 [ERROR]
>> 9121 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>> 9122 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> 9123 [ERROR]
>> 9124 [ERROR] For more information about the errors and possible solutions, please read the following articles:
>> 9125 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
>> 9126 [ERROR]
>> 9127 [ERROR] After correcting the problems, you can resume the build with the command
>> 9128 [ERROR] mvn <args> -rf :hazelcast-sql
```
Can you tell me how to solve this problem?
|
defect
|
mvn install dskiptests failed in hazelcast on hello i meet a problem mvn install dskiptests failed in hazelcast on console bug: attaching shaded artifact reactor summary for hazelcast root snapshot hazelcast root success hazelcast archunit rules success hazelcast success hazelcast spring success hazelcast spring tests success hazelcast build utils success hazelcast jet extensions success hazelcast jet kafka success hazelcast jet avro success hazelcast jet csv success hazelcast jet hadoop core success hazelcast sql failure modulepath tests success hazelcast jet cdc debezium success hazelcast jet cdc mysql success hazelcast jet cdc postgres success hazelcast jet elasticsearch success hazelcast jet elasticsearch success hazelcast jet elasticsearch success hazelcast jet hadoop dist success hazelcast jet files azure success hazelcast jet files gcs success hazelcast jet files success hazelcast jet hadoop success hazelcast jet hadoop all success hazelcast connector root success hazelcast connector interface success hazelcast connector impl success hazelcast connector common success hazelcast jet kinesis success hazelcast jet success hazelcast jet grpc success hazelcast jet protobuf success hazelcast jet python success hazelcast distribution skipped build failure total time min wall clock finished at failed to execute goal on project hazelcast sql could not resolve dependencies for project com hazelcast hazelcast sql jar snapshot failed to collect dependencies at io confluent kafka avro serializer jar failed to read artifact descriptor for io confluent kafka avro serializer jar could not transfer artifact io confluent kafka avro serializer pom from to confluent transfer failed for proxyinfo host username null port type http nonproxyhosts null pkix path building failed sun security provider certpath suncertpathbuilderexception unable to find valid certification path to requested target to see the full stack trace of the errors re run maven with the e switch re run maven 
using the x switch to enable full debug logging for more information about the errors and possible solutions please read the following articles after correcting the problems you can resume the build with the command mvn rf hazelcast sql error processerror command exited with status home all spack env spack opt spack linux gcc maven bin mvn package dskiptests errors found in build log build failure total time min wall clock finished at failed to execute goal on project hazelcast sql could not resolve dependencies for project com hazelcast hazelcast sql jar snapshot failed to collect dependencies at io confluent kafka avro serializer jar failed to read artifact descriptor for io confluent kafka avro serializer jar could not transfer artifact io confluent kafka avro serializer pom from to conf luent transfer failed for proxyinfo host username null port type http nonproxyhosts null pkix path building failed sun security provider certpath suncertpathbuilderexception unable to find valid certification pat h to requested target to see the full stack trace of the errors re run maven with the e switch re run maven using the x switch to enable full debug logging for more information about the errors and possible solutions please read the following articles after correcting the problems you can resume the build with the command mvn rf hazelcast sql can you tell me how to solve this problem
| 1
|
24,801
| 4,104,662,172
|
IssuesEvent
|
2016-06-05 14:40:29
|
bwu-dart/bwu_datagrid
|
https://api.github.com/repos/bwu-dart/bwu_datagrid
|
closed
|
Column Headers are ~8px wider in FF/Safari
|
type:defect
|
I'm not sure if this is a duplicate ticket/issue. But, in FF/Safari the column headers seem to be wider than Chrome/Chromium. Removing the checkbox column has no effect.
Column headers in Chrome/Chromium:

_Notice how the column's header width matches the column content..._
Column headers in FF/Safari:

_Notice how every column is at least 8px wider._
I found this 8px discrepancy while comparing all bwu-datagrid-header-column's style attribute. In there you'll see a "width: #px" that is 8px wider.
Testing environment:
OSX 10.11.5
SDK: 1.15.0
Firefox: 46.0.1
Safari: 9.1.1 (11601.6.17)
Chrome: 50.0.2661.102 (64-bit)
web_components: ^0.12.0
polymer_elements: ^1.0.0-rc.8
browser: ^0.10.0
polymer: ^1.0.0-rc.16
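A constant 8px offset like this is often a box-sizing effect: with the default `content-box`, 4px of horizontal padding per side is added on top of the inline `width: #px`, and browsers can disagree once default padding or scrollbar handling differs. Purely as a guess (the selector matches the class named in the report, but the padding hypothesis and the fix are assumptions, not taken from bwu_datagrid's actual stylesheet):

```css
/* Hypothetical sketch: make the declared width include padding and
   border so every browser renders the same outer header-cell width. */
.bwu-datagrid-header-column {
  box-sizing: border-box;
}
```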
|
1.0
|
Column Headers are ~8px wider in FF/Safari - I'm not sure if this is a duplicate ticket/issue. But, in FF/Safari the column headers seem to be wider than Chrome/Chromium. Removing the checkbox column has no effect.
Column headers in Chrome/Chromium:

_Notice how the column's header width matches the column content..._
Column headers in FF/Safari:

_Notice how every column is at least 8px wider._
I found this 8px discrepancy while comparing all bwu-datagrid-header-column's style attribute. In there you'll see a "width: #px" that is 8px wider.
Testing enviroment:
OSX 10.11.5
SDK: 1.15.0
Firefox: 46.0.1
Safari: 9.1.1 (11601.6.17)
Chrome: 50.0.2661.102 (64-bit)
web_components: ^0.12.0
polymer_elements: ^1.0.0-rc.8
browser: ^0.10.0
polymer: ^1.0.0-rc.16
|
defect
|
column headers are wider in ff safari i m not sure if this is a duplicate ticket issue but in ff safari the column headers seem to be wider than chrome chromium removing the checkbox column has no effect column headers in chrome chromium notice how the column s header width matches the column content column headers in ff safari notice how every column is at least wider i found this discrepancy while comparing all bwu datagrid header column s style attribute in there you ll see a width px that is wider testing enviroment osx sdk firefox safari chrome bit web components polymer elements rc browser polymer rc
| 1
|
57,800
| 16,068,735,121
|
IssuesEvent
|
2021-04-24 01:52:38
|
H-uru/korman
|
https://api.github.com/repos/H-uru/korman
|
closed
|
Sound Volume Animations Potentially Empty
|
defect
|
Per @DoobesURU, we have observed Korman 0.11a is exporting empty controllers for sound volumes with very small deltas, eg from 0% to 1%. Empty controllers cause the game client to crash. This is likely due to the animation storing the volume in decibels and float precision issues. Korman needs to be sure to NOT export these flawed animations.
|
1.0
|
Sound Volume Animations Potentially Empty - Per @DoobesURU, we have observed Korman 0.11a is exporting empty controllers for sound volumes with very small deltas, eg from 0% to 1%. Empty controllers cause the game client to crash. This is likely due to the animation storing the volume in decibels and float precision issues. Korman needs to be sure to NOT export these flawed animations.
|
defect
|
sound volume animations potentially empty per doobesuru we have observed korman is exporting empty controllers for sound volumes with very small deltas eg from to empty controllers cause the game client to crash this is likely due to the animation storing the volume in decibels and float precision issues korman needs to be sure to not export these flawed animations
| 1
|
32,917
| 4,441,533,017
|
IssuesEvent
|
2016-08-19 09:40:40
|
KeitIG/museeks
|
https://api.github.com/repos/KeitIG/museeks
|
closed
|
macOS theme(s)?
|
design discussion front-end mac
|
Hey, nice player! I've come to know about it just minutes ago! :)
Would you be interested in having a better UI integration in macOS? I might be able to help design and theme your app in macOS but I need help.
|
1.0
|
macOS theme(s)? - Hey, nice player! I've come to know about it just minutes ago! :)
Would you be interested in having a better UI integration in macOS? I might be able to help design and theme your app in macOS but I need help.
|
non_defect
|
macos theme s hey nice player i ve come to know about it just minutes ago would you be interested in having a better ui integration in macos i might be able to help design and theme your app in macos but i need help
| 0
|
831,992
| 32,068,301,283
|
IssuesEvent
|
2023-09-25 05:58:36
|
GoogleCloudPlatform/golang-samples
|
https://api.github.com/repos/GoogleCloudPlatform/golang-samples
|
closed
|
spanner/spanner_snippets/spanner: TestUpdateDatabaseSample failed
|
type: bug priority: p1 api: spanner samples flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 993a6162d95844e06564b429034b39f6da7dff72
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2293fc94-5607-472c-986f-29e8848f89e7), [Sponge](http://sponge2/2293fc94-5607-472c-986f-29e8848f89e7)
status: failed
<details><summary>Test output</summary><br><pre> integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-49129c71-e9d0-42
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-864a3483-64b7-4b
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-af3e925e-902d-41
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-cc089efd-8dd9-44
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d
integration_test.go:1327: Failed to list backups for instance projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d: rpc error: code = NotFound desc = Instance not found: projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d
error details: name = ResourceInfo type = type.googleapis.com/google.spanner.admin.instance.v1.Instance resourcename = projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d owner = desc = Instance does not exist.</pre></details>
|
1.0
|
spanner/spanner_snippets/spanner: TestUpdateDatabaseSample failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 993a6162d95844e06564b429034b39f6da7dff72
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/2293fc94-5607-472c-986f-29e8848f89e7), [Sponge](http://sponge2/2293fc94-5607-472c-986f-29e8848f89e7)
status: failed
<details><summary>Test output</summary><br><pre> integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-49129c71-e9d0-42
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-864a3483-64b7-4b
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-af3e925e-902d-41
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-cc089efd-8dd9-44
integration_test.go:1216: deleting stale test instance projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d
integration_test.go:1327: Failed to list backups for instance projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d: rpc error: code = NotFound desc = Instance not found: projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d
error details: name = ResourceInfo type = type.googleapis.com/google.spanner.admin.instance.v1.Instance resourcename = projects/golang-samples-tests/instances/go-sample-e4c2592c-32da-4d owner = desc = Instance does not exist.</pre></details>
|
non_defect
|
spanner spanner snippets spanner testupdatedatabasesample failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output integration test go deleting stale test instance projects golang samples tests instances go sample integration test go deleting stale test instance projects golang samples tests instances go sample integration test go deleting stale test instance projects golang samples tests instances go sample integration test go deleting stale test instance projects golang samples tests instances go sample integration test go deleting stale test instance projects golang samples tests instances go sample integration test go failed to list backups for instance projects golang samples tests instances go sample rpc error code notfound desc instance not found projects golang samples tests instances go sample error details name resourceinfo type type googleapis com google spanner admin instance instance resourcename projects golang samples tests instances go sample owner desc instance does not exist
| 0
|
72,139
| 23,956,710,870
|
IssuesEvent
|
2022-09-12 15:29:02
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
closed
|
QR code view broken when no display name is set
|
T-Defect help wanted A-Invite S-Minor O-Uncommon Z-WTF
|
### Steps to reproduce
1. Unset your display name (in Settings -> General)
2. Left panel -> miniature QR code next to your name
### Outcome
#### What did you expect?
Avatar does not cover MXID

#### What happened instead?
Avatar covers MXID

### Your phone model
iPhone 11
### Operating system version
Android 11
### Application version and app store
Playstore Beta
|
1.0
|
QR code view broken when no display name is set - ### Steps to reproduce
1. Unset your display name (in Settings -> General)
2. Left panel -> miniature QR code next to your name
### Outcome
#### What did you expect?
Avatar does not cover MXID

#### What happened instead?
Avatar covers MXID

### Your phone model
iPhone 11
### Operating system version
Android 11
### Application version and app store
Playstore Beta
|
defect
|
qr code view broken when no display name is set steps to reproduce unset your display name in settings general left panel miniature qr code next to your name outcome what did you expect avatar does not cover mxid what happened instead avatar covers mxid your phone model iphone operating system version android application version and app store playstore beta
| 1
|
56,115
| 14,933,721,232
|
IssuesEvent
|
2021-01-25 09:35:34
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
p: outputPanel deferred = "true" deferredMode = "visible" inside p: datatable loads content that is off-screen immediately
|
defect
|
p: outputPanel deferred = "true" deferredMode = "visible" inside p: datatable loads content that is off-screen immediately, although the documentation says that deferredMode = "visible" only activates p: outputpanel after scrolling when it is within the screen. If p: outputPanel deferred = "true" deferredMode = "visible" is placed outside p: datatable, everything works correctly. The problem started with version 8.0 and remained in version 10.0.0
**Example XHTML**
```<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
xmlns:p="http://primefaces.org/ui"
xmlns:f="http://xmlns.jcp.org/jsf/core">
<h:head>
<title>my_editor</title>
</h:head>
<h:body styleClass="mainBody">
<div class="mainContainer">
<h:form id="formId" styleClass="mainForm">
<input type="hidden" name="${_csrf.parameterName}" value="${_csrf.token}"/>
<div class="content">
<p:dataTable id="tableRefact" value="#{myBean.myEntities}" var="entity"
rows="5" rowIndexVar="rowIndex" first="0" lazy="true"
widgetVar="newsTable" editable="true" paginator="true"
paginatorPosition="top"
paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink}
{PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
rowsPerPageTemplate="5,10,15,50">
<p:ajax event="rowEdit" listener="#{myBean.onRowEdit}"
update="tableRefact"/>
<p:ajax event="rowEditCancel" listener="#{myBean.onRowCancel}"
update="tableRefact"/>
<p:column style="width:150px">
<p:cellEditor>
<f:facet name="output">
<p:graphicImage value="#{myBean.myImage}" styleClass="graphicImage">
<f:param name="myUrl" value="#{entity.url}" />
</p:graphicImage>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="5" cols="200"
value="#{myBean.addedMyImageUrl}"
placeholder="Enter image url..."
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
</p:column>
<p:column>
<p:cellEditor>
<f:facet name="output">
<h:outputText value="#{entity.name}" style="font-weight: bold"/>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="5" cols="200"
value="#{entity.name}"
placeholder="Введите заголовок..."
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
<p:cellEditor>
<f:facet name="output">
<h:outputText value="#{entity.description}" style="font-weight: normal"/>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="7" cols="200"
value="#{entity.description}"
placeholder="Введите описание..."
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
<a href="#{entity.url}" target="_blank" style="color: #1063aa">#{entity.url}</a>
<p:cellEditor>
<f:facet name="output">
<h:outputText value="Founded:" styleClass="underlineWithBold"/>
<h:outputText value=" #{entity.founded}" style="font-weight: normal"/>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="8" cols="200"
value="#{entity.founded}"
placeholder="Введите Founded..."
autoResize="true"
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
<p:column>
<p:rowEditor/>
</p:column>
<p:row>
<p:column>
<p:outputPanel id="mapContainerId" deferred="true" deferredMode="visible">
#{myBean.onScreenAction()}
</p:outputPanel>
</p:column>
</p:row>
</p:column>
</p:dataTable>
</div>
</h:form>
</div>
</h:body>
</html>
```
|
1.0
|
p: outputPanel deferred = "true" deferredMode = "visible" inside p: datatable loads content that is off-screen immediately - p: outputPanel deferred = "true" deferredMode = "visible" inside p: datatable loads content that is off-screen immediately, although the documentation says that deferredMode = "visible" only activates p: outputpanel after scrolling when it is within the screen. If p: outputPanel deferred = "true" deferredMode = "visible" is placed outside p: datatable, everything works correctly. The problem started with version 8.0 and remained in version 10.0.0
**Example XHTML**
```<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
xmlns:p="http://primefaces.org/ui"
xmlns:f="http://xmlns.jcp.org/jsf/core">
<h:head>
<title>my_editor</title>
</h:head>
<h:body styleClass="mainBody">
<div class="mainContainer">
<h:form id="formId" styleClass="mainForm">
<input type="hidden" name="${_csrf.parameterName}" value="${_csrf.token}"/>
<div class="content">
<p:dataTable id="tableRefact" value="#{myBean.myEntities}" var="entity"
rows="5" rowIndexVar="rowIndex" first="0" lazy="true"
widgetVar="newsTable" editable="true" paginator="true"
paginatorPosition="top"
paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink}
{PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
rowsPerPageTemplate="5,10,15,50">
<p:ajax event="rowEdit" listener="#{myBean.onRowEdit}"
update="tableRefact"/>
<p:ajax event="rowEditCancel" listener="#{myBean.onRowCancel}"
update="tableRefact"/>
<p:column style="width:150px">
<p:cellEditor>
<f:facet name="output">
<p:graphicImage value="#{myBean.myImage}" styleClass="graphicImage">
<f:param name="myUrl" value="#{entity.url}" />
</p:graphicImage>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="5" cols="200"
value="#{myBean.addedMyImageUrl}"
placeholder="Enter image url..."
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
</p:column>
<p:column>
<p:cellEditor>
<f:facet name="output">
<h:outputText value="#{entity.name}" style="font-weight: bold"/>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="5" cols="200"
value="#{entity.name}"
placeholder="Введите заголовок..."
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
<p:cellEditor>
<f:facet name="output">
<h:outputText value="#{entity.description}" style="font-weight: normal"/>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="7" cols="200"
value="#{entity.description}"
placeholder="Введите описание..."
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
<a href="#{entity.url}" target="_blank" style="color: #1063aa">#{entity.url}</a>
<p:cellEditor>
<f:facet name="output">
<h:outputText value="Founded:" styleClass="underlineWithBold"/>
<h:outputText value=" #{entity.founded}" style="font-weight: normal"/>
</f:facet>
<f:facet name="input">
<p:inputTextarea rows="8" cols="200"
value="#{entity.founded}"
placeholder="Введите Founded..."
autoResize="true"
styleClass="inputElement">
</p:inputTextarea>
</f:facet>
</p:cellEditor>
<p:column>
<p:rowEditor/>
</p:column>
<p:row>
<p:column>
<p:outputPanel id="mapContainerId" deferred="true" deferredMode="visible">
#{myBean.onScreenAction()}
</p:outputPanel>
</p:column>
</p:row>
</p:column>
</p:dataTable>
</div>
</h:form>
</div>
</h:body>
</html>
```
|
defect
|
p outputpanel deferred true deferredmode visible inside p datatable loads content that is off screen immediately p outputpanel deferred true deferredmode visible inside p datatable loads content that is off screen immediately although the documentation says that deferredmode visible only activates p outputpanel after scrolling when it is within the screen if p outputpanel deferred true deferredmode visible is placed outside p datatable everything works correctly the problem started with version and remained in version example xhtml doctype html public dtd xhtml transitional en html xmlns xmlns h xmlns ui xmlns p xmlns f my editor p datatable id tablerefact value mybean myentities var entity rows rowindexvar rowindex first lazy true widgetvar newstable editable true paginator true paginatorposition top paginatortemplate currentpagereport firstpagelink previouspagelink pagelinks nextpagelink lastpagelink rowsperpagedropdown rowsperpagetemplate p ajax event rowedit listener mybean onrowedit update tablerefact p ajax event roweditcancel listener mybean onrowcancel update tablerefact p inputtextarea rows cols value mybean addedmyimageurl placeholder enter image url styleclass inputelement p inputtextarea rows cols value entity name placeholder введите заголовок styleclass inputelement p inputtextarea rows cols value entity description placeholder введите описание styleclass inputelement entity url p inputtextarea rows cols value entity founded placeholder введите founded autoresize true styleclass inputelement mybean onscreenaction
| 1
|
27,138
| 4,882,425,505
|
IssuesEvent
|
2016-11-17 09:25:04
|
lanzen/ampliconnoise
|
https://api.github.com/repos/lanzen/ampliconnoise
|
closed
|
mpi problem with pyronoise
|
auto-migrated Priority-Medium Type-Defect
|
```
I'm trying to run PyroNoise on a 64bit linux system:-
mpirun -np 8 $an_home/PyroNoiseM -din $out_dir/flows.dat -lin
$out_dir/denoised.list -rin $amplicon_dat_file -v >$out_dir/denoised3.fout
This is collected in my error file:-
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job
This does not happen on my desktop (Mac) mpirun seems to lack a --version flag
so it is difficult for me to determine what version has been installed on the
cluster I'm using.
Hope this makes sense
Best regards
Jake
```
Original issue reported on code.google.com by `jacob.hu...@gmail.com` on 28 Feb 2011 at 11:03
|
1.0
|
mpi problem with pyronoise - ```
I'm trying to run PyroNoise on a 64bit linux system:-
mpirun -np 8 $an_home/PyroNoiseM -din $out_dir/flows.dat -lin
$out_dir/denoised.list -rin $amplicon_dat_file -v >$out_dir/denoised3.fout
This is collected in my error file:-
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job
This does not happen on my desktop (Mac) mpirun seems to lack a --version flag
so it is difficult for me to determine what version has been installed on the
cluster I'm using.
Hope this makes sense
Best regards
Jake
```
Original issue reported on code.google.com by `jacob.hu...@gmail.com` on 28 Feb 2011 at 11:03
|
defect
|
mpi problem with pyronoise i m trying to run pyronoise on a linux system mpirun np an home pyronoisem din out dir flows dat lin out dir denoised list rin amplicon dat file v out dir fout this is collected in my error file mpi mpi comm world rank has terminated without calling mpi finalize mpi aborting job this does not happen on my desktop mac mpirun seems to lack a version flag so it is difficult for me to determine what version has been installed on the cluster i m using hope this make sense best regards jake original issue reported on code google com by jacob hu gmail com on feb at
| 1
|
31,826
| 6,642,969,490
|
IssuesEvent
|
2017-09-27 09:29:28
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: scipy.sparse.linalg.linsolve() + scikits.umfpack 0.3.0 on win-amd64
|
defect scipy.sparse.linalg
|
scikits.umfpack 0.3.0 has many failures on win-amd64, caused by other bit-width chars for the same types than on linux - see https://github.com/scikit-umfpack/scikit-umfpack/issues/36. Some of the tests just call `scipy.sparse.linalg.linsolve()`, which has to be fixed in the same spirit as in the PR https://github.com/scikit-umfpack/scikit-umfpack/pull/37 (when it is finished). I will take care of that, as soon as I will have the fixes ready on the scikit-umfpack side.
|
1.0
|
BUG: scipy.sparse.linalg.linsolve() + scikits.umfpack 0.3.0 on win-amd64 - scikits.umfpack 0.3.0 has many failures on win-amd64, caused by other bit-width chars for the same types than on linux - see https://github.com/scikit-umfpack/scikit-umfpack/issues/36. Some of the tests just call `scipy.sparse.linalg.linsolve()`, which has to be fixed in the same spirit as in the PR https://github.com/scikit-umfpack/scikit-umfpack/pull/37 (when it is finished). I will take care of that, as soon as I will have the fixes ready on the scikit-umfpack side.
|
defect
|
bug scipy sparse linalg linsolve scikits umfpack on win scikits umfpack has many failures on win caused by other bit width chars for the same types than on linux see some of the tests just call scipy sparse linalg linsolve which has to be fixed in the same spirit as in the pr when it is finished i will take care of that as soon as i will have the fixes ready on the scikit umfpack side
| 1
|
8,124
| 2,611,453,296
|
IssuesEvent
|
2015-02-27 05:00:44
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Playing with 48 hedgehogs and Per Hedgehog Ammo is not possible
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Select game mode with Per Hedgehog Ammo.
2. Add 6 team with 8 player each.
3. Run fight.
What is the expected output? What do you see instead?
The fight doesn't run; I get an error window:
"Last two engine messages:
Establishing IPC connection... ok
Ammo stores overflow"
What version of the product are you using? On what operating system?
0.9.14.1 On Windows XP SP2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 21 Nov 2010 at 3:11
|
1.0
|
Playing with 48 hedgehogs and Per Hedgehog Ammo is not possible - ```
What steps will reproduce the problem?
1. Select game mode with Per Hedgehog Ammo.
2. Add 6 team with 8 player each.
3. Run fight.
What is the expected output? What do you see instead?
The fight doesn't run; I get an error window:
"Last two engine messages:
Establishing IPC connection... ok
Ammo stores overflow"
What version of the product are you using? On what operating system?
0.9.14.1 On Windows XP SP2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 21 Nov 2010 at 3:11
|
defect
|
playing with hedgehogs and per hedgehog ammo is not possible what steps will reproduce the problem select game mode with per hedgehog ammo add team with player each run fight what is the expected output what do you see instead the fight don t run i get a error window last two engine messages establishing ipc connection ok ammo stores overflow what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by adibiaz gmail com on nov at
| 1
|
39,419
| 9,449,233,880
|
IssuesEvent
|
2019-04-16 00:56:30
|
STEllAR-GROUP/phylanx
|
https://api.github.com/repos/STEllAR-GROUP/phylanx
|
closed
|
`fmap` does not accept NumPy arrays
|
category: primitives submodule: backend type: compatibility issue type: defect
|
Having:
```py
import numpy as np
from phylanx import Phylanx
@Phylanx
def map_fn(fn, elems):
return fmap(fn, elems)
x = np.array([1,2,3,4])
map_fn(lambda a:a+1,x)
```
results in:
```pytb
Traceback (most recent call last):
File "test51.py", line 64, in <module>
print(map_fn(lambda a:a+1,x))
File "C:\Repos\phylanx\cmake-build-debug\python\build\lib.win-amd64-3.6\phylanx\ast\transducer.py", line 132, in __call__
result = self.backend.call(map(self.map_decorated, args))
File "C:\Repos\phylanx\cmake-build-debug\python\build\lib.win-amd64-3.6\phylanx\ast\physl.py", line 531, in call
return self.lazy(args).eval()
File "C:\Repos\phylanx\cmake-build-debug\python\build\lib.win-amd64-3.6\phylanx\ast\physl.py", line 471, in eval
*self.args)
RuntimeError: test51.py(60, 8): fmap:: primitive_argument_type does not hold a numeric value type (type held: 'phylanx::execution_tree::primitive'): HPX(bad_parameter)
```
|
1.0
|
`fmap` does not accept NumPy arrays - Having:
```py
import numpy as np
from phylanx import Phylanx
@Phylanx
def map_fn(fn, elems):
return fmap(fn, elems)
x = np.array([1,2,3,4])
map_fn(lambda a:a+1,x)
```
results in:
```pytb
Traceback (most recent call last):
File "test51.py", line 64, in <module>
print(map_fn(lambda a:a+1,x))
File "C:\Repos\phylanx\cmake-build-debug\python\build\lib.win-amd64-3.6\phylanx\ast\transducer.py", line 132, in __call__
result = self.backend.call(map(self.map_decorated, args))
File "C:\Repos\phylanx\cmake-build-debug\python\build\lib.win-amd64-3.6\phylanx\ast\physl.py", line 531, in call
return self.lazy(args).eval()
File "C:\Repos\phylanx\cmake-build-debug\python\build\lib.win-amd64-3.6\phylanx\ast\physl.py", line 471, in eval
*self.args)
RuntimeError: test51.py(60, 8): fmap:: primitive_argument_type does not hold a numeric value type (type held: 'phylanx::execution_tree::primitive'): HPX(bad_parameter)
```
|
defect
|
fmap does not accept numpy arrays having py import numpy as np from phylanx import phylanx phylanx def map fn fn elems return fmap fn elems x np array map fn lambda a a x results in pytb traceback most recent call last file py line in print map fn lambda a a x file c repos phylanx cmake build debug python build lib win phylanx ast transducer py line in call result self backend call map self map decorated args file c repos phylanx cmake build debug python build lib win phylanx ast physl py line in call return self lazy args eval file c repos phylanx cmake build debug python build lib win phylanx ast physl py line in eval self args runtimeerror py fmap primitive argument type does not hold a numeric value type type held phylanx execution tree primitive hpx bad parameter
| 1
|
23,627
| 3,851,864,888
|
IssuesEvent
|
2016-04-06 05:27:36
|
GPF/imame4all
|
https://api.github.com/repos/GPF/imame4all
|
closed
|
Autosave feature request
|
auto-migrated Priority-Medium Type-Defect
|
```
Any chance it would be a simple addition to add autosave to the configuration
options? I use it on my PC version by simply adding the -autosave switch to the
shortcut. This would be a great addition to this great work!
```
Original issue reported on code.google.com by `2systema...@gmail.com` on 12 Mar 2013 at 8:27
|
1.0
|
Autosave feature request - ```
Any chance it would be a simple addition to add autosave to the configuration
options? I use it on my PC version by simply adding the -autosave switch to the
shortcut. This would be a great addition to this great work!
```
Original issue reported on code.google.com by `2systema...@gmail.com` on 12 Mar 2013 at 8:27
|
defect
|
autosave feature request any chance it would be a simple addition to add autosave to the configuration options i use it on my pc version by simply adding the autosave switch to the shortcut this would be a great addition to this great work original issue reported on code google com by gmail com on mar at
| 1
|
7,092
| 10,239,466,172
|
IssuesEvent
|
2019-08-19 18:19:11
|
RIOT-OS/RIOT
|
https://api.github.com/repos/RIOT-OS/RIOT
|
closed
|
core: API: RTC interface should not use struct tm
|
Discussion: RFC Process: API change State: stale
|
While collecting implementation ideas for a new timer subsystem, I stumbled about the fact that our real time clock interface can only be used using `struct tm` time representation.
While that might sound natural for a RTC, it seems inefficient:
- `struct tm` is defined using at least 9 integers in newlib (-> 36bytes), where 8 would be enough for any conceivable use case if RTCs would be used not in calendar mode, but in counting mode (just counting seconds).
Also, while inefficient, it is very easy to convert an epoch (or something similar) to `struct tm`, should it be needed for presenting a date to the user.
I propose changing the low level interface to work with a single integer representing (epoch) seconds, maybe keeping the `struct tm` functions for convenience and backwards compatibility.
This would essentially reduce the RTCs to timers counting seconds, but we'd be able to use their capability of waking up MCUs from deep power down modes.
A quick survey of our rtc implementations and the capabilities of RTCs :
cc430: uses calendar mode. chip offers counting mode, but without alarm. can probably be easily worked around
lpc2387: only calendar mode capable
native: uses posix system calls, so it's already epoch based
sam3x8e: using calendar mode, but MCU offers a real time timer (RTT) matching counter mode with alarm.
samd21: using calendar mode, but RTC offers counter mode with alarm
kinetis: already uses counting mode
I say, let's keep the notion of days, years, (leap years) in software.
What do you think?
|
1.0
|
core: API: RTC interface should not use struct tm - While collecting implementation ideas for a new timer subsystem, I stumbled about the fact that our real time clock interface can only be used using `struct tm` time representation.
While that might sound natural for a RTC, it seems inefficient:
- `struct tm` is defined using at least 9 integers in newlib (-> 36bytes), where 8 would be enough for any conceivable use case if RTCs would be used not in calendar mode, but in counting mode (just counting seconds).
Also, while inefficient, it is very easy to convert an epoch (or something similar) to `struct tm`, should it be needed for presenting a date to the user.
I propose changing the low level interface to work with a single integer representing (epoch) seconds, maybe keeping the `struct tm` functions for convenience and backwards compatibility.
This would essentially reduce the RTCs to timers counting seconds, but we'd be able to use their capability of waking up MCUs from deep power down modes.
A quick survey of our rtc implementations and the capabilities of RTCs :
cc430: uses calendar mode. chip offers counting mode, but without alarm. can probably be easily worked around
lpc2387: only calendar mode capable
native: uses posix system calls, so it's already epoch based
sam3x8e: using calendar mode, but MCU offers a real time timer (RTT) matching counter mode with alarm.
samd21: using calendar mode, but RTC offers counter mode with alarm
kinetis: already uses counting mode
I say, let's keep the notion of days, years, (leap years) in software.
What do you think?
|
non_defect
|
core api rtc interface should not use struct tm while collecting implementation ideas for a new timer subsystem i stumbled about the fact that our real time clock interface can only be used using struct tm time representation while that might sound natural for a rtc it seems inefficient struct tm is defined using at least integers in newlib where would be enough for any conceivable use case if rtcs would be used not in calendar mode but in counting mode just counting seconds also while inefficient it is very easy to convert an epoch or something similar to struct tm should it be needed for presenting a date to the user i propose changing the low level interface to work with a single integer representing epoch seconds maybe keeping the struct tm functions for convenience and backwards compatibility this would essentially reduce the rtcs to timers counting seconds but we d be able to use their capability of waking up mcus from deep power down modes a quick survey of our rtc implementations and the capabilities of rtcs uses calendar mode chip offers counting mode but without alarm can probably be easily worked around only calendar mode capable native uses posix system calls so it s already epoch based using calendar mode but mcu offers a real time timer rtt matching counter mode with alarm using calendar mode but rtc offers counter mode with alarm kinetis already uses counting mode i say let s keep the notion of days years leap years in software what do you think
| 0
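The RTC record above argues for an epoch-based low-level interface, keeping days, years, and leap years in software. A minimal sketch of how cheap that conversion layer is (Python here, purely illustrative; the actual RIOT interface would be C, and the function names are hypothetical):

```python
import calendar
import time

def epoch_to_calendar(epoch_seconds):
    """Expand a plain seconds counter into calendar fields (struct tm style)."""
    t = time.gmtime(epoch_seconds)
    return {
        "year": t.tm_year, "month": t.tm_mon, "day": t.tm_mday,
        "hour": t.tm_hour, "minute": t.tm_min, "second": t.tm_sec,
    }

def calendar_to_epoch(fields):
    """Collapse calendar fields back into a single epoch counter."""
    return calendar.timegm((
        fields["year"], fields["month"], fields["day"],
        fields["hour"], fields["minute"], fields["second"], 0, 0, 0,
    ))

# Round trip across a leap day: the hardware only ever sees the counter.
leap_day = epoch_to_calendar(951782400)  # 2000-02-29 00:00:00 UTC
```

The RTC itself only counts seconds; the calendar logic (including leap years) runs on demand when a date is presented to the user.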
|
10,537
| 2,622,171,354
|
IssuesEvent
|
2015-03-04 00:14:38
|
byzhang/rapidjson
|
https://api.github.com/repos/byzhang/rapidjson
|
closed
|
Linking Error in VS 2010
|
auto-migrated Priority-Medium Type-Defect
|
```
Getting a linker error when I try to compile my code on VS 2010 with rapidjson
included.
Apparently msvc doesn't like the fact that an unused private class member is
only declared.
In document.h, line 54:
GenericValue(const GenericValue& rhs);
change to:
GenericValue(const GenericValue& rhs) {}
and it links correctly
```
Original issue reported on code.google.com by `tbtt...@gmail.com` on 31 Jan 2014 at 8:07
|
1.0
|
Linking Error in VS 2010 - ```
Getting a linker error when I try to compile my code on VS 2010 with rapidjson
included.
Apparently msvc doesn't like the fact that an unused private class member is
only declared.
In document.h, line 54:
GenericValue(const GenericValue& rhs);
change to:
GenericValue(const GenericValue& rhs) {}
and it links correctly
```
Original issue reported on code.google.com by `tbtt...@gmail.com` on 31 Jan 2014 at 8:07
|
defect
|
linking error in vs getting a linker error when i try to compile my code on vs with rapidjson included apparently msvc doesn t like the fact that an unused private class member is only declared in document h line genericvalue const genericvalue rhs change to genericvalue const genericvalue rhs and it links correctly original issue reported on code google com by tbtt gmail com on jan at
| 1
|
170,956
| 20,905,351,236
|
IssuesEvent
|
2022-03-24 01:08:00
|
opfab/operatorfabric-core
|
https://api.github.com/repos/opfab/operatorfabric-core
|
closed
|
CVE-2021-44907 (Low) detected in qs-6.9.7.tgz - autoclosed
|
security vulnerability Dev tools vulnerabilty
|
## CVE-2021-44907 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>qs-6.9.7.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.9.7.tgz">https://registry.npmjs.org/qs/-/qs-6.9.7.tgz</a></p>
<p>Path to dependency file: /ui/main/package.json</p>
<p>Path to vulnerable library: /ui/main/node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- karma-6.3.17.tgz (Root Library)
- body-parser-1.19.2.tgz
- :x: **qs-6.9.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opfab/operatorfabric-core/commit/4e0fe133b229f8b1062ebd45b4582188f7e853cb">4e0fe133b229f8b1062ebd45b4582188f7e853cb</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the gs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior.
WhiteSource Note: After conducting further research, WhiteSource has determined that versions 0.0.1--6.10.3 of qs are vulnerable to CVE-2021-44907.
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-44907 (Low) detected in qs-6.9.7.tgz - autoclosed - ## CVE-2021-44907 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>qs-6.9.7.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.9.7.tgz">https://registry.npmjs.org/qs/-/qs-6.9.7.tgz</a></p>
<p>Path to dependency file: /ui/main/package.json</p>
<p>Path to vulnerable library: /ui/main/node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- karma-6.3.17.tgz (Root Library)
- body-parser-1.19.2.tgz
- :x: **qs-6.9.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opfab/operatorfabric-core/commit/4e0fe133b229f8b1062ebd45b4582188f7e853cb">4e0fe133b229f8b1062ebd45b4582188f7e853cb</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the gs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior.
WhiteSource Note: After conducting further research, WhiteSource has determined that versions 0.0.1--6.10.3 of qs are vulnerable to CVE-2021-44907.
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve low detected in qs tgz autoclosed cve low severity vulnerability vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file ui main package json path to vulnerable library ui main node modules qs package json dependency hierarchy karma tgz root library body parser tgz x qs tgz vulnerable library found in head commit a href found in base branch develop vulnerability details a denial of service vulnerability exists in qs up to due to insufficient sanitization of property in the gs parse function the merge function allows the assignment of properties on an array in the query for any property being assigned a value in the array is converted to an object containing these properties essentially this means that the property whose expected type is array always has to be checked with array isarray by the user this may not be obvious to the user and can cause unexpected behavior whitesource note after conducting further research whitesource has determined that versions of qs are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href step up your open source security game with whitesource
| 0
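The qs advisory above reduces to one defensive rule: never trust the parsed type of a query parameter, and check `Array.isArray()` before use. The same pattern in Python (illustrative only, since the CVE concerns the JavaScript qs package; `get_single` is a hypothetical helper) using `urllib.parse.parse_qs`:

```python
from urllib.parse import parse_qs

def get_single(params, key, default=None):
    """Return one value for `key`, whatever shape the parser produced."""
    value = params.get(key, default)
    if isinstance(value, list):          # parse_qs always yields lists
        return value[0] if value else default
    return value

params = parse_qs("a=1&a=2&b=3")
```

Checking the shape at every read site is what prevents the attacker-controlled array-to-object conversion from causing unexpected behavior downstream.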
|
254,278
| 21,776,348,165
|
IssuesEvent
|
2022-05-13 14:09:29
|
willowtreeapps/vocable-ios
|
https://api.github.com/repos/willowtreeapps/vocable-ios
|
closed
|
Voice Recognition not always triggering
|
bug Test verified
|
**Describe the bug**
When navigating between categories if you navigate to the listen category very quickly the Voice recognition doesn't seem to activate.
Listen feature seems to be working fine when saying "Hey vocable" the listening is being picked up so it may seem like its only when navigating to the category.
**To Reproduce**
Quickly toggle between the listen category and a non-listen category i.e. General.
**Expected behavior**
When navigating to the Listen category I should be able to talk to vocable and the app should pick up what I am saying.
**Actual behavior**
Sometimes the app is not picking up what I am saying.
Sometimes the Listen will pick up the initial phrase but will not pick up subsequent phrases.
**Device Information**
Tested on Iphone 8 simulator as well as a physical iPad device (7th or 9th gen).
|
1.0
|
Voice Recognition not always triggering - **Describe the bug**
When navigating between categories if you navigate to the listen category very quickly the Voice recognition doesn't seem to activate.
Listen feature seems to be working fine when saying "Hey vocable" the listening is being picked up so it may seem like its only when navigating to the category.
**To Reproduce**
Quickly toggle between the listen category and a non-listen category i.e. General.
**Expected behavior**
When navigating to the Listen category I should be able to talk to vocable and the app should pick up what I am saying.
**Actual behavior**
Sometimes the app is not picking up what I am saying.
Sometimes the Listen will pick up the initial phrase but will not pick up subsequent phrases.
**Device Information**
Tested on Iphone 8 simulator as well as a physical iPad device (7th or 9th gen).
|
non_defect
|
voice recognition not always triggering describe the bug when navigating between categories if you navigate to the listen category very quickly the voice recognition doesn t seem to activate listen feature seems to be working fine when saying hey vocable the listening is being picked up so it may seem like its only when navigating to the category to reproduce quickly toggle between the listen category and a non listen category i e general expected behavior when navigating to the listen category i should be able to talk to vocable and the app should pick up what i am saying actual behavior sometimes the app is not picking up what i am saying sometime the listen will pick up the initial phrase but will not pick up subsequent phrases device information tested on iphone simulator as well as a physical ipad device or gen
| 0
|
242,030
| 26,257,050,572
|
IssuesEvent
|
2023-01-06 02:19:18
|
flickerfly/k8s_auth_tests
|
https://api.github.com/repos/flickerfly/k8s_auth_tests
|
opened
|
CVE-2021-33503 (High) detected in urllib3-1.26.4-py2.py3-none-any.whl
|
security vulnerability
|
## CVE-2021-33503 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.4-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/09/c6/d3e3abe5b4f4f16cf0dfc9240ab7ce10c2baa0e268989a4e3ec19e90c84e/urllib3-1.26.4-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/09/c6/d3e3abe5b4f4f16cf0dfc9240ab7ce10c2baa0e268989a4e3ec19e90c84e/urllib3-1.26.4-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- openshift-0.12.0.tar.gz (Root Library)
- kubernetes-12.0.1-py2.py3-none-any.whl
- :x: **urllib3-1.26.4-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-06-29</p>
<p>Fix Resolution: urllib3 - 1.26.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33503 (High) detected in urllib3-1.26.4-py2.py3-none-any.whl - ## CVE-2021-33503 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.4-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/09/c6/d3e3abe5b4f4f16cf0dfc9240ab7ce10c2baa0e268989a4e3ec19e90c84e/urllib3-1.26.4-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/09/c6/d3e3abe5b4f4f16cf0dfc9240ab7ce10c2baa0e268989a4e3ec19e90c84e/urllib3-1.26.4-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- openshift-0.12.0.tar.gz (Root Library)
- kubernetes-12.0.1-py2.py3-none-any.whl
- :x: **urllib3-1.26.4-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-06-29</p>
<p>Fix Resolution: urllib3 - 1.26.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in none any whl cve high severity vulnerability vulnerable library none any whl http library with thread safe connection pooling file post and more library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy openshift tar gz root library kubernetes none any whl x none any whl vulnerable library found in base branch master vulnerability details an issue was discovered in before when provided with a url containing many characters in the authority component the authority regular expression exhibits catastrophic backtracking causing a denial of service if a url were passed as a parameter or redirected to via an http redirect publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
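The urllib3 advisory above describes catastrophic regex backtracking on URLs whose authority component contains many `@` characters. The fix is upgrading to urllib3 >= 1.26.5; as an extra guard, a pre-check like the following (a hedged sketch using only the standard library; `suspicious_authority` is a hypothetical name) can reject such URLs in linear time before they reach any regex-based parser:

```python
from urllib.parse import urlsplit

def suspicious_authority(url, max_at=1):
    """Flag URLs whose authority contains more '@' than userinfo needs."""
    netloc = urlsplit(url).netloc      # urlsplit scans linearly, no backtracking
    return netloc.count("@") > max_at

hostile = "http://" + "@" * 5000 + "example.com/"
```

A legitimate URL needs at most one `@` (separating userinfo from host), so anything beyond that is safe to reject outright.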
|
40,811
| 10,167,596,133
|
IssuesEvent
|
2019-08-07 18:36:52
|
USDepartmentofLabor/OCIO-DOLSafety-iOS
|
https://api.github.com/repos/USDepartmentofLabor/OCIO-DOLSafety-iOS
|
closed
|
Functional - DOL Contacts Screen - Text Alignment Issue with Main Content Section
|
Fixed defect
|
I am wondering why the e-mail address and phone number do not align with the text above it?
Please see the attached screenshot.

|
1.0
|
Functional - DOL Contacts Screen - Text Alignment Issue with Main Content Section - I am wondering why the e-mail address and phone number do not align with the text above it?
Please see the attached screenshot.

|
defect
|
functional dol contacts screen text alignment issue with main content section i am wondering why the e mail address and phone number do not align with the text above it please see the attached screenshot
| 1
|
6,182
| 2,610,222,868
|
IssuesEvent
|
2015-02-26 19:10:41
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
datgen.exe.rar
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Ateist Smirnov'''
Hi everyone, can anyone tell me where to find
.datgen.exe.rar. it was posted here once already
'''Valentin Arkhipov'''
Here, take the link http://bit.ly/1hext4T
'''Vilor Bobylyov'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Alvin Nikolaev'''
Nah, it's all fine, nothing was charged to me
'''Arvid Lapin'''
Nah, it's all fine, nothing was charged to me
File information: datgen.exe.rar
Uploaded: this month
Times downloaded: 222
Rating: 212
Average download speed: 312
Similar files: 22
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 18 Dec 2013 at 5:14
|
1.0
|
datgen.exe.rar - ```
'''Ateist Smirnov'''
Hi everyone, can anyone tell me where to find
.datgen.exe.rar. it was posted here once already
'''Valentin Arkhipov'''
Here, take the link http://bit.ly/1hext4T
'''Vilor Bobylyov'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Alvin Nikolaev'''
Nah, it's all fine, nothing was charged to me
'''Arvid Lapin'''
Nah, it's all fine, nothing was charged to me
File information: datgen.exe.rar
Uploaded: this month
Times downloaded: 222
Rating: 212
Average download speed: 312
Similar files: 22
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 18 Dec 2013 at 5:14
|
defect
|
datgen exe rar ateist smirnov hi everyone can anyone tell me where to find datgen exe rar it was posted here once already valentin arkhipov here take the link vilor bobylyov it asks you to enter a mobile number isn t that dangerous alvin nikolaev nah it s all fine nothing was charged to me arvid lapin nah it s all fine nothing was charged to me file information datgen exe rar uploaded this month times downloaded rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
| 1
|
70,439
| 23,168,113,724
|
IssuesEvent
|
2022-07-30 09:07:14
|
davebaol/d2x-cios
|
https://api.github.com/repos/davebaol/d2x-cios
|
closed
|
Norelsys NS1066 not compatible with D2x CIOS
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1. Attaching a hdd utilising a enclosure using this USB3-SATA bridge chipset
results in the loader hanging while attempting to access the HDD, in the case
of usbloader gx, it hangs for what seems like forever trying to load the drive
but eventually times out and loads without the installed games present
What is the expected output? What do you see instead?
Expected would be for it to work, reality is harsh and expectations aren't
always met, i am aware.
What version of the product are you using? On what operating system?
No idea, this enclosure is new.
Please provide any additional information below.
The enclosure using the chipset in question is available from this link
http://www.ebay.com.au/itm/HIGH-QUALITY-ALUMINIUM-2-5-SATA-USB-3-0-EXTERNAL-PORT
ABLE-HDD-ENCLOSURE-CASE-/170918578318
```
Original issue reported on code.google.com by `danialho...@gmail.com` on 24 Sep 2013 at 10:08
|
1.0
|
Norelsys NS1066 not compatible with D2x CIOS - ```
What steps will reproduce the problem?
1. Attaching an hdd utilising an enclosure using this USB3-SATA bridge chipset
results in the loader hanging while attempting to access the HDD, in the case
of usbloader gx, it hangs for what seems like forever trying to load the drive
but eventually times out and loads without the installed games present
What is the expected output? What do you see instead?
Expected would be for it to work, reality is harsh and expectations aren't
always met, i am aware.
What version of the product are you using? On what operating system?
No idea, this enclosure is new.
Please provide any additional information below.
The enclosure using the chipset in question is available from this link
http://www.ebay.com.au/itm/HIGH-QUALITY-ALUMINIUM-2-5-SATA-USB-3-0-EXTERNAL-PORT
ABLE-HDD-ENCLOSURE-CASE-/170918578318
```
Original issue reported on code.google.com by `danialho...@gmail.com` on 24 Sep 2013 at 10:08
|
defect
|
norelsys not compatible with cios what steps will reproduce the problem attaching a hdd utilising a enclosure using this sata bridge chipset results in the loader hanging while attempting to access the hdd in the case of usbloader gx it hangs for what seems like forever trying to load the drive but eventually times out and loads without the installed games present what is the expected output what do you see instead expected would be for it to work reality is harsh and expectations a aren t always met i am aware what version of the product are you using on what operating system no idea this enclosure is new please provide any additional information below the enclosure using the chipset in question is available from this link able hdd enclosure case original issue reported on code google com by danialho gmail com on sep at
| 1
|
8,994
| 2,615,118,421
|
IssuesEvent
|
2015-03-01 05:44:03
|
chrsmith/google-api-java-client
|
https://api.github.com/repos/chrsmith/google-api-java-client
|
closed
|
Adding album to google picasa NOT WORKING
|
auto-migrated Priority-Medium Type-Defect
|
```
Version of google-api-java-client (e.g. 1.2.1-alpha)
Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)
Adding album to google picasa NOT WORKING. Exception in log.
```
Original issue reported on code.google.com by `tml...@gmail.com` on 17 Jan 2011 at 2:17
|
1.0
|
Adding album to google picasa NOT WORKING - ```
Version of google-api-java-client (e.g. 1.2.1-alpha)
Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)
Adding album to google picasa NOT WORKING. Exception in log.
```
Original issue reported on code.google.com by `tml...@gmail.com` on 17 Jan 2011 at 2:17
|
defect
|
adding album to google picasa not working version of google api java client e g alpha java environment e g java android app engine adding album to google picasa not working exception in log original issue reported on code google com by tml gmail com on jan at
| 1
|
4,191
| 2,713,114,692
|
IssuesEvent
|
2015-04-09 17:25:45
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
closed
|
kubelet test is flaky
|
area/test priority/P1 team/node
|
Appears to deadlock or otherwise hang after a few seconds and then hit the 5-minute timeout.
From start of log:
```
W0406 21:21:50.436464 21680 docker.go:482] found a container with the "k8s" prefix, but too few fields (2): "k8s_unidentified"
...
E0406 21:21:53.397806 21680 server.go:614] Timed out waiting for client to create streams
panic: test timed out after 5m0s
```
followed by stack traces of zillions of goroutines that mostly report being blocked for 4+ minutes.
Recent instances:
https://travis-ci.org/GoogleCloudPlatform/kubernetes/jobs/57351393
https://travis-ci.org/GoogleCloudPlatform/kubernetes/jobs/57389027
cc @dchen1107
|
1.0
|
kubelet test is flaky - Appears to deadlock or otherwise hang after a few seconds and then hit the 5-minute timeout.
From start of log:
```
W0406 21:21:50.436464 21680 docker.go:482] found a container with the "k8s" prefix, but too few fields (2): "k8s_unidentified"
...
E0406 21:21:53.397806 21680 server.go:614] Timed out waiting for client to create streams
panic: test timed out after 5m0s
```
followed by stack traces of zillions of goroutines that mostly report being blocked for 4+ minutes.
Recent instances:
https://travis-ci.org/GoogleCloudPlatform/kubernetes/jobs/57351393
https://travis-ci.org/GoogleCloudPlatform/kubernetes/jobs/57389027
cc @dchen1107
|
non_defect
|
kubelet test is flaky appears to deadlock or otherwise hang after a few seconds and then hit the minute timeout from start of log docker go found a container with the prefix but too few fields unidentified server go timed out waiting for client to create streams panic test timed out after followed by stack traces of zillions of goroutines that mostly report being blocked for minutes recent instances cc
| 0
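The kubelet failure above shows the classic hang-until-global-timeout pattern: a blocked goroutine only surfaces when Go's 5-minute test panic fires. A per-step watchdog fails fast instead. A minimal sketch in Python (illustrative, the actual test is Go; note Python cannot kill the stuck worker thread, which mirrors why such hangs are nasty):

```python
import concurrent.futures
import time

def run_step(fn, timeout=2.0):
    """Run one test step, but stop waiting once `timeout` seconds elapse."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(fn)
        future.result(timeout=timeout)     # raises TimeoutError on a hang
        return "ok"
    except concurrent.futures.TimeoutError:
        return "timed out"
    finally:
        pool.shutdown(wait=False)          # do not join the stuck worker
```

Reporting "timed out" per step pinpoints which operation deadlocked, instead of dumping stack traces for every goroutine four minutes later.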
|
19,347
| 3,193,155,473
|
IssuesEvent
|
2015-09-30 02:14:47
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
12 hour format datetimes with meridian not being patched correctly
|
Defect validation
|
Howdy, I'm having an issue with an application I'm developing where sometimes dates passed in 12-hour format with am/pm will not be processed correctly by patchEntity() method.
For context my app has a form for employees to fill out time sheets, and some valid entries will not be patched such as a start time of 10:30am and end time of 12:30pm. When these values are entered, only the start time of 10:30am will exist in the patched entity. Both fields have the same validation rules. I'll supply everything below such that you should be able to reproduce the issue, let me know if you need anything more.
Here's my 'add' form
```php
<?= $this->Form->create($timeSheet) ?>
<fieldset>
<legend><?= __('Add Time Sheet') ?></legend>
<?php
if ($logged_user->is_admin) { //check to see if user is admin
echo $this->Form->input('sp_id', [
'type' => 'select',
'options' => $sps,
'default' => $current_sp_id,
'label' => 'Select an ID other than your own if you need to create a time sheet for another SP.'
]);
}
echo $this->Form->input('start_time', ['type' => 'datetime',
'interval' => 5,
'minYear' => date("Y") - 1,
'maxYear' => date("Y"),
'timeFormat' => 12,
'label' => "Start Date and Time"]);
echo $this->Form->input('end_time', ['type' => 'datetime',
'interval' => 5,
'minYear' => date("Y") - 1,
'maxYear' => date("Y"),
'timeFormat' => 12,
'label' => "End Date and Time"]);
echo $this->Form->input('program', ['type' => 'integer',
'label' => 'Payroll Code']);
echo $this->Form->input('sp_role_id', ['options' => $sp_roles, 'empty' => false]);
echo $this->Form->input('notes', ['type' => 'textarea',
'label' => 'Notes (optional)']);
?>
</fieldset>
<?= $this->Form->button(__('Submit')) ?>
<?= $this->Form->end() ?>
```
Controller code I'm currently using to display that the data isn't being patched correctly
```php
public function add()
{
$timeSheet = $this->TimeSheets->newEntity();
if ($this->request->is('post')) {
$timeSheet = $this->TimeSheets->patchEntity($timeSheet, $this->request->data);
echo "<pre>Request data:";
echo print_r($this->request->data);
echo "Patched data:";
echo print_r($timeSheet);
echo "</pre>";
//code below here is to popular other drop down fields in the add form (omitted)
}
```
Validation rules
```php
$validator
->add('start_time', 'valid', ['rule' => 'datetime'])
->requirePresence('start_time', 'create')
->notEmpty('start_time');
$validator
->add('end_time', 'valid', ['rule' => 'datetime'])
->requirePresence('end_time', 'create')
->notEmpty('end_time');
```
Request data submitted by form:
```html
Request data:Array
(
[sp_id] => 520
[start_time] => Array
(
[year] => 2015
[month] => 09
[day] => 21
[hour] => 10
[minute] => 30
[meridian] => am
)
[end_time] => Array
(
[year] => 2015
[month] => 09
[day] => 21
[hour] => 12
[minute] => 30
[meridian] => pm
)
[program] => 22
[sp_role_id] => 1
[notes] =>
)
```
And the data present in the patched entity. You can see here that only 'start_time' exists
```html
Patched data:App\Model\Entity\TimeSheet Object
(
[_accessible:protected] => Array
(
[sp_id] => 1
[start_time] => 1
[end_time] => 1
[week_ending_date] => 1
[date_submitted] => 1
[program] => 1
[rate] => 1
[benefits] => 1
[total_cost] => 1
[notes] => 1
[sp_role_id] => 1
[sp] => 1
)
[_properties:protected] => Array
(
[sp_id] => 520
[start_time] => Cake\I18n\Time Object
(
[date] => 2015-09-21 10:30:00
[timezone_type] => 3
[timezone] => America/New_York
)
[program] => 22
[sp_role_id] => 1
[notes] =>
)
[_original:protected] => Array
(
)
[_hidden:protected] => Array
(
)
[_virtual:protected] => Array
(
)
[_className:protected] => App\Model\Entity\TimeSheet
[_dirty:protected] => Array
(
[sp_id] => 1
[start_time] => 1
[program] => 1
[sp_role_id] => 1
[notes] => 1
)
[_new:protected] => 1
[_errors:protected] => Array
(
[end_time] => Array
(
[valid] => The provided value is invalid
)
[week_ending_date] => Array
(
[_required] => This field is required
)
[date_submitted] => Array
(
[_required] => This field is required
)
[rate] => Array
(
[_required] => This field is required
)
[benefits] => Array
(
[_required] => This field is required
)
[total_cost] => Array
(
[_required] => This field is required
)
)
[_registryAlias:protected] => TimeSheets
)
```
Version of Cake as displayed in /vendor/cakephp/cakephp/VERSION.txt is 3.0.13
|
1.0
|
12 hour format datetimes with meridian not being patched correctly - Howdy, I'm having an issue with an application I'm developing where sometimes dates passed in 12-hour format with am/pm will not be processed correctly by patchEntity() method.
For context my app has a form for employees to fill out time sheets, and some valid entries will not be patched such as a start time of 10:30am and end time of 12:30pm. When these values are entered, only the start time of 10:30am will exist in the patched entity. Both fields have the same validation rules. I'll supply everything below such that you should be able to reproduce the issue, let me know if you need anything more.
Here's my 'add' form
```php
<?= $this->Form->create($timeSheet) ?>
<fieldset>
<legend><?= __('Add Time Sheet') ?></legend>
<?php
if ($logged_user->is_admin) { //check to see if user is admin
echo $this->Form->input('sp_id', [
'type' => 'select',
'options' => $sps,
'default' => $current_sp_id,
'label' => 'Select an ID other than your own if you need to create a time sheet for another SP.'
]);
}
echo $this->Form->input('start_time', ['type' => 'datetime',
'interval' => 5,
'minYear' => date("Y") - 1,
'maxYear' => date("Y"),
'timeFormat' => 12,
'label' => "Start Date and Time"]);
echo $this->Form->input('end_time', ['type' => 'datetime',
'interval' => 5,
'minYear' => date("Y") - 1,
'maxYear' => date("Y"),
'timeFormat' => 12,
'label' => "End Date and Time"]);
echo $this->Form->input('program', ['type' => 'integer',
'label' => 'Payroll Code']);
echo $this->Form->input('sp_role_id', ['options' => $sp_roles, 'empty' => false]);
echo $this->Form->input('notes', ['type' => 'textarea',
'label' => 'Notes (optional)']);
?>
</fieldset>
<?= $this->Form->button(__('Submit')) ?>
<?= $this->Form->end() ?>
```
Controller code I'm currently using to display that the data isn't being patched correctly
```php
public function add()
{
$timeSheet = $this->TimeSheets->newEntity();
if ($this->request->is('post')) {
$timeSheet = $this->TimeSheets->patchEntity($timeSheet, $this->request->data);
echo "<pre>Request data:";
echo print_r($this->request->data);
echo "Patched data:";
echo print_r($timeSheet);
echo "</pre>";
//code below here is to populate other drop-down fields in the add form (omitted)
}
```
Validation rules
```php
$validator
->add('start_time', 'valid', ['rule' => 'datetime'])
->requirePresence('start_time', 'create')
->notEmpty('start_time');
$validator
->add('end_time', 'valid', ['rule' => 'datetime'])
->requirePresence('end_time', 'create')
->notEmpty('end_time');
```
Request data submitted by form:
```html
Request data:Array
(
[sp_id] => 520
[start_time] => Array
(
[year] => 2015
[month] => 09
[day] => 21
[hour] => 10
[minute] => 30
[meridian] => am
)
[end_time] => Array
(
[year] => 2015
[month] => 09
[day] => 21
[hour] => 12
[minute] => 30
[meridian] => pm
)
[program] => 22
[sp_role_id] => 1
[notes] =>
)
```
And the data present in the patched entity. You can see here that only 'start_time' exists
```html
Patched data:App\Model\Entity\TimeSheet Object
(
[_accessible:protected] => Array
(
[sp_id] => 1
[start_time] => 1
[end_time] => 1
[week_ending_date] => 1
[date_submitted] => 1
[program] => 1
[rate] => 1
[benefits] => 1
[total_cost] => 1
[notes] => 1
[sp_role_id] => 1
[sp] => 1
)
[_properties:protected] => Array
(
[sp_id] => 520
[start_time] => Cake\I18n\Time Object
(
[date] => 2015-09-21 10:30:00
[timezone_type] => 3
[timezone] => America/New_York
)
[program] => 22
[sp_role_id] => 1
[notes] =>
)
[_original:protected] => Array
(
)
[_hidden:protected] => Array
(
)
[_virtual:protected] => Array
(
)
[_className:protected] => App\Model\Entity\TimeSheet
[_dirty:protected] => Array
(
[sp_id] => 1
[start_time] => 1
[program] => 1
[sp_role_id] => 1
[notes] => 1
)
[_new:protected] => 1
[_errors:protected] => Array
(
[end_time] => Array
(
[valid] => The provided value is invalid
)
[week_ending_date] => Array
(
[_required] => This field is required
)
[date_submitted] => Array
(
[_required] => This field is required
)
[rate] => Array
(
[_required] => This field is required
)
[benefits] => Array
(
[_required] => This field is required
)
[total_cost] => Array
(
[_required] => This field is required
)
)
[_registryAlias:protected] => TimeSheets
)
```
Version of Cake as displayed in /vendor/cakephp/cakephp/VERSION.txt is 3.0.13
|
defect
|
hour format datetimes with meridian not being patched correctly howdy i m having an issue with an application i m developing where sometimes dates passed in hour format with am pm will not be processed correctly by patchentity method for context my app has a form for employees to fill out time sheets and some valid entries will not be patched such as a start time of and end time of when these values are entered only the start time of will exist in the patched entity both fields have the same validation rules i ll supply everything below such that you should be able to reproduce the issue let me know if you need anything more here s my add form php form create timesheet php if logged user is admin check to see if user is admin echo this form input sp id type select options sps default current sp id label select an id other than your own if you need to create a time sheet for another sp echo this form input start time type datetime interval minyear date y maxyear date y timeformat label start date and time echo this form input end time type datetime interval minyear date y maxyear date y timeformat label end date and time echo this form input program type integer label payroll code echo this form input sp role id echo this form input notes type textarea label notes optional form button submit form end controller code i m currently using to display that the data isn t being patched correctly php public function add timesheet this timesheets newentity if this request is post timesheet this timesheets patchentity timesheet this request data echo request data echo print r this request data echo patched data echo print r timesheet echo code below here is to popular other drop down fields in the add form omitted validation rules php validator add start time valid requirepresence start time create notempty start time validator add end time valid requirepresence end time create notempty end time request data submitted by form html request data array array am array pm and the 
data present in the patched entity you can see here that only start time exists html patched data app model entity timesheet object array array cake time object america new york array array array app model entity timesheet array array array the provided value is invalid array this field is required array this field is required array this field is required array this field is required array this field is required timesheets version of cake as displayed in vendor cakephp cakephp version txt is
| 1
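The failure in the record above (a 12:30pm end_time rejected while a 10:30am start_time passes) matches a classic 12-hour to 24-hour conversion bug: naively adding 12 for "pm" turns 12:30pm into the invalid hour 24, which a datetime validator rejects. A minimal Python sketch of the edge cases; the helper is illustrative only, not CakePHP's actual implementation:

```python
def to_24_hour(hour, meridian):
    """Convert a 12-hour clock reading to a 24-hour hour value.

    The 12 o'clock cases are the classic pitfall: 12am is hour 0,
    and 12pm stays 12. Unconditionally adding 12 for 'pm' yields
    hour 24 for 12:30pm, exactly the kind of value a datetime
    validation rule would reject as invalid.
    """
    if meridian == "am":
        return 0 if hour == 12 else hour
    # pm: noon stays 12, every other hour shifts by 12
    return hour if hour == 12 else hour + 12

# The two submissions from the report:
print(to_24_hour(10, "am"))  # the start_time hour that validated
print(to_24_hour(12, "pm"))  # the end_time hour: must be 12, not 24
```

Only the 12pm/12am inputs exercise the special case, which is consistent with the reporter seeing the bug "sometimes" rather than on every submission.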
|
7,529
| 18,143,479,021
|
IssuesEvent
|
2021-09-25 02:36:03
|
runt9/FusionOfSouls
|
https://api.github.com/repos/runt9/FusionOfSouls
|
closed
|
Add Boss to the game
|
enhancement UI architecture run-progression
|
- Boss is generated as any other unit with a single passive and a single attribute increase and should always have 2 classes.
- After each post-battle reward selection, unchosen unit/rune has one of its fusion possibilities randomly applied to the boss
- Button next to Hero on top bar shows Boss information
- Only shows the Fusion list, i.e. the Right Pane of the Hero dialog
|
1.0
|
Add Boss to the game - - Boss is generated as any other unit with a single passive and a single attribute increase and should always have 2 classes.
- After each post-battle reward selection, unchosen unit/rune has one of its fusion possibilities randomly applied to the boss
- Button next to Hero on top bar shows Boss information
- Only shows the Fusion list, i.e. the Right Pane of the Hero dialog
|
non_defect
|
add boss to the game boss is generated as any other unit with a single passive and a single attribute increase and should always have classes after each post battle reward selection unchosen unit rune has one of its fusion possibilities randomly applied to the boss button next to hero on top bar shows boss information only shows the fusion list i e the right pane of the hero dialog
| 0
|
50,888
| 13,187,941,364
|
IssuesEvent
|
2020-08-13 05:05:34
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
CVMFS - doc builds are choking on the lack of `napoleon` (Trac #1608)
|
Migrated from Trac defect infrastructure
|
david- could you add napoleon, or update sphinx?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1608">https://code.icecube.wisc.edu/ticket/1608</a>, reported by nega and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-29T19:54:11",
"description": "david- could you add napoleon, or update sphinx?",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1459281251636355",
"component": "infrastructure",
"summary": "CVMFS - doc builds are choking on the lack of `napoleon`",
"priority": "normal",
"keywords": "cvmfs sphinx documentation",
"time": "2016-03-28T21:20:33",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
CVMFS - doc builds are choking on the lack of `napoleon` (Trac #1608) - david- could you add napoleon, or update sphinx?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1608">https://code.icecube.wisc.edu/ticket/1608</a>, reported by nega and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-29T19:54:11",
"description": "david- could you add napoleon, or update sphinx?",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1459281251636355",
"component": "infrastructure",
"summary": "CVMFS - doc builds are choking on the lack of `napoleon`",
"priority": "normal",
"keywords": "cvmfs sphinx documentation",
"time": "2016-03-28T21:20:33",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
defect
|
cvmfs doc builds are choking on the lack of napoleon trac david could you add napoleon or update sphinx migrated from json status closed changetime description david could you add napoleon or update sphinx reporter nega cc resolution fixed ts component infrastructure summary cvmfs doc builds are choking on the lack of napoleon priority normal keywords cvmfs sphinx documentation time milestone owner david schultz type defect
| 1
|
74,573
| 25,184,650,576
|
IssuesEvent
|
2022-11-11 16:43:40
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
opened
|
Sibling transfer issues warning if multiapps are on different execute_ons
|
C: Framework T: defect P: normal
|
## Bug Description
When performing a sub-app to sub-app transfer, i.e. sibling transfer, where the multiapps have different `execute_on`s, it is impossible to avoid a warning stating that the transfer does not have the same `execute_on` flags as either multiapp. It is my opinion that the transfer should execute during the `to_multi_app` execution since sibling transfers are before multiapp execution. And there should only be a warning if it doesn't match the `to_multi_app`
## Steps to Reproduce
Here is the main and sub app inputs recreating the issue:
main.i:
```
[MultiApps]
[sub1]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_BEGIN
cli_args = 'Variables/u/initial_condition=2'
[]
[sub2]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_END
[]
[]
[Transfers]
[sibling_transfer]
type = MultiAppCopyTransfer
from_multi_app = sub1
to_multi_app = sub2
source_variable = u
variable = u
[]
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
sibling_sub.i:
```
[Variables/u]
[]
[Postprocessors/avg_u]
type = ElementAverageValue
variable = u
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
Without specifying the transfer's `execute_on` it seems to want to execute on every possible exec flag (`INITIAL`, `TIMESTEP_BEGIN`, `TIMESTEP_END`, `FINAL`) and it issues the warning:
```
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
With `Transfers/sibling_transfer/execute_on=TIMESTEP_END` it again executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
```
Finally, with `Transfers/sibling_transfer/execute_on='TIMESTEP_BEGIN TIMESTEP_END'` it executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
To summarize:
- Transfer wants to execute on every flag, no matter what.
- No `execute_on`: warning for `to_multi_app`
- `execute_on=TIMESTEP_END`: warning for `from_multi_app`
- `execute_on='TIMESTEP_BEGIN TIMESTEP_END'`: warning for both `from_multi_app` and `to_multi_app`
## Impact
Two major impacts:
1. Although it does not affect the answer, performing transfers on every exec flag could be costly for some transfers.
2. The unavoidable warning message is erroneous (IMO) and a test with this type of transfer will always fail because of the warning.
|
1.0
|
Sibling transfer issues warning if multiapps are on different execute_ons - ## Bug Description
When performing a sub-app to sub-app transfer, i.e. sibling transfer, where the multiapps have different `execute_on`s, it is impossible to avoid a warning stating that the transfer does not have the same `execute_on` flags as either multiapp. It is my opinion that the transfer should execute during the `to_multi_app` execution since sibling transfers are before multiapp execution. And there should only be a warning if it doesn't match the `to_multi_app`
## Steps to Reproduce
Here is the main and sub app inputs recreating the issue:
main.i:
```
[MultiApps]
[sub1]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_BEGIN
cli_args = 'Variables/u/initial_condition=2'
[]
[sub2]
type = FullSolveMultiApp
input_files = sibling_sub.i
execute_on = TIMESTEP_END
[]
[]
[Transfers]
[sibling_transfer]
type = MultiAppCopyTransfer
from_multi_app = sub1
to_multi_app = sub2
source_variable = u
variable = u
[]
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
sibling_sub.i:
```
[Variables/u]
[]
[Postprocessors/avg_u]
type = ElementAverageValue
variable = u
[]
[Mesh]
[min]
type = GeneratedMeshGenerator
dim = 1
nx = 1
[]
[]
[Problem]
solve = false
kernel_coverage_check = false
skip_nl_system_check = true
verbose_multiapps = true
[]
[Executioner]
type = Steady
[]
```
Without specifying the transfer's `execute_on` it seems to want to execute on every possible exec flag (`INITIAL`, `TIMESTEP_BEGIN`, `TIMESTEP_END`, `FINAL`) and it issues the warning:
```
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
With `Transfers/sibling_transfer/execute_on=TIMESTEP_END` it again executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
```
Finally, with `Transfers/sibling_transfer/execute_on='TIMESTEP_BEGIN TIMESTEP_END'` it executes on every flag and issues the warning:
```
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated from_multi_app execute_on flags
*** Warning ***
The following warning occurred in the object "sibling_transfer", of type "MultiAppCopyTransfer".
MultiAppTransfer execute_on flags do not match associated to_multi_app execute_on flags
```
To summarize:
- Transfer wants to execute on every flag, no matter what.
- No `execute_on`: warning for `to_multi_app`
- `execute_on=TIMESTEP_END`: warning for `from_multi_app`
- `execute_on='TIMESTEP_BEGIN TIMESTEP_END'`: warning for both `from_multi_app` and `to_multi_app`
## Impact
Two major impacts:
1. Although it does not affect the answer, performing transfers on every exec flag could be costly for some transfers.
2. The unavoidable warning message is erroneous (IMO) and a test with this type of transfer will always fail because of the warning.
|
defect
|
sibling transfer issues warning if multiapps are on different execute ons bug description when performing a sub app to sub app transfer i e sibling transfer where the multiapps have different execute on s it is impossible to avoid a warning stating that the transfer does not have the same execute on flags as either multiapp it is my opinion that the transfer should execute during the to multi app execution since sibling transfers are before multiapp execution and there should only be a warning if it doesn t match the to multi app steps to reproduce here is the main and sub app inputs recreating the issue main i type fullsolvemultiapp input files sibling sub i execute on timestep begin cli args variables u initial condition type fullsolvemultiapp input files sibling sub i execute on timestep end type multiappcopytransfer from multi app to multi app source variable u variable u type generatedmeshgenerator dim nx solve false kernel coverage check false skip nl system check true verbose multiapps true type steady sibling sub i type elementaveragevalue variable u type generatedmeshgenerator dim nx solve false kernel coverage check false skip nl system check true verbose multiapps true type steady without specifying the transfer s execute on it seems to want to execute on every possible exec flag initial timestep begin timestep end final and it issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated to multi app execute on flags with transfers sibling transfer execute on timestep end it again executes on every flag and issues the warning warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated from multi app execute on flags finally with transfers sibling transfer execute on timestep begin timestep end it executes on every flag and issues the warning 
warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated from multi app execute on flags warning the following warning occurred in the object sibling transfer of type multiappcopytransfer multiapptransfer execute on flags do not match associated to multi app execute on flags to summarize transfer wants to execute on every flag no matter what no execute on warning for to multi app execute on timestep end warning for from multi app execute on timestep begin timestep end warning for both from multi app and to multi app impact two major impacts although it does not affect the answer performing transfers on every exec flag could be costly for some transfers the unavoidable warning message is erroneous imo and a test with this type of transfer will always fail because of the warning
| 1
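The MOOSE warning in the record above reduces to a comparison between flag sets. A hedged Python sketch of the conflict (the function is illustrative, and MOOSE's real check evidently differs in detail, since the report's default case warns only about to_multi_app): when the two multiapps run on different execute_on flags, no single choice of transfer flags can match both.

```python
def transfer_warnings(transfer_flags, from_app_flags, to_app_flags):
    """Collect mismatch warnings for a sibling transfer, treating a
    warning as any inequality between flag sets. (A simplification of
    whatever MOOSE actually checks, but the conflict is the same.)"""
    warnings = []
    if set(transfer_flags) != set(from_app_flags):
        warnings.append("from_multi_app")
    if set(transfer_flags) != set(to_app_flags):
        warnings.append("to_multi_app")
    return warnings

# The two multiapps from the report run on different flags, so every
# candidate set of transfer flags trips at least one of the checks.
from_flags = {"TIMESTEP_BEGIN"}
to_flags = {"TIMESTEP_END"}
for candidate in ({"TIMESTEP_BEGIN"}, {"TIMESTEP_END"},
                  {"TIMESTEP_BEGIN", "TIMESTEP_END"}):
    print(sorted(candidate), transfer_warnings(candidate, from_flags, to_flags))
```

This is why the reporter proposes matching only the to_multi_app: that is the one rule a user can actually satisfy when the multiapps disagree.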
|
174,920
| 13,526,190,517
|
IssuesEvent
|
2020-09-15 13:57:32
|
CSOIreland/PxStat
|
https://api.github.com/repos/CSOIreland/PxStat
|
closed
|
[BUG] Last updated tables inconsistent
|
bug fixed released tested
|
In the Irish the table seems to be listed twice

In the English only once in the list

|
1.0
|
[BUG] Last updated tables inconsistent - In the Irish the table seems to be listed twice

In the English only once in the list

|
non_defect
|
last updated tables inconsistent in the irish the table seems to be listed twice in the english only once in the list
| 0
|
79,001
| 27,865,392,722
|
IssuesEvent
|
2023-03-21 09:54:54
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
closed
|
Unable to end a room video call by Slide to end a call in RTL
|
T-Defect Z-UI UX A-Jitsi Z-RTL S-Minor O-Occasional A-VideoCall
|
### Steps to reproduce
1. Set the device environment to the RTL
2. Launch the Element App
3. Enter the room
4. Start a Video call
5. Slide to end the call
### Outcome
#### What did you expect?
Slide to end the call ( removeJitsiWidgetView) should work properly as in LTR
#### What happened instead?
- The view disappears at first
- I can't end the room video call even when I scroll to the end.
#### LTR
https://user-images.githubusercontent.com/39683194/222949533-a126f5f6-f1cb-4816-814b-b26003f1b805.mp4
#### RTL
https://user-images.githubusercontent.com/39683194/222949552-da8bcdb4-8ca5-4ca2-871d-9d1d9d8b3731.mp4
### Your phone model
Samsung Galaxy S20
### Operating system version
Android 13
### Application version and app store
Element version 1.5.26 Play store
### Homeserver
matrix.org
### Will you send logs?
No
### Are you willing to provide a PR?
Yes
|
1.0
|
Unable to end a room video call by Slide to end a call in RTL - ### Steps to reproduce
1. Set the device environment to the RTL
2. Launch the Element App
3. Enter the room
4. Start a Video call
5. Slide to end the call
### Outcome
#### What did you expect?
Slide to end the call ( removeJitsiWidgetView) should work properly as in LTR
#### What happened instead?
- The view disappears at first
- I can't end the room video call even when I scroll to the end.
#### LTR
https://user-images.githubusercontent.com/39683194/222949533-a126f5f6-f1cb-4816-814b-b26003f1b805.mp4
#### RTL
https://user-images.githubusercontent.com/39683194/222949552-da8bcdb4-8ca5-4ca2-871d-9d1d9d8b3731.mp4
### Your phone model
Samsung Galaxy S20
### Operating system version
Android 13
### Application version and app store
Element version 1.5.26 Play store
### Homeserver
matrix.org
### Will you send logs?
No
### Are you willing to provide a PR?
Yes
|
defect
|
unable to end a room video call by slide to end a call in rtl steps to reproduce set the device environment to the rtl launch the element app enter the room start a video call slide to end the call outcome what did you expect slide to end the call removejitsiwidgetview should work properly as in ltr what happened instead the view disappears at first i can t end the room video call even when i scroll to the end ltr rtl your phone model samsung galaxy operating system version android application version and app store element version play store homeserver matrix org will you send logs no are you willing to provide a pr yes
| 1
|
135,315
| 18,678,893,992
|
IssuesEvent
|
2021-11-01 01:02:41
|
benchmarkdebricked/kubernetes
|
https://api.github.com/repos/benchmarkdebricked/kubernetes
|
opened
|
CVE-2020-14040 (High) detected in kubernetesv1.16.0-alpha.0
|
security vulnerability
|
## CVE-2020-14040 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kubernetesv1.16.0-alpha.0</b></p></summary>
<p>
<p>Production-Grade Container Scheduling and Management</p>
<p>Library home page: <a href=https://github.com/kubernetes/kubernetes.git>https://github.com/kubernetes/kubernetes.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kubernetes/vendor/golang.org/x/text/encoding/unicode/unicode.go</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kubernetes/vendor/golang.org/x/text/encoding/unicode/unicode.go</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kubernetes/vendor/golang.org/x/text/encoding/unicode/unicode.go</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.
<p>Publish Date: 2020-06-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14040>CVE-2020-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0015">https://osv.dev/vulnerability/GO-2020-0015</a></p>
<p>Release Date: 2020-06-17</p>
<p>Fix Resolution: v0.3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-14040 (High) detected in kubernetesv1.16.0-alpha.0 - ## CVE-2020-14040 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kubernetesv1.16.0-alpha.0</b></p></summary>
<p>
<p>Production-Grade Container Scheduling and Management</p>
<p>Library home page: <a href=https://github.com/kubernetes/kubernetes.git>https://github.com/kubernetes/kubernetes.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kubernetes/vendor/golang.org/x/text/encoding/unicode/unicode.go</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kubernetes/vendor/golang.org/x/text/encoding/unicode/unicode.go</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kubernetes/vendor/golang.org/x/text/encoding/unicode/unicode.go</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The x/text package before 0.3.3 for Go has a vulnerability in encoding/unicode that could lead to the UTF-16 decoder entering an infinite loop, causing the program to crash or run out of memory. An attacker could provide a single byte to a UTF16 decoder instantiated with UseBOM or ExpectBOM to trigger an infinite loop if the String function on the Decoder is called, or the Decoder is passed to golang.org/x/text/transform.String.
<p>Publish Date: 2020-06-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14040>CVE-2020-14040</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0015">https://osv.dev/vulnerability/GO-2020-0015</a></p>
<p>Release Date: 2020-06-17</p>
<p>Fix Resolution: v0.3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in alpha cve high severity vulnerability vulnerable library alpha production grade container scheduling and management library home page a href vulnerable source files kubernetes vendor golang org x text encoding unicode unicode go kubernetes vendor golang org x text encoding unicode unicode go kubernetes vendor golang org x text encoding unicode unicode go vulnerability details the x text package before for go has a vulnerability in encoding unicode that could lead to the utf decoder entering an infinite loop causing the program to crash or run out of memory an attacker could provide a single byte to a decoder instantiated with usebom or expectbom to trigger an infinite loop if the string function on the decoder is called or the decoder is passed to golang org x text transform string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
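The x/text vulnerability in the record above belongs to a general class of decoder bugs: a transform step that consumes zero bytes while its driver loops until input is exhausted never makes progress. A toy Python sketch of the failure mode (the decoder below is a stand-in, not the x/text implementation; byte-order handling is reduced to little-endian for brevity):

```python
def bom_decoder(buf):
    """Toy UTF-16 decoder step: it wants a 2-byte BOM before producing
    output. Handed a single byte it emits nothing and consumes nothing,
    the behaviour that let a 1-byte input hang x/text before 0.3.3."""
    if len(buf) < 2:
        return "", 0          # no BOM yet: emit nothing, consume nothing
    return buf[2:].decode("utf-16-le"), len(buf)

def drive(data, max_steps=10):
    """Driver loop with a safety cap; a real transform.String-style
    driver has no such cap, so zero consumption means an infinite loop."""
    out, steps = "", 0
    while data and steps < max_steps:
        chunk, consumed = bom_decoder(data)
        out += chunk
        data = data[consumed:]   # consumed == 0 -> no progress made
        steps += 1
    return out, steps

print(drive(b"\xff"))          # single byte: spins until the step cap
print(drive(b"\xff\xfeA\x00")) # full BOM plus one code unit: decodes fine
```

The fix in v0.3.3 amounts to ensuring the step either consumes input or signals that it needs more, so the driver can stop instead of spinning.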
|
38,044
| 8,640,078,135
|
IssuesEvent
|
2018-11-24 00:31:11
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Wrong assembly name in arrays
|
defect
|
A description of the issue.
### Steps To Reproduce
https://deck.net/57bdad2d881bd818c8ffd1500d93a682
https://dotnetfiddle.net/l38B6t //Right assembly name
```csharp
public class Program
{
public static void Main()
{
        Console.WriteLine(typeof(Object).Assembly.FullName); // Should be mscorelib
        Console.WriteLine(typeof(MyClass).Assembly.FullName); // Should be Demo
        Console.WriteLine(typeof(MyClass[]).Assembly.FullName); // Should be Demo
}
public class MyClass
{
}
}
```
### Expected Result
```
typeof(MyClass[]).Assembly.FullName == "Demo"
```
### Actual Result
```
typeof(MyClass[]).Assembly.FullName == "mscorelib"
```
|
1.0
|
Wrong assembly name in arrays - A description of the issue.
### Steps To Reproduce
https://deck.net/57bdad2d881bd818c8ffd1500d93a682
https://dotnetfiddle.net/l38B6t //Right assembly name
```csharp
public class Program
{
public static void Main()
{
        Console.WriteLine(typeof(Object).Assembly.FullName); // Should be mscorelib
        Console.WriteLine(typeof(MyClass).Assembly.FullName); // Should be Demo
        Console.WriteLine(typeof(MyClass[]).Assembly.FullName); // Should be Demo
}
public class MyClass
{
}
}
```
### Expected Result
```
typeof(MyClass[]).Assembly.FullName == "Demo"
```
### Actual Result
```
typeof(MyClass[]).Assembly.FullName == "mscorelib"
```
|
defect
|
wrong assembly name in arrays a description of the issue steps to reproduce right assembly name csharp public class program public static void main console writeline typeof object assembly fullname should be mscorlib console writeline typeof myclass assembly fullname should be demo console writeline typeof myclass assembly fullname should be demo public class myclass expected result typeof myclass assembly fullname demo actual result typeof myclass assembly fullname mscorlib
| 1
|
73,396
| 24,607,887,655
|
IssuesEvent
|
2022-10-14 18:05:26
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
Events related to the root event of a thread cannot have read receipts sent on them
|
S-Minor T-Defect A-Threads O-Occasional
|
Originally reported at vector-im/element-web#23451, info from @gsouquet.
If you have a DAG that looks something like this:
```mermaid
flowchart RL
E-->D
D-->C
C-->B
B-->A
B-.->|m.thread|A
C-.->|m.thread|A
D-.->|m.edit|C
E-.->|m.reaction|A
```
The root event (and any other reactions, etc. to it) are shown both in the "main" timeline and as part of the thread. It is desirable for clients to be able to send a receipt in either of those. E.g. looking at a "thread" which shows `A` (as the root) and the reaction to it (`E`) and are trying to send a receipt for `E`.
|
1.0
|
Events related to the root event of a thread cannot have read receipts sent on them - Originally reported at vector-im/element-web#23451, info from @gsouquet.
If you have a DAG that looks something like this:
```mermaid
flowchart RL
E-->D
D-->C
C-->B
B-->A
B-.->|m.thread|A
C-.->|m.thread|A
D-.->|m.edit|C
E-.->|m.reaction|A
```
The root event (and any other reactions, etc. to it) are shown both in the "main" timeline and as part of the thread. It is desirable for clients to be able to send a receipt in either of those. E.g. looking at a "thread" which shows `A` (as the root) and the reaction to it (`E`) and are trying to send a receipt for `E`.
|
defect
|
events related to the root event of a thread cannot have read receipts sent on them originally reported at vector im element web info from gsouquet if you have a dag that looks something like this mermaid flowchart rl e d d c c b b a b m thread a c m thread a d m edit c e m reaction a the root event and any other reactions etc to it are shown both in the main timeline and as part of the thread it is desirable for clients to be able to send a receipt in either of those e g looking at a thread which shows a as the root and the reaction to it e and are trying to send a receipt for e
| 1
|
12,139
| 7,788,278,957
|
IssuesEvent
|
2018-06-07 03:36:13
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
Angular challenge: can you do it?
|
comp: performance
|
## I'm submitting a...
[x] Performance issue
## Current behavior
angular 6 with Ivy is struggling with this example's performance: "Sierpinski Triangle". React Fiber did this very well but angular won't. Can you explain why? And what strategies can we use to improve the performance?
## Expected behavior
make it smooth like React Fiber
## Minimal reproduction of the problem with instructions
[github repository](https://github.com/hiepxanh/Ng6-Sierpinski-triangle/tree/gh-pages)
[ng6 demo website: nope](https://hiepxanh.github.io/Ng6-Sierpinski-triangle/)
[ng4 demo website: nope!](https://gund.github.io/ng-s-triangle-demo/)
[react demo website: smooth](https://claudiopro.github.io/react-fiber-vs-stack-demo/)
[stencil by ionic demo website: smooth ](https://stencil-fiber-demo.firebaseapp.com/)
## What is the motivation / use case for changing the behavior?
Well, people are moving to React because it has a big community and its performance is so sweet. Can we beat them?
## Environment
Angular version: 6.0.3 with Ivy or not
Browser:
- [x ] Chrome (desktop) version 66 ( not update the number)
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [x] Safari (desktop) version 11.1 (very smooth)
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: 8.11.1
- Platform: Mac
|
True
|
Angular challenge: can you do it? - ## I'm submitting a...
[x] Performance issue
## Current behavior
angular 6 with Ivy is struggling with this example's performance: "Sierpinski Triangle". React Fiber did this very well but angular won't. Can you explain why? And what strategies can we use to improve the performance?
## Expected behavior
make it smooth like React Fiber
## Minimal reproduction of the problem with instructions
[github repository](https://github.com/hiepxanh/Ng6-Sierpinski-triangle/tree/gh-pages)
[ng6 demo website: nope](https://hiepxanh.github.io/Ng6-Sierpinski-triangle/)
[ng4 demo website: nope!](https://gund.github.io/ng-s-triangle-demo/)
[react demo website: smooth](https://claudiopro.github.io/react-fiber-vs-stack-demo/)
[stencil by ionic demo website: smooth ](https://stencil-fiber-demo.firebaseapp.com/)
## What is the motivation / use case for changing the behavior?
Well, people are moving to React because it has a big community and its performance is so sweet. Can we beat them?
## Environment
Angular version: 6.0.3 with Ivy or not
Browser:
- [x ] Chrome (desktop) version 66 ( not update the number)
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [x] Safari (desktop) version 11.1 (very smooth)
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: 8.11.1
- Platform: Mac
|
non_defect
|
angular challenge can you do it i m submitting a performance issue current behavior angular with ivy is struggling with this example s performance sierpinski triangle react fiber did this very well but angular won t can you explain why and what strategies can we use to improve the performance expected behavior make it smooth like react fiber minimal reproduction of the problem with instructions what is the motivation use case for changing the behavior well people are moving to react because it has a big community and its performance is so sweet can we beat them environment angular version with ivy or not browser chrome desktop version not update the number chrome android version xx chrome ios version xx firefox version xx safari desktop version very smooth safari ios version xx ie version xx edge version xx for tooling issues node version platform mac
| 0
|
23,947
| 3,874,790,448
|
IssuesEvent
|
2016-04-11 21:45:22
|
OpenESignForms/vaadin-ckeditor
|
https://api.github.com/repos/OpenESignForms/vaadin-ckeditor
|
closed
|
How to implement "cancel"? - value changed event called when window closed
|
auto-migrated Priority-Low Type-Defect
|
```
Hi,
I'm using the CKEditor Vaadin addon inside a Window.
Now I discovered that when changing the content and closing the window, the
ValueChangeEvent is raised in the same way when "save" is clicked.
How can I distinguish between cancel and save?
Best Regards
udo
```
Original issue reported on code.google.com by `udo.offe...@zfabrik.de` on 21 Nov 2011 at 5:01
|
1.0
|
How to implement "cancel"? - value changed event called when window closed - ```
Hi,
I'm using the CKEditor Vaadin addon inside a Window.
Now I discovered that when changing the content and closing the window, the
ValueChangeEvent is raised in the same way when "save" is clicked.
How can I distinguish between cancel and save?
Best Regards
udo
```
Original issue reported on code.google.com by `udo.offe...@zfabrik.de` on 21 Nov 2011 at 5:01
|
defect
|
how to implement cancel value changed event called when window closed hi i m using the ckeditor vaadin addon inside a window now i discovered that when changing the content and closing the window the valuechangeevent is raised in the same way when save is clicked how can i distinguish between cancel and save best regards udo original issue reported on code google com by udo offe zfabrik de on nov at
| 1
|
11,967
| 2,672,490,028
|
IssuesEvent
|
2015-03-24 14:30:15
|
cfpb/hmda-pilot
|
https://api.github.com/repos/cfpb/hmda-pilot
|
closed
|
MSA/IRS Report Summary - Continue button is disabled even if there are no errors
|
defect est:2 ui
|
**Steps to Reproduce:**
1. On Step 1, select the 84-1542642.dat file, then click Start validation button.
2. Click the Continue button on Steps 2 and 3 (there should not be any errors).
**Error:**
On the MSA/IRS Summary page the Continue button is disabled even though there are no errors.
**Expected Result:**
On the MSA/IRS Summary page the Continue button should be enabled when there are no MSA and IRS related errors.
**Tested with:**
- Chrome 41.0.2272.89 m
- Windows 7
- 84-1542642.dat
|
1.0
|
MSA/IRS Report Summary - Continue button is disabled even if there are no errors - **Steps to Reproduce:**
1. On Step 1, select the 84-1542642.dat file, then click Start validation button.
2. Click the Continue button on Steps 2 and 3 (there should not be any errors).
**Error:**
On the MSA/IRS Summary page the Continue button is disabled even though there are no errors.
**Expected Result:**
On the MSA/IRS Summary page the Continue button should be enabled when there are no MSA and IRS related errors.
**Tested with:**
- Chrome 41.0.2272.89 m
- Windows 7
- 84-1542642.dat
|
defect
|
msa irs report summary continue button is disabled even if there are no errors steps to reproduce on step select the dat file then click start validation button click the continue button on steps and there should not be any errors error on the msa irs summary page the continue button is disabled even though there are no errors expected result on the msa irs summary page the continue button should be enabled when there are no msa and irs related errors tested with chrome m windows dat
| 1
|
9,002
| 2,615,119,095
|
IssuesEvent
|
2015-03-01 05:44:57
|
chrsmith/google-api-java-client
|
https://api.github.com/repos/chrsmith/google-api-java-client
|
closed
|
AtomParser in 1.3.1 does not work as in 1.2.x
|
auto-migrated Priority-Medium Type-Defect
|
```
Version: 1.3.1-alph
Java environment: Android 2.3
I have some code which worked with version 1.2.0:
List<GDocEntry> entries;
HttpRequest request = transport.buildGetRequest();
request.url = getDocUrl();
HttpResponse response = request.execute();
GDocFeed feed = response.parseAs(GDocFeed.class);
entries = feed.entries;
With version 1.3.1, entries is always null. I'm obviously missing something that
has to be changed for 1.3.1.
Here is how I initialize the AtomParser:
private static final XmlNamespaceDictionary DICTIONARY = new
XmlNamespaceDictionary()
.set("", "http://www.w3.org/2005/Atom")
.set("app", "http://www.w3.org/2007/app")
.set("atom", "http://www.w3.org/2005/Atom")
.set("batch", "http://schemas.google.com/gdata/batch")
.set("docs", "http://schemas.google.com/docs/2007")
.set("gAcl", "http://schemas.google.com/acl/2007")
.set("gd", "http://schemas.google.com/g/2005")
.set("openSearch", "http://a9.com/-/spec/opensearch/1.1/")
.set("xml", "http://www.w3.org/XML/1998/namespace");
//...
transport = new ApacheHttpTransport();
GoogleHeaders headers = new GoogleHeaders();
headers.setApplicationName("AGoban");
headers.gdataVersion = "3";
transport.defaultHeaders = headers;
AtomParser parser = new AtomParser();
parser.namespaceDictionary = DICTIONARY;
Log.d(TAG, "AtomParser: " + parser.namespaceDictionary);
transport.addParser(parser);
```
Original issue reported on code.google.com by `Christia...@gmail.com` on 6 Mar 2011 at 6:28
|
1.0
|
AtomParser in 1.3.1 does not work as in 1.2.x - ```
Version: 1.3.1-alph
Java environment: Android 2.3
I have some code which worked with version 1.2.0:
List<GDocEntry> entries;
HttpRequest request = transport.buildGetRequest();
request.url = getDocUrl();
HttpResponse response = request.execute();
GDocFeed feed = response.parseAs(GDocFeed.class);
entries = feed.entries;
With version 1.3.1, entries is always null. I'm obviously missing something that
has to be changed for 1.3.1.
Here is how I initialize the AtomParser:
private static final XmlNamespaceDictionary DICTIONARY = new
XmlNamespaceDictionary()
.set("", "http://www.w3.org/2005/Atom")
.set("app", "http://www.w3.org/2007/app")
.set("atom", "http://www.w3.org/2005/Atom")
.set("batch", "http://schemas.google.com/gdata/batch")
.set("docs", "http://schemas.google.com/docs/2007")
.set("gAcl", "http://schemas.google.com/acl/2007")
.set("gd", "http://schemas.google.com/g/2005")
.set("openSearch", "http://a9.com/-/spec/opensearch/1.1/")
.set("xml", "http://www.w3.org/XML/1998/namespace");
//...
transport = new ApacheHttpTransport();
GoogleHeaders headers = new GoogleHeaders();
headers.setApplicationName("AGoban");
headers.gdataVersion = "3";
transport.defaultHeaders = headers;
AtomParser parser = new AtomParser();
parser.namespaceDictionary = DICTIONARY;
Log.d(TAG, "AtomParser: " + parser.namespaceDictionary);
transport.addParser(parser);
```
Original issue reported on code.google.com by `Christia...@gmail.com` on 6 Mar 2011 at 6:28
|
defect
|
atomparser in does not work as in x version alph java environment android i have some code which worked with version list entries httprequest request transport buildgetrequest request url getdocurl httpresponse response request execute gdocfeed feed response parseas gdocfeed class entries feed entries with version entries is always null i m obviously missing something that has to be changed for here is how i initialize the atomparser private static final xmlnamespacedictionary dictionary new xmlnamespacedictionary set set app set atom set batch set docs set gacl set gd set opensearch set xml transport new apachehttptransport googleheaders headers new googleheaders headers setapplicationname agoban headers gdataversion transport defaultheaders headers atomparser parser new atomparser parser namespacedictionary dictionary log d tag atomparser parser namespacedictionary transport addparser parser original issue reported on code google com by christia gmail com on mar at
| 1
|
29,298
| 5,638,436,675
|
IssuesEvent
|
2017-04-06 11:58:00
|
phingofficial/phing
|
https://api.github.com/repos/phingofficial/phing
|
closed
|
includePath using project.basedir is failing under certain conditions (Trac #586)
|
defect migrated from Trac system tasks
|
Summary:
If running `phing -f path/to/build.xml myTarget` and using an `<includePath>` that uses ${project.basedir}, the wrong path is added.
Details
As I understand it, project.basedir is set to the basedir of the build.xml file. But under some circumstances it seems to be set to the current working directory for part of the build, and to the basedir of the build.xml for the rest of the build.
In particular, when using <includepath classpath="${project.basedir}/foo"/> and then running Phing this way:
$ phing -f ../../build.xml foo
This seems to set `${project.basedir}` to `.` during the includepath execution, but then substitute the basedir of `${buildxml}` for the rest of the processing.
This seems to me to be a bug... or at least a nuance that should be documented.
Platform:
```
* phing 2.4.1
* PHP 5.3.3
* Mac OS 10.6
```
Migrated from https://www.phing.info/trac/ticket/586
``` json
{
"status": "new",
"changetime": "2016-10-07T08:28:55",
"description": "Summary:\n\nIf running `phing -f path/to/build.xml myTarget` and using an `<includePath>` that uses ${project.basedir}, the wrong path is added.\n\nDetails\n\nAs I understand it, project.basedir is set to the basedir of the build.xml file. But under some circumstances it seems to be set to the current working directory for part of the build, and to the basedir of the build.xml for the rest of the build.\n\nIn particular, when using <includepath classpath=\"${project.basedir}/foo\"/> and then running Phing this way:\n\n$ phing -f ../../build.xml foo\n\nThis seems to set `${project.basedir}` to `.` during the includpath execution, but then treat substitute the basedir of `${buildxml}` for the rest of the processing.\n\nThis seems to me to be a bug... or at least a nuance that should be documented.\n\nPlatform:\n\n * phing 2.4.1\n * PHP 5.3.3 \n * Mac OS 10.6",
"reporter": "matt@aleph-null.tv",
"cc": "",
"resolution": "",
"_ts": "1475828935569836",
"component": "phing-tasks-system",
"summary": "includePath using project.basedir is failing under certain conditions",
"priority": "major",
"keywords": "",
"version": "2.4.1",
"time": "2010-11-12T22:41:08",
"milestone": "4.0",
"owner": "mrook",
"type": "defect"
}
```
|
1.0
|
includePath using project.basedir is failing under certain conditions (Trac #586) - Summary:
If running `phing -f path/to/build.xml myTarget` and using an `<includePath>` that uses ${project.basedir}, the wrong path is added.
Details
As I understand it, project.basedir is set to the basedir of the build.xml file. But under some circumstances it seems to be set to the current working directory for part of the build, and to the basedir of the build.xml for the rest of the build.
In particular, when using <includepath classpath="${project.basedir}/foo"/> and then running Phing this way:
$ phing -f ../../build.xml foo
This seems to set `${project.basedir}` to `.` during the includepath execution, but then substitute the basedir of `${buildxml}` for the rest of the processing.
This seems to me to be a bug... or at least a nuance that should be documented.
Platform:
```
* phing 2.4.1
* PHP 5.3.3
* Mac OS 10.6
```
Migrated from https://www.phing.info/trac/ticket/586
``` json
{
"status": "new",
"changetime": "2016-10-07T08:28:55",
"description": "Summary:\n\nIf running `phing -f path/to/build.xml myTarget` and using an `<includePath>` that uses ${project.basedir}, the wrong path is added.\n\nDetails\n\nAs I understand it, project.basedir is set to the basedir of the build.xml file. But under some circumstances it seems to be set to the current working directory for part of the build, and to the basedir of the build.xml for the rest of the build.\n\nIn particular, when using <includepath classpath=\"${project.basedir}/foo\"/> and then running Phing this way:\n\n$ phing -f ../../build.xml foo\n\nThis seems to set `${project.basedir}` to `.` during the includpath execution, but then treat substitute the basedir of `${buildxml}` for the rest of the processing.\n\nThis seems to me to be a bug... or at least a nuance that should be documented.\n\nPlatform:\n\n * phing 2.4.1\n * PHP 5.3.3 \n * Mac OS 10.6",
"reporter": "matt@aleph-null.tv",
"cc": "",
"resolution": "",
"_ts": "1475828935569836",
"component": "phing-tasks-system",
"summary": "includePath using project.basedir is failing under certain conditions",
"priority": "major",
"keywords": "",
"version": "2.4.1",
"time": "2010-11-12T22:41:08",
"milestone": "4.0",
"owner": "mrook",
"type": "defect"
}
```
|
defect
|
includepath using project basedir is failing under certain conditions trac summary if running phing f path to build xml mytarget and using an that uses project basedir the wrong path is added details as i understand it project basedir is set to the basedir of the build xml file but under some circumstances it seems to be set to the current working directory for part of the build and to the basedir of the build xml for the rest of the build in particular when using and then running phing this way phing f build xml foo this seems to set project basedir to during the includpath execution but then treat substitute the basedir of buildxml for the rest of the processing this seems to me to be a bug or at least a nuance that should be documented platform phing php mac os migrated from json status new changetime description summary n nif running phing f path to build xml mytarget and using an that uses project basedir the wrong path is added n ndetails n nas i understand it project basedir is set to the basedir of the build xml file but under some circumstances it seems to be set to the current working directory for part of the build and to the basedir of the build xml for the rest of the build n nin particular when using and then running phing this way n n phing f build xml foo n nthis seems to set project basedir to during the includpath execution but then treat substitute the basedir of buildxml for the rest of the processing n nthis seems to me to be a bug or at least a nuance that should be documented n nplatform n n phing n php n mac os reporter matt aleph null tv cc resolution ts component phing tasks system summary includepath using project basedir is failing under certain conditions priority major keywords version time milestone owner mrook type defect
| 1
|
32,152
| 13,769,215,949
|
IssuesEvent
|
2020-10-07 18:16:12
|
BTAA-Geospatial-Data-Project/geoportal
|
https://api.github.com/repos/BTAA-Geospatial-Data-Project/geoportal
|
closed
|
ArcGIS REST Feature Services - UI adjustments
|
web services
|
Some Feature Services are not queryable. Here is an example: https://geo.btaa.org/catalog/714589fbf99d4045bc57bf298be2578e_0 (this one produces an Uncaught TypeError)
Two questions brought up by TF in recent meeting:
- Why is the cursor different when clicking on feature services than for the MapServer type?
- Can the text at bottom that says _Click on map to inspect values_ be changed if the service is not clickable?
See #278
|
1.0
|
ArcGIS REST Feature Services - UI adjustments - Some Feature Services are not queryable. Here is an example: https://geo.btaa.org/catalog/714589fbf99d4045bc57bf298be2578e_0 (this one produces an Uncaught TypeError)
Two questions brought up by TF in recent meeting:
- Why is the cursor different when clicking on feature services than for the MapServer type?
- Can the text at bottom that says _Click on map to inspect values_ be changed if the service is not clickable?
See #278
|
non_defect
|
arcgis rest feature services ui adjustments some feature services are not queryable here is an example this one produces an uncaught typeerror two questions brought up by tf in recent meeting why is the cursor different when clicking on feature services than for the mapserver type can the text at bottom that says click on map to inspect values be changed if the service is not clickable see
| 0
|
53,212
| 13,789,460,377
|
IssuesEvent
|
2020-10-09 08:54:53
|
anyulled/mws-restaurant-stage-1
|
https://api.github.com/repos/anyulled/mws-restaurant-stage-1
|
opened
|
CVE-2018-20821 (Medium) detected in node-sass-3.13.1.tgz, libsass3.3.6
|
security vulnerability
|
## CVE-2018-20821 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-3.13.1.tgz</b>, <b>libsass3.3.6</b></p></summary>
<p>
<details><summary><b>node-sass-3.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz</a></p>
<p>Path to dependency file: mws-restaurant-stage-1/package.json</p>
<p>Path to vulnerable library: mws-restaurant-stage-1/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- :x: **node-sass-3.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/anyulled/mws-restaurant-stage-1/commit/302bbf347526c27d54b90c73c0a13f471ed35ab0">302bbf347526c27d54b90c73c0a13f471ed35ab0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The parsing component in LibSass through 3.5.5 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Parser::parse_css_variable_value in parser.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20821>CVE-2018-20821</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821</a></p>
<p>Release Date: 2019-04-23</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-20821 (Medium) detected in node-sass-3.13.1.tgz, libsass3.3.6 - ## CVE-2018-20821 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-3.13.1.tgz</b>, <b>libsass3.3.6</b></p></summary>
<p>
<details><summary><b>node-sass-3.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz</a></p>
<p>Path to dependency file: mws-restaurant-stage-1/package.json</p>
<p>Path to vulnerable library: mws-restaurant-stage-1/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- :x: **node-sass-3.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/anyulled/mws-restaurant-stage-1/commit/302bbf347526c27d54b90c73c0a13f471ed35ab0">302bbf347526c27d54b90c73c0a13f471ed35ab0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The parsing component in LibSass through 3.5.5 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Parser::parse_css_variable_value in parser.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20821>CVE-2018-20821</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821</a></p>
<p>Release Date: 2019-04-23</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in node sass tgz cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file mws restaurant stage package json path to vulnerable library mws restaurant stage node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details the parsing component in libsass through allows attackers to cause a denial of service uncontrolled recursion in sass parser parse css variable value in parser cpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
4,688
| 2,610,140,841
|
IssuesEvent
|
2015-02-26 18:44:23
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Mines (used as a weapon) are blowing differently than expected.
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Choose game scheme with instant activating mines.
2. Run fight.
3. Use mine (classic or sticky) as a weapon.
What is the expected output? What do you see instead?
The classic mine will blow instantly, so that you won't be able to run away. As I
remember, in 0.9.13 the mine blow timer was activated after a few seconds. The sticky
mine blows after the end of turn, so if you play with Unlimited Attacks, you
have to wait for their activation until the turn ends.
What version of the product are you using? On what operating system?
0.9.14.1 on Windows XP SP2
Please provide any additional information below.
```
-----
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 27 Nov 2010 at 6:13
|
1.0
|
Mines (used as a weapon) are blowing differently than expected. - ```
What steps will reproduce the problem?
1. Choose game scheme with instant activating mines.
2. Run fight.
3. Use mine (classic or sticky) as a weapon.
What is the expected output? What do you see instead?
The classic mine will blow instantly, so that you won't be able to run away. As I
remember, in 0.9.13 the mine blow timer was activated after a few seconds. The sticky
mine blows after the end of turn, so if you play with Unlimited Attacks, you
have to wait for their activation until the turn ends.
What version of the product are you using? On what operating system?
0.9.14.1 on Windows XP SP2
Please provide any additional information below.
```
-----
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 27 Nov 2010 at 6:13
|
defect
|
mines used as a weapon are blowing differently than expected what steps will reproduce the problem choose game scheme with instant activating mines run fight use mine classic or sticky as a weapon what is the expected output what do you see instead the classic mine will blow instantly so that you won t be able to run away as i remember in the mine blow timer was activated after a few seconds the sticky mine blows after the end of turn so if you play with unlimited attacks you have to wait for their activation until the turn ends what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by adibiaz gmail com on nov at
| 1
|
179,809
| 21,581,313,751
|
IssuesEvent
|
2022-05-02 19:03:16
|
timf-app-demo/NodeGoat
|
https://api.github.com/repos/timf-app-demo/NodeGoat
|
opened
|
underscore-1.9.1.tgz: 1 vulnerabilities (highest severity is: 7.2)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.9.1.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz">https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/underscore/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/NodeGoat/commit/b5024d1a9f780b80ce4f1630ee7121dcac1f0369">b5024d1a9f780b80ce4f1630ee7121dcac1f0369</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-23358](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.2 | underscore-1.9.1.tgz | Direct | 1.12.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23358</summary>
### Vulnerable Library - <b>underscore-1.9.1.tgz</b></p>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz">https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- :x: **underscore-1.9.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/NodeGoat/commit/b5024d1a9f780b80ce4f1630ee7121dcac1f0369">b5024d1a9f780b80ce4f1630ee7121dcac1f0369</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: 1.12.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.9.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"underscore:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.12.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}]</REMEDIATE> -->
|
True
|
underscore-1.9.1.tgz: 1 vulnerabilities (highest severity is: 7.2) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.9.1.tgz</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz">https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/underscore/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/NodeGoat/commit/b5024d1a9f780b80ce4f1630ee7121dcac1f0369">b5024d1a9f780b80ce4f1630ee7121dcac1f0369</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-23358](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.2 | underscore-1.9.1.tgz | Direct | 1.12.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23358</summary>
### Vulnerable Library - <b>underscore-1.9.1.tgz</b></p>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz">https://registry.npmjs.org/underscore/-/underscore-1.9.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- :x: **underscore-1.9.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/timf-app-demo/NodeGoat/commit/b5024d1a9f780b80ce4f1630ee7121dcac1f0369">b5024d1a9f780b80ce4f1630ee7121dcac1f0369</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: 1.12.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.9.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"underscore:1.9.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.12.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}]</REMEDIATE> -->
|
non_defect
|
underscore tgz vulnerabilities highest severity is vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file package json path to vulnerable library node modules underscore package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high underscore tgz direct details cve vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file package json path to vulnerable library node modules underscore package json dependency hierarchy x underscore tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code injection via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue istransitivedependency false dependencytree underscore isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package underscore from and before from and before are vulnerable to arbitrary code injection via the template function particularly when a variable property is passed as an argument as it is not sanitized vulnerabilityurl
| 0
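The CVE-2021-23358 rows above hinge on one mechanism: underscore versions before 1.12.1 passed the `_.template` `variable` setting straight into the parameter list of `new Function`, so a crafted setting executes as a default-parameter expression. The sketch below mimics that compilation step in plain JavaScript; it does not use underscore itself, and `markPwned` is a hypothetical stand-in for attacker-controlled code:

```javascript
// Minimal mimic of the vulnerable compilation step in underscore < 1.12.1:
// the user-supplied `variable` setting is spliced, unsanitized, into the
// parameter list of `new Function`.
function compile(text, settings = {}) {
  // Real underscore builds `source` from the template text; a literal
  // echo of the text is enough to show the injection point.
  const source = "var __p = '';__p += " + JSON.stringify(text) + ";return __p;";
  return new Function(settings.variable || "obj", source); // <-- unsanitized
}

// Benign use: `variable` names the data argument, nothing more.
console.log(compile("hi")({}));        // -> "hi"

// Injection: a default-parameter expression smuggled in via `variable`
// runs as soon as the compiled template is called without arguments.
globalThis.pwned = false;
globalThis.markPwned = function () { globalThis.pwned = true; };
compile("", { variable: "obj = globalThis.markPwned()" })();
console.log(globalThis.pwned);         // -> true
```

Underscore 1.12.1 closed this by rejecting `variable` values that are not a bare identifier, which is why the suggested fix above is simply a version bump.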
|
46,483
| 13,055,917,836
|
IssuesEvent
|
2020-07-30 03:06:58
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[vemcal] release notes are missing (Trac #1264)
|
Incomplete Migration Migrated from Trac defect jeb + pnf
|
Migrated from https://code.icecube.wisc.edu/ticket/1264
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:10",
"description": "Project has no release notes.",
"reporter": "kkrings",
"cc": "",
"resolution": "wontfix",
"_ts": "1458335650323600",
"component": "jeb + pnf",
"summary": "[vemcal] release notes are missing",
"priority": "critical",
"keywords": "",
"time": "2015-08-20T22:05:14",
"milestone": "",
"owner": "jgonzalez",
"type": "defect"
}
```
|
1.0
|
[vemcal] release notes are missing (Trac #1264) - Migrated from https://code.icecube.wisc.edu/ticket/1264
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:10",
"description": "Project has no release notes.",
"reporter": "kkrings",
"cc": "",
"resolution": "wontfix",
"_ts": "1458335650323600",
"component": "jeb + pnf",
"summary": "[vemcal] release notes are missing",
"priority": "critical",
"keywords": "",
"time": "2015-08-20T22:05:14",
"milestone": "",
"owner": "jgonzalez",
"type": "defect"
}
```
|
defect
|
release notes are missing trac migrated from json status closed changetime description project has no release notes reporter kkrings cc resolution wontfix ts component jeb pnf summary release notes are missing priority critical keywords time milestone owner jgonzalez type defect
| 1
|