Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 5
112
| repo_url
stringlengths 34
141
| action
stringclasses 3
values | title
stringlengths 1
757
| labels
stringlengths 4
664
| body
stringlengths 3
261k
| index
stringclasses 10
values | text_combine
stringlengths 96
261k
| label
stringclasses 2
values | text
stringlengths 96
232k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
79,912
| 29,517,214,134
|
IssuesEvent
|
2023-06-04 16:25:43
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: scipy.stats.Pearson3 ppf function uses right tail probability instead of left
|
defect
|
### Describe your issue.
I had been using the Percent Point Function (ppf) in my optimization code to return a correct value. One of my distributions ended up being a pearson3, when I ended up comparing it to the CDF the values had a large discrepancy to the ppf function. I proceeded to check the documentation and found the following

This however, is not true when I run it in my own environment see below.
### Reproducing Code Example
```python
from scipy.stats import pearson3
skew = -2
vals = pearson3.ppf([0.001, 0.5, 0.999], skew)
check = pearson3.cdf(vals, skew)
print(f"check = {check}")
print(f"vals = {vals}")
print(np.allclose([0.001, 0.5, 0.999], check))
```
### Error message
```shell
check = [0.999 0.5 0.001]
vals = [ 0.9989995 0.30685282 -5.90775528]
False
```
### SciPy/NumPy/Python version and system information
```shell
1.7.3 1.21.5 sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
lapack_mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
lapack_opt_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
blas_mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
blas_opt_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
Supported SIMD extensions in this NumPy install:
baseline = SSE,SSE2,SSE3
found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
not found = AVX512F,AVX512CD,AVX512_SKX,AVX512_CLX,AVX512_CNL
```
|
1.0
|
BUG: scipy.stats.Pearson3 ppf function uses right tail probability instead of left - ### Describe your issue.
I had been using the Percent Point Function (ppf) in my optimization code to return a correct value. One of my distributions ended up being a pearson3, when I ended up comparing it to the CDF the values had a large discrepancy to the ppf function. I proceeded to check the documentation and found the following

This however, is not true when I run it in my own environment see below.
### Reproducing Code Example
```python
from scipy.stats import pearson3
skew = -2
vals = pearson3.ppf([0.001, 0.5, 0.999], skew)
check = pearson3.cdf(vals, skew)
print(f"check = {check}")
print(f"vals = {vals}")
print(np.allclose([0.001, 0.5, 0.999], check))
```
### Error message
```shell
check = [0.999 0.5 0.001]
vals = [ 0.9989995 0.30685282 -5.90775528]
False
```
### SciPy/NumPy/Python version and system information
```shell
1.7.3 1.21.5 sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
lapack_mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
lapack_opt_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
blas_mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
blas_opt_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Users/damie/anaconda3/envs/geo\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2019.0.117\\windows\\mkl\\lib', 'C:/Users/damie/anaconda3/envs/geo\\Library\\include']
Supported SIMD extensions in this NumPy install:
baseline = SSE,SSE2,SSE3
found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
not found = AVX512F,AVX512CD,AVX512_SKX,AVX512_CLX,AVX512_CNL
```
|
defect
|
bug scipy stats ppf function uses right tail probability instead of left describe your issue i had been using the percent point function ppf in my optimization code to return a correct value one of my distributions ended up being a when i ended up comparing it to the cdf the values had a large discrepancy to the ppf function i proceeded to check the documentation and found the following this however is not true when i run it in my own environment see below reproducing code example python from scipy stats import skew vals ppf skew check cdf vals skew print f check check print f vals vals print np allclose check error message shell check vals false scipy numpy python version and system information shell sys version info major minor micro releaselevel final serial lapack mkl info libraries library dirs define macros include dirs lapack opt info libraries library dirs define macros include dirs blas mkl info libraries library dirs define macros include dirs blas opt info libraries library dirs define macros include dirs supported simd extensions in this numpy install baseline sse found popcnt avx not found skx clx cnl
| 1
|
53,846
| 13,262,371,571
|
IssuesEvent
|
2020-08-20 21:41:20
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
matching trigger config in trigger-sim (Trac #2175)
|
Migrated from Trac analysis defect
|
First, I am Elim... I know I am terrible for signing in as icecube..
but I lost my password and wasn't able to reset it.. please don't hate me...
I am trying to run
$ python /data/user/elims/strike/clsim_jessie/applytoL2.py --outfile /data/user/elims/strike/clsim_jessie/outputs/events10_toL2_test.i3.gz --infile /data/user/elims/strike/clsim_jessie/outputs/events10_clsim_lea_over1.i3 --holeice 0
The GCD file is
/data/sim/sim-new/downloadssz6dx/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz
The error occurs at Line 204 of /data/user/elims/strike/clsim_jessie/applytoL2.py:
> tray.AddSegment(trigger_sim.TriggerSim, 'trig',
> gcd_file = gcd_file,
> run_id = 1,
> time_shift_args = time_shift_args)
The version of trigger sim used here is 163127. And the software built is the current combo/trunk.
The error message is:
> FATAL (SimpleMajorityTrigger): Failed to configure this module from the DetectorStatus.(SimpleMajorityTrigger.cxx:268 in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))
> ERROR (I3Module): SimpleMajorityTrigger_0001: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))
> Traceback (most recent call last):
> File "applytoL2.py", line 224, in <module>
> tray.Execute()
> File "/data/user/elims/software/gpu_combo/debug/lib/I3Tray.py", line 256, in Execute
> super(I3Tray, self).Execute()
> RuntimeError: Failed to configure this module from the DetectorStatus. (in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))
Here is more printout from before the fatal error:
> ERROR (trigger-sim): Here's the user specified input :
> TypeID = 0
> ConfigID = 1011
> (DetectorStatusUtils.h:176 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS
tat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta
tus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional
<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI
D>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo
st::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector
Status> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::
SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])
> ERROR (trigger-sim): Here are the keys in the I3TriggerStatus:
> key_matches.size() == 0
> (DetectorStatusUtils.h:186 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS
tat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta
tus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional
<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI
D>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo
st::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector
Status> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::
SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])
> ERROR (trigger-sim): No match found (DetectorStatusUtils.h:188 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detStat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optiona$<std::pair<TriggerKey, I3TriggerStatus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusU$ils::tag::configID, boost::optional<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::$ag::typeID, const TriggerKey::TypeID>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::$ourceID, TriggerKey::SourceID>, boost::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, cons$ boost::shared_ptr<const I3DetectorStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorS$atus>; sourceID_type = TriggerKey::SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerK$y::SubtypeID])
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2175">https://code.icecube.wisc.edu/projects/icecube/ticket/2175</a>, reported by icecube</summary>
<p>
```json
{
"status": "closed",
"changetime": "2018-07-20T16:28:05",
"_ts": "1532104085473604",
"description": "First, I am Elim... I know I am terrible for signing in as icecube..\nbut I lost my password and wasn't able to reset it.. please don't hate me...\n\nI am trying to run\n$ python /data/user/elims/strike/clsim_jessie/applytoL2.py --outfile /data/user/elims/strike/clsim_jessie/outputs/events10_toL2_test.i3.gz --infile /data/user/elims/strike/clsim_jessie/outputs/events10_clsim_lea_over1.i3 --holeice 0\n\nThe GCD file is\n/data/sim/sim-new/downloadssz6dx/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz\n\nThe error occurs at Line 204 of /data/user/elims/strike/clsim_jessie/applytoL2.py:\n> tray.AddSegment(trigger_sim.TriggerSim, 'trig',\n> gcd_file = gcd_file,\n> run_id = 1,\n> time_shift_args = time_shift_args)\n\nThe version of trigger sim used here is 163127. And the software built is the current combo/trunk.\n\nThe error message is:\n> FATAL (SimpleMajorityTrigger): Failed to configure this module from the DetectorStatus.(SimpleMajorityTrigger.cxx:268 in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))\n> ERROR (I3Module): SimpleMajorityTrigger_0001: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))\n> Traceback (most recent call last):\n> File \"applytoL2.py\", line 224, in <module>\n> tray.Execute()\n> File \"/data/user/elims/software/gpu_combo/debug/lib/I3Tray.py\", line 256, in Execute\n> super(I3Tray, self).Execute()\n> RuntimeError: Failed to configure this module from the DetectorStatus. (in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))\n\nHere is more printout from before the fatal error:\n> ERROR (trigger-sim): Here's the user specified input : \n> TypeID = 0\n> ConfigID = 1011\n> (DetectorStatusUtils.h:176 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS\ntat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta\ntus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional\n<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI\nD>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo\nst::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector\nStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::\nSourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])\n> ERROR (trigger-sim): Here are the keys in the I3TriggerStatus: \n> key_matches.size() == 0\n> (DetectorStatusUtils.h:186 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS\ntat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta\ntus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional\n<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI\nD>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo\nst::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector\nStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::\nSourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])\n> ERROR (trigger-sim): No match found (DetectorStatusUtils.h:188 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detStat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optiona$<std::pair<TriggerKey, I3TriggerStatus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusU$ils::tag::configID, boost::optional<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::$ag::typeID, const TriggerKey::TypeID>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::$ourceID, TriggerKey::SourceID>, boost::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, cons$ boost::shared_ptr<const I3DetectorStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorS$atus>; sourceID_type = TriggerKey::SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerK$y::SubtypeID])",
"reporter": "icecube",
"cc": "",
"resolution": "invalid",
"time": "2018-07-20T01:47:13",
"component": "analysis",
"summary": "matching trigger config in trigger-sim",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
matching trigger config in trigger-sim (Trac #2175) - First, I am Elim... I know I am terrible for signing in as icecube..
but I lost my password and wasn't able to reset it.. please don't hate me...
I am trying to run
$ python /data/user/elims/strike/clsim_jessie/applytoL2.py --outfile /data/user/elims/strike/clsim_jessie/outputs/events10_toL2_test.i3.gz --infile /data/user/elims/strike/clsim_jessie/outputs/events10_clsim_lea_over1.i3 --holeice 0
The GCD file is
/data/sim/sim-new/downloadssz6dx/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz
The error occurs at Line 204 of /data/user/elims/strike/clsim_jessie/applytoL2.py:
> tray.AddSegment(trigger_sim.TriggerSim, 'trig',
> gcd_file = gcd_file,
> run_id = 1,
> time_shift_args = time_shift_args)
The version of trigger sim used here is 163127. And the software built is the current combo/trunk.
The error message is:
> FATAL (SimpleMajorityTrigger): Failed to configure this module from the DetectorStatus.(SimpleMajorityTrigger.cxx:268 in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))
> ERROR (I3Module): SimpleMajorityTrigger_0001: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))
> Traceback (most recent call last):
> File "applytoL2.py", line 224, in <module>
> tray.Execute()
> File "/data/user/elims/software/gpu_combo/debug/lib/I3Tray.py", line 256, in Execute
> super(I3Tray, self).Execute()
> RuntimeError: Failed to configure this module from the DetectorStatus. (in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))
Here is more printout from before the fatal error:
> ERROR (trigger-sim): Here's the user specified input :
> TypeID = 0
> ConfigID = 1011
> (DetectorStatusUtils.h:176 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS
tat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta
tus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional
<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI
D>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo
st::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector
Status> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::
SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])
> ERROR (trigger-sim): Here are the keys in the I3TriggerStatus:
> key_matches.size() == 0
> (DetectorStatusUtils.h:186 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS
tat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta
tus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional
<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI
D>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo
st::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector
Status> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::
SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])
> ERROR (trigger-sim): No match found (DetectorStatusUtils.h:188 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detStat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optiona$<std::pair<TriggerKey, I3TriggerStatus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusU$ils::tag::configID, boost::optional<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::$ag::typeID, const TriggerKey::TypeID>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::$ourceID, TriggerKey::SourceID>, boost::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, cons$ boost::shared_ptr<const I3DetectorStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorS$atus>; sourceID_type = TriggerKey::SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerK$y::SubtypeID])
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2175">https://code.icecube.wisc.edu/projects/icecube/ticket/2175</a>, reported by icecube</summary>
<p>
```json
{
"status": "closed",
"changetime": "2018-07-20T16:28:05",
"_ts": "1532104085473604",
"description": "First, I am Elim... I know I am terrible for signing in as icecube..\nbut I lost my password and wasn't able to reset it.. please don't hate me...\n\nI am trying to run\n$ python /data/user/elims/strike/clsim_jessie/applytoL2.py --outfile /data/user/elims/strike/clsim_jessie/outputs/events10_toL2_test.i3.gz --infile /data/user/elims/strike/clsim_jessie/outputs/events10_clsim_lea_over1.i3 --holeice 0\n\nThe GCD file is\n/data/sim/sim-new/downloadssz6dx/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz\n\nThe error occurs at Line 204 of /data/user/elims/strike/clsim_jessie/applytoL2.py:\n> tray.AddSegment(trigger_sim.TriggerSim, 'trig',\n> gcd_file = gcd_file,\n> run_id = 1,\n> time_shift_args = time_shift_args)\n\nThe version of trigger sim used here is 163127. And the software built is the current combo/trunk.\n\nThe error message is:\n> FATAL (SimpleMajorityTrigger): Failed to configure this module from the DetectorStatus.(SimpleMajorityTrigger.cxx:268 in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))\n> ERROR (I3Module): SimpleMajorityTrigger_0001: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))\n> Traceback (most recent call last):\n> File \"applytoL2.py\", line 224, in <module>\n> tray.Execute()\n> File \"/data/user/elims/software/gpu_combo/debug/lib/I3Tray.py\", line 256, in Execute\n> super(I3Tray, self).Execute()\n> RuntimeError: Failed to configure this module from the DetectorStatus. (in virtual void SimpleMajorityTrigger::DetectorStatus(I3FramePtr))\n\nHere is more printout from before the fatal error:\n> ERROR (trigger-sim): Here's the user specified input : \n> TypeID = 0\n> ConfigID = 1011\n> (DetectorStatusUtils.h:176 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS\ntat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta\ntus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional\n<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI\nD>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo\nst::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector\nStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::\nSourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])\n> ERROR (trigger-sim): Here are the keys in the I3TriggerStatus: \n> key_matches.size() == 0\n> (DetectorStatusUtils.h:186 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detS\ntat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optional<std::pair<TriggerKey, I3TriggerSta\ntus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::configID, boost::optional\n<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::typeID, const TriggerKey::TypeI\nD>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::sourceID, TriggerKey::SourceID>, boo\nst::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, const boost::shared_ptr<const I3Detector\nStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorStatus>; sourceID_type = TriggerKey::\nSourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerKey::SubtypeID])\n> ERROR (trigger-sim): No match found (DetectorStatusUtils.h:188 in ResultType DetectorStatusUtils::boost_param_default_139GetTriggerStatus(ResultType (*)(), const Args&, int, detStat_type&, sourceID_type&, typeID_type&, configID_type&, subtypeID_type&) [with ResultType = boost::optiona$<std::pair<TriggerKey, I3TriggerStatus> >; Args = boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusU$ils::tag::configID, boost::optional<int> >, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::$ag::typeID, const TriggerKey::TypeID>, boost::parameter::aux::arg_list<const boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::$ourceID, TriggerKey::SourceID>, boost::parameter::aux::arg_list<boost::parameter::aux::tagged_argument<DetectorStatusUtils::tag::detStat, cons$ boost::shared_ptr<const I3DetectorStatus> >, boost::parameter::aux::empty_arg_list> > > >; detStat_type = boost::shared_ptr<const I3DetectorS$atus>; sourceID_type = TriggerKey::SourceID; typeID_type = TriggerKey::TypeID; configID_type = boost::optional<int>; subtypeID_type = TriggerK$y::SubtypeID])",
"reporter": "icecube",
"cc": "",
"resolution": "invalid",
"time": "2018-07-20T01:47:13",
"component": "analysis",
"summary": "matching trigger config in trigger-sim",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
matching trigger config in trigger sim trac first i am elim i know i am terrible for signing in as icecube but i lost my password and wasn t able to reset it please don t hate me i am trying to run python data user elims strike clsim jessie py outfile data user elims strike clsim jessie outputs test gz infile data user elims strike clsim jessie outputs clsim lea holeice the gcd file is data sim sim new gcd geocalibdetectorstatus gz the error occurs at line of data user elims strike clsim jessie py tray addsegment trigger sim triggersim trig gcd file gcd file run id time shift args time shift args the version of trigger sim used here is and the software built is the current combo trunk the error message is fatal simplemajoritytrigger failed to configure this module from the detectorstatus simplemajoritytrigger cxx in virtual void simplemajoritytrigger detectorstatus error simplemajoritytrigger exception thrown cxx in void do void traceback most recent call last file py line in tray execute file data user elims software gpu combo debug lib py line in execute super self execute runtimeerror failed to configure this module from the detectorstatus in virtual void simplemajoritytrigger detectorstatus here is more printout from before the fatal error error trigger sim here s the user specified input typeid configid detectorstatusutils h in resulttype detectorstatusutils boost param default resulttype const args int dets tat type sourceid type typeid type configid type subtypeid type with resulttype boost optional std pair triggerkey tus args boost parameter aux arg list const boost parameter aux tagged argument detectorstatusutils tag configid boost optional boost parameter aux arg list const boost parameter aux tagged argument detectorstatusutils tag typeid const triggerkey typei d boost parameter aux arg list boo st parameter aux arg list boost parameter aux tagged argument detectorstatusutils tag detstat const boost shared ptr const status boost parameter aux empty arg list detstat type boost shared ptr sourceid type triggerkey sourceid typeid type triggerkey typeid configid type boost optional subtypeid type triggerkey subtypeid error trigger sim here are the keys in the key matches size detectorstatusutils h in resulttype detectorstatusutils boost param default resulttype const args int dets tat type sourceid type typeid type configid type subtypeid type with resulttype boost optional std pair triggerkey tus args boost parameter aux arg list const boost parameter aux tagged argument detectorstatusutils tag configid boost optional boost parameter aux arg list const boost parameter aux tagged argument detectorstatusutils tag typeid const triggerkey typei d boost parameter aux arg list boo st parameter aux arg list boost parameter aux tagged argument detectorstatusutils tag detstat const boost shared ptr const status boost parameter aux empty arg list detstat type boost shared ptr sourceid type triggerkey sourceid typeid type triggerkey typeid configid type boost optional subtypeid type triggerkey subtypeid error trigger sim no match found detectorstatusutils h in resulttype detectorstatusutils boost param default resulttype const args int detstat type sourceid type typeid type configid type subtypeid type migrated from json status closed changetime ts description first i am elim i know i am terrible for signing in as icecube nbut i lost my password and wasn t able to reset it please don t hate me n ni am trying to run n python data user elims strike clsim jessie py outfile data user elims strike clsim jessie outputs test gz infile data user elims strike clsim jessie outputs clsim lea holeice n nthe gcd file is n data sim sim new gcd geocalibdetectorstatus gz n nthe error occurs at line of data user elims strike clsim jessie py n tray addsegment trigger sim triggersim trig n gcd file gcd file n run id n time shift args time shift args n nthe version of trigger sim used here is and the software built is the current combo trunk n nthe error message is n fatal simplemajoritytrigger failed to configure this module from the detectorstatus simplemajoritytrigger cxx in virtual void simplemajoritytrigger detectorstatus n error simplemajoritytrigger exception thrown cxx in void do void n traceback most recent call last n file py line in n tray execute n file data user elims software gpu combo debug lib py line in execute n super self execute n runtimeerror failed to configure this module from the detectorstatus in virtual void simplemajoritytrigger detectorstatus n nhere is more printout from before the fatal error n error trigger sim here s the user specified input n typeid n configid n detectorstatusutils h in resulttype detectorstatusutils boost param default resulttype const args int dets ntat type sourceid type typeid type configid type subtypeid type n error trigger sim here are the keys in the n key matches size n detectorstatusutils h in resulttype detectorstatusutils boost param default resulttype const args int dets ntat type sourceid type typeid type configid type subtypeid type n error trigger sim no match found detectorstatusutils h in resulttype detectorstatusutils boost param default resulttype const args int detstat type sourceid type typeid type configid type subtypeid type reporter icecube cc resolution invalid time component analysis summary matching trigger config in trigger sim priority normal keywords milestone owner type defect
| 1
|
227,838
| 7,543,521,029
|
IssuesEvent
|
2018-04-17 15:43:36
|
AZMAG/map-ATP
|
https://api.github.com/repos/AZMAG/map-ATP
|
opened
|
Prevent submittal of form when new location button is clicked.
|
Issue: Bug Priority: High
|

Hitting the “Click here to select a new location” button after point is initially placed completely clears the form and re-loads the application.
Correct behavior is to keep the form filled and not reload the application, but allow the user to pick another spot for the marker.
|
1.0
|
Prevent submittal of form when new location button is clicked. - 
Hitting the “Click here to select a new location” button after point is initially placed completely clears the form and re-loads the application.
Correct behavior is to keep the form filled and not reload the application, but allow the user to pick another spot for the marker.
|
non_defect
|
prevent submittal of form when new location button is clicked hitting the “click here to select a new location” button after point is initially placed completely clears the form and re loads the application correct behavior is to keep the form filled and not reload the application but allow the user to pick another spot for the marker
| 0
|
290,260
| 25,045,931,561
|
IssuesEvent
|
2022-11-05 08:35:00
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: jepsen/bank-multitable/majority-ring failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-22.1
|
roachtest.jepsen/bank-multitable/majority-ring [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7330122&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7330122&tab=artifacts#/jepsen/bank-multitable/majority-ring) on release-22.1 @ [6fdb8f55c6e3224d1d4b8bb2b5f7e757d57d29df](https://github.com/cockroachdb/cockroach/commits/6fdb8f55c6e3224d1d4b8bb2b5f7e757d57d29df):
```
(1) attached stack trace
-- stack trace:
| main.(*clusterImpl).RunE
| main/pkg/cmd/roachtest/cluster.go:1968
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:172
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:210
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) output in run_083421.055241452_n6_bash
Wraps: (3) bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.1.99 -n 10.142.0.102 -n 10.142.1.192 -n 10.142.1.183 -n 10.142.1.137 \
| --test bank-multitable --nemesis majority-ring \
| > invoke.log 2>&1 \
| " returned
| stderr:
|
| stdout:
Wraps: (4) SSH_PROBLEM
Wraps: (5) Node 6. Command with error:
| ``````
| bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.1.99 -n 10.142.0.102 -n 10.142.1.192 -n 10.142.1.183 -n 10.142.1.137 \
| --test bank-multitable --nemesis majority-ring \
| > invoke.log 2>&1 \
| "
| ``````
Wraps: (6) exit status 255
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.SSH (5) *hintdetail.withDetail (6) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/bank-multitable/majority-ring.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: jepsen/bank-multitable/majority-ring failed - roachtest.jepsen/bank-multitable/majority-ring [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7330122&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7330122&tab=artifacts#/jepsen/bank-multitable/majority-ring) on release-22.1 @ [6fdb8f55c6e3224d1d4b8bb2b5f7e757d57d29df](https://github.com/cockroachdb/cockroach/commits/6fdb8f55c6e3224d1d4b8bb2b5f7e757d57d29df):
```
(1) attached stack trace
-- stack trace:
| main.(*clusterImpl).RunE
| main/pkg/cmd/roachtest/cluster.go:1968
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:172
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:210
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) output in run_083421.055241452_n6_bash
Wraps: (3) bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.1.99 -n 10.142.0.102 -n 10.142.1.192 -n 10.142.1.183 -n 10.142.1.137 \
| --test bank-multitable --nemesis majority-ring \
| > invoke.log 2>&1 \
| " returned
| stderr:
|
| stdout:
Wraps: (4) SSH_PROBLEM
Wraps: (5) Node 6. Command with error:
| ``````
| bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.1.99 -n 10.142.0.102 -n 10.142.1.192 -n 10.142.1.183 -n 10.142.1.137 \
| --test bank-multitable --nemesis majority-ring \
| > invoke.log 2>&1 \
| "
| ``````
Wraps: (6) exit status 255
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.SSH (5) *hintdetail.withDetail (6) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/bank-multitable/majority-ring.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_defect
|
roachtest jepsen bank multitable majority ring failed roachtest jepsen bank multitable majority ring with on release attached stack trace stack trace main clusterimpl rune main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests runjepsen github com cockroachdb cockroach pkg cmd roachtest tests jepsen go github com cockroachdb cockroach pkg cmd roachtest tests runjepsen github com cockroachdb cockroach pkg cmd roachtest tests jepsen go runtime goexit goroot src runtime asm s wraps output in run bash wraps bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test bank multitable nemesis majority ring invoke log returned stderr stdout wraps ssh problem wraps node command with error bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test bank multitable nemesis majority ring invoke log wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails errors ssh hintdetail withdetail exec exiterror help see see cc cockroachdb kv triage
| 0
|
55,412
| 14,442,942,086
|
IssuesEvent
|
2020-12-07 18:53:46
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
508-defect-2 [FOCUS MANAGEMENT, SCREENREADER]: Focus on page load SHOULD be consistent
|
508-defect-3 508-issue-focus-mgmt 508/Accessibility staging-review vsa vsa-ebenefits
|
# [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Josh Kim
## Details
This is similar to [this ticket for caregiver's post-submission page](https://github.com/department-of-veterans-affairs/va.gov-team/issues/9305) @jenstrickland identified earlier this year.
"On the final confirmation screen, the focus is on the h1, but had been on the h2 throughout the flow. On the confirmation screen the focus **should** be on the h2 "You've successfully submitted your application." This is of consequence for the screen reader experience."
## Acceptance Criteria
- [ ] Set the focus to the page's `h2` confirmation.
## Environment
* Operating System: Mac, Windows
* Browser: Safari, Firefox
* Screenreading device: VoiceOver (Mac), NVDA (Firefox)
* Server destination: staging, production
## Steps to Recreate
1. [Go to the form 28-8832 wizard page](https://staging.va.gov/careers-employment/education-and-career-counseling/apply-career-guidance-form-28-8832/introduction).
2. Start screenreading device listed in Environment.
3. Complete the form and submit.
4. Confirm the focus goes to the `h1` instead of the `h2` confirmation.
## Solution
- [Complete this ticket first to fix the heading level order.](https://github.com/department-of-veterans-affairs/va.gov-team/issues/16735)
- Set the focus to the page's `h2` confirmation.
## WCAG or Vendor Guidance (optional)
* [W3C WCAG Consistent Identification](https://www.w3.org/WAI/WCAG21/Understanding/consistent-identification.html)
## Screenshots or Trace Logs
<img width="864" alt="Screen Shot 2020-12-07 at 12 21 47 PM" src="https://user-images.githubusercontent.com/14154792/101391892-fadb7880-3892-11eb-916f-9ed6f0e3d654.png">
|
1.0
|
508-defect-2 [FOCUS MANAGEMENT, SCREENREADER]: Focus on page load SHOULD be consistent - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Josh Kim
## Details
This is similar to [this ticket for caregiver's post-submission page](https://github.com/department-of-veterans-affairs/va.gov-team/issues/9305) @jenstrickland identified earlier this year.
"On the final confirmation screen, the focus is on the h1, but had been on the h2 throughout the flow. On the confirmation screen the focus **should** be on the h2 "You've successfully submitted your application." This is of consequence for the screen reader experience."
## Acceptance Criteria
- [ ] Set the focus to the page's `h2` confirmation.
## Environment
* Operating System: Mac, Windows
* Browser: Safari, Firefox
* Screenreading device: VoiceOver (Mac), NVDA (Firefox)
* Server destination: staging, production
## Steps to Recreate
1. [Go to the form 28-8832 wizard page](https://staging.va.gov/careers-employment/education-and-career-counseling/apply-career-guidance-form-28-8832/introduction).
2. Start screenreading device listed in Environment.
3. Complete the form and submit.
4. Confirm the focus goes to the `h1` instead of the `h2` confirmation.
## Solution
- [Complete this ticket first to fix the heading level order.](https://github.com/department-of-veterans-affairs/va.gov-team/issues/16735)
- Set the focus to the page's `h2` confirmation.
## WCAG or Vendor Guidance (optional)
* [W3C WCAG Consistent Identification](https://www.w3.org/WAI/WCAG21/Understanding/consistent-identification.html)
## Screenshots or Trace Logs
<img width="864" alt="Screen Shot 2020-12-07 at 12 21 47 PM" src="https://user-images.githubusercontent.com/14154792/101391892-fadb7880-3892-11eb-916f-9ed6f0e3d654.png">
|
defect
|
defect focus on page load should be consistent feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact josh kim details this is similar to jenstrickland identified earlier this year on the final confirmation screen the focus is on the but had been on the throughout the flow on the confirmation screen the focus should be on the you ve successfully submitted your application this is of consequence for the screen reader experience acceptance criteria set the focus to the page s confirmation environment operating system mac windows browser safari firefox screenreading device voiceover mac nvda firefox server destination staging production steps to recreate start screenreading device listed in environment complete the form and submit confirm the focus goes to the instead of the confirmation solution set the focus to the page s confirmation wcag or vendor guidance optional screenshots or trace logs img width alt screen shot at pm src
| 1
|
53,054
| 13,260,851,432
|
IssuesEvent
|
2020-08-20 18:52:11
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
icetray/trunk/resources/docs/i3frame.rst clean up (Trac #636)
|
IceTray Migrated from Trac defect
|
needs some serious help
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/636">https://code.icecube.wisc.edu/projects/icecube/ticket/636</a>, reported by anonymousand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"_ts": "1416713877111216",
"description": "needs some serious help",
"reporter": "anonymous",
"cc": "",
"resolution": "fixed",
"time": "2011-05-19T02:00:44",
"component": "IceTray",
"summary": "icetray/trunk/resources/docs/i3frame.rst clean up",
"priority": "normal",
"keywords": "documentation",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
icetray/trunk/resources/docs/i3frame.rst clean up (Trac #636) - needs some serious help
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/636">https://code.icecube.wisc.edu/projects/icecube/ticket/636</a>, reported by anonymousand owned by troy</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"_ts": "1416713877111216",
"description": "needs some serious help",
"reporter": "anonymous",
"cc": "",
"resolution": "fixed",
"time": "2011-05-19T02:00:44",
"component": "IceTray",
"summary": "icetray/trunk/resources/docs/i3frame.rst clean up",
"priority": "normal",
"keywords": "documentation",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
defect
|
icetray trunk resources docs rst clean up trac needs some serious help migrated from json status closed changetime ts description needs some serious help reporter anonymous cc resolution fixed time component icetray summary icetray trunk resources docs rst clean up priority normal keywords documentation milestone owner troy type defect
| 1
|
60,314
| 17,023,394,477
|
IssuesEvent
|
2021-07-03 01:48:08
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Specific exception for "the server failed to allocate memory"
|
Component: api Priority: major Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 11.02am, Thursday, 30th April 2009]**
It would be good if there were a specific exception for "failed to allocate memory", so that whichways could trap it. At present any failure in whichways tells the user to e-mail me, which is good for Potlatch bugs, but not so good when it's a generic server problem.
|
1.0
|
Specific exception for "the server failed to allocate memory" - **[Submitted to the original trac issue database at 11.02am, Thursday, 30th April 2009]**
It would be good if there were a specific exception for "failed to allocate memory", so that whichways could trap it. At present any failure in whichways tells the user to e-mail me, which is good for Potlatch bugs, but not so good when it's a generic server problem.
|
defect
|
specific exception for the server failed to allocate memory it would be good if there were a specific exception for failed to allocate memory so that whichways could trap it at present any failure in whichways tells the user to e mail me which is good for potlatch bugs but not so good when it s a generic server problem
| 1
|
460,031
| 13,203,479,465
|
IssuesEvent
|
2020-08-14 14:14:38
|
novelis-prod/Digital-CoE-Operations-Data---Public
|
https://api.github.com/repos/novelis-prod/Digital-CoE-Operations-Data---Public
|
closed
|
Entry-Head-Discard values are negative in coil lineage table
|
Plant: Pinda Priority #2 bug help wanted
|
Entry-Head-Discard values are negative in coil lineage table

|
1.0
|
Entry-Head-Discard values are negative in coil lineage table - Entry-Head-Discard values are negative in coil lineage table

|
non_defect
|
entry head discard values are negative in coil lineage table entry head discard values are negative in coil lineage table
| 0
|
67,858
| 14,891,992,998
|
IssuesEvent
|
2021-01-21 01:46:14
|
Nehamaefi/fitbit-api-example-java
|
https://api.github.com/repos/Nehamaefi/fitbit-api-example-java
|
opened
|
CVE-2020-36189 (Medium) detected in jackson-databind-2.8.1.jar
|
security vulnerability
|
## CVE-2020-36189 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-36189","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189","cvss2Severity":"medium","cvss2Score":"6.8","extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-36189 (Medium) detected in jackson-databind-2.8.1.jar - ## CVE-2020-36189 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-36189","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189","cvss2Severity":"medium","cvss2Score":"6.8","extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file fitbit api example java pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db drivermanagerconnectionsource publish date url a href cvss score details base score metrics not available isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db drivermanagerconnectionsource vulnerabilityurl
| 0
|
6,459
| 2,610,243,583
|
IssuesEvent
|
2015-02-26 19:17:28
|
chrsmith/jsjsj122
|
https://api.github.com/repos/chrsmith/jsjsj122
|
opened
|
台州割包皮过长费用
|
auto-migrated Priority-Medium Type-Defect
|
```
台州割包皮过长费用【台州五洲生殖医院】24小时健康咨询热
线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市椒
江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、118�
��198及椒江一金清公交车直达枫南小区,乘坐107、105、109、112
、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 31 May 2014 at 2:06
|
1.0
|
台州割包皮过长费用 - ```
台州割包皮过长费用【台州五洲生殖医院】24小时健康咨询热
线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市椒
江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、118�
��198及椒江一金清公交车直达枫南小区,乘坐107、105、109、112
、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 31 May 2014 at 2:06
|
defect
|
台州割包皮过长费用 台州割包皮过长费用【台州五洲生殖医院】 线 微信号tzwzszyy 医院地址 台州市椒 (枫南大转盘旁)乘车线路 、 、 � �� , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
| 1
|
52,134
| 13,211,392,047
|
IssuesEvent
|
2020-08-15 22:48:38
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[tableio] named argument in class declaration (Trac #1733)
|
Incomplete Migration Migrated from Trac cmake defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1733">https://code.icecube.wisc.edu/projects/icecube/ticket/1733</a>, reported by kjmeagherand owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-06-10T16:08:49",
"_ts": "1465574929009082",
"description": "The sphinx build gives the following error:\n{{{\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/tableio/enum3.py\", line 43\n class enum(baseEnum, metaclass=metaEnum):\n ^\nSyntaxError: invalid syntax\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "duplicate",
"time": "2016-06-10T07:25:28",
"component": "cmake",
"summary": "[tableio] named argument in class declaration",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[tableio] named argument in class declaration (Trac #1733) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1733">https://code.icecube.wisc.edu/projects/icecube/ticket/1733</a>, reported by kjmeagherand owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-06-10T16:08:49",
"_ts": "1465574929009082",
"description": "The sphinx build gives the following error:\n{{{\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/tableio/enum3.py\", line 43\n class enum(baseEnum, metaclass=metaEnum):\n ^\nSyntaxError: invalid syntax\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "duplicate",
"time": "2016-06-10T07:25:28",
"component": "cmake",
"summary": "[tableio] named argument in class declaration",
"priority": "normal",
"keywords": "",
"milestone": "Long-Term Future",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
|
defect
|
named argument in class declaration trac migrated from json status closed changetime ts description the sphinx build gives the following error n ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube tableio py line n class enum baseenum metaclass metaenum n nsyntaxerror invalid syntax n n reporter kjmeagher cc resolution duplicate time component cmake summary named argument in class declaration priority normal keywords milestone long term future owner jvansanten type defect
| 1
|
16,179
| 9,303,083,187
|
IssuesEvent
|
2019-03-24 14:57:08
|
scala/bug
|
https://api.github.com/repos/scala/bug
|
closed
|
HashMap.merged for new CHAMP-based HashMap
|
has PR library:collections performance
|
This method needs to be optimized in the same way as it was in the old HashMap implementation.
|
True
|
HashMap.merged for new CHAMP-based HashMap - This method needs to be optimized in the same way as it was in the old HashMap implementation.
|
non_defect
|
hashmap merged for new champ based hashmap this method needs to be optimized in the same way as it was in the old hashmap implementation
| 0
|
65,190
| 19,253,871,294
|
IssuesEvent
|
2021-12-09 09:13:04
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Poll Create dialog in high contrast theme: delete answer button is a circle
|
T-Defect S-Minor A-Appearance A-Themes-Official O-Occasional A-Polls Z-Labs
|
### Steps to reproduce
1. Turn on Polls in Labs settings
2. Choose the high contrast theme
3. Create a poll
### Outcome
#### What did you expect?
The delete answer buttons should look like "X".
#### What happened instead?
Instead they are filled circles:

### Operating system
Ubuntu 21.10
### Browser information
Firefox 94.0 (64-bit)
### URL for webapp
https://develop.element.io
### Application version
Element version: 10e121a5143f-react-6d3865bdd542-js-b33b01df0f32 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Poll Create dialog in high contrast theme: delete answer button is a circle - ### Steps to reproduce
1. Turn on Polls in Labs settings
2. Choose the high contrast theme
3. Create a poll
### Outcome
#### What did you expect?
The delete answer buttons should look like "X".
#### What happened instead?
Instead they are filled circles:

### Operating system
Ubuntu 21.10
### Browser information
Firefox 94.0 (64-bit)
### URL for webapp
https://develop.element.io
### Application version
Element version: 10e121a5143f-react-6d3865bdd542-js-b33b01df0f32 Olm version: 3.2.3
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
poll create dialog in high contrast theme delete answer button is a circle steps to reproduce turn on polls in labs settings choose the high contrast theme create a poll outcome what did you expect the delete answer buttons should look like x what happened instead instead they are filled circles operating system ubuntu browser information firefox bit url for webapp application version element version react js olm version homeserver matrix org will you send logs no
| 1
|
23,039
| 3,755,593,081
|
IssuesEvent
|
2016-03-12 19:27:47
|
RomanGolovanov/aMetro
|
https://api.github.com/repos/RomanGolovanov/aMetro
|
closed
|
Отображать на карте остальные векторные объекты (реки, кварталы и т.д.)
|
auto-migrated Component-UI Priority-Low Type-Defect
|
```
Пример: Москва, Казань
```
Original issue reported on code.google.com by `G.Glaur...@gmail.com` on 30 Jan 2010 at 9:13
|
1.0
|
Отображать на карте остальные векторные объекты (реки, кварталы и т.д.) - ```
Пример: Москва, Казань
```
Original issue reported on code.google.com by `G.Glaur...@gmail.com` on 30 Jan 2010 at 9:13
|
defect
|
отображать на карте остальные векторные объекты реки кварталы и т д пример москва казань original issue reported on code google com by g glaur gmail com on jan at
| 1
|
78,389
| 27,492,835,486
|
IssuesEvent
|
2023-03-04 20:49:08
|
DependencyTrack/dependency-track
|
https://api.github.com/repos/DependencyTrack/dependency-track
|
opened
|
javax.jdo.JDOUserException: Field org.dependencytrack.model.Component.repositoryMeta is not marked as persistent so cannot be queried
|
defect in triage
|
### Current Behavior
When opening the Policy Violations tab, the PolicyViolationResource throws the following error and no violations are shown. I think this bug was introduced somewhere last week...
``
javax.jdo.JDOUserException: Field org.dependencytrack.model.Component.repositoryMeta is not marked as persistent so cannot be queried
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:698)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:276)
at alpine.persistence.AbstractAlpineQueryManager.execute(AbstractAlpineQueryManager.java:174)
at org.dependencytrack.persistence.PolicyQueryManager.getPolicyViolations(PolicyQueryManager.java:280)
at org.dependencytrack.persistence.QueryManager.getPolicyViolations(QueryManager.java:604)
at org.dependencytrack.resources.v1.PolicyViolationResource.getViolationsByProject(PolicyViolationResource.java:102)
### Steps to Reproduce
1.Open the Policy Violations tab for a project in the frontend
### Expected Behavior
No Exception is thrown and violations are shown (if any)
### Dependency-Track Version
4.8.0-SNAPSHOT
### Dependency-Track Distribution
Executable WAR
### Database Server
H2
### Database Server Version
_No response_
### Browser
Google Chrome
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
|
1.0
|
javax.jdo.JDOUserException: Field org.dependencytrack.model.Component.repositoryMeta is not marked as persistent so cannot be queried - ### Current Behavior
When opening the Policy Violations tab, the PolicyViolationResource throws the following error and no violations are shown. I think this bug was introduced somewhere last week...
``
javax.jdo.JDOUserException: Field org.dependencytrack.model.Component.repositoryMeta is not marked as persistent so cannot be queried
at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:698)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:456)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:276)
at alpine.persistence.AbstractAlpineQueryManager.execute(AbstractAlpineQueryManager.java:174)
at org.dependencytrack.persistence.PolicyQueryManager.getPolicyViolations(PolicyQueryManager.java:280)
at org.dependencytrack.persistence.QueryManager.getPolicyViolations(QueryManager.java:604)
at org.dependencytrack.resources.v1.PolicyViolationResource.getViolationsByProject(PolicyViolationResource.java:102)
### Steps to Reproduce
1.Open the Policy Violations tab for a project in the frontend
### Expected Behavior
No Exception is thrown and violations are shown (if any)
### Dependency-Track Version
4.8.0-SNAPSHOT
### Dependency-Track Distribution
Executable WAR
### Database Server
H2
### Database Server Version
_No response_
### Browser
Google Chrome
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
|
defect
|
javax jdo jdouserexception field org dependencytrack model component repositorymeta is not marked as persistent so cannot be queried current behavior when opening the policy violations tab the policyviolationresource throws the following error and no violations are shown i think this bug was introduced somewhere last week javax jdo jdouserexception field org dependencytrack model component repositorymeta is not marked as persistent so cannot be queried at org datanucleus api jdo jdoadapter getjdoexceptionfornucleusexception jdoadapter java at org datanucleus api jdo jdoquery executeinternal jdoquery java at org datanucleus api jdo jdoquery execute jdoquery java at alpine persistence abstractalpinequerymanager execute abstractalpinequerymanager java at org dependencytrack persistence policyquerymanager getpolicyviolations policyquerymanager java at org dependencytrack persistence querymanager getpolicyviolations querymanager java at org dependencytrack resources policyviolationresource getviolationsbyproject policyviolationresource java steps to reproduce open the policy violations tab for a project in the frontend expected behavior no exception is thrown and violations are shown if any dependency track version snapshot dependency track distribution executable war database server database server version no response browser google chrome checklist i have read and understand the i have checked the for whether this defect was already reported
| 1
|
37,524
| 8,414,740,450
|
IssuesEvent
|
2018-10-13 06:32:04
|
purebred-mua/purebred
|
https://api.github.com/repos/purebred-mua/purebred
|
opened
|
Test timeout before purebred started
|
defect
|
One test failed today with the following output:
```
use file browser to add attachments: FAIL (5.05s)
Wait time exceeded. Condition not met: 'Literal "Purebred: Item"' last screen shot:
export TERM=ansi
export GHC=stack
export GHC_ARGS="$STACK_ARGS ghc --"
export PUREBRED_CONFIG_DIR=/tmp/purebredtest8989
purebred --database /tmp/purebredtest8989/Maildir/
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pure
bred$ export TERM=ansi
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ export GHC=stack
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ export GHC_ARGS="$STACK_ARGS ghc --"
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ export PUREBRED_CONFIG_DIR=/tmp/purebredtest8989
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ purebred --database /tmp/purebredtest8989/Maildir/
Selected resolver: lts-9.21
raw: "export TERM=ansi\nexport GHC=stack\nexport GHC_ARGS=\"$STACK_ARGS ghc --\"\nexport PUREBRED_CONFIG_DIR=/tmp/purebredtest8989\npurebred --database /tmp/purebredtest8989/Maildir/\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pure\nbred$ export TERM=ansi\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ export GHC=stack\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ export GHC_ARGS=\"$STACK_ARGS ghc --\"\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ export PUREBRED_CONFIG_DIR=/tmp/purebredtest8989\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ purebred --database /tmp/purebredtest8989/Maildir/\nSelected resolver: lts-9.21\n\n\n\n\n\n\n\n"
1 out of 25 tests failed (88.46s)
```
I currently have no hypothesis as to why this happens. Was it merely that IO was busy which caused a delay setting the environment variables until it hit the timeout?
Anyhow, we should look into it if it happens more often.
|
1.0
|
Test timeout before purebred started - One test failed today with the following output:
```
use file browser to add attachments: FAIL (5.05s)
Wait time exceeded. Condition not met: 'Literal "Purebred: Item"' last screen shot:
export TERM=ansi
export GHC=stack
export GHC_ARGS="$STACK_ARGS ghc --"
export PUREBRED_CONFIG_DIR=/tmp/purebredtest8989
purebred --database /tmp/purebredtest8989/Maildir/
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pure
bred$ export TERM=ansi
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ export GHC=stack
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ export GHC_ARGS="$STACK_ARGS ghc --"
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ export PUREBRED_CONFIG_DIR=/tmp/purebredtest8989
travis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur
ebred$ purebred --database /tmp/purebredtest8989/Maildir/
Selected resolver: lts-9.21
raw: "export TERM=ansi\nexport GHC=stack\nexport GHC_ARGS=\"$STACK_ARGS ghc --\"\nexport PUREBRED_CONFIG_DIR=/tmp/purebredtest8989\npurebred --database /tmp/purebredtest8989/Maildir/\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pure\nbred$ export TERM=ansi\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ export GHC=stack\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ export GHC_ARGS=\"$STACK_ARGS ghc --\"\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ export PUREBRED_CONFIG_DIR=/tmp/purebredtest8989\ntravis@travis-job-fddae86d-6b88-4787-a62a-052ac37ea9b4:~/build/purebred-mua/pur\nebred$ purebred --database /tmp/purebredtest8989/Maildir/\nSelected resolver: lts-9.21\n\n\n\n\n\n\n\n"
1 out of 25 tests failed (88.46s)
```
I currently have no hypothesis as to why this happens. Was it merely that IO was busy which caused a delay setting the environment variables until it hit the timeout?
Anyhow, we should look into it if it happens more often.
|
defect
|
test timeout before purebred started one test failed today with the following output use file browser to add attachments fail wait time exceeded condition not met literal purebred item last screen shot export term ansi export ghc stack export ghc args stack args ghc export purebred config dir tmp purebred database tmp maildir travis travis job build purebred mua pure bred export term ansi travis travis job build purebred mua pur ebred export ghc stack travis travis job build purebred mua pur ebred export ghc args stack args ghc travis travis job build purebred mua pur ebred export purebred config dir tmp travis travis job build purebred mua pur ebred purebred database tmp maildir selected resolver lts raw export term ansi nexport ghc stack nexport ghc args stack args ghc nexport purebred config dir tmp npurebred database tmp maildir ntravis travis job build purebred mua pure nbred export term ansi ntravis travis job build purebred mua pur nebred export ghc stack ntravis travis job build purebred mua pur nebred export ghc args stack args ghc ntravis travis job build purebred mua pur nebred export purebred config dir tmp ntravis travis job build purebred mua pur nebred purebred database tmp maildir nselected resolver lts n n n n n n n n out of tests failed i currently have no hypothesis as to why this happens was it merely that io was busy which caused a delay setting the environment variables until it hit the timeout anyhow we should look into it if it happens more often
| 1
|
51,768
| 13,211,304,154
|
IssuesEvent
|
2020-08-15 22:10:30
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
steamshovel results.i3 (preloaded for UWRF) Segmentation fault (Trac #1014)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1014">https://code.icecube.wisc.edu/projects/icecube/ticket/1014</a>, reported by jdiercksand owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:10",
"_ts": "1458335650323600",
"description": "students at UWRF had preloaded a results.i3 for bootcamp and I'm running into a repeatable segfault moving into Count von Count from any other frame (both P and Q) and when moving out of Count von Count to any other frame. Tested on Ubuntu 15.04.",
"reporter": "jdiercks",
"cc": "",
"resolution": "worksforme",
"time": "2015-06-09T19:16:54",
"component": "combo core",
"summary": "steamshovel results.i3 (preloaded for UWRF) Segmentation fault",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
steamshovel results.i3 (preloaded for UWRF) Segmentation fault (Trac #1014) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1014">https://code.icecube.wisc.edu/projects/icecube/ticket/1014</a>, reported by jdiercksand owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:10",
"_ts": "1458335650323600",
"description": "students at UWRF had preloaded a results.i3 for bootcamp and I'm running into a repeatable segfault moving into Count von Count from any other frame (both P and Q) and when moving out of Count von Count to any other frame. Tested on Ubuntu 15.04.",
"reporter": "jdiercks",
"cc": "",
"resolution": "worksforme",
"time": "2015-06-09T19:16:54",
"component": "combo core",
"summary": "steamshovel results.i3 (preloaded for UWRF) Segmentation fault",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
|
defect
|
steamshovel results preloaded for uwrf segmentation fault trac migrated from json status closed changetime ts description students at uwrf had preloaded a results for bootcamp and i m running into a repeatable segfault moving into count von count from any other frame both p and q and when moving out of count von count to any other frame tested on ubuntu reporter jdiercks cc resolution worksforme time component combo core summary steamshovel results preloaded for uwrf segmentation fault priority normal keywords milestone owner hdembinski type defect
| 1
|
165,610
| 26,199,710,155
|
IssuesEvent
|
2023-01-03 16:22:00
|
coder/coder
|
https://api.github.com/repos/coder/coder
|
closed
|
users: table looks kind of squished
|
site design
|
If I am not an `Owner` or `Template Admin` the roles look squished
<img width="1093" alt="Screen Shot 2022-10-10 at 4 25 35 PM" src="https://user-images.githubusercontent.com/22407953/194954805-097f271f-50e8-4cf0-af2e-fe4b88f39392.png">
|
1.0
|
users: table looks kind of squished - If I am not an `Owner` or `Template Admin` the roles look squished
<img width="1093" alt="Screen Shot 2022-10-10 at 4 25 35 PM" src="https://user-images.githubusercontent.com/22407953/194954805-097f271f-50e8-4cf0-af2e-fe4b88f39392.png">
|
non_defect
|
users table looks kind of squished if i am not an owner or template admin the roles look squished img width alt screen shot at pm src
| 0
|
183,489
| 14,234,014,840
|
IssuesEvent
|
2020-11-18 13:00:53
|
eclipse/che
|
https://api.github.com/repos/eclipse/che
|
closed
|
.NET Core devfile E2E test is flaky on "Language server autocomplete validation" step
|
area/qe e2e-test/failure kind/bug severity/P2
|
### Describe the bug
**.NET Core** devfile E2E test is flaky on _Language serve autocomplete validation_ step https://codeready-workspaces-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/basic-MultiUser-Che-check-e2e-tests-against-k8s/2160/console
```
92 passing (30m)
3 pending
3 failing
1) .NET Core test
Language server validation
Suggestion invoking:
TimeoutError: Wait timed out after 82807ms
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2201:17
at ManagedPromise.invokeCallback_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:1376:14)
at TaskQueue.execute_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3084:14)
at TaskQueue.executeNext_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3067:27)
at asyncRun (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2927:27)
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:668:7
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
From: Task: <anonymous wait>
at scheduleWait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2188:20)
at ControlFlow.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2517:12)
at thenableWebDriverProxy.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/webdriver.js:934:29)
at Editor.<anonymous> (/tmp/e2e/pageobjects/ide/Editor.ts:105:45)
at Generator.next (<anonymous>)
at /tmp/e2e/dist/pageobjects/ide/Editor.js:19:71
at new Promise (<anonymous>)
at __awaiter (/tmp/e2e/dist/pageobjects/ide/Editor.js:15:12)
at Editor.waitSuggestionWithScrolling (/tmp/e2e/dist/pageobjects/ide/Editor.js:105:16)
at Object.<anonymous> (/tmp/e2e/testsLibrary/LsTests.ts:48:22)
at Generator.next (<anonymous>)
at fulfilled (/tmp/e2e/dist/testsLibrary/LsTests.js:13:58)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
2) .NET Core test
Language server validation
Error highlighting:
TimeoutError: Exceeded maximum visibility checkings attempts, problems with 'StaleElementReferenceError' of 'By(xpath, //div[contains(@style, 'top:418px')]//div[contains(@class, 'squiggly-error')])' element
at DriverHelper.<anonymous> (/tmp/e2e/utils/DriverHelper.ts:145:15)
at Generator.throw (<anonymous>)
at rejected (/tmp/e2e/dist/utils/DriverHelper.js:17:65)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
3) .NET Core test
Language server validation
Autocomplete:
TimeoutError: Wait timed out after 92753ms
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2201:17
at ManagedPromise.invokeCallback_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:1376:14)
at TaskQueue.execute_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3084:14)
at TaskQueue.executeNext_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3067:27)
at asyncRun (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2927:27)
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:668:7
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
From: Task: <anonymous wait>
at scheduleWait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2188:20)
at ControlFlow.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2517:12)
at thenableWebDriverProxy.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/webdriver.js:934:29)
at Editor.<anonymous> (/tmp/e2e/pageobjects/ide/Editor.ts:105:45)
at Generator.next (<anonymous>)
at /tmp/e2e/dist/pageobjects/ide/Editor.js:19:71
at new Promise (<anonymous>)
at __awaiter (/tmp/e2e/dist/pageobjects/ide/Editor.js:15:12)
at Editor.waitSuggestionWithScrolling (/tmp/e2e/dist/pageobjects/ide/Editor.js:105:16)
at Object.<anonymous> (/tmp/e2e/testsLibrary/LsTests.ts:57:22)
at Generator.next (<anonymous>)
at fulfilled (/tmp/e2e/dist/testsLibrary/LsTests.js:13:58)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
```

### Che version
<!-- (if workspace is running, version can be obtained with help/about menu) -->
- [ ] latest
- [x] nightly (7.18.0-SNAPSHOT)
- [ ] other: please specify
### Steps to reproduce
https://github.com/eclipse/che/blob/master/tests/e2e/tests/devfiles/DotNetCore.spec.ts#L52
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Runtime
- [ ] kubernetes (include output of `kubectl version`)
- [ ] Openshift (include output of `oc version`)
- [x] minikube 1.1.1
- [ ] minishift (include output of `minishift version` and `oc version`)
- [ ] docker-desktop + K8S (include output of `docker version` and `kubectl version`)
- [ ] other: (please specify)
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
### Installation method
- [x] chectl
* provide a full command that was used to deploy Eclipse Che (including the output)
* provide an output of `chectl version` command
- [ ] OperatorHub
- [ ] I don't know
### Environment
- [ ] my computer
- [ ] Windows
- [ ] Linux
- [ ] macOS
- [ ] Cloud
- [ ] Amazon
- [ ] Azure
- [ ] GCE
- [ ] other (please specify)
- [x] other: CRW CCI
### Eclipse Che Logs
<!-- https://www.eclipse.org/che/docs/che-7/collecting-logs-using-chectl -->
### Additional context
<!-- Add any other context about the problem here. -->
|
1.0
|
.NET Core devfile E2E test is flaky on "Language server autocomplete validation" step - ### Describe the bug
**.NET Core** devfile E2E test is flaky on _Language serve autocomplete validation_ step https://codeready-workspaces-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/basic-MultiUser-Che-check-e2e-tests-against-k8s/2160/console
```
92 passing (30m)
3 pending
3 failing
1) .NET Core test
Language server validation
Suggestion invoking:
TimeoutError: Wait timed out after 82807ms
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2201:17
at ManagedPromise.invokeCallback_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:1376:14)
at TaskQueue.execute_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3084:14)
at TaskQueue.executeNext_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3067:27)
at asyncRun (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2927:27)
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:668:7
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
From: Task: <anonymous wait>
at scheduleWait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2188:20)
at ControlFlow.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2517:12)
at thenableWebDriverProxy.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/webdriver.js:934:29)
at Editor.<anonymous> (/tmp/e2e/pageobjects/ide/Editor.ts:105:45)
at Generator.next (<anonymous>)
at /tmp/e2e/dist/pageobjects/ide/Editor.js:19:71
at new Promise (<anonymous>)
at __awaiter (/tmp/e2e/dist/pageobjects/ide/Editor.js:15:12)
at Editor.waitSuggestionWithScrolling (/tmp/e2e/dist/pageobjects/ide/Editor.js:105:16)
at Object.<anonymous> (/tmp/e2e/testsLibrary/LsTests.ts:48:22)
at Generator.next (<anonymous>)
at fulfilled (/tmp/e2e/dist/testsLibrary/LsTests.js:13:58)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
2) .NET Core test
Language server validation
Error highlighting:
TimeoutError: Exceeded maximum visibility checkings attempts, problems with 'StaleElementReferenceError' of 'By(xpath, //div[contains(@style, 'top:418px')]//div[contains(@class, 'squiggly-error')])' element
at DriverHelper.<anonymous> (/tmp/e2e/utils/DriverHelper.ts:145:15)
at Generator.throw (<anonymous>)
at rejected (/tmp/e2e/dist/utils/DriverHelper.js:17:65)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
3) .NET Core test
Language server validation
Autocomplete:
TimeoutError: Wait timed out after 92753ms
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2201:17
at ManagedPromise.invokeCallback_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:1376:14)
at TaskQueue.execute_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3084:14)
at TaskQueue.executeNext_ (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:3067:27)
at asyncRun (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2927:27)
at /tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:668:7
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
From: Task: <anonymous wait>
at scheduleWait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2188:20)
at ControlFlow.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/promise.js:2517:12)
at thenableWebDriverProxy.wait (/tmp/e2e/node_modules/selenium-webdriver/lib/webdriver.js:934:29)
at Editor.<anonymous> (/tmp/e2e/pageobjects/ide/Editor.ts:105:45)
at Generator.next (<anonymous>)
at /tmp/e2e/dist/pageobjects/ide/Editor.js:19:71
at new Promise (<anonymous>)
at __awaiter (/tmp/e2e/dist/pageobjects/ide/Editor.js:15:12)
at Editor.waitSuggestionWithScrolling (/tmp/e2e/dist/pageobjects/ide/Editor.js:105:16)
at Object.<anonymous> (/tmp/e2e/testsLibrary/LsTests.ts:57:22)
at Generator.next (<anonymous>)
at fulfilled (/tmp/e2e/dist/testsLibrary/LsTests.js:13:58)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
```

### Che version
<!-- (if workspace is running, version can be obtained with help/about menu) -->
- [ ] latest
- [x] nightly (7.18.0-SNAPSHOT)
- [ ] other: please specify
### Steps to reproduce
https://github.com/eclipse/che/blob/master/tests/e2e/tests/devfiles/DotNetCore.spec.ts#L52
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Runtime
- [ ] kubernetes (include output of `kubectl version`)
- [ ] Openshift (include output of `oc version`)
- [x] minikube 1.1.1
- [ ] minishift (include output of `minishift version` and `oc version`)
- [ ] docker-desktop + K8S (include output of `docker version` and `kubectl version`)
- [ ] other: (please specify)
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
### Installation method
- [x] chectl
* provide a full command that was used to deploy Eclipse Che (including the output)
* provide an output of `chectl version` command
- [ ] OperatorHub
- [ ] I don't know
### Environment
- [ ] my computer
- [ ] Windows
- [ ] Linux
- [ ] macOS
- [ ] Cloud
- [ ] Amazon
- [ ] Azure
- [ ] GCE
- [ ] other (please specify)
- [x] other: CRW CCI
### Eclipse Che Logs
<!-- https://www.eclipse.org/che/docs/che-7/collecting-logs-using-chectl -->
### Additional context
<!-- Add any other context about the problem here. -->
|
non_defect
|
net core devfile test is flaky on language server autocomplete validation step describe the bug net core devfile test is flaky on language serve autocomplete validation step passing pending failing net core test language server validation suggestion invoking timeouterror wait timed out after at tmp node modules selenium webdriver lib promise js at managedpromise invokecallback tmp node modules selenium webdriver lib promise js at taskqueue execute tmp node modules selenium webdriver lib promise js at taskqueue executenext tmp node modules selenium webdriver lib promise js at asyncrun tmp node modules selenium webdriver lib promise js at tmp node modules selenium webdriver lib promise js at at process tickcallback internal process next tick js from task at schedulewait tmp node modules selenium webdriver lib promise js at controlflow wait tmp node modules selenium webdriver lib promise js at thenablewebdriverproxy wait tmp node modules selenium webdriver lib webdriver js at editor tmp pageobjects ide editor ts at generator next at tmp dist pageobjects ide editor js at new promise at awaiter tmp dist pageobjects ide editor js at editor waitsuggestionwithscrolling tmp dist pageobjects ide editor js at object tmp testslibrary lstests ts at generator next at fulfilled tmp dist testslibrary lstests js at at process tickcallback internal process next tick js net core test language server validation error highlighting timeouterror exceeded maximum visibility checkings attempts problems with staleelementreferenceerror of by xpath div div element at driverhelper tmp utils driverhelper ts at generator throw at rejected tmp dist utils driverhelper js at at process tickcallback internal process next tick js net core test language server validation autocomplete timeouterror wait timed out after at tmp node modules selenium webdriver lib promise js at managedpromise invokecallback tmp node modules selenium webdriver lib promise js at taskqueue execute tmp node modules selenium webdriver lib promise js at taskqueue executenext tmp node modules selenium webdriver lib promise js at asyncrun tmp node modules selenium webdriver lib promise js at tmp node modules selenium webdriver lib promise js at at process tickcallback internal process next tick js from task at schedulewait tmp node modules selenium webdriver lib promise js at controlflow wait tmp node modules selenium webdriver lib promise js at thenablewebdriverproxy wait tmp node modules selenium webdriver lib webdriver js at editor tmp pageobjects ide editor ts at generator next at tmp dist pageobjects ide editor js at new promise at awaiter tmp dist pageobjects ide editor js at editor waitsuggestionwithscrolling tmp dist pageobjects ide editor js at object tmp testslibrary lstests ts at generator next at fulfilled tmp dist testslibrary lstests js at at process tickcallback internal process next tick js che version latest nightly snapshot other please specify steps to reproduce expected behavior runtime kubernetes include output of kubectl version openshift include output of oc version minikube minishift include output of minishift version and oc version docker desktop include output of docker version and kubectl version other please specify screenshots installation method chectl provide a full command that was used to deploy eclipse che including the output provide an output of chectl version command operatorhub i don t know environment my computer windows linux macos cloud amazon azure gce other please specify other crw cci eclipse che logs additional context
| 0
|
42,097
| 10,818,016,841
|
IssuesEvent
|
2019-11-08 11:01:10
|
netty/netty
|
https://api.github.com/repos/netty/netty
|
closed
|
deregister and re-register, ClosedChannelException is thrown
|
defect
|
Hi,
I'm doing a project on middle-ware for distributed database, and I want to use Netty as client connecting to MySQL reading data. But a problem gets me stuck for such a long time so I have to ask for help. The scenario is as below:
1. Netty dispatches SQL query to MySQL;
1. MySQL returns a bunch of resultset which might be very large so I have to prevent it from OOM;
2. When the data received exceed a user-defined high water mark threshold, I use channel.deregister() to stop the incoming data for a while;
3. The business logic consumes the data received, when it reaches a low water mark threshold, I use eventloop.register(channel) to continue to receive the remaining data;
4. Do the same thing until all data needed is received and processed.
But here the strange thing is when I run a benchmark tool with N concurrent clients executing the same SQL constantly, and the NioEventLoopGroup is with M threads created(N>M), after a while the N-M connections are shown closed, leaving M connections active to the end.
After debugging, I find during the deregister and re-register, the channel bounded to the later failed connection will be closed(seems like the SelectionKey is set invalid) , so when trying to re-register, ClosedChannelException will be thrown.
And it should be mentioned that, if I remove the deregister and re-register logic it works fine, so obviously my usage of deregister and re-register could probably be wrong in some place, could you do me a favor instructing me to locate the reason? Thank you so much in advance.
``` java
Bootstrap bootstrap = new Bootstrap();
bootstrap
.group(MySQLConnectorConfig.group)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MySQLClientDecoder(conn));
ch.pipeline().addLast(new MySQLClientHandler(conn));
ch.config().setAllocator(PooledByteBufAllocator.DEFAULT);
}
})
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.TCP_NODELAY, true)
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.SO_RCVBUF, MySQLConnectorConfig.DEFAULT_SO_RCVBUF);
```
I deregister the channel in MySQLClientDecoder, and re-register in my business logic when consuming the data received.
|
1.0
|
deregister and re-register, ClosedChannelException is thrown - Hi,
I'm doing a project on middle-ware for distributed database, and I want to use Netty as client connecting to MySQL reading data. But a problem gets me stuck for such a long time so I have to ask for help. The scenario is as below:
1. Netty dispatches SQL query to MySQL;
1. MySQL returns a bunch of resultset which might be very large so I have to prevent it from OOM;
2. When the data received exceed a user-defined high water mark threshold, I use channel.deregister() to stop the incoming data for a while;
3. The business logic consumes the data received, when it reaches a low water mark threshold, I use eventloop.register(channel) to continue to receive the remaining data;
4. Do the same thing until all data needed is received and processed.
But here the strange thing is when I run a benchmark tool with N concurrent clients executing the same SQL constantly, and the NioEventLoopGroup is with M threads created(N>M), after a while the N-M connections are shown closed, leaving M connections active to the end.
After debugging, I find during the deregister and re-register, the channel bounded to the later failed connection will be closed(seems like the SelectionKey is set invalid) , so when trying to re-register, ClosedChannelException will be thrown.
And it should be mentioned that, if I remove the deregister and re-register logic it works fine, so obviously my usage of deregister and re-register could probably be wrong in some place, could you do me a favor instructing me to locate the reason? Thank you so much in advance.
``` java
Bootstrap bootstrap = new Bootstrap();
bootstrap
.group(MySQLConnectorConfig.group)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch)
throws Exception {
ch.pipeline().addLast(new MySQLClientDecoder(conn));
ch.pipeline().addLast(new MySQLClientHandler(conn));
ch.config().setAllocator(PooledByteBufAllocator.DEFAULT);
}
})
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.TCP_NODELAY, true)
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.SO_RCVBUF, MySQLConnectorConfig.DEFAULT_SO_RCVBUF);
```
I deregister the channel in MySQLClientDecoder, and re-register in my business logic when consuming the data received.
|
defect
|
deregister and re register closedchannelexception is thrown hi i m doing a project on middle ware for distributed database and i want to use netty as client connecting to mysql reading data but a problem gets me stuck for such a long time so i have to ask for help the scenario is as below netty dispatches sql query to mysql mysql returns a bunch of resultset which might be very large so i have to prevent it from oom when the data received exceed a user defined high water mark threshold i use channel deregister to stop the incoming data for a while the business logic consumes the data received when it reaches a low water mark threshold i use eventloop register channel to continue to receive the remaining data do the same thing until all data needed is received and processed but here the strange thing is when i run a benchmark tool with n concurrent clients executing the same sql constantly and the nioeventloopgroup is with m threads created n m after a while the n m connections are shown closed leaving m connections active to the end after debugging i find during the deregister and re register the channel bounded to the later failed connection will be closed seems like the selectionkey is set invalid so when trying to re register closedchannelexception will be thrown and it should be mentioned that if i remove the deregister and re register logic it works fine so obviously my usage of deregister and re register could probably be wrong in some place could you do me a favor instructing me to locate the reason thank you so much in advance java bootstrap bootstrap new bootstrap bootstrap group mysqlconnectorconfig group channel niosocketchannel class handler new channelinitializer override protected void initchannel socketchannel ch throws exception ch pipeline addlast new mysqlclientdecoder conn ch pipeline addlast new mysqlclienthandler conn ch config setallocator pooledbytebufallocator default option channeloption so keepalive true option channeloption tcp nodelay true option channeloption so reuseaddr true option channeloption so rcvbuf mysqlconnectorconfig default so rcvbuf i deregister the channel in mysqlclientdecoder and re register in my business logic when consuming the data received
| 1
|
49,145
| 13,185,256,608
|
IssuesEvent
|
2020-08-12 21:02:00
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
copy constructor of I3Particle copies I3ParticleID (Trac #828)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/828
, reported by chaack and owned by dschultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-01-27T20:32:47",
"description": "Added particle to the new I3MCTree is difficult, if you use the copy constructors of I3Particles, as these also copy the I3ParticleID.\n\nDoing something like\n\n\n{{{\np = I3Particle()\np2 = I3Particle(p)\nt = I3MCTree()\nt.add_primary(p)\nt.append_child(p.id,p2)\n}}}\n\n\nresults in a failed assertion (insertResult.second ....)\n\nIs this intended behaviour?",
"reporter": "chaack",
"cc": "",
"resolution": "invalid",
"_ts": "1422390767401643",
"component": "combo core",
"summary": "copy constructor of I3Particle copies I3ParticleID",
"priority": "normal",
"keywords": "",
"time": "2014-12-10T13:06:34",
"milestone": "",
"owner": "dschultz",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
copy constructor of I3Particle copies I3ParticleID (Trac #828) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/828
, reported by chaack and owned by dschultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-01-27T20:32:47",
"description": "Added particle to the new I3MCTree is difficult, if you use the copy constructors of I3Particles, as these also copy the I3ParticleID.\n\nDoing something like\n\n\n{{{\np = I3Particle()\np2 = I3Particle(p)\nt = I3MCTree()\nt.add_primary(p)\nt.append_child(p.id,p2)\n}}}\n\n\nresults in a failed assertion (insertResult.second ....)\n\nIs this intended behaviour?",
"reporter": "chaack",
"cc": "",
"resolution": "invalid",
"_ts": "1422390767401643",
"component": "combo core",
"summary": "copy constructor of I3Particle copies I3ParticleID",
"priority": "normal",
"keywords": "",
"time": "2014-12-10T13:06:34",
"milestone": "",
"owner": "dschultz",
"type": "defect"
}
```
</p>
</details>
|
defect
|
copy constructor of copies trac migrated from reported by chaack and owned by dschultz json status closed changetime description added particle to the new is difficult if you use the copy constructors of as these also copy the n ndoing something like n n n np p nt nt add primary p nt append child p id n n n nresults in a failed assertion insertresult second n nis this intended behaviour reporter chaack cc resolution invalid ts component combo core summary copy constructor of copies priority normal keywords time milestone owner dschultz type defect
| 1
|
73,070
| 24,440,618,700
|
IssuesEvent
|
2022-10-06 14:22:24
|
snowplow/snowplow-python-tracker
|
https://api.github.com/repos/snowplow/snowplow-python-tracker
|
closed
|
Fix failing build in Dockerfile
|
type:defect
|
The docker build fails with error "Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist".
|
1.0
|
Fix failing build in Dockerfile - The docker build fails with error "Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist".
|
defect
|
fix failing build in dockerfile the docker build fails with error failed to download metadata for repo appstream cannot prepare internal mirrorlist no urls in mirrorlist
| 1
|
31,298
| 6,493,221,051
|
IssuesEvent
|
2017-08-21 16:07:59
|
sukona/Grapevine
|
https://api.github.com/repos/sukona/Grapevine
|
closed
|
consider change default regex pattern from (.+) to ([^/]+), in order to avoid including sub path
|
defect
|
consider change default regex pattern from (.+) to ([^/]+), in order to avoid including sub path.
line is here:
https://github.com/sukona/Grapevine/blob/54b8299e571e8005e172755090560eb5b7e4be16/src/Grapevine/Shared/ParamParser.cs#L48
**example code:**
```
[RestRoute(HttpMethod = HttpMethod.POST, PathInfo = "/controllers/[id]")]
```
**example input:**
```
/controller/2/requests
```
**example output:**
`id` = `2/requests`
**expected:**
not match
|
1.0
|
consider change default regex pattern from (.+) to ([^/]+), in order to avoid including sub path - consider change default regex pattern from (.+) to ([^/]+), in order to avoid including sub path.
line is here:
https://github.com/sukona/Grapevine/blob/54b8299e571e8005e172755090560eb5b7e4be16/src/Grapevine/Shared/ParamParser.cs#L48
**example code:**
```
[RestRoute(HttpMethod = HttpMethod.POST, PathInfo = "/controllers/[id]")]
```
**example input:**
```
/controller/2/requests
```
**example output:**
`id` = `2/requests`
**expected:**
not match
|
defect
|
consider change default regex pattern from to in order to avoid including sub path consider change default regex pattern from to in order to avoid including sub path line is here example code example input controller requests example output id requests expected not match
| 1
|
341,557
| 24,703,634,673
|
IssuesEvent
|
2022-10-19 17:09:52
|
PyThaiNLP/pythainlp
|
https://api.github.com/repos/PyThaiNLP/pythainlp
|
closed
|
Use PyThaiNLP 4.0 instead 3.2
|
documentation
|
I think we will have many changes, so we should release the next version as 4.0 instead 3.2.
|
1.0
|
Use PyThaiNLP 4.0 instead 3.2 - I think we will have many changes, so we should release the next version as 4.0 instead 3.2.
|
non_defect
|
use pythainlp instead i think we will have many changes so we should release the next version as instead
| 0
|
54,195
| 13,458,333,075
|
IssuesEvent
|
2020-09-09 10:28:24
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
Jackson versions and vulnerabilities
|
Source: Community Team: Core Type: Defect
|
Hey folks,
Hazelcast shades [Jackson 2.9.7](https://github.com/hazelcast/hazelcast/blob/master/pom.xml#L91), which currently is [about 2 years old](https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.7). There's a lot of Jackson CVEs out there (https://github.com/FasterXML/jackson-databind/issues/2814 is the latest that popped up for us), often centered around gadgets, as explained by cowtowncoder (primary Jackson maintainer) himself in [this medium article](https://medium.com/@cowtowncoder/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062).
It is possible that Hazelcast is affected by CVE-2020-24616, though it looks unlikely to me, you don't seem to be using Anteros-DBCP. It is possible that Hazelcast is affected by one of the other Jackson-targeting CVEs, which I could see being exploited in ways similar to #802.
Nevertheless and since this is a 2 y.o version anyway, would it be a reasonable goal to switch to a newer version such as 2.11 and resolve both these issues?
|
1.0
|
Jackson versions and vulnerabilities - Hey folks,
Hazelcast shades [Jackson 2.9.7](https://github.com/hazelcast/hazelcast/blob/master/pom.xml#L91), which currently is [about 2 years old](https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.9.7). There's a lot of Jackson CVEs out there (https://github.com/FasterXML/jackson-databind/issues/2814 is the latest that popped up for us), often centered around gadgets, as explained by cowtowncoder (primary Jackson maintainer) himself in [this medium article](https://medium.com/@cowtowncoder/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062).
It is possible that Hazelcast is affected by CVE-2020-24616, though it looks unlikely to me, you don't seem to be using Anteros-DBCP. It is possible that Hazelcast is affected by one of the other Jackson-targeting CVEs, which I could see being exploited in ways similar to #802.
Nevertheless and since this is a 2 y.o version anyway, would it be a reasonable goal to switch to a newer version such as 2.11 and resolve both these issues?
|
defect
|
jackson versions and vulnerabilities hey folks hazelcast shades which currently is there s a lot of jackson cves out there is the latest that popped up for us often centered around gadgets as explained by cowtowncoder primary jackson maintainer himself in it is possible that hazelcast is affected by cve though it looks unlikely to me you don t seem to be using anteros dbcp it is possible that hazelcast is affected by one of the other jackson targeting cves which i could see being exploited in ways similar to nevertheless and since this is a y o version anyway would it be a reasonable goal to switch to a newer version such as and resolve both these issues
| 1
|
67,891
| 21,264,118,125
|
IssuesEvent
|
2022-04-13 08:16:11
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Wrong alignment for thread panel footer contents in Safari
|
T-Defect Z-Platform-Specific S-Tolerable O-Occasional A-Threads
|
### Steps to reproduce
Open the thread panel on Safari.
### Outcome
#### What did you expect?
The footer contents for the Beta label and feedback button to be aligned to the right.
#### What happened instead?
It's aligned to the left.
We just need to change the value for the `justify-content: end;` declaration for `.mx_ThreadPanel .mx_BaseCard_footer` to be `justify-content: flex-end;`
- https://github.com/matrix-org/matrix-react-sdk/blob/develop/res/css/views/right_panel/_ThreadPanel.scss#L240
- https://caniuse.com/mdn-css_properties_justify-content_flex_context_start_end
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
develop.element.io
### Application version
Element version: 03ab1237ed77-react-59fda5273fc4-js-b58d09aa9a7a Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Wrong alignment for thread panel footer contents in Safari - ### Steps to reproduce
Open the thread panel on Safari.
### Outcome
#### What did you expect?
The footer contents for the Beta label and feedback button to be aligned to the right.
#### What happened instead?
It's aligned to the left.
We just need to change the value for the `justify-content: end;` declaration for `.mx_ThreadPanel .mx_BaseCard_footer` to be `justify-content: flex-end;`
- https://github.com/matrix-org/matrix-react-sdk/blob/develop/res/css/views/right_panel/_ThreadPanel.scss#L240
- https://caniuse.com/mdn-css_properties_justify-content_flex_context_start_end
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
develop.element.io
### Application version
Element version: 03ab1237ed77-react-59fda5273fc4-js-b58d09aa9a7a Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
wrong alignment for thread panel footer contents in safari steps to reproduce open the thread panel on safari outcome what did you expect the footer contents for the beta label and feedback button to be aligned to the right what happened instead it s aligned to the left we just need to change the value for the justify content end declaration for mx threadpanel mx basecard footer to be justify content flex end operating system no response browser information no response url for webapp develop element io application version element version react js olm version homeserver no response will you send logs no
| 1
|
114,735
| 11,855,215,017
|
IssuesEvent
|
2020-03-25 03:28:59
|
WordPress/auto-updates
|
https://api.github.com/repos/WordPress/auto-updates
|
opened
|
Documentation
|
documentation task
|
_Migrated from: https://github.com/audrasjb/wp-autoupdates/issues/71
Previously opened by: @jeffpaul
Original description:_
>We'll want to ensure that ahead of any submission to merge this plugin into WordPress core that we've sufficiently documented within the plugin code and within the markdown/text files in this plugin. Assuming there is a feature plugin merge proposal for this plugin, we'll want to ensure we capture items that should be included in the WordPress Core HelpHub documentation site as well.
|
1.0
|
Documentation - _Migrated from: https://github.com/audrasjb/wp-autoupdates/issues/71
Previously opened by: @jeffpaul
Original description:_
>We'll want to ensure that ahead of any submission to merge this plugin into WordPress core that we've sufficiently documented within the plugin code and within the markdown/text files in this plugin. Assuming there is a feature plugin merge proposal for this plugin, we'll want to ensure we capture items that should be included in the WordPress Core HelpHub documentation site as well.
|
non_defect
|
documentation migrated from previously opened by jeffpaul original description we ll want to ensure that ahead of any submission to merge this plugin into wordpress core that we ve sufficiently documented within the plugin code and within the markdown text files in this plugin assuming there is a feature plugin merge proposal for this plugin we ll want to ensure we capture items that should be included in the wordpress core helphub documentation site as well
| 0
|
6,245
| 2,610,224,021
|
IssuesEvent
|
2015-02-26 19:11:00
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
шоу уродов господина араси
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Бертольд Голубев'''
День добрый никак не могу найти .шоу уродов
господина араси. как то выкладывали уже
'''Варлам Денисов'''
Вот хороший сайт где можно скачать
http://bit.ly/17ZRkLm
'''Болеслав Кабанов'''
Спасибо вроде то но просит телефон вводить
'''Аскольд Денисов'''
Не это не влияет на баланс
'''Абрам Соловьёв'''
Не это не влияет на баланс
Информация о файле: шоу уродов господина
араси
Загружен: В этом месяце
Скачан раз: 952
Рейтинг: 1335
Средняя скорость скачивания: 820
Похожих файлов: 23
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 18 Dec 2013 at 8:30
|
1.0
|
шоу уродов господина араси - ```
'''Бертольд Голубев'''
День добрый никак не могу найти .шоу уродов
господина араси. как то выкладывали уже
'''Варлам Денисов'''
Вот хороший сайт где можно скачать
http://bit.ly/17ZRkLm
'''Болеслав Кабанов'''
Спасибо вроде то но просит телефон вводить
'''Аскольд Денисов'''
Не это не влияет на баланс
'''Абрам Соловьёв'''
Не это не влияет на баланс
Информация о файле: шоу уродов господина
араси
Загружен: В этом месяце
Скачан раз: 952
Рейтинг: 1335
Средняя скорость скачивания: 820
Похожих файлов: 23
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 18 Dec 2013 at 8:30
|
defect
|
шоу уродов господина араси бертольд голубев день добрый никак не могу найти шоу уродов господина араси как то выкладывали уже варлам денисов вот хороший сайт где можно скачать болеслав кабанов спасибо вроде то но просит телефон вводить аскольд денисов не это не влияет на баланс абрам соловьёв не это не влияет на баланс информация о файле шоу уродов господина араси загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
| 1
|
291,751
| 25,172,324,346
|
IssuesEvent
|
2022-11-11 05:18:24
|
crispindeity/issue-tracker
|
https://api.github.com/repos/crispindeity/issue-tracker
|
closed
|
Add Milestone Controller Integration Test
|
📬 API ✅ Test
|
# Description
- Milestone Controller 통합 테스트 작성
# Progress
- [x] 302 Redirect Response
- [x] Long save()
- [x] ResponsewReadAllMilestone read()
- [x] ResponseReadAllMilestonesDto readOpenAndMilestones()
- [x] ResponseMilestoneDto detail()
- [x] void delete()
- [x] Long update()
|
1.0
|
Add Milestone Controller Integration Test - # Description
- Milestone Controller 통합 테스트 작성
# Progress
- [x] 302 Redirect Response
- [x] Long save()
- [x] ResponsewReadAllMilestone read()
- [x] ResponseReadAllMilestonesDto readOpenAndMilestones()
- [x] ResponseMilestoneDto detail()
- [x] void delete()
- [x] Long update()
|
non_defect
|
add milestone controller integration test description milestone controller 통합 테스트 작성 progress redirect response long save responsewreadallmilestone read responsereadallmilestonesdto readopenandmilestones responsemilestonedto detail void delete long update
| 0
|
31,910
| 8,774,196,993
|
IssuesEvent
|
2018-12-18 19:05:10
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
TensorFlow docker image JupyterNotebook does not start
|
stat:awaiting response type:build/install
|
1. I built the docker image using Dockerfile present in `tensorflow/tensorflow/tools/docker/Dockerfile`, When I run the image it does gives me url for notebook
```
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://(e01f14b69e8e or 127.0.0.1):8888/?token=da541XXXXXXXXXXXXXXXXXXXXXX
```
However opening the url fails with error
`127.0.0.1 didn’t send any data.
`
Following is the port-mapping output
```
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug1: Connection to port 8888 forwarding to localhost port 8888 requested.
debug1: channel 2: new [direct-tcpip]
debug1: Connection to port 8888 forwarding to localhost port 8888 requested.
debug1: channel 3: new [direct-tcpip]
channel 2: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
debug1: channel 2: free: direct-tcpip: listening port 8888 for localhost port 8888, connect from 127.0.0.1 port 51331 to 127.0.0.1 port 8888, nchannels 4
```
|
1.0
|
TensorFlow docker image JupyterNotebook does not start - 1. I built the docker image using Dockerfile present in `tensorflow/tensorflow/tools/docker/Dockerfile`, When I run the image it does gives me url for notebook
```
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://(e01f14b69e8e or 127.0.0.1):8888/?token=da541XXXXXXXXXXXXXXXXXXXXXX
```
However opening the url fails with error
`127.0.0.1 didn’t send any data.
`
Following is the port-mapping output
```
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug1: Connection to port 8888 forwarding to localhost port 8888 requested.
debug1: channel 2: new [direct-tcpip]
debug1: Connection to port 8888 forwarding to localhost port 8888 requested.
debug1: channel 3: new [direct-tcpip]
channel 2: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
debug1: channel 2: free: direct-tcpip: listening port 8888 for localhost port 8888, connect from 127.0.0.1 port 51331 to 127.0.0.1 port 8888, nchannels 4
```
|
non_defect
|
tensorflow docker image jupyternotebook does not start i built the docker image using dockerfile present in tensorflow tensorflow tools docker dockerfile when i run the image it does gives me url for notebook copy paste this url into your browser when you connect for the first time to login with a token or token however opening the url fails with error didn’t send any data following is the port mapping output client input global request rtype hostkeys openssh com want reply connection to port forwarding to localhost port requested channel new connection to port forwarding to localhost port requested channel new channel open failed connect failed connection refused channel open failed connect failed connection refused channel free direct tcpip listening port for localhost port connect from port to port nchannels
| 0
|
606,156
| 18,756,041,032
|
IssuesEvent
|
2021-11-05 10:52:40
|
PlaceOS/user-interfaces
|
https://api.github.com/repos/PlaceOS/user-interfaces
|
closed
|
workplace/dashboard > Your Bookings: Clicking on a booking tile does nothing, but should show the booking details (/schedule/view)
|
Type: Enhancement Priority: Medium
|
workplace/dashboard > Your Bookings: Clicking on a booking tile does nothing, but should show the booking details (/schedule/view)

|
1.0
|
workplace/dashboard > Your Bookings: Clicking on a booking tile does nothing, but should show the booking details (/schedule/view) - workplace/dashboard > Your Bookings: Clicking on a booking tile does nothing, but should show the booking details (/schedule/view)

|
non_defect
|
workplace dashboard your bookings clicking on a booking tile does nothing but should show the booking details schedule view workplace dashboard your bookings clicking on a booking tile does nothing but should show the booking details schedule view
| 0
|
85,110
| 16,601,969,207
|
IssuesEvent
|
2021-06-01 20:51:13
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
closed
|
TypeScript go to definition just goes to import
|
bug team/code-intelligence
|
https://github.com/sourcegraph/sourcegraph/pull/14256/files#diff-389012daf6ffbfb95a148bde9e6800bbL145

Go to definition on `LinkOrButton`, it just jumps to the import in the same file, instead of to https://sourcegraph.com/github.com/sourcegraph/sourcegraph@b2cd5ddf5a013b12df9691c670d8644dcf26a936/-/blob/shared/src/components/LinkOrButton.tsx#L46
|
1.0
|
TypeScript go to definition just goes to import - https://github.com/sourcegraph/sourcegraph/pull/14256/files#diff-389012daf6ffbfb95a148bde9e6800bbL145

Go to definition on `LinkOrButton`, it just jumps to the import in the same file, instead of to https://sourcegraph.com/github.com/sourcegraph/sourcegraph@b2cd5ddf5a013b12df9691c670d8644dcf26a936/-/blob/shared/src/components/LinkOrButton.tsx#L46
|
non_defect
|
typescript go to definition just goes to import go to definition on linkorbutton it just jumps to the import in the same file instead of to
| 0
|
128,819
| 10,552,552,175
|
IssuesEvent
|
2019-10-03 15:22:29
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
flannel --iface is broken on k8s 1.16
|
[zube]: To Test kind/bug team/ca
|
due to tabs being at:
https://github.com/rancher/kontainer-driver-metadata/blob/e6d8f524e4456635d1d37a05f2e454a468ed11b6/rke/templates/flannel.go#L642-L644
the `flannel_iface` configuration and flannel is broken when the rke cluster.yaml is:
```yaml
cluster_name: example
kubernetes_version: v1.16.0-beta.1-rancher2-1
network:
plugin: flannel
options:
flannel_iface: eth1
```
the bug was introduced at https://github.com/rancher/kontainer-driver-metadata/commit/10592a4f69a2c3b30855e8e0cf5f4a6ceefeba96.
|
1.0
|
flannel --iface is broken on k8s 1.16 - due to tabs being at:
https://github.com/rancher/kontainer-driver-metadata/blob/e6d8f524e4456635d1d37a05f2e454a468ed11b6/rke/templates/flannel.go#L642-L644
the `flannel_iface` configuration and flannel is broken when the rke cluster.yaml is:
```yaml
cluster_name: example
kubernetes_version: v1.16.0-beta.1-rancher2-1
network:
plugin: flannel
options:
flannel_iface: eth1
```
the bug was introduced at https://github.com/rancher/kontainer-driver-metadata/commit/10592a4f69a2c3b30855e8e0cf5f4a6ceefeba96.
|
non_defect
|
flannel iface is broken on due to tabs being at the flannel iface configuration and flannel is broken when the rke cluster yaml is yaml cluster name example kubernetes version beta network plugin flannel options flannel iface the bug was introduced at
| 0
|
55,977
| 13,728,320,069
|
IssuesEvent
|
2020-10-04 11:05:05
|
jenkins-x/terraform-aws-eks-jx
|
https://api.github.com/repos/jenkins-x/terraform-aws-eks-jx
|
closed
|
Quota exceeding for PoliciesPerUser for CI builds
|
area/build kind/bug lifecycle/rotten
|
We keep seeing the following error after some time running pull request builds:
```
Error: Error attaching policy arn:aws:iam::296178596335:policy/vault_us-east-1-2020050616512586470000000c to IAM User **-bdd-test: LimitExceeded: Cannot exceed quota for PoliciesPerUser: 10
status code: 409, request id: 53cf51d8-3234-47b7-90bd-f75efe7daa76
```
After manually cleaning up policies the builds work again until the limit is exceeded again.
It seems something does not get cleaned up properly. If the policies in question are created by Terraform, then they should be deleted on `terraform destroy`, right? If for some reason the policies cannot be deleted by Terraform, we might need to add some manual cleanup on top of the `terraform destroy`.
|
1.0
|
Quota exceeding for PoliciesPerUser for CI builds - We keep seeing the following error after some time running pull request builds:
```
Error: Error attaching policy arn:aws:iam::296178596335:policy/vault_us-east-1-2020050616512586470000000c to IAM User **-bdd-test: LimitExceeded: Cannot exceed quota for PoliciesPerUser: 10
status code: 409, request id: 53cf51d8-3234-47b7-90bd-f75efe7daa76
```
After manually cleaning up policies the builds work again until the limit is exceeded again.
It seems something does not get cleaned up properly. If the policies in question are created by Terraform, then they should be deleted on `terraform destroy`, right? If for some reason the policies cannot be deleted by Terraform, we might need to add some manual cleanup on top of the `terraform destroy`.
|
non_defect
|
quota exceeding for policiesperuser for ci builds we keep seeing the following error after some time running pull request builds error error attaching policy arn aws iam policy vault us east to iam user bdd test limitexceeded cannot exceed quota for policiesperuser status code request id after manually cleaning up policies the builds work again until the limit is exceeded again it seems something does not get cleaned up properly if the policies in question are created by terraform then they should be deleted on terraform destroy right if for some reason the policies cannot be deleted by terraform we might need to add some manual cleanup on top of the terraform destroy
| 0
|
57,804
| 16,076,133,645
|
IssuesEvent
|
2021-04-25 11:43:05
|
TykTechnologies/tyk-operator
|
https://api.github.com/repos/TykTechnologies/tyk-operator
|
reopened
|
Security Policy "configured" even without changes
|
defect
|
Applying a SecurityPolicy resource, even without changes, results in a reconcile loop each time.
Version Operator :`v0.4.1`
```
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected.yaml
apidefinition.tyk.tyk.io/httpbin created
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected_policy.yaml
securitypolicy.tyk.tyk.io/httpbin created
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected.yaml
apidefinition.tyk.tyk.io/httpbin unchanged
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected_policy.yaml
securitypolicy.tyk.tyk.io/httpbin configured
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected_policy.yaml
securitypolicy.tyk.tyk.io/httpbin configured
```
Tyk Operator logs each time the `apply` happens:
```
{"level":"info","ts":1612198820.8060765,"logger":"securitypolicy-resource","msg":"default","name":"httpbin"}
{"level":"info","ts":1612198820.811275,"logger":"securitypolicy-resource","msg":"validate update","name":"httpbin"}
{"level":"info","ts":1612198820.8208332,"logger":"controllers.SecurityPolicy","msg":"Reconciling SecurityPolicy instance","SecurityPolicy":"default/httpbin"}
{"level":"info","ts":1612198820.8208778,"logger":"controllers.SecurityPolicy","msg":"updating access rights"}
{"level":"info","ts":1612198820.9665265,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"GET","URL":"http://a87905fcebea.ngrok.io/api/apis","Status":200}
{"level":"info","ts":1612198820.9675589,"logger":"controllers.SecurityPolicy","msg":"All api's","Count":1}
{"level":"info","ts":1612198820.967609,"logger":"controllers.SecurityPolicy","msg":"Updating policy"}
{"level":"info","ts":1612198821.0645616,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"PUT","URL":"http://a87905fcebea.ngrok.io/api/portal/policies/60183333c8dcad0001f8b52f","Status":200}
{"level":"info","ts":1612198821.064659,"logger":"controllers.SecurityPolicy","msg":"Successfully updated Policy"}
{"level":"info","ts":1612198821.071407,"logger":"securitypolicy-resource","msg":"default","name":"httpbin"}
{"level":"info","ts":1612198821.0737884,"logger":"securitypolicy-resource","msg":"validate update","name":"httpbin"}
{"level":"info","ts":1612198821.0856066,"logger":"controllers.SecurityPolicy","msg":"Done reconcile","Op":"updated"}
{"level":"info","ts":1612198821.0857725,"logger":"controllers.SecurityPolicy","msg":"Reconciling SecurityPolicy instance","SecurityPolicy":"default/httpbin"}
{"level":"info","ts":1612198821.0858583,"logger":"controllers.SecurityPolicy","msg":"updating access rights"}
{"level":"info","ts":1612198821.1808848,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"GET","URL":"http://a87905fcebea.ngrok.io/api/apis","Status":200}
{"level":"info","ts":1612198821.181176,"logger":"controllers.SecurityPolicy","msg":"All api's","Count":1}
{"level":"info","ts":1612198821.181188,"logger":"controllers.SecurityPolicy","msg":"Updating policy"}
{"level":"info","ts":1612198821.2743676,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"PUT","URL":"http://a87905fcebea.ngrok.io/api/portal/policies/60183333c8dcad0001f8b52f","Status":200}
{"level":"info","ts":1612198821.2744665,"logger":"controllers.SecurityPolicy","msg":"Successfully updated Policy"}
{"level":"info","ts":1612198821.274689,"logger":"controllers.SecurityPolicy","msg":"Done reconcile","Op":"unchanged"}
```
|
1.0
|
Security Policy "configured" even without changes - Applying a SecurityPolicy resource, even without changes, results in a reconcile loop each time.
Version Operator :`v0.4.1`
```
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected.yaml
apidefinition.tyk.tyk.io/httpbin created
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected_policy.yaml
securitypolicy.tyk.tyk.io/httpbin created
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected.yaml
apidefinition.tyk.tyk.io/httpbin unchanged
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected_policy.yaml
securitypolicy.tyk.tyk.io/httpbin configured
➜ tyk-operator git:(f54cfb5) ✗ k apply -f ./config/samples/httpbin_protected_policy.yaml
securitypolicy.tyk.tyk.io/httpbin configured
```
Tyk Operator logs each time the `apply` happens:
```
{"level":"info","ts":1612198820.8060765,"logger":"securitypolicy-resource","msg":"default","name":"httpbin"}
{"level":"info","ts":1612198820.811275,"logger":"securitypolicy-resource","msg":"validate update","name":"httpbin"}
{"level":"info","ts":1612198820.8208332,"logger":"controllers.SecurityPolicy","msg":"Reconciling SecurityPolicy instance","SecurityPolicy":"default/httpbin"}
{"level":"info","ts":1612198820.8208778,"logger":"controllers.SecurityPolicy","msg":"updating access rights"}
{"level":"info","ts":1612198820.9665265,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"GET","URL":"http://a87905fcebea.ngrok.io/api/apis","Status":200}
{"level":"info","ts":1612198820.9675589,"logger":"controllers.SecurityPolicy","msg":"All api's","Count":1}
{"level":"info","ts":1612198820.967609,"logger":"controllers.SecurityPolicy","msg":"Updating policy"}
{"level":"info","ts":1612198821.0645616,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"PUT","URL":"http://a87905fcebea.ngrok.io/api/portal/policies/60183333c8dcad0001f8b52f","Status":200}
{"level":"info","ts":1612198821.064659,"logger":"controllers.SecurityPolicy","msg":"Successfully updated Policy"}
{"level":"info","ts":1612198821.071407,"logger":"securitypolicy-resource","msg":"default","name":"httpbin"}
{"level":"info","ts":1612198821.0737884,"logger":"securitypolicy-resource","msg":"validate update","name":"httpbin"}
{"level":"info","ts":1612198821.0856066,"logger":"controllers.SecurityPolicy","msg":"Done reconcile","Op":"updated"}
{"level":"info","ts":1612198821.0857725,"logger":"controllers.SecurityPolicy","msg":"Reconciling SecurityPolicy instance","SecurityPolicy":"default/httpbin"}
{"level":"info","ts":1612198821.0858583,"logger":"controllers.SecurityPolicy","msg":"updating access rights"}
{"level":"info","ts":1612198821.1808848,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"GET","URL":"http://a87905fcebea.ngrok.io/api/apis","Status":200}
{"level":"info","ts":1612198821.181176,"logger":"controllers.SecurityPolicy","msg":"All api's","Count":1}
{"level":"info","ts":1612198821.181188,"logger":"controllers.SecurityPolicy","msg":"Updating policy"}
{"level":"info","ts":1612198821.2743676,"logger":"controllers.SecurityPolicy","msg":"Call","Method":"PUT","URL":"http://a87905fcebea.ngrok.io/api/portal/policies/60183333c8dcad0001f8b52f","Status":200}
{"level":"info","ts":1612198821.2744665,"logger":"controllers.SecurityPolicy","msg":"Successfully updated Policy"}
{"level":"info","ts":1612198821.274689,"logger":"controllers.SecurityPolicy","msg":"Done reconcile","Op":"unchanged"}
```
|
defect
|
security policy configured even without changes applying a securitypolicy resource even without changes results in a reconcile loop each time version operator ➜ tyk operator git ✗ k apply f config samples httpbin protected yaml apidefinition tyk tyk io httpbin created ➜ tyk operator git ✗ k apply f config samples httpbin protected policy yaml securitypolicy tyk tyk io httpbin created ➜ tyk operator git ✗ k apply f config samples httpbin protected yaml apidefinition tyk tyk io httpbin unchanged ➜ tyk operator git ✗ k apply f config samples httpbin protected policy yaml securitypolicy tyk tyk io httpbin configured ➜ tyk operator git ✗ k apply f config samples httpbin protected policy yaml securitypolicy tyk tyk io httpbin configured tyk operator logs each time the apply happens level info ts logger securitypolicy resource msg default name httpbin level info ts logger securitypolicy resource msg validate update name httpbin level info ts logger controllers securitypolicy msg reconciling securitypolicy instance securitypolicy default httpbin level info ts logger controllers securitypolicy msg updating access rights level info ts logger controllers securitypolicy msg call method get url level info ts logger controllers securitypolicy msg all api s count level info ts logger controllers securitypolicy msg updating policy level info ts logger controllers securitypolicy msg call method put url level info ts logger controllers securitypolicy msg successfully updated policy level info ts logger securitypolicy resource msg default name httpbin level info ts logger securitypolicy resource msg validate update name httpbin level info ts logger controllers securitypolicy msg done reconcile op updated level info ts logger controllers securitypolicy msg reconciling securitypolicy instance securitypolicy default httpbin level info ts logger controllers securitypolicy msg updating access rights level info ts logger controllers securitypolicy msg call method get url level info ts logger controllers securitypolicy msg all api s count level info ts logger controllers securitypolicy msg updating policy level info ts logger controllers securitypolicy msg call method put url level info ts logger controllers securitypolicy msg successfully updated policy level info ts logger controllers securitypolicy msg done reconcile op unchanged
| 1
|
52,249
| 13,211,411,998
|
IssuesEvent
|
2020-08-15 22:57:35
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation (Trac #1916)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1916">https://code.icecube.wisc.edu/projects/icecube/ticket/1916</a>, reported by yiqian.xuand owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-12T20:28:04",
"_ts": "1550003284179803",
"description": "2013 data has SplitInIcePulsesTimeRange in the frame, but 2013 simulation doesn't. ",
"reporter": "yiqian.xu",
"cc": "david.schultz, olivas",
"resolution": "fixed",
"time": "2016-12-01T17:17:20",
"component": "combo simulation",
"summary": "[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation",
"priority": "normal",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation (Trac #1916) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1916">https://code.icecube.wisc.edu/projects/icecube/ticket/1916</a>, reported by yiqian.xuand owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-12T20:28:04",
"_ts": "1550003284179803",
"description": "2013 data has SplitInIcePulsesTimeRange in the frame, but 2013 simulation doesn't. ",
"reporter": "yiqian.xu",
"cc": "david.schultz, olivas",
"resolution": "fixed",
"time": "2016-12-01T17:17:20",
"component": "combo simulation",
"summary": "[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation",
"priority": "normal",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
|
defect
|
missing time range for splitinicepulses in simulation trac migrated from json status closed changetime ts description data has splitinicepulsestimerange in the frame but simulation doesn t reporter yiqian xu cc david schultz olivas resolution fixed time component combo simulation summary missing time range for splitinicepulses in simulation priority normal keywords milestone vernal equinox owner juancarlos type defect
| 1
|
5,200
| 2,610,183,058
|
IssuesEvent
|
2015-02-26 18:58:18
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
详解怎么祛色斑最好
|
auto-migrated Priority-Medium Type-Defect
|
```
《摘要》
不要想太多,长大的自己多会迷失在人生路上,去追求一种��
�仰。平凡也好,其他也罢,人各有见,疏而不同。当完成了�
��中所想,可以稍微调整下时间的信仰,好比—休息。每一天
,见到的,听到的,都有所不同。无论是失落的,还是快乐��
�,调整下心态,坏事也可以成为一件好事。我想如果不是因�
��脸上的雀斑我也不会认识他,我一生的最爱!怎么祛色斑最
好,
《客户案例》
张小姐 26岁<br>
我的黄褐斑是遗传的,主要分布在颧骨,鼻子上也有一��
�。因为整体肤色比较白,斑也就更加明显。大家应该都看过�
��过片里,很多小孩就是很多的黄褐斑,我就是那样的。连照
相都看的出来,我的相片都要用PS处理下才可以。<br>
为了把脸上的黄褐斑去掉,我到处寻找祛斑产品,可是��
�无果。为了找到好的祛斑产品,我花费了不少心思和金钱。�
��来听说遗传的黄褐斑根本是祛不掉的,只好放弃了。我只能
是在日常生活中更加注重保养而已。防晒、注意饮食,多水��
�和蔬菜,注意睡眠等等。我就一直这样保养着,当然脸上的�
��褐斑也一直靠防晒乳液或是粉底遮掩着。直到有一天回老家
走亲戚,我的一个远房表姐向我推荐「黛芙薇尔精华液」。��
�个表姐也是好心,一开始不怎么相信的我也上网查看了下。�
��想到还真的是不错的产品,有人也是遗传性的黄褐斑,用了
这个产品就好了。一直以为我的黄褐斑是祛不掉的,这下子��
�了信心。<br>
我又打了网站上的专家热线进行咨询,对这个产品有了��
�一步的了解,该产品不仅治标更重要的是还能治本,并且安�
��无任何毒副作用。专家告诉我像我这种遗传的黄褐斑也是可
以治愈的,于是我满怀期待定购了一个周期的产品想着先试��
�看。<br>
用完一个周期,还真是有效果,虽然不是太明显不过黄��
�斑真的是淡化了,甚至皮肤都光滑了些。看到这么好的效果�
��我当然有信心了,就再定购了一个周期继续使用。完之后的
效果让我更加有信心。然而我还是有些许的担心,担心这样��
�效果不能持久,毕竟我也花了不少金钱和时间在这上面。<br>
最终时间给了我最好的见证,一年多的时间,我还是像��
�前一样比较注重皮肤的保养,黄褐斑再也没有出现过,我在�
��相也不需要用PS了!
阅读了怎么祛色斑最好,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎么祛色斑最好,同时为您分享祛斑小方法
不急躁不忧郁,保持平和的心态,良好的情绪。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:59
|
1.0
|
详解怎么祛色斑最好 - ```
《摘要》
不要想太多,长大的自己多会迷失在人生路上,去追求一种��
�仰。平凡也好,其他也罢,人各有见,疏而不同。当完成了�
��中所想,可以稍微调整下时间的信仰,好比—休息。每一天
,见到的,听到的,都有所不同。无论是失落的,还是快乐��
�,调整下心态,坏事也可以成为一件好事。我想如果不是因�
��脸上的雀斑我也不会认识他,我一生的最爱!怎么祛色斑最
好,
《客户案例》
张小姐 26岁<br>
我的黄褐斑是遗传的,主要分布在颧骨,鼻子上也有一��
�。因为整体肤色比较白,斑也就更加明显。大家应该都看过�
��过片里,很多小孩就是很多的黄褐斑,我就是那样的。连照
相都看的出来,我的相片都要用PS处理下才可以。<br>
为了把脸上的黄褐斑去掉,我到处寻找祛斑产品,可是��
�无果。为了找到好的祛斑产品,我花费了不少心思和金钱。�
��来听说遗传的黄褐斑根本是祛不掉的,只好放弃了。我只能
是在日常生活中更加注重保养而已。防晒、注意饮食,多水��
�和蔬菜,注意睡眠等等。我就一直这样保养着,当然脸上的�
��褐斑也一直靠防晒乳液或是粉底遮掩着。直到有一天回老家
走亲戚,我的一个远房表姐向我推荐「黛芙薇尔精华液」。��
�个表姐也是好心,一开始不怎么相信的我也上网查看了下。�
��想到还真的是不错的产品,有人也是遗传性的黄褐斑,用了
这个产品就好了。一直以为我的黄褐斑是祛不掉的,这下子��
�了信心。<br>
我又打了网站上的专家热线进行咨询,对这个产品有了��
�一步的了解,该产品不仅治标更重要的是还能治本,并且安�
��无任何毒副作用。专家告诉我像我这种遗传的黄褐斑也是可
以治愈的,于是我满怀期待定购了一个周期的产品想着先试��
�看。<br>
用完一个周期,还真是有效果,虽然不是太明显不过黄��
�斑真的是淡化了,甚至皮肤都光滑了些。看到这么好的效果�
��我当然有信心了,就再定购了一个周期继续使用。完之后的
效果让我更加有信心。然而我还是有些许的担心,担心这样��
�效果不能持久,毕竟我也花了不少金钱和时间在这上面。<br>
最终时间给了我最好的见证,一年多的时间,我还是像��
�前一样比较注重皮肤的保养,黄褐斑再也没有出现过,我在�
��相也不需要用PS了!
阅读了怎么祛色斑最好,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎么祛色斑最好,同时为您分享祛斑小方法
不急躁不忧郁,保持平和的心态,良好的情绪。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:59
|
defect
|
详解怎么祛色斑最好 《摘要》 不要想太多,长大的自己多会迷失在人生路上,去追求一种�� �仰。平凡也好,其他也罢,人各有见,疏而不同。当完成了� ��中所想,可以稍微调整下时间的信仰,好比—休息。每一天 ,见到的,听到的,都有所不同。无论是失落的,还是快乐�� �,调整下心态,坏事也可以成为一件好事。我想如果不是因� ��脸上的雀斑我也不会认识他,我一生的最爱!怎么祛色斑最 好, 《客户案例》 张小姐 我的黄褐斑是遗传的,主要分布在颧骨,鼻子上也有一�� �。因为整体肤色比较白,斑也就更加明显。大家应该都看过� ��过片里,很多小孩就是很多的黄褐斑,我就是那样的。连照 相都看的出来,我的相片都要用ps处理下才可以。 为了把脸上的黄褐斑去掉,我到处寻找祛斑产品,可是�� �无果。为了找到好的祛斑产品,我花费了不少心思和金钱。� ��来听说遗传的黄褐斑根本是祛不掉的,只好放弃了。我只能 是在日常生活中更加注重保养而已。防晒、注意饮食,多水�� �和蔬菜,注意睡眠等等。我就一直这样保养着,当然脸上的� ��褐斑也一直靠防晒乳液或是粉底遮掩着。直到有一天回老家 走亲戚,我的一个远房表姐向我推荐「黛芙薇尔精华液」。�� �个表姐也是好心,一开始不怎么相信的我也上网查看了下。� ��想到还真的是不错的产品,有人也是遗传性的黄褐斑,用了 这个产品就好了。一直以为我的黄褐斑是祛不掉的,这下子�� �了信心。 我又打了网站上的专家热线进行咨询,对这个产品有了�� �一步的了解,该产品不仅治标更重要的是还能治本,并且安� ��无任何毒副作用。专家告诉我像我这种遗传的黄褐斑也是可 以治愈的,于是我满怀期待定购了一个周期的产品想着先试�� �看。 用完一个周期,还真是有效果,虽然不是太明显不过黄�� �斑真的是淡化了,甚至皮肤都光滑了些。看到这么好的效果� ��我当然有信心了,就再定购了一个周期继续使用。完之后的 效果让我更加有信心。然而我还是有些许的担心,担心这样�� �效果不能持久,毕竟我也花了不少金钱和时间在这上面。 最终时间给了我最好的见证,一年多的时间,我还是像�� �前一样比较注重皮肤的保养,黄褐斑再也没有出现过,我在� ��相也不需要用ps了 阅读了怎么祛色斑最好,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么祛色斑最好,同时为您分享祛斑小方法 不急躁不忧郁,保持平和的心态,良好的情绪。 original issue reported on code google com by additive gmail com on jul at
| 1
|
168,185
| 13,065,228,692
|
IssuesEvent
|
2020-07-30 19:26:57
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
"failed to run channelserver" error seen in rancher logs
|
[zube]: To Test alpha-priority/0 area/import-rke2 kind/bug-qa
|
**What kind of request is this (question/bug/enhancement/feature request):** bug
**Steps to reproduce (least amount of steps as possible):**
On master-head - commit id: `e20f472d4`
following error is seen in the rancher logs:
```
2020/07/28 18:19:50 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2020/07/28 18:19:50 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
Incorrect Usage. flag provided but not defined: -path-prefix
NAME:
Channel Server - A new cli application
USAGE:
channelserver [global options] command [command options] [arguments...]
VERSION:
v0.0.0-dev (HEAD)
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--url value (default: "channels.yaml") [$URL]
--config-key value [$SUBKEY]
--refresh-interval value (default: "15m") [$REFRESH_INTERVAL]
--listen-address value (default: "0.0.0.0:8080") [$LISTEN_ADDRESS]
--channel-server-version value [$CHANNEL_SERVER_VERSION]
--help, -h show help
--version, -v print the version
time="2020-07-28T18:20:13Z" level=fatal msg="flag provided but not defined: -path-prefix"
2020/07/28 18:20:13 [INFO] failed to run channelserver: exit status 1
Incorrect Usage. flag provided but not defined: -path-prefix
NAME:
Channel Server - A new cli application
USAGE:
channelserver [global options] command [command options] [arguments...]
VERSION:
v0.0.0-dev (HEAD)
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--url value (default: "channels.yaml") [$URL]
--config-key value [$SUBKEY]
--refresh-interval value (default: "15m") [$REFRESH_INTERVAL]
--listen-address value (default: "0.0.0.0:8080") [$LISTEN_ADDRESS]
--channel-server-version value [$CHANNEL_SERVER_VERSION]
--help, -h show help
--version, -v print the version
time="2020-07-28T18:21:08Z" level=fatal msg="flag provided but not defined: -path-prefix"
2020/07/28 18:21:08 [INFO] failed to run channelserver: exit status 1
```
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head - commit id: `e20f472d4`
- Installation option (single install/HA): single
|
1.0
|
"failed to run channelserver" error seen in rancher logs - **What kind of request is this (question/bug/enhancement/feature request):** bug
**Steps to reproduce (least amount of steps as possible):**
On master-head - commit id: `e20f472d4`
following error is seen in the rancher logs:
```
2020/07/28 18:19:50 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2020/07/28 18:19:50 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
Incorrect Usage. flag provided but not defined: -path-prefix
NAME:
Channel Server - A new cli application
USAGE:
channelserver [global options] command [command options] [arguments...]
VERSION:
v0.0.0-dev (HEAD)
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--url value (default: "channels.yaml") [$URL]
--config-key value [$SUBKEY]
--refresh-interval value (default: "15m") [$REFRESH_INTERVAL]
--listen-address value (default: "0.0.0.0:8080") [$LISTEN_ADDRESS]
--channel-server-version value [$CHANNEL_SERVER_VERSION]
--help, -h show help
--version, -v print the version
time="2020-07-28T18:20:13Z" level=fatal msg="flag provided but not defined: -path-prefix"
2020/07/28 18:20:13 [INFO] failed to run channelserver: exit status 1
Incorrect Usage. flag provided but not defined: -path-prefix
NAME:
Channel Server - A new cli application
USAGE:
channelserver [global options] command [command options] [arguments...]
VERSION:
v0.0.0-dev (HEAD)
COMMANDS:
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--url value (default: "channels.yaml") [$URL]
--config-key value [$SUBKEY]
--refresh-interval value (default: "15m") [$REFRESH_INTERVAL]
--listen-address value (default: "0.0.0.0:8080") [$LISTEN_ADDRESS]
--channel-server-version value [$CHANNEL_SERVER_VERSION]
--help, -h show help
--version, -v print the version
time="2020-07-28T18:21:08Z" level=fatal msg="flag provided but not defined: -path-prefix"
2020/07/28 18:21:08 [INFO] failed to run channelserver: exit status 1
```
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head - commit id: `e20f472d4`
- Installation option (single install/HA): single
|
non_defect
|
failed to run channelserver error seen in rancher logs what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible on master head commit id following error is seen in the rancher logs kontainerdriver amazonelasticcontainerservice stopped dynamic schema for kontainerdriver amazonelasticcontainerservice updating incorrect usage flag provided but not defined path prefix name channel server a new cli application usage channelserver command version dev head commands help h shows a list of commands or help for one command global options url value default channels yaml config key value refresh interval value default listen address value default channel server version value help h show help version v print the version time level fatal msg flag provided but not defined path prefix failed to run channelserver exit status incorrect usage flag provided but not defined path prefix name channel server a new cli application usage channelserver command version dev head commands help h shows a list of commands or help for one command global options url value default channels yaml config key value refresh interval value default listen address value default channel server version value help h show help version v print the version time level fatal msg flag provided but not defined path prefix failed to run channelserver exit status other details that may be helpful environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui master head commit id installation option single install ha single
| 0
|
78,703
| 27,722,449,776
|
IssuesEvent
|
2023-03-14 21:55:27
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
PolygonConcentricCircleMeshGenerator
|
T: defect P: normal
|
## Bug Description
In the PolygonConcentricMeshGenerator when the user defines: outward_interface_boundary_names, the naming of the interfaces is not done properly. (I would assume similar behavior with inward_interface_boundary_names)
## Steps to Reproduce
For example, run this file :
```
# a Pronghorn mesh for 7 EBRT-II assemblies
# sqrt(3) / 2 is by how much flat to flat is smaller than corer to corner
f = ${fparse sqrt(3) / 2}
# units are cm - do not forget to convert to meter
outer_duct_out = 5.8166
outer_duct_in = 5.5854
inner_duct_out = 4.8437
inner_duct_in = 4.64
inter_wrapper_width = 0.3
height = 61.2
# discretization
n_ax = 10
ns = 4
duct_intervals_center = '4 4 4 3' # '2 4 2 4' #
duct_intervals_perishperic = '4 4' # '2 2' #
[Mesh]
[XX09]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '12'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse inner_duct_in / f /2} ${fparse inner_duct_out / f /2} ${fparse outer_duct_in / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_center}
duct_block_ids = '1003 1004 1005 1006'
outward_interface_boundary_names = 'inner_wall_in inner_wall_out outer_wall_in outer_wall_out'
interface_boundary_id_shift = 100
[]
[hfd]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '13'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[Partial_Driver]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '14'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[Driver1]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '15'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[Driver2]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '16'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[K011]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '17'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[X402]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '18'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[pattern]
type = PatternedHexMeshGenerator
inputs = 'XX09 hfd Partial_Driver Driver1 Driver2 K011 X402'
pattern =
'5 4 ;
6 0 3 ;
1 2 '
pattern_boundary = none
[]
[extrude]
type = AdvancedExtruderGenerator
direction = '0 0 1'
input = pattern
heights = '${height}'
num_layers = '${n_ax}'
[]
[inlet_interwall]
type = ParsedGenerateSideset
input = extrude
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 1004
normal = '0 0 -1'
new_sideset_name = inlet_interwall
[]
[inlet_interwrapper]
type = ParsedGenerateSideset
input = inlet_interwall
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 1006
normal = '0 0 -1'
new_sideset_name = inlet_interwrapper
[]
[inlet_porous_flow_hfd]
type = ParsedGenerateSideset
input = inlet_interwrapper
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 13
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_hfd
[]
[inlet_porous_flow_p]
type = ParsedGenerateSideset
input = inlet_porous_flow_hfd
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 14
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_p
[]
[inlet_porous_flow_d1]
type = ParsedGenerateSideset
input = inlet_porous_flow_p
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 15
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_d1
[]
[inlet_porous_flow_d2]
type = ParsedGenerateSideset
input = inlet_porous_flow_d1
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 16
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_d2
[]
[inlet_porous_flow_k011]
type = ParsedGenerateSideset
input = inlet_porous_flow_d2
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 17
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_k011
[]
[inlet_porous_flow_x402]
type = ParsedGenerateSideset
input = inlet_porous_flow_k011
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 18
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_x402
[]
[inlet_central_assembly]
type = ParsedGenerateSideset
input = inlet_porous_flow_x402
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = '12'
normal = '0 0 -1'
new_sideset_name = inlet_central_assembly
[]
[outlet_interwall]
type = ParsedGenerateSideset
input = inlet_central_assembly
included_subdomains = '1004'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_interwall
[]
[outlet_interwrapper]
type = ParsedGenerateSideset
input = outlet_interwall
included_subdomains = '1006'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_interwrapper
[]
[outlet_porous_flow]
type = ParsedGenerateSideset
input = outlet_interwrapper
included_subdomains = '13 14 15 16 17 18'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_porous_flow
[]
[outlet_central_assembly]
type = ParsedGenerateSideset
input = outlet_porous_flow
included_subdomains = '12'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_central_assembly
[]
[rename]
type = RenameBlockGenerator
input = outlet_central_assembly
old_block = '1003 1004 1005 1006 12'
new_block = 'wall interwall wall inter_wrapper center_porous_flow'
[]
[rename2]
type = RenameBlockGenerator
input = rename
old_block = '13 14 15 16 17 18'
new_block = 'porous_flow_hfd porous_flow_p porous_flow_d1 porous_flow_d2 porous_flow_k011 porous_flow_x402'
[]
[rotate]
type = TransformGenerator
input = rename2
transform = ROTATE
vector_value = '0 0 0'
[]
# turn into meters
[scale]
type = TransformGenerator
vector_value = '0.01 0.01 0.01'
transform = SCALE
input = rotate
[]
[new_inner_wall_boundary]
type = SideSetsBetweenSubdomainsGenerator
input = scale
new_boundary = 'prsb_interface'
primary_block = 'wall'
paired_block = 'center_porous_flow'
[]
[]
```
The mesh produced by this input file, names only the first 2 interfaces using the first and last name out of the 4, provided in line 30, for hexagon XX09:
(outward_interface_boundary_names = 'inner_wall_in inner_wall_out outer_wall_in outer_wall_out')
And it names only the first interface (wall_in) in the rest of the assemblies out of the 2 provided: (outward_interface_boundary_names = 'wall_in wall_out')
## Impact
Correcting this bug will allow the user to define interface boundary names without the need of opening the .e file and searching for interface number.
|
1.0
|
PolygonConcentricCircleMeshGenerator - ## Bug Description
In the PolygonConcentricMeshGenerator when the user defines: outward_interface_boundary_names, the naming of the interfaces is not done properly. (I would assume similar behavior with inward_interface_boundary_names)
## Steps to Reproduce
For example, run this file :
```
# a Pronghorn mesh for 7 EBRT-II assemblies
# sqrt(3) / 2 is by how much flat to flat is smaller than corer to corner
f = ${fparse sqrt(3) / 2}
# units are cm - do not forget to convert to meter
outer_duct_out = 5.8166
outer_duct_in = 5.5854
inner_duct_out = 4.8437
inner_duct_in = 4.64
inter_wrapper_width = 0.3
height = 61.2
# discretization
n_ax = 10
ns = 4
duct_intervals_center = '4 4 4 3' # '2 4 2 4' #
duct_intervals_perishperic = '4 4' # '2 2' #
[Mesh]
[XX09]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '12'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse inner_duct_in / f /2} ${fparse inner_duct_out / f /2} ${fparse outer_duct_in / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_center}
duct_block_ids = '1003 1004 1005 1006'
outward_interface_boundary_names = 'inner_wall_in inner_wall_out outer_wall_in outer_wall_out'
interface_boundary_id_shift = 100
[]
[hfd]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '13'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[Partial_Driver]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '14'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[Driver1]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '15'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[Driver2]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '16'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[K011]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '17'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[X402]
type = PolygonConcentricCircleMeshGenerator
num_sides = 6
num_sectors_per_side = '${ns} ${ns} ${ns} ${ns} ${ns} ${ns}'
background_intervals = 1
background_block_ids = '18'
polygon_size = ${fparse outer_duct_out / 2 + inter_wrapper_width / 2}
duct_sizes = '${fparse 5.6134 / f /2} ${fparse outer_duct_out / f / 2}'
duct_intervals = ${duct_intervals_perishperic}
duct_block_ids = '1003 1006'
outward_interface_boundary_names = 'wall_in wall_out'
[]
[pattern]
type = PatternedHexMeshGenerator
inputs = 'XX09 hfd Partial_Driver Driver1 Driver2 K011 X402'
pattern =
'5 4 ;
6 0 3 ;
1 2 '
pattern_boundary = none
[]
[extrude]
type = AdvancedExtruderGenerator
direction = '0 0 1'
input = pattern
heights = '${height}'
num_layers = '${n_ax}'
[]
[inlet_interwall]
type = ParsedGenerateSideset
input = extrude
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 1004
normal = '0 0 -1'
new_sideset_name = inlet_interwall
[]
[inlet_interwrapper]
type = ParsedGenerateSideset
input = inlet_interwall
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 1006
normal = '0 0 -1'
new_sideset_name = inlet_interwrapper
[]
[inlet_porous_flow_hfd]
type = ParsedGenerateSideset
input = inlet_interwrapper
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 13
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_hfd
[]
[inlet_porous_flow_p]
type = ParsedGenerateSideset
input = inlet_porous_flow_hfd
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 14
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_p
[]
[inlet_porous_flow_d1]
type = ParsedGenerateSideset
input = inlet_porous_flow_p
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 15
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_d1
[]
[inlet_porous_flow_d2]
type = ParsedGenerateSideset
input = inlet_porous_flow_d1
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 16
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_d2
[]
[inlet_porous_flow_k011]
type = ParsedGenerateSideset
input = inlet_porous_flow_d2
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 17
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_k011
[]
[inlet_porous_flow_x402]
type = ParsedGenerateSideset
input = inlet_porous_flow_k011
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = 18
normal = '0 0 -1'
new_sideset_name = inlet_porous_flow_x402
[]
[inlet_central_assembly]
type = ParsedGenerateSideset
input = inlet_porous_flow_x402
combinatorial_geometry = 'abs(z) < 1e-6'
included_subdomains = '12'
normal = '0 0 -1'
new_sideset_name = inlet_central_assembly
[]
[outlet_interwall]
type = ParsedGenerateSideset
input = inlet_central_assembly
included_subdomains = '1004'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_interwall
[]
[outlet_interwrapper]
type = ParsedGenerateSideset
input = outlet_interwall
included_subdomains = '1006'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_interwrapper
[]
[outlet_porous_flow]
type = ParsedGenerateSideset
input = outlet_interwrapper
included_subdomains = '13 14 15 16 17 18'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_porous_flow
[]
[outlet_central_assembly]
type = ParsedGenerateSideset
input = outlet_porous_flow
included_subdomains = '12'
combinatorial_geometry = 'abs(z - ${fparse height}) < 1e-6'
normal = '0 0 1'
new_sideset_name = outlet_central_assembly
[]
[rename]
type = RenameBlockGenerator
input = outlet_central_assembly
old_block = '1003 1004 1005 1006 12'
new_block = 'wall interwall wall inter_wrapper center_porous_flow'
[]
[rename2]
type = RenameBlockGenerator
input = rename
old_block = '13 14 15 16 17 18'
new_block = 'porous_flow_hfd porous_flow_p porous_flow_d1 porous_flow_d2 porous_flow_k011 porous_flow_x402'
[]
[rotate]
type = TransformGenerator
input = rename2
transform = ROTATE
vector_value = '0 0 0'
[]
# turn into meters
[scale]
type = TransformGenerator
vector_value = '0.01 0.01 0.01'
transform = SCALE
input = rotate
[]
[new_inner_wall_boundary]
type = SideSetsBetweenSubdomainsGenerator
input = scale
new_boundary = 'prsb_interface'
primary_block = 'wall'
paired_block = 'center_porous_flow'
[]
[]
```
The mesh produced by this input file, names only the first 2 interfaces using the first and last name out of the 4, provided in line 30, for hexagon XX09:
(outward_interface_boundary_names = 'inner_wall_in inner_wall_out outer_wall_in outer_wall_out')
And it names only the first interface (wall_in) in the rest of the assemblies out of the 2 provided: (outward_interface_boundary_names = 'wall_in wall_out')
## Impact
Correcting this bug will allow the user to define interface boundary names without the need of opening the .e file and searching for interface number.
|
defect
|
polygonconcentriccirclemeshgenerator bug description in the polygonconcentricmeshgenerator when the user defines outward interface boundary names the naming of the interfaces is not done properly i would assume similar behavior with inward interface boundary names steps to reproduce for example run this file a pronghorn mesh for ebrt ii assemblies sqrt is by how much flat to flat is smaller than corer to corner f fparse sqrt units are cm do not forget to convert to meter outer duct out outer duct in inner duct out inner duct in inter wrapper width height discretization n ax ns duct intervals center duct intervals perishperic type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse inner duct in f fparse inner duct out f fparse outer duct in f fparse outer duct out f duct intervals duct intervals center duct block ids outward interface boundary names inner wall in inner wall out outer wall in outer wall out interface boundary id shift type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse f fparse outer duct out f duct intervals duct intervals perishperic duct block ids outward interface boundary names wall in wall out type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse f fparse outer duct out f duct intervals duct intervals perishperic duct block ids outward interface boundary names wall in wall out type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse f fparse outer duct out f duct intervals duct intervals perishperic duct block ids outward interface boundary names wall in wall out type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse f fparse outer duct out f duct intervals duct intervals perishperic duct block ids outward interface boundary names wall in wall out type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse f fparse outer duct out f duct intervals duct intervals perishperic duct block ids outward interface boundary names wall in wall out type polygonconcentriccirclemeshgenerator num sides num sectors per side ns ns ns ns ns ns background intervals background block ids polygon size fparse outer duct out inter wrapper width duct sizes fparse f fparse outer duct out f duct intervals duct intervals perishperic duct block ids outward interface boundary names wall in wall out type patternedhexmeshgenerator inputs hfd partial driver pattern pattern boundary none type advancedextrudergenerator direction input pattern heights height num layers n ax type parsedgeneratesideset input extrude combinatorial geometry abs z included subdomains normal new sideset name inlet interwall type parsedgeneratesideset input inlet interwall combinatorial geometry abs z included subdomains normal new sideset name inlet interwrapper type parsedgeneratesideset input inlet interwrapper combinatorial geometry abs z included subdomains normal new sideset name inlet porous flow hfd type parsedgeneratesideset input inlet porous flow hfd combinatorial geometry abs z included subdomains normal new sideset name inlet porous flow p type parsedgeneratesideset input inlet porous flow p combinatorial geometry abs z included subdomains normal new sideset name inlet porous flow type parsedgeneratesideset input inlet porous flow combinatorial geometry abs z included subdomains normal new sideset name inlet porous flow type parsedgeneratesideset input inlet porous flow combinatorial geometry abs z included subdomains normal new sideset name inlet porous flow type parsedgeneratesideset input inlet porous flow combinatorial geometry abs z included subdomains normal new sideset name inlet porous flow type parsedgeneratesideset input inlet porous flow combinatorial geometry abs z included subdomains normal new sideset name inlet central assembly type parsedgeneratesideset input inlet central assembly included subdomains combinatorial geometry abs z fparse height normal new sideset name outlet interwall type parsedgeneratesideset input outlet interwall included subdomains combinatorial geometry abs z fparse height normal new sideset name outlet interwrapper type parsedgeneratesideset input outlet interwrapper included subdomains combinatorial geometry abs z fparse height normal new sideset name outlet porous flow type parsedgeneratesideset input outlet porous flow included subdomains combinatorial geometry abs z fparse height normal new sideset name outlet central assembly type renameblockgenerator input outlet central assembly old block new block wall interwall wall inter wrapper center porous flow type renameblockgenerator input rename old block new block porous flow hfd porous flow p porous flow porous flow porous flow porous flow type transformgenerator input transform rotate vector value turn into meters type transformgenerator vector value transform scale input rotate type sidesetsbetweensubdomainsgenerator input scale new boundary prsb interface primary block wall paired block center porous flow the mesh produced by this input file names only the first interfaces using the first and last name out of the provided in line for hexagon outward interface boundary names inner wall in inner wall out outer wall in outer wall out and it names only the first interface wall in in the rest of the assemblies out of the provided outward interface boundary names wall in wall out impact correcting this bug will allow the user to define interface boundary names without the need of opening the e file and searching for interface number
| 1
|
69,101
| 22,156,452,471
|
IssuesEvent
|
2022-06-03 23:38:26
|
jezzsantos/automate
|
https://api.github.com/repos/jezzsantos/automate
|
closed
|
Esoteric language for core concepts
|
defect-design
|
It is pretty clear from having to describe to various potential users that there are some esoteric terms that need more than the average amount of explaining. This is a usability issue.
Words like:
- [x] Pattern -> ??? Template?
- [x] Solution -> ??? Document?
- [x] LaunchPoint -> ??? Runner?, Executor?, Trigger?
CLI Parameters like:
- [x] --istearoff -> --isoneoff
- [x] --withpath -> --targetpath
let's agree to change this language to be more understandable.
|
1.0
|
Esoteric language for core concepts - It is pretty clear from having to describe to various potential users that there are some esoteric terms that need more than the average amount of explaining. This is a usability issue.
Words like:
- [x] Pattern -> ??? Template?
- [x] Solution -> ??? Document?
- [x] LaunchPoint -> ??? Runner?, Executor?, Trigger?
CLI Parameters like:
- [x] --istearoff -> --isoneoff
- [x] --withpath -> --targetpath
let's agree to change this language to be more understandable.
|
defect
|
esoteric language for core concepts it is pretty clear from having to describe to various potential users that there are some esoteric terms that need more than the average amount of explaining this is a usability issue words like pattern template solution document launchpoint runner executor trigger cli parameters like istearoff isoneoff withpath targetpath let s agree to change this language to be more understandable
| 1
|
436,806
| 12,554,031,120
|
IssuesEvent
|
2020-06-07 00:22:59
|
eclipse-ee4j/glassfish
|
https://api.github.com/repos/eclipse-ee4j/glassfish
|
closed
|
Allow commands that take properties to accept properties file as an option
|
Component: admin ERR: Assignee Priority: Major Stale Type: Improvement
|
For a project I am working on, we're doing a lot of configuration through system
properties. All of the properties I need to load are already in a properties
file. To load them into Glassfish, I'm using an asant script with separate calls
to "create-system-properties". It would be nice to make one call to
"create-system-properties" that accepts a properties file. Perhaps add a
"--properties-file switch" to specify a file.
Example usage:
asadmin create-system-properties --properties-file deploy.properties
#### Affected Versions
[9.1peur2]
|
1.0
|
Allow commands that take properties to accept properties file as an option - For a project I am working on, we're doing a lot of configuration through system
properties. All of the properties I need to load are already in a properties
file. To load them into Glassfish, I'm using an asant script with separate calls
to "create-system-properties". It would be nice to make one call to
"create-system-properties" that accepts a properties file. Perhaps add a
"--properties-file switch" to specify a file.
Example usage:
asadmin create-system-properties --properties-file deploy.properties
#### Affected Versions
[9.1peur2]
|
non_defect
|
allow commands that take properties to accept properties file as an option for a project i am working on we re doing a lot of configuration through system properties all of the properties i need to load are already in a properties file to load them into glassfish i m using an asant script with separate calls to create system properties it would be nice to make one call to create system properties that accepts a properties file perhaps add a properties file switch to specify a file example usage asadmin create system properties properties file deploy properties affected versions
| 0
|
8,807
| 2,612,899,075
|
IssuesEvent
|
2015-02-27 17:23:28
|
chrsmith/windows-package-manager
|
https://api.github.com/repos/chrsmith/windows-package-manager
|
closed
|
PDF-XChange Viewer on 64bit Windows
|
auto-migrated Milestone-1.15 Type-Defect
|
```
The version of the PDF-XChange Viewer currently in the repository cannot be
installed on a 64bit Windows.
From the install log:
---------------------------
MSI (s) (48:D4) [13:11:39:558]: Product: PDF-XChange Viewer -- This
installation can be installed only on 32-bit Windows.
---------------------------
```
Original issue reported on code.google.com by `bur...@web.de` on 12 Feb 2011 at 12:16
|
1.0
|
PDF-XChange Viewer on 64bit Windows - ```
The version of the PDF-XChange Viewer currently in the repository cannot be
installed on a 64bit Windows.
From the install log:
---------------------------
MSI (s) (48:D4) [13:11:39:558]: Product: PDF-XChange Viewer -- This
installation can be installed only on 32-bit Windows.
---------------------------
```
Original issue reported on code.google.com by `bur...@web.de` on 12 Feb 2011 at 12:16
|
defect
|
pdf xchange viewer on windows the version of the pdf xchange viewer currently in the repository cannot be installed on a windows from the install log msi s product pdf xchange viewer this installation can be installed only on bit windows original issue reported on code google com by bur web de on feb at
| 1
|
36,359
| 7,915,819,018
|
IssuesEvent
|
2018-07-04 01:56:26
|
Microsoft/spring-data-gremlin
|
https://api.github.com/repos/Microsoft/spring-data-gremlin
|
closed
|
Refine the database AbstractConfiguration.
|
Defect enhancement
|
**Your issue may already be reported! Please search before creating a new one.**
## Expected Behavior
* placeholder
## Current Behavior
* placeholder
## Possible Solution
* placeholder
## Steps to Reproduce (for bugs)
* step-1
* step-2
* ...
## Snapshot Code for Reproduce
```java
@SpringBootApplication
public class Application {
public static void main(String... args) {
SpringApplication.run(Application.class, args);
}
}
```
## Branch
* placeholder
## Your Environment
* Version used:
* Operating System and version (desktop or mobile):
* SDK version:
|
1.0
|
Refine the database AbstractConfiguration. - **Your issue may already be reported! Please search before creating a new one.**
## Expected Behavior
* placeholder
## Current Behavior
* placeholder
## Possible Solution
* placeholder
## Steps to Reproduce (for bugs)
* step-1
* step-2
* ...
## Snapshot Code for Reproduce
```java
@SpringBootApplication
public class Application {
public static void main(String... args) {
SpringApplication.run(Application.class, args);
}
}
```
## Branch
* placeholder
## Your Environment
* Version used:
* Operating System and version (desktop or mobile):
* SDK version:
|
defect
|
refine the database abstractconfiguration your issue may already be reported please search before creating a new one expected behavior placeholder current behavior placeholder possible solution placeholder steps to reproduce for bugs step step snapshot code for reproduce java springbootapplication public class application public static void main string args springapplication run application class args branch placeholder your environment version used operating system and version desktop or mobile sdk version
| 1
|
633,561
| 20,258,538,225
|
IssuesEvent
|
2022-02-15 03:29:55
|
apcountryman/picolibrary-microchip-megaavr
|
https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr
|
opened
|
Add Microchip megaAVR asynchronous serial basic transmitter
|
priority-normal status-awaiting_development type-feature
|
Add Microchip megaAVR asynchronous serial basic transmitter (`::picolibrary::Microchip::megaAVR::Asynchronous_Serial::Basic_Transmitter`).
- [ ] The `Basic_Transmitter` class should be defined in the `include/picolibrary/microchip/megaavr/asynchronous_serial.h`/`source/picolibrary/microchip/megaavr/asynchronous_serial.cc` header/source file pair
- [ ] The `Basic_Transmitter` class should have the following template parameters:
- [ ] `typename Data_Type`: The integral type used to hold the data to be transmitted
- [ ] The `Basic_Transmitter` class should have the following member type aliases:
- [ ] `using Data = Data_Type;`: The integral type used to hold the data to be transmitted
- [ ] The `Basic_Transmitter` class should support the following operations:
- [ ] `constexpr Basic_Transmitter() noexcept = default;`
- [ ] `Basic_Transmitter( Peripheral::USART & usart, USART_Data_Bits usart_data_bits, USART_Parity usart_parity, USART_Stop_Bits usart_stop_bits, USART_Clock_Generator_Operating_Speed usart_clock_generator_operating_speed, std::uint16_t usart_clock_generator_scaling_factor ) noexcept;`
- [ ] `Basic_Transmitter( Basic_Transmitter && source ) noexcept;`
- [ ] `~Basic_Transmitter() noexcept;`
- [ ] `auto operator=( Basic_Transmitter && expression ) noexcept -> Basic_Transmitter &;`
- [ ] `void initialize() noexcept;`: Initialize the transmitter's hardware
- [ ] `void transmit( Data data ) noexcept;`: Transmit data
|
1.0
|
Add Microchip megaAVR asynchronous serial basic transmitter - Add Microchip megaAVR asynchronous serial basic transmitter (`::picolibrary::Microchip::megaAVR::Asynchronous_Serial::Basic_Transmitter`).
- [ ] The `Basic_Transmitter` class should be defined in the `include/picolibrary/microchip/megaavr/asynchronous_serial.h`/`source/picolibrary/microchip/megaavr/asynchronous_serial.cc` header/source file pair
- [ ] The `Basic_Transmitter` class should have the following template parameters:
- [ ] `typename Data_Type`: The integral type used to hold the data to be transmitted
- [ ] The `Basic_Transmitter` class should have the following member type aliases:
- [ ] `using Data = Data_Type;`: The integral type used to hold the data to be transmitted
- [ ] The `Basic_Transmitter` class should support the following operations:
- [ ] `constexpr Basic_Transmitter() noexcept = default;`
- [ ] `Basic_Transmitter( Peripheral::USART & usart, USART_Data_Bits usart_data_bits, USART_Parity usart_parity, USART_Stop_Bits usart_stop_bits, USART_Clock_Generator_Operating_Speed usart_clock_generator_operating_speed, std::uint16_t usart_clock_generator_scaling_factor ) noexcept;`
- [ ] `Basic_Transmitter( Basic_Transmitter && source ) noexcept;`
- [ ] `~Basic_Transmitter() noexcept;`
- [ ] `auto operator=( Basic_Transmitter && expression ) noexcept -> Basic_Transmitter &;`
- [ ] `void initialize() noexcept;`: Initialize the transmitter's hardware
- [ ] `void transmit( Data data ) noexcept;`: Transmit data
|
non_defect
|
add microchip megaavr asynchronous serial basic transmitter add microchip megaavr asynchronous serial basic transmitter picolibrary microchip megaavr asynchronous serial basic transmitter the basic transmitter class should be defined in the include picolibrary microchip megaavr asynchronous serial h source picolibrary microchip megaavr asynchronous serial cc header source file pair the basic transmitter class should have the following template parameters typename data type the integral type used to hold the data to be transmitted the basic transmitter class should have the following member type aliases using data data type the integral type used to hold the data to be transmitted the basic transmitter class should support the following operations constexpr basic transmitter noexcept default basic transmitter peripheral usart usart usart data bits usart data bits usart parity usart parity usart stop bits usart stop bits usart clock generator operating speed usart clock generator operating speed std t usart clock generator scaling factor noexcept basic transmitter basic transmitter source noexcept basic transmitter noexcept auto operator basic transmitter expression noexcept basic transmitter void initialize noexcept initialize the transmitter s hardware void transmit data data noexcept transmit data
| 0
|
29,537
| 5,715,697,896
|
IssuesEvent
|
2017-04-19 13:41:43
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
closed
|
Kein Entfernen von Sonderzeichen in Dateiverwaltung mehr?
|
defect
|
<a href="https://github.com/NinaG"><img src="https://avatars1.githubusercontent.com/u/1219952?v=3" align="left" width="42" height="42"></img></a> [Issue](https://github.com/contao/core/issues/8688) by @NinaG
March 30th, 2017, 09:50 GMT
Unter C3 war es imho so, dass Contao beim Anlegen von Ordnern oder Dateien darauf geachtet hat, dass keine Sonderzeichen und Umlaute im Ordner-/Dateinamen waren. Falls der Redakteur das so angelegt hat, hat Contao es dann eigenständig geändert (Sonderzeichen bis auf - und _ entfernt, Umlaute auf ue, oe, ae und ß auf ss umgeschrieben etc.).
In Contao 4.3 scheint dieser Sicherheitsmechanismus weg zu sein. Abgesehen von den offensichtlichen Problemen, führt es auch zu neuen Fehlern. Ich habe aus Müdigkeit z.B. irrtümlich eine Datei namens **Spielen, Sport & Natur** (JPG) angelegt. Contao lässt es zu, dass die Datei gespeichert wird. Wenn der Nutzer nun aber versucht, diese Datei zu kopieren, schmeißt Contao diesen Fehler:
_Forbidden:
File or folder "files/freizeit-tourismus/uebersicht/Spielen, Sport " is not mounted or cannot be found._
Ruft man danach das Backend nochmal frisch auf, ist die kopierte Datei Spielen, Sport & Natur_1.jpg da. Aber es würde damit wohl noch diverse Folgefehler geben, wie die Fehlermeldung suggeriert.
Ich halte es für extrem wichtig, dass die automatische Fehlerkorrektur bei Ordner- und Dateinamen wieder vollumfänglich integriert wird. Meine regelmäßige Arbeit mit Redakteuren/Anwendern zeigt, dass sie sehr sehr wichtig ist. Insbesondere unerfahrene Anwender arbeiten häufig mit schrecklichen Datei-/Ordnernamen, aber wie man sehen kann, macht selbst ein erfahrener Profi vor Müdigkeit solche Fehler ;)
|
1.0
|
Kein Entfernen von Sonderzeichen in Dateiverwaltung mehr? - <a href="https://github.com/NinaG"><img src="https://avatars1.githubusercontent.com/u/1219952?v=3" align="left" width="42" height="42"></img></a> [Issue](https://github.com/contao/core/issues/8688) by @NinaG
March 30th, 2017, 09:50 GMT
Unter C3 war es imho so, dass Contao beim Anlegen von Ordnern oder Dateien darauf geachtet hat, dass keine Sonderzeichen und Umlaute im Ordner-/Dateinamen waren. Falls der Redakteur das so angelegt hat, hat Contao es dann eigenständig geändert (Sonderzeichen bis auf - und _ entfernt, Umlaute auf ue, oe, ae und ß auf ss umgeschrieben etc.).
In Contao 4.3 scheint dieser Sicherheitsmechanismus weg zu sein. Abgesehen von den offensichtlichen Problemen, führt es auch zu neuen Fehlern. Ich habe aus Müdigkeit z.B. irrtümlich eine Datei namens **Spielen, Sport & Natur** (JPG) angelegt. Contao lässt es zu, dass die Datei gespeichert wird. Wenn der Nutzer nun aber versucht, diese Datei zu kopieren, schmeißt Contao diesen Fehler:
_Forbidden:
File or folder "files/freizeit-tourismus/uebersicht/Spielen, Sport " is not mounted or cannot be found._
Ruft man danach das Backend nochmal frisch auf, ist die kopierte Datei Spielen, Sport & Natur_1.jpg da. Aber es würde damit wohl noch diverse Folgefehler geben, wie die Fehlermeldung suggeriert.
Ich halte es für extrem wichtig, dass die automatische Fehlerkorrektur bei Ordner- und Dateinamen wieder vollumfänglich integriert wird. Meine regelmäßige Arbeit mit Redakteuren/Anwendern zeigt, dass sie sehr sehr wichtig ist. Insbesondere unerfahrene Anwender arbeiten häufig mit schrecklichen Datei-/Ordnernamen, aber wie man sehen kann, macht selbst ein erfahrener Profi vor Müdigkeit solche Fehler ;)
|
defect
|
kein entfernen von sonderzeichen in dateiverwaltung mehr by ninag march gmt unter war es imho so dass contao beim anlegen von ordnern oder dateien darauf geachtet hat dass keine sonderzeichen und umlaute im ordner dateinamen waren falls der redakteur das so angelegt hat hat contao es dann eigenständig geändert sonderzeichen bis auf und entfernt umlaute auf ue oe ae und ß auf ss umgeschrieben etc in contao scheint dieser sicherheitsmechanismus weg zu sein abgesehen von den offensichtlichen problemen führt es auch zu neuen fehlern ich habe aus müdigkeit z b irrtümlich eine datei namens spielen sport natur jpg angelegt contao lässt es zu dass die datei gespeichert wird wenn der nutzer nun aber versucht diese datei zu kopieren schmeißt contao diesen fehler forbidden file or folder files freizeit tourismus uebersicht spielen sport is not mounted or cannot be found ruft man danach das backend nochmal frisch auf ist die kopierte datei spielen sport natur jpg da aber es würde damit wohl noch diverse folgefehler geben wie die fehlermeldung suggeriert ich halte es für extrem wichtig dass die automatische fehlerkorrektur bei ordner und dateinamen wieder vollumfänglich integriert wird meine regelmäßige arbeit mit redakteuren anwendern zeigt dass sie sehr sehr wichtig ist insbesondere unerfahrene anwender arbeiten häufig mit schrecklichen datei ordnernamen aber wie man sehen kann macht selbst ein erfahrener profi vor müdigkeit solche fehler
| 1
|
11,667
| 2,660,030,677
|
IssuesEvent
|
2015-03-19 01:46:42
|
perfsonar/project
|
https://api.github.com/repos/perfsonar/project
|
closed
|
owamp doesn't work with link-local addresses
|
Priority-Medium Type-Defect
|
Original [issue 1070](https://code.google.com/p/perfsonar-ps/issues/detail?id=1070) created by arlake228 on 2015-01-31T10:56:52.000Z:
<b>What steps will reproduce the problem?</b>
1. owping <IPv6LLAddrOfOwampd>
2. When a client requests an IPv6 link-local address to be used for a test session endpoint, the bind to the address fails since a scope isn't provided and a link-local address isn't valid without a scope indicating which link (interface) it applies to.
<b>What is the expected output? What do you see instead?</b>
Expect the owping to connect, successfully run and provide a summary of results.
<b>What version of the product are you using? On what operating system?</b>
3.4-10
<b>Please provide any additional information below.</b>
Fix proposed here:
https://code.google.com/r/robertshearman-owamp/source/detail?r=19a405f6fc385fe5132be3bf0043a9cd5efec6e6&name=v6-link-local
|
1.0
|
owamp doesn't work with link-local addresses - Original [issue 1070](https://code.google.com/p/perfsonar-ps/issues/detail?id=1070) created by arlake228 on 2015-01-31T10:56:52.000Z:
<b>What steps will reproduce the problem?</b>
1. owping <IPv6LLAddrOfOwampd>
2. When a client requests an IPv6 link-local address to be used for a test session endpoint, the bind to the address fails since a scope isn't provided and a link-local address isn't valid without a scope indicating which link (interface) it applies to.
<b>What is the expected output? What do you see instead?</b>
Expect the owping to connect, successfully run and provide a summary of results.
<b>What version of the product are you using? On what operating system?</b>
3.4-10
<b>Please provide any additional information below.</b>
Fix proposed here:
https://code.google.com/r/robertshearman-owamp/source/detail?r=19a405f6fc385fe5132be3bf0043a9cd5efec6e6&name=v6-link-local
|
defect
|
owamp doesn t work with link local addresses original created by on what steps will reproduce the problem owping lt gt when a client requests an link local address to be used for a test session endpoint the bind to the address fails since a scope isn t provided and a link local address isn t valid without a scope indicating which link interface it applies to what is the expected output what do you see instead expect the owping to connect successfully run and provide a summary of results what version of the product are you using on what operating system please provide any additional information below fix proposed here
| 1
|
50,815
| 13,187,766,679
|
IssuesEvent
|
2020-08-13 04:30:57
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
[PROPOSAL] should not build tables in I3_SRC (Trac #1430)
|
Migrated from Trac combo simulation defect
|
One of the targets for PROPOSAL to build tables to is `$I3_BUILD/PROPOSAL/resources/tables`. This is a symlink into `$I3_SRC` almost all the time. Two problems with this:
1. Philosophically, if I specify a separate build directory that means I don't want you touching the source.
2. The source could potentially be read-only.
Solution: write somewhere else in the build directory.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1430">https://code.icecube.wisc.edu/ticket/1430</a>, reported by david.schultz and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"description": "One of the targets for PROPOSAL to build tables to is `$I3_BUILD/PROPOSAL/resources/tables`. This is a symlink into `$I3_SRC` almost all the time. Two problems with this:\n\n1. Philosophically, if I specify a separate build directory that means I don't want you touching the source.\n\n2. The source could potentially be read-only.\n\nSolution: write somewhere else in the build directory.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067215093672",
"component": "combo simulation",
"summary": "[PROPOSAL] should not build tables in I3_SRC",
"priority": "major",
"keywords": "",
"time": "2015-11-10T16:28:21",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[PROPOSAL] should not build tables in I3_SRC (Trac #1430) - One of the targets for PROPOSAL to build tables to is `$I3_BUILD/PROPOSAL/resources/tables`. This is a symlink into `$I3_SRC` almost all the time. Two problems with this:
1. Philosophically, if I specify a separate build directory that means I don't want you touching the source.
2. The source could potentially be read-only.
Solution: write somewhere else in the build directory.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1430">https://code.icecube.wisc.edu/ticket/1430</a>, reported by david.schultz and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"description": "One of the targets for PROPOSAL to build tables to is `$I3_BUILD/PROPOSAL/resources/tables`. This is a symlink into `$I3_SRC` almost all the time. Two problems with this:\n\n1. Philosophically, if I specify a separate build directory that means I don't want you touching the source.\n\n2. The source could potentially be read-only.\n\nSolution: write somewhere else in the build directory.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067215093672",
"component": "combo simulation",
"summary": "[PROPOSAL] should not build tables in I3_SRC",
"priority": "major",
"keywords": "",
"time": "2015-11-10T16:28:21",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
defect
|
should not build tables in src trac one of the targets for proposal to build tables to is build proposal resources tables this is a symlink into src almost all the time two problems with this philosophically if i specify a separate build directory that means i don t want you touching the source the source could potentially be read only solution write somewhere else in the build directory migrated from json status closed changetime description one of the targets for proposal to build tables to is build proposal resources tables this is a symlink into src almost all the time two problems with this n philosophically if i specify a separate build directory that means i don t want you touching the source n the source could potentially be read only n nsolution write somewhere else in the build directory reporter david schultz cc resolution fixed ts component combo simulation summary should not build tables in src priority major keywords time milestone owner olivas type defect
| 1
|
72,513
| 24,160,217,206
|
IssuesEvent
|
2022-09-22 11:00:15
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
synapse 1.68.0rc1 fails to build: environment variable `SYNAPSE_RUST_DIGEST` not defined
|
S-Major T-Defect X-Release-Blocker O-Frequent
|
### Description
Try to build synapse 1.68.0rc1 now forcing an included rust component build.
### Steps to reproduce
Trying to build synapse 1.68.0rc1 I run into the following error:
```
Compiling synapse v0.1.0 (/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/rust)
Running `rustc --crate-name synapse --edition=2021 rust/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type cdylib --emit=dep-info,link -C opt-level=3 -C embed-bitcode=no --crate-type cdylib -C metadata=71f8cac97d837ea8 --out-dir /var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps -L dependency=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps --extern pyo3=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps/libpyo3-c4fa1009b364ab5a.rlib`
error: environment variable `SYNAPSE_RUST_DIGEST` not defined
--> rust/src/lib.rs:8:5
|
8 | env!("SYNAPSE_RUST_DIGEST")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this error originates in the macro `env` (in Nightly builds, run with -Z macro-backtrace for more info)
error: could not compile `synapse` due to previous error
Caused by:
process didn't exit successfully: `rustc --crate-name synapse --edition=2021 rust/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type cdylib --emit=dep-info,link -C opt-level=3 -C embed-bitcode=no --crate-type cdylib -C metadata=71f8cac97d837ea8 --out-dir /var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps -L dependency=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps --extern pyo3=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps/libpyo3-c4fa1009b364ab5a.rlib` (exit status: 1)
```
Complete build log: [synapse-1.68.0rc1.log](https://github.com/matrix-org/synapse/files/9609115/synapse-1.68.0rc1.log)
### Homeserver
-
### Synapse Version
1.68.0rc1
### Installation Method
Other (please mention below)
### Platform
Source based package, Exherbo Linux.
### Relevant log output
```shell
-
```
### Anything else that would be useful to know?
_No response_
|
1.0
|
synapse 1.68.0rc1 fails to build: environment variable `SYNAPSE_RUST_DIGEST` not defined - ### Description
Try to build synapse 1.68.0rc1 now forcing an included rust component build.
### Steps to reproduce
Trying to build synapse 1.68.0rc1 I run into the following error:
```
Compiling synapse v0.1.0 (/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/rust)
Running `rustc --crate-name synapse --edition=2021 rust/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type cdylib --emit=dep-info,link -C opt-level=3 -C embed-bitcode=no --crate-type cdylib -C metadata=71f8cac97d837ea8 --out-dir /var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps -L dependency=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps --extern pyo3=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps/libpyo3-c4fa1009b364ab5a.rlib`
error: environment variable `SYNAPSE_RUST_DIGEST` not defined
--> rust/src/lib.rs:8:5
|
8 | env!("SYNAPSE_RUST_DIGEST")
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this error originates in the macro `env` (in Nightly builds, run with -Z macro-backtrace for more info)
error: could not compile `synapse` due to previous error
Caused by:
process didn't exit successfully: `rustc --crate-name synapse --edition=2021 rust/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type cdylib --emit=dep-info,link -C opt-level=3 -C embed-bitcode=no --crate-type cdylib -C metadata=71f8cac97d837ea8 --out-dir /var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps -L dependency=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps --extern pyo3=/var/tmp/paludis/build/net-synapse-1.68.0rc1/work/matrix-synapse-1.68.0rc1/target/release/deps/libpyo3-c4fa1009b364ab5a.rlib` (exit status: 1)
```
Complete build log: [synapse-1.68.0rc1.log](https://github.com/matrix-org/synapse/files/9609115/synapse-1.68.0rc1.log)
### Homeserver
-
### Synapse Version
1.68.0rc1
### Installation Method
Other (please mention below)
### Platform
Source based package, Exherbo Linux.
### Relevant log output
```shell
-
```
### Anything else that would be useful to know?
_No response_
|
defect
|
synapse fails to build environment variable synapse rust digest not defined description try to build synapse now forcing an included rust component build steps to reproduce trying to build synapse i run into the following error compiling synapse var tmp paludis build net synapse work matrix synapse rust running rustc crate name synapse edition rust src lib rs error format json json diagnostic rendered ansi artifacts future incompat crate type cdylib emit dep info link c opt level c embed bitcode no crate type cdylib c metadata out dir var tmp paludis build net synapse work matrix synapse target release deps l dependency var tmp paludis build net synapse work matrix synapse target release deps extern var tmp paludis build net synapse work matrix synapse target release deps rlib error environment variable synapse rust digest not defined rust src lib rs env synapse rust digest note this error originates in the macro env in nightly builds run with z macro backtrace for more info error could not compile synapse due to previous error caused by process didn t exit successfully rustc crate name synapse edition rust src lib rs error format json json diagnostic rendered ansi artifacts future incompat crate type cdylib emit dep info link c opt level c embed bitcode no crate type cdylib c metadata out dir var tmp paludis build net synapse work matrix synapse target release deps l dependency var tmp paludis build net synapse work matrix synapse target release deps extern var tmp paludis build net synapse work matrix synapse target release deps rlib exit status complete build log homeserver synapse version installation method other please mention below platform source based package exherbo linux relevant log output shell anything else that would be useful to know no response
| 1
|
569
| 2,571,490,375
|
IssuesEvent
|
2015-02-10 16:48:00
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
NPE While Getting CacheOperationProvider
|
Team: Core Type: Defect
|
Occured while running stabilizer tests on X-Large cluster.
The problem is that `CacheConfig` may not be created yet before any cache operation request is received from node.
Here are the error logs of failed test from @Danny-Hazelcast.
```
message='Worked ran into an unhandled exception'
type='Worker exception'
agentAddress=10.45.5.206
time=Mon Feb 09 16:54:05 UTC 2015
workerAddress=client:10.45.5.206
workerId=worker-10.45.5.206-4-client
test=TestCase{
id=icacheMaxMediume
, class=com.hazelcast.stabilizer.tests.icache.EvictionICacheTest
, basename=maxCachMediume1
}
cause=java.lang.NullPointerException
at com.hazelcast.cache.impl.client.AbstractCacheAllPartitionsRequest.getOperationProvider(AbstractCacheAllPartitionsRequest.java:30)
at com.hazelcast.cache.impl.client.CacheSizeRequest.createOperationFactory(CacheSizeRequest.java:60)
at com.hazelcast.client.impl.client.AllPartitionsClientRequest.process(AllPartitionsClientRequest.java:33)
at com.hazelcast.client.impl.ClientEngineImpl$ClientPacketProcessor.processRequest(ClientEngineImpl.java:434)
at com.hazelcast.client.impl.ClientEngineImpl$ClientPacketProcessor.run(ClientEngineImpl.java:353)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)
at ------ End remote and begin local stack-trace ------.(Unknown Source)
at com.hazelcast.client.spi.impl.ClientCallFuture.resolveResponse(ClientCallFuture.java:201)
at com.hazelcast.client.spi.impl.ClientCallFuture.get(ClientCallFuture.java:142)
at com.hazelcast.client.spi.impl.ClientCallFuture.get(ClientCallFuture.java:118)
at com.hazelcast.client.cache.impl.AbstractClientCacheProxyBase.invoke(AbstractClientCacheProxyBase.java:141)
at com.hazelcast.client.cache.impl.AbstractClientCacheProxy.size(AbstractClientCacheProxy.java:306)
at com.hazelcast.client.cache.impl.ClientCacheProxy.size(ClientCacheProxy.java:72)
at com.hazelcast.stabilizer.tests.icache.EvictionICacheTest$WorkerThread.run(EvictionICacheTest.java:150)
at java.lang.Thread.run(Thread.java:701)
at com.hazelcast.stabilizer.test.utils.ThreadSpawner$DefaultThread.run(ThreadSpawner.java:88)
```
|
1.0
|
NPE While Getting CacheOperationProvider - Occured while running stabilizer tests on X-Large cluster.
The problem is that `CacheConfig` may not be created yet before any cache operation request is received from node.
Here are the error logs of failed test from @Danny-Hazelcast.
```
message='Worked ran into an unhandled exception'
type='Worker exception'
agentAddress=10.45.5.206
time=Mon Feb 09 16:54:05 UTC 2015
workerAddress=client:10.45.5.206
workerId=worker-10.45.5.206-4-client
test=TestCase{
id=icacheMaxMediume
, class=com.hazelcast.stabilizer.tests.icache.EvictionICacheTest
, basename=maxCachMediume1
}
cause=java.lang.NullPointerException
at com.hazelcast.cache.impl.client.AbstractCacheAllPartitionsRequest.getOperationProvider(AbstractCacheAllPartitionsRequest.java:30)
at com.hazelcast.cache.impl.client.CacheSizeRequest.createOperationFactory(CacheSizeRequest.java:60)
at com.hazelcast.client.impl.client.AllPartitionsClientRequest.process(AllPartitionsClientRequest.java:33)
at com.hazelcast.client.impl.ClientEngineImpl$ClientPacketProcessor.processRequest(ClientEngineImpl.java:434)
at com.hazelcast.client.impl.ClientEngineImpl$ClientPacketProcessor.run(ClientEngineImpl.java:353)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)
at ------ End remote and begin local stack-trace ------.(Unknown Source)
at com.hazelcast.client.spi.impl.ClientCallFuture.resolveResponse(ClientCallFuture.java:201)
at com.hazelcast.client.spi.impl.ClientCallFuture.get(ClientCallFuture.java:142)
at com.hazelcast.client.spi.impl.ClientCallFuture.get(ClientCallFuture.java:118)
at com.hazelcast.client.cache.impl.AbstractClientCacheProxyBase.invoke(AbstractClientCacheProxyBase.java:141)
at com.hazelcast.client.cache.impl.AbstractClientCacheProxy.size(AbstractClientCacheProxy.java:306)
at com.hazelcast.client.cache.impl.ClientCacheProxy.size(ClientCacheProxy.java:72)
at com.hazelcast.stabilizer.tests.icache.EvictionICacheTest$WorkerThread.run(EvictionICacheTest.java:150)
at java.lang.Thread.run(Thread.java:701)
at com.hazelcast.stabilizer.test.utils.ThreadSpawner$DefaultThread.run(ThreadSpawner.java:88)
```
|
defect
|
npe while getting cacheoperationprovider occured while running stabilizer tests on x large cluster the problem is that cacheconfig may not be created yet before any cache operation request is received from node here are the error logs of failed test from danny hazelcast message worked ran into an unhandled exception type worker exception agentaddress time mon feb utc workeraddress client workerid worker client test testcase id icachemaxmediume class com hazelcast stabilizer tests icache evictionicachetest basename cause java lang nullpointerexception at com hazelcast cache impl client abstractcacheallpartitionsrequest getoperationprovider abstractcacheallpartitionsrequest java at com hazelcast cache impl client cachesizerequest createoperationfactory cachesizerequest java at com hazelcast client impl client allpartitionsclientrequest process allpartitionsclientrequest java at com hazelcast client impl clientengineimpl clientpacketprocessor processrequest clientengineimpl java at com hazelcast client impl clientengineimpl clientpacketprocessor run clientengineimpl java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast util executor hazelcastmanagedthread run hazelcastmanagedthread java at end remote and begin local stack trace unknown source at com hazelcast client spi impl clientcallfuture resolveresponse clientcallfuture java at com hazelcast client spi impl clientcallfuture get clientcallfuture java at com hazelcast client spi impl clientcallfuture get clientcallfuture java at com hazelcast client cache impl abstractclientcacheproxybase invoke abstractclientcacheproxybase java at com hazelcast client cache impl abstractclientcacheproxy size abstractclientcacheproxy java at com hazelcast client cache impl clientcacheproxy size clientcacheproxy java at com hazelcast stabilizer tests icache evictionicachetest workerthread run evictionicachetest java at java lang thread run thread java at com hazelcast stabilizer test utils threadspawner defaultthread run threadspawner java
| 1
|
19,023
| 2,616,017,747
|
IssuesEvent
|
2015-03-02 00:59:38
|
jasonhall/bwapi
|
https://api.github.com/repos/jasonhall/bwapi
|
closed
|
MinGW Support (ones more)
|
auto-migrated NewFeature Priority-Medium Type-Enhancement
|
```
Dear BWAPI developer,
ones more I would like to mention, that it would be nice, if BWAPI clients
would also compile with mingw.
I prepared a patch, which demonstrates, that there are only little changes
needed to make the code compile with mingw. With these modifications the
ExampleAIClient runs fine.
Till now, I did not test all functionality of the code, so maybe there are
still some bugs like the two examples I already found:
1. Changes in BulletImpl.cpp: Using uninitialised variables. (I wondered it
works with VC++ ...)
2. Changes in GameTable.h: time_t has different size in VC++ and mingw.
It would be nice, if you add this patch (or a modification of it) to the
repository. I read there are also other people thinking about this issue.
If there are questions to the changes, do not hesitate to ask...
Best regards.
```
Original issue reported on code.google.com by `LordBlac...@gmail.com` on 19 Oct 2011 at 8:45
Attachments:
* [mingw.patch](https://storage.googleapis.com/google-code-attachments/bwapi/issue-412/comment-0/mingw.patch)
|
1.0
|
MinGW Support (ones more) - ```
Dear BWAPI developer,
ones more I would like to mention, that it would be nice, if BWAPI clients
would also compile with mingw.
I prepared a patch, which demonstrates, that there are only little changes
needed to make the code compile with mingw. With these modifications the
ExampleAIClient runs fine.
Till now, I did not test all functionality of the code, so maybe there are
still some bugs like the two examples I already found:
1. Changes in BulletImpl.cpp: Using uninitialised variables. (I wondered it
works with VC++ ...)
2. Changes in GameTable.h: time_t has different size in VC++ and mingw.
It would be nice, if you add this patch (or a modification of it) to the
repository. I read there are also other people thinking about this issue.
If there are questions to the changes, do not hesitate to ask...
Best regards.
```
Original issue reported on code.google.com by `LordBlac...@gmail.com` on 19 Oct 2011 at 8:45
Attachments:
* [mingw.patch](https://storage.googleapis.com/google-code-attachments/bwapi/issue-412/comment-0/mingw.patch)
|
non_defect
|
mingw support ones more dear bwapi developer ones more i would like to mention that it would be nice if bwapi clients would also compile with mingw i prepared a patch which demonstrates that there are only little changes needed to make the code compile with mingw with these modifications the exampleaiclient runs fine till now i did not test all functionality of the code so maybe there are still some bugs like the two examples i already found changes in bulletimpl cpp using uninitialised variables i wondered it works with vc changes in gametable h time t has different size in vc and mingw it would be nice if you add this patch or a modification of it to the repository i read there are also other people thinking about this issue if there are questions to the changes do not hesitate to ask best regards original issue reported on code google com by lordblac gmail com on oct at attachments
| 0
|
333,133
| 29,510,541,455
|
IssuesEvent
|
2023-06-03 21:51:17
|
sandialabs/pyttb
|
https://api.github.com/repos/sandialabs/pyttb
|
closed
|
Testing: implement tests for full coverage
|
testing doing
|
If possible, implement tests to provide full coverage of TensorToolbox code:
```
pytest --cov=pyttb tests/ --cov-report=term-missing
```
```
Name Stmts Miss Cover Missing
----------------------------------------------------
pyttb\__init__.py 23 1 96% 31
pyttb\cp_als.py 87 0 100%
pyttb\cp_apr.py 567 78 86% 84, 86, 88, 110, 197-198, 229, 247-249, 342, 351, 374-384, 419-432, 464-469, 495, 536, 540-541, 649, 658, 681-691, 719-732, 781-786, 808, 813, 819, 871-872, 1016, 1034-1037, 1044-1045, 1230, 1297-1299, 1343
pyttb\export_data.py 63 1 98% 54
pyttb\import_data.py 60 4 93% 14, 24, 61, 72
pyttb\khatrirao.py 22 0 100%
pyttb\ktensor.py 468 5 99% 772, 812-816
pyttb\pyttb_utils.py 237 2 99% 293, 328
pyttb\sptenmat.py 4 0 100%
pyttb\sptensor.py 914 4 99% 117, 562, 608, 637
pyttb\sptensor3.py 4 0 100%
pyttb\sumtensor.py 4 0 100%
pyttb\symktensor.py 4 0 100%
pyttb\symtensor.py 4 0 100%
pyttb\tenmat.py 182 1 99% 183
pyttb\tensor.py 557 3 99% 1105, 1219, 1394
pyttb\ttensor.py 4 0 100%
----------------------------------------------------
TOTAL 3204 99 97%
```
|
1.0
|
Testing: implement tests for full coverage - If possible, implement tests to provide full coverage of TensorToolbox code:
```
pytest --cov=pyttb tests/ --cov-report=term-missing
```
```
Name Stmts Miss Cover Missing
----------------------------------------------------
pyttb\__init__.py 23 1 96% 31
pyttb\cp_als.py 87 0 100%
pyttb\cp_apr.py 567 78 86% 84, 86, 88, 110, 197-198, 229, 247-249, 342, 351, 374-384, 419-432, 464-469, 495, 536, 540-541, 649, 658, 681-691, 719-732, 781-786, 808, 813, 819, 871-872, 1016, 1034-1037, 1044-1045, 1230, 1297-1299, 1343
pyttb\export_data.py 63 1 98% 54
pyttb\import_data.py 60 4 93% 14, 24, 61, 72
pyttb\khatrirao.py 22 0 100%
pyttb\ktensor.py 468 5 99% 772, 812-816
pyttb\pyttb_utils.py 237 2 99% 293, 328
pyttb\sptenmat.py 4 0 100%
pyttb\sptensor.py 914 4 99% 117, 562, 608, 637
pyttb\sptensor3.py 4 0 100%
pyttb\sumtensor.py 4 0 100%
pyttb\symktensor.py 4 0 100%
pyttb\symtensor.py 4 0 100%
pyttb\tenmat.py 182 1 99% 183
pyttb\tensor.py 557 3 99% 1105, 1219, 1394
pyttb\ttensor.py 4 0 100%
----------------------------------------------------
TOTAL 3204 99 97%
```
|
non_defect
|
testing implement tests for full coverage if possible implement tests to provide full coverage of tensortoolbox code pytest cov pyttb tests cov report term missing name stmts miss cover missing pyttb init py pyttb cp als py pyttb cp apr py pyttb export data py pyttb import data py pyttb khatrirao py pyttb ktensor py pyttb pyttb utils py pyttb sptenmat py pyttb sptensor py pyttb py pyttb sumtensor py pyttb symktensor py pyttb symtensor py pyttb tenmat py pyttb tensor py pyttb ttensor py total
| 0
|
55,165
| 14,247,119,724
|
IssuesEvent
|
2020-11-19 10:59:53
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
PDF manual doesn't correctly wrap large code blocks
|
C: Documentation E: All Editions P: Medium T: Defect
|
The PDF manual doesn't correctly wrap large (as in lines of code) code blocks, see for example:

- The first highlight shows that the block reaches the end of the page rather than wrapping to the next page
- The second highlight shows that the next page is simply blank
|
1.0
|
PDF manual doesn't correctly wrap large code blocks - The PDF manual doesn't correctly wrap large (as in lines of code) code blocks, see for example:

- The first highlight shows that the block reaches the end of the page rather than wrapping to the next page
- The second highlight shows that the next page is simply blank
|
defect
|
pdf manual doesn t correctly wrap large code blocks the pdf manual doesn t correctly wrap large as in lines of code code blocks see for example the first highlight shows that the block reaches the end of the page rather than wrapping to the next page the second highlight shows that the next page is simply blank
| 1
|
47,826
| 13,066,259,458
|
IssuesEvent
|
2020-07-30 21:19:19
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
[NoiseEngine] example script and python module are broken (Trac #1230)
|
Migrated from Trac combo reconstruction defect
|
Example script `../resources/scripts/example.py` is broken because `I3IsolatedHitsCutModule` does not exist anymore (should probably be replaced by STTools). The same is true for `../python/NoiseEngine.py`. Moreover, the import statements should be checked because some modules are missing and other are not needed. In general, more test scripts are needed.
Migrated from https://code.icecube.wisc.edu/ticket/1230
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Example script `../resources/scripts/example.py` is broken because `I3IsolatedHitsCutModule` does not exist anymore (should probably be replaced by STTools). The same is true for `../python/NoiseEngine.py`. Moreover, the import statements should be checked because some modules are missing and other are not needed. In general, more test scripts are needed.",
"reporter": "kkrings",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[NoiseEngine] example script and python module are broken",
"priority": "blocker",
"keywords": "",
"time": "2015-08-19T21:50:50",
"milestone": "",
"owner": "mjl5147",
"type": "defect"
}
```
|
1.0
|
[NoiseEngine] example script and python module are broken (Trac #1230) - Example script `../resources/scripts/example.py` is broken because `I3IsolatedHitsCutModule` does not exist anymore (should probably be replaced by STTools). The same is true for `../python/NoiseEngine.py`. Moreover, the import statements should be checked because some modules are missing and other are not needed. In general, more test scripts are needed.
Migrated from https://code.icecube.wisc.edu/ticket/1230
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Example script `../resources/scripts/example.py` is broken because `I3IsolatedHitsCutModule` does not exist anymore (should probably be replaced by STTools). The same is true for `../python/NoiseEngine.py`. Moreover, the import statements should be checked because some modules are missing and other are not needed. In general, more test scripts are needed.",
"reporter": "kkrings",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[NoiseEngine] example script and python module are broken",
"priority": "blocker",
"keywords": "",
"time": "2015-08-19T21:50:50",
"milestone": "",
"owner": "mjl5147",
"type": "defect"
}
```
|
defect
|
example script and python module are broken trac example script resources scripts example py is broken because does not exist anymore should probably be replaced by sttools the same is true for python noiseengine py moreover the import statements should be checked because some modules are missing and other are not needed in general more test scripts are needed migrated from json status closed changetime description example script resources scripts example py is broken because does not exist anymore should probably be replaced by sttools the same is true for python noiseengine py moreover the import statements should be checked because some modules are missing and other are not needed in general more test scripts are needed reporter kkrings cc resolution fixed ts component combo reconstruction summary example script and python module are broken priority blocker keywords time milestone owner type defect
| 1
|
596,575
| 18,106,989,934
|
IssuesEvent
|
2021-09-22 20:17:57
|
solo-io/gloo
|
https://api.github.com/repos/solo-io/gloo
|
closed
|
rabbitmq operator + gloo edge fails to create upstreams
|
Type: Bug Impact: M Priority: High
|
**Describe the bug**
deploying the rabbitmq operator either before or after gloo edge is installed fails to fully create upstreams. The upstreams appear as pending and then are removed.
**To Reproduce**
Steps to reproduce the behavior:
# deploy rabbitmq CRDs and cluster operator
```
kubectl apply -f https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml
```
# create rabbitmq namespace
```
kubectl create ns codefirefood
```
# create rabbitmq cluster with persistent storage
```
kubectl apply -f- <<EOF
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: rabbitmqcluster
namespace: codefirefood
spec:
replicas: 3
persistence:
storageClassName: standard
storage: 5Gi
resources:
requests:
cpu: 100m
memory: 2Gi
limits:
cpu: 100m
memory: 2Gi
EOF
```
```
glooctl install gateway
glooctl get us
+----------+------+--------+---------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+----------+------+--------+---------+
+----------+------+--------+---------+
+-------------------------+------------+---------+----------------------------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+-------------------------+------------+---------+----------------------------+
| kube-system-kube-dns-53 | Kubernetes | Pending | svc name: kube-dns |
| | | | svc namespace: kube-system |
| | | | port: 53 |
| | | | |
| rabbitmq-newsdesk-15672 | Kubernetes | Pending | svc name: newsdesk |
| | | | svc namespace: rabbitmq |
| | | | port: 15672 |
| | | | |
+-------------------------+------------+---------+----------------------------+
+-------------------------------------+------------+----------+--------------------------------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+-------------------------------------+------------+----------+--------------------------------+
| gloo-system-gateway-443 | Kubernetes | Pending | svc name: gateway |
| | | | svc namespace: gloo-system |
| | | | port: 443 |
| | | | |
| gloo-system-gateway-proxy-80 | Kubernetes | Accepted | svc name: gateway-proxy |
| | | | svc namespace: gloo-system |
| | | | port: 80 |
| | | | |
| kube-system-default-http-backend-80 | Kubernetes | Pending | svc name: |
| | | | default-http-backend |
| | | | svc namespace: kube-system |
| | | | port: 80 |
| | | | |
| kube-system-kube-dns-53 | Kubernetes | Accepted | svc name: kube-dns |
| | | | svc namespace: kube-system |
| | | | port: 53 |
| | | | |
+-------------------------------------+------------+----------+--------------------------------+
```
```
{"level":"info","ts":1629412127.2817044,"logger":"fds.v1.event_loop.fds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:46","msg":"end sync 4678640285947615906","version":"1.8.6"}
{"level":"warn","ts":1629412127.934898,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer.upstream_reconciler.reconciler","caller":"reconcile/reconciler.go:131","msg":"unable to read updated resource name:\"rabbitmq-newsdesk-nodes-4369\" namespace:\"gloo-system\" to get updated resource version; gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found","version":"1.8.6"}
{"level":"error","ts":1629412127.9350755,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"discovery/discovery.go:145","msg":"failed reconciling upstreams","version":"1.8.6","discovered_by":"kubernetesplugin","upstreams":17,"error":"reconciling resource rabbitmq-newsdesk-nodes-4369: gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found","errorVerbose":"gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found\nreconciling resource rabbitmq-newsdesk-nodes-4369\ngithub.com/solo-io/solo-kit/pkg/errors.Wrapf\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/errors/errors.go:15\ngithub.com/solo-io/solo-kit/pkg/api/v1/reconcile.(*reconciler).Reconcile\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/api/v1/reconcile/reconciler.go:43\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*upstreamReconciler).Reconcile\n\t/workspace/gloo/projects/gloo/pkg/api/v1/upstream_reconciler.sk.go:46\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:141\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"github.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:145\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84"}
{"level":"info","ts":1629412127.9352522,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:44","msg":"end sync 14320022036453992014","version":"1.8.6"}
{"level":"error","ts":1629412127.935421,"logger":"uds.v1.event_loop.uds","caller":"syncer/setup_syncer.go:99","msg":"error: event_loop.uds: reconciling resource rabbitmq-newsdesk-nodes-4369: gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found","version":"1.8.6","stacktrace":"github.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.RunUDS.func2\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/setup_syncer.go:99"}
{"level":"info","ts":1629412127.9373128,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:34","msg":"begin sync 14320022036453992014 (0 upstreams)","version":"1.8.6"}
{"level":"info","ts":1629412128.274076,"logger":"fds.v1.event_loop.fds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:35","msg":"begin sync 10922543970876210891 (5 upstreams)","version":"1.8.6"}
{"level":"info","ts":1629412128.2741578,"logger":"fds.v1.event_loop.fds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:46","msg":"end sync 10922543970876210891","version":"1.8.6"}
{"level":"warn","ts":1629412128.7236285,"logger":"uds.v1.event_loop.uds.upstream_reconciler.reconciler","caller":"reconcile/reconciler.go:131","msg":"unable to read updated resource name:\"rabbitmq-newsdesk-15672\" namespace:\"gloo-system\" to get updated resource version; gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6"}
{"level":"error","ts":1629412128.7237267,"logger":"uds.v1.event_loop.uds","caller":"discovery/discovery.go:145","msg":"failed reconciling upstreams","version":"1.8.6","discovered_by":"kubernetesplugin","upstreams":17,"error":"reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","errorVerbose":"gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found\nreconciling resource rabbitmq-newsdesk-15672\ngithub.com/solo-io/solo-kit/pkg/errors.Wrapf\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/errors/errors.go:15\ngithub.com/solo-io/solo-kit/pkg/api/v1/reconcile.(*reconciler).Reconcile\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/api/v1/reconcile/reconciler.go:43\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*upstreamReconciler).Reconcile\n\t/workspace/gloo/projects/gloo/pkg/api/v1/upstream_reconciler.sk.go:46\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:141\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).StartUds.func1\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:112\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"github.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:145\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).StartUds.func1\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:112"}
{"level":"error","ts":1629412128.7240367,"logger":"uds.v1.event_loop.uds","caller":"syncer/setup_syncer.go:99","msg":"error: event_loop.uds: error in uds plugin : reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6","stacktrace":"github.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.RunUDS.func2\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/setup_syncer.go:99"}
{"level":"warn","ts":1629412129.5764635,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer.upstream_reconciler.reconciler","caller":"reconcile/reconciler.go:131","msg":"unable to read updated resource name:\"rabbitmq-newsdesk-15672\" namespace:\"gloo-system\" to get updated resource version; gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6"}
{"level":"error","ts":1629412129.5765572,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"discovery/discovery.go:145","msg":"failed reconciling upstreams","version":"1.8.6","discovered_by":"kubernetesplugin","upstreams":17,"error":"reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","errorVerbose":"gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found\nreconciling resource rabbitmq-newsdesk-15672\ngithub.com/solo-io/solo-kit/pkg/errors.Wrapf\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/errors/errors.go:15\ngithub.com/solo-io/solo-kit/pkg/api/v1/reconcile.(*reconciler).Reconcile\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/api/v1/reconcile/reconciler.go:43\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*upstreamReconciler).Reconcile\n\t/workspace/gloo/projects/gloo/pkg/api/v1/upstream_reconciler.sk.go:46\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:141\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"github.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:145\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84"}
{"level":"info","ts":1629412129.5766547,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:44","msg":"end sync 14320022036453992014","version":"1.8.6"}
{"level":"error","ts":1629412129.5766995,"logger":"uds.v1.event_loop.uds","caller":"syncer/setup_syncer.go:99","msg":"error: event_loop.uds: reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6","stacktrace":"github.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.RunUDS.func2\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/setup_syncer.go:99"}
```
**Expected behavior**
Upstreams for rabbit mq are created
**Additional context**
Add any other context about the problem here, e.g.
- Gloo Edge version v1.8.6
- Kubernetes version 1.20
|
1.0
|
rabbitmq operator + gloo edge fails to create upstreams - **Describe the bug**
deploying the rabbitmq operator either before or after gloo edge is installed fails to fully create upstreams. The upstreams appear as pending and then are removed.
**To Reproduce**
Steps to reproduce the behavior:
# deploy rabbitmq CRDs and cluster operator
```
kubectl apply -f https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml
```
# create rabbitmq namespace
```
kubectl create ns codefirefood
```
# create rabbitmq cluster with persistent storage
```
kubectl apply -f- <<EOF
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: rabbitmqcluster
namespace: codefirefood
spec:
replicas: 3
persistence:
storageClassName: standard
storage: 5Gi
resources:
requests:
cpu: 100m
memory: 2Gi
limits:
cpu: 100m
memory: 2Gi
EOF
```
```
glooctl install gateway
glooctl get us
+----------+------+--------+---------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+----------+------+--------+---------+
+----------+------+--------+---------+
+-------------------------+------------+---------+----------------------------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+-------------------------+------------+---------+----------------------------+
| kube-system-kube-dns-53 | Kubernetes | Pending | svc name: kube-dns |
| | | | svc namespace: kube-system |
| | | | port: 53 |
| | | | |
| rabbitmq-newsdesk-15672 | Kubernetes | Pending | svc name: newsdesk |
| | | | svc namespace: rabbitmq |
| | | | port: 15672 |
| | | | |
+-------------------------+------------+---------+----------------------------+
+-------------------------------------+------------+----------+--------------------------------+
| UPSTREAM | TYPE | STATUS | DETAILS |
+-------------------------------------+------------+----------+--------------------------------+
| gloo-system-gateway-443 | Kubernetes | Pending | svc name: gateway |
| | | | svc namespace: gloo-system |
| | | | port: 443 |
| | | | |
| gloo-system-gateway-proxy-80 | Kubernetes | Accepted | svc name: gateway-proxy |
| | | | svc namespace: gloo-system |
| | | | port: 80 |
| | | | |
| kube-system-default-http-backend-80 | Kubernetes | Pending | svc name: |
| | | | default-http-backend |
| | | | svc namespace: kube-system |
| | | | port: 80 |
| | | | |
| kube-system-kube-dns-53 | Kubernetes | Accepted | svc name: kube-dns |
| | | | svc namespace: kube-system |
| | | | port: 53 |
| | | | |
+-------------------------------------+------------+----------+--------------------------------+
```
```
{"level":"info","ts":1629412127.2817044,"logger":"fds.v1.event_loop.fds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:46","msg":"end sync 4678640285947615906","version":"1.8.6"}
{"level":"warn","ts":1629412127.934898,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer.upstream_reconciler.reconciler","caller":"reconcile/reconciler.go:131","msg":"unable to read updated resource name:\"rabbitmq-newsdesk-nodes-4369\" namespace:\"gloo-system\" to get updated resource version; gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found","version":"1.8.6"}
{"level":"error","ts":1629412127.9350755,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"discovery/discovery.go:145","msg":"failed reconciling upstreams","version":"1.8.6","discovered_by":"kubernetesplugin","upstreams":17,"error":"reconciling resource rabbitmq-newsdesk-nodes-4369: gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found","errorVerbose":"gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found\nreconciling resource rabbitmq-newsdesk-nodes-4369\ngithub.com/solo-io/solo-kit/pkg/errors.Wrapf\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/errors/errors.go:15\ngithub.com/solo-io/solo-kit/pkg/api/v1/reconcile.(*reconciler).Reconcile\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/api/v1/reconcile/reconciler.go:43\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*upstreamReconciler).Reconcile\n\t/workspace/gloo/projects/gloo/pkg/api/v1/upstream_reconciler.sk.go:46\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:141\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"github.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:145\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84"}
{"level":"info","ts":1629412127.9352522,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:44","msg":"end sync 14320022036453992014","version":"1.8.6"}
{"level":"error","ts":1629412127.935421,"logger":"uds.v1.event_loop.uds","caller":"syncer/setup_syncer.go:99","msg":"error: event_loop.uds: reconciling resource rabbitmq-newsdesk-nodes-4369: gloo-system.rabbitmq-newsdesk-nodes-4369 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-nodes-4369\" not found","version":"1.8.6","stacktrace":"github.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.RunUDS.func2\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/setup_syncer.go:99"}
{"level":"info","ts":1629412127.9373128,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:34","msg":"begin sync 14320022036453992014 (0 upstreams)","version":"1.8.6"}
{"level":"info","ts":1629412128.274076,"logger":"fds.v1.event_loop.fds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:35","msg":"begin sync 10922543970876210891 (5 upstreams)","version":"1.8.6"}
{"level":"info","ts":1629412128.2741578,"logger":"fds.v1.event_loop.fds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:46","msg":"end sync 10922543970876210891","version":"1.8.6"}
{"level":"warn","ts":1629412128.7236285,"logger":"uds.v1.event_loop.uds.upstream_reconciler.reconciler","caller":"reconcile/reconciler.go:131","msg":"unable to read updated resource name:\"rabbitmq-newsdesk-15672\" namespace:\"gloo-system\" to get updated resource version; gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6"}
{"level":"error","ts":1629412128.7237267,"logger":"uds.v1.event_loop.uds","caller":"discovery/discovery.go:145","msg":"failed reconciling upstreams","version":"1.8.6","discovered_by":"kubernetesplugin","upstreams":17,"error":"reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","errorVerbose":"gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found\nreconciling resource rabbitmq-newsdesk-15672\ngithub.com/solo-io/solo-kit/pkg/errors.Wrapf\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/errors/errors.go:15\ngithub.com/solo-io/solo-kit/pkg/api/v1/reconcile.(*reconciler).Reconcile\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/api/v1/reconcile/reconciler.go:43\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*upstreamReconciler).Reconcile\n\t/workspace/gloo/projects/gloo/pkg/api/v1/upstream_reconciler.sk.go:46\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:141\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).StartUds.func1\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:112\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"github.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:145\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).StartUds.func1\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:112"}
{"level":"error","ts":1629412128.7240367,"logger":"uds.v1.event_loop.uds","caller":"syncer/setup_syncer.go:99","msg":"error: event_loop.uds: error in uds plugin : reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6","stacktrace":"github.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.RunUDS.func2\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/setup_syncer.go:99"}
{"level":"warn","ts":1629412129.5764635,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer.upstream_reconciler.reconciler","caller":"reconcile/reconciler.go:131","msg":"unable to read updated resource name:\"rabbitmq-newsdesk-15672\" namespace:\"gloo-system\" to get updated resource version; gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6"}
{"level":"error","ts":1629412129.5765572,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"discovery/discovery.go:145","msg":"failed reconciling upstreams","version":"1.8.6","discovered_by":"kubernetesplugin","upstreams":17,"error":"reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","errorVerbose":"gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found\nreconciling resource rabbitmq-newsdesk-15672\ngithub.com/solo-io/solo-kit/pkg/errors.Wrapf\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/errors/errors.go:15\ngithub.com/solo-io/solo-kit/pkg/api/v1/reconcile.(*reconciler).Reconcile\n\t/go/pkg/mod/github.com/solo-io/solo-kit@v0.20.2/pkg/api/v1/reconcile/reconciler.go:43\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*upstreamReconciler).Reconcile\n\t/workspace/gloo/projects/gloo/pkg/api/v1/upstream_reconciler.sk.go:46\ngithub.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:141\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1371","stacktrace":"github.com/solo-io/gloo/projects/gloo/pkg/discovery.(*UpstreamDiscovery).Resync\n\t/workspace/gloo/projects/gloo/pkg/discovery/discovery.go:145\ngithub.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.(*syncer).Sync\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/discovery_syncer.go:44\ngithub.com/solo-io/gloo/projects/gloo/pkg/api/v1.(*discoveryEventLoop).Run.func1\n\t/workspace/gloo/projects/gloo/pkg/api/v1/discovery_event_loop.sk.go:84"}
{"level":"info","ts":1629412129.5766547,"logger":"uds.v1.event_loop.uds.v1.event_loop.syncer","caller":"syncer/discovery_syncer.go:44","msg":"end sync 14320022036453992014","version":"1.8.6"}
{"level":"error","ts":1629412129.5766995,"logger":"uds.v1.event_loop.uds","caller":"syncer/setup_syncer.go:99","msg":"error: event_loop.uds: reconciling resource rabbitmq-newsdesk-15672: gloo-system.rabbitmq-newsdesk-15672 does not exist: upstreams.gloo.solo.io \"rabbitmq-newsdesk-15672\" not found","version":"1.8.6","stacktrace":"github.com/solo-io/gloo/projects/discovery/pkg/uds/syncer.RunUDS.func2\n\t/workspace/gloo/projects/discovery/pkg/uds/syncer/setup_syncer.go:99"}
```
**Expected behavior**
Upstreams for rabbit mq are created
**Additional context**
Add any other context about the problem here, e.g.
- Gloo Edge version v1.8.6
- Kubernetes version 1.20
|
non_defect
|
rabbitmq operator gloo edge fails to create upstreams describe the bug deploying the rabbitmq operator either before or after gloo edge is installed fails to fully create upstreams the upstreams appear as pending and then are removed to reproduce steps to reproduce the behavior deploy rabbitmq crds and cluster operator kubectl apply f create rabbitmq namespace kubectl create ns codefirefood create rabbitmq cluster with persistent storage kubectl apply f eof apiversion rabbitmq com kind rabbitmqcluster metadata name rabbitmqcluster namespace codefirefood spec replicas persistence storageclassname standard storage resources requests cpu memory limits cpu memory eof glooctl install gateway glooctl get us upstream type status details upstream type status details kube system kube dns kubernetes pending svc name kube dns svc namespace kube system port rabbitmq newsdesk kubernetes pending svc name newsdesk svc namespace rabbitmq port upstream type status details gloo system gateway kubernetes pending svc name gateway svc namespace gloo system port gloo system gateway proxy kubernetes accepted svc name gateway proxy svc namespace gloo system port kube system default http backend kubernetes pending svc name default http backend svc namespace kube system port kube system kube dns kubernetes accepted svc name kube dns svc namespace kube system port level info ts logger fds event loop fds event loop syncer caller syncer discovery syncer go msg end sync version level warn ts logger uds event loop uds event loop syncer upstream reconciler reconciler caller reconcile reconciler go msg unable to read updated resource name rabbitmq newsdesk nodes namespace gloo system to get updated resource version gloo system rabbitmq newsdesk nodes does not exist upstreams gloo solo io rabbitmq newsdesk nodes not found version level error ts logger uds event loop uds event loop syncer caller discovery discovery go msg failed reconciling upstreams version discovered by kubernetesplugin upstreams error reconciling resource rabbitmq newsdesk nodes gloo system rabbitmq newsdesk nodes does not exist upstreams gloo solo io rabbitmq newsdesk nodes not found errorverbose gloo system rabbitmq newsdesk nodes does not exist upstreams gloo solo io rabbitmq newsdesk nodes not found nreconciling resource rabbitmq newsdesk nodes ngithub com solo io solo kit pkg errors wrapf n t go pkg mod github com solo io solo kit pkg errors errors go ngithub com solo io solo kit pkg api reconcile reconciler reconcile n t go pkg mod github com solo io solo kit pkg api reconcile reconciler go ngithub com solo io gloo projects gloo pkg api upstreamreconciler reconcile n t workspace gloo projects gloo pkg api upstream reconciler sk go ngithub com solo io gloo projects gloo pkg discovery upstreamdiscovery resync n t workspace gloo projects gloo pkg discovery discovery go ngithub com solo io gloo projects discovery pkg uds syncer syncer sync n t workspace gloo projects discovery pkg uds syncer discovery syncer go ngithub com solo io gloo projects gloo pkg api discoveryeventloop run n t workspace gloo projects gloo pkg api discovery event loop sk go nruntime goexit n t usr local go src runtime asm s stacktrace github com solo io gloo projects gloo pkg discovery upstreamdiscovery resync n t workspace gloo projects gloo pkg discovery discovery go ngithub com solo io gloo projects discovery pkg uds syncer syncer sync n t workspace gloo projects discovery pkg uds syncer discovery syncer go ngithub com solo io gloo projects gloo pkg api discoveryeventloop run n t workspace gloo projects gloo pkg api discovery event loop sk go level info ts logger uds event loop uds event loop syncer caller syncer discovery syncer go msg end sync version level error ts logger uds event loop uds caller syncer setup syncer go msg error event loop uds reconciling resource rabbitmq newsdesk nodes gloo system rabbitmq newsdesk nodes does not exist upstreams gloo solo io rabbitmq newsdesk nodes not found version stacktrace github com solo io gloo projects discovery pkg uds syncer runuds n t workspace gloo projects discovery pkg uds syncer setup syncer go level info ts logger uds event loop uds event loop syncer caller syncer discovery syncer go msg begin sync upstreams version level info ts logger fds event loop fds event loop syncer caller syncer discovery syncer go msg begin sync upstreams version level info ts logger fds event loop fds event loop syncer caller syncer discovery syncer go msg end sync version level warn ts logger uds event loop uds upstream reconciler reconciler caller reconcile reconciler go msg unable to read updated resource name rabbitmq newsdesk namespace gloo system to get updated resource version gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found version level error ts logger uds event loop uds caller discovery discovery go msg failed reconciling upstreams version discovered by kubernetesplugin upstreams error reconciling resource rabbitmq newsdesk gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found errorverbose gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found nreconciling resource rabbitmq newsdesk ngithub com solo io solo kit pkg errors wrapf n t go pkg mod github com solo io solo kit pkg errors errors go ngithub com solo io solo kit pkg api reconcile reconciler reconcile n t go pkg mod github com solo io solo kit pkg api reconcile reconciler go ngithub com solo io gloo projects gloo pkg api upstreamreconciler reconcile n t workspace gloo projects gloo pkg api upstream reconciler sk go ngithub com solo io gloo projects gloo pkg discovery upstreamdiscovery resync n t workspace gloo projects gloo pkg discovery discovery go ngithub com solo io gloo projects gloo pkg discovery upstreamdiscovery startuds n t workspace gloo projects gloo pkg discovery discovery go nruntime goexit n t usr local go src runtime asm s stacktrace github com solo io gloo projects gloo pkg discovery upstreamdiscovery resync n t workspace gloo projects gloo pkg discovery discovery go ngithub com solo io gloo projects gloo pkg discovery upstreamdiscovery startuds n t workspace gloo projects gloo pkg discovery discovery go level error ts logger uds event loop uds caller syncer setup syncer go msg error event loop uds error in uds plugin reconciling resource rabbitmq newsdesk gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found version stacktrace github com solo io gloo projects discovery pkg uds syncer runuds n t workspace gloo projects discovery pkg uds syncer setup syncer go level warn ts logger uds event loop uds event loop syncer upstream reconciler reconciler caller reconcile reconciler go msg unable to read updated resource name rabbitmq newsdesk namespace gloo system to get updated resource version gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found version level error ts logger uds event loop uds event loop syncer caller discovery discovery go msg failed reconciling upstreams version discovered by kubernetesplugin upstreams error reconciling resource rabbitmq newsdesk gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found errorverbose gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found nreconciling resource rabbitmq newsdesk ngithub com solo io solo kit pkg errors wrapf n t go pkg mod github com solo io solo kit pkg errors errors go ngithub com solo io solo kit pkg api reconcile reconciler reconcile n t go pkg mod github com solo io solo kit pkg api reconcile reconciler go ngithub com solo io gloo projects gloo pkg api upstreamreconciler reconcile n t workspace gloo projects gloo pkg api upstream reconciler sk go ngithub com solo io gloo projects gloo pkg discovery upstreamdiscovery resync n t workspace gloo projects gloo pkg discovery discovery go ngithub com solo io gloo projects discovery pkg uds syncer syncer sync n t workspace gloo projects discovery pkg uds syncer discovery syncer go ngithub com solo io gloo projects gloo pkg api discoveryeventloop run n t workspace gloo projects gloo pkg api discovery event loop sk go nruntime goexit n t usr local go src runtime asm s stacktrace github com solo io gloo projects gloo pkg discovery upstreamdiscovery resync n t workspace gloo projects gloo pkg discovery discovery go ngithub com solo io gloo projects discovery pkg uds syncer syncer sync n t workspace gloo projects discovery pkg uds syncer discovery syncer go ngithub com solo io gloo projects gloo pkg api discoveryeventloop run n t workspace gloo projects gloo pkg api discovery event loop sk go level info ts logger uds event loop uds event loop syncer caller syncer discovery syncer go msg end sync version level error ts logger uds event loop uds caller syncer setup syncer go msg error event loop uds reconciling resource rabbitmq newsdesk gloo system rabbitmq newsdesk does not exist upstreams gloo solo io rabbitmq newsdesk not found version stacktrace github com solo io gloo projects discovery pkg uds syncer runuds n t workspace gloo projects discovery pkg uds syncer setup syncer go expected behavior upstreams for rabbit mq are created additional context add any other context about the problem here e g gloo edge version kubernetes version
| 0
|
474,124
| 13,653,108,622
|
IssuesEvent
|
2020-09-27 11:03:01
|
STAMACODING/RSA-App
|
https://api.github.com/repos/STAMACODING/RSA-App
|
closed
|
Neues Log Level hinzufügen oder vielleicht noch mehr?
|
enhancement / feature high priority log
|
Im Moment haben wir folgende Log Levels:
1. Error
2. Warning
3. Debug
4. Test
Würde mir noch ein Log-Level wünschen, was eine höhere Priorität als Debug hat, aber eine niedrigere als Warning. Also so:
1. Error
2. Warning
3. _Info_
4. Debug
5. Test
Als ich nach einem Namen für das neue Log Level gesucht habe, bin im Internet auch auf Standards für Log-Systeme gestoßen, die von Firmen genutzt werden und bin da auf [das](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_10.2.2/com.ibm.swg.ba.cognos.ug_rtm_wb.10.2.2.doc/c_n30e74.html) gestoßen.
Mir persönlich würde das System von IBM **deutlich mehr** gefallen, da es **mächtiger** ist. Man kann die Logs semantisch mehr nach ihrer Wichtigkeit ordnen. Auch gefällt mir die Unterscheidung zwischen "Fatal" und "Error". Denn manchmal treten eben Fehler auf, die man abfangen kann, manchmal führen Fehler aber auch zwanghaft zum Programmabsturz. Auch die Einteilung der Debug-Stufen ist top. Denn viele Logs sind nur fürs Debuggen da, manche davon jedoch nur für die "Detaillprogrammierung", manche für das grobe Ganze wichtig. So kann man, wenn man eine Funktion zum Filtern der Logs hat (siehe [hier](https://github.com/STAMACODING/RSA-App/issues/43#issue-694748507)), den Log viel besser nachvollziehen.
1. Fatal
- Logger.fatal(String class, String additionalMessage, Exception e) (Exception e als variables Argument)
- führt immer zum Absturz
- Konsole: e.printStrackTrace() und dann System.exit(-1)
- Datei: getStackTrace(e)* und nach dem Schreiben in die Datei System.exit(-1)
2. Error
- Logger.error(String class, String additionalMessage, Exception e) (Exception e als variables Argument)
- führt nie zum Absturz
- Konsole: nur e.printStrackTrace()
- Datei: nur getStackTrace(e)*
3. Warn
- Logger.warn(String class, String message)
4. Info
- Logger.info(String class, String message)
5. Debug - Low
- Logger.debugLow(String class, String message)
6. Debug - Medium
- Logger.debugMedium(String class, String message)
7. Debug - High
- Logger.debugHigh(String class, String message)
Falls ihr findet, dass die Funktionsnamen zu lang sind, würde ja auch f, e, w, i, dL, dM und dH als Abkürzung reichen, aber das wäre eure Sache. Würde aber eh erstmal eure Meinung zum Vorschlag hören^^.
*Diese Funktion ist cool, weil damit der StackTrace eines Fehlers als String abfangbar ist. So kann man ihn auch ganz leicht in die Log-Dateien schreiben:
```java
public static String getStackTrace(final Throwable throwable) {
final StringWriter sw = new StringWriter();
final PrintWriter pw = new PrintWriter(sw, true);
throwable.printStackTrace(pw);
return sw.getBuffer().toString();
}
(...)
String s = getStackTrace(e);
(...)
```
|
1.0
|
Neues Log Level hinzufügen oder vielleicht noch mehr? - Im Moment haben wir folgende Log Levels:
1. Error
2. Warning
3. Debug
4. Test
Würde mir noch ein Log-Level wünschen, was eine höhere Priorität als Debug hat, aber eine niedrigere als Warning. Also so:
1. Error
2. Warning
3. _Info_
4. Debug
5. Test
Als ich nach einem Namen für das neue Log Level gesucht habe, bin im Internet auch auf Standards für Log-Systeme gestoßen, die von Firmen genutzt werden und bin da auf [das](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_10.2.2/com.ibm.swg.ba.cognos.ug_rtm_wb.10.2.2.doc/c_n30e74.html) gestoßen.
Mir persönlich würde das System von IBM **deutlich mehr** gefallen, da es **mächtiger** ist. Man kann die Logs semantisch mehr nach ihrer Wichtigkeit ordnen. Auch gefällt mir die Unterscheidung zwischen "Fatal" und "Error". Denn manchmal treten eben Fehler auf, die man abfangen kann, manchmal führen Fehler aber auch zwanghaft zum Programmabsturz. Auch die Einteilung der Debug-Stufen ist top. Denn viele Logs sind nur fürs Debuggen da, manche davon jedoch nur für die "Detaillprogrammierung", manche für das grobe Ganze wichtig. So kann man, wenn man eine Funktion zum Filtern der Logs hat (siehe [hier](https://github.com/STAMACODING/RSA-App/issues/43#issue-694748507)), den Log viel besser nachvollziehen.
1. Fatal
- Logger.fatal(String class, String additionalMessage, Exception e) (Exception e als variables Argument)
- führt immer zum Absturz
- Konsole: e.printStrackTrace() und dann System.exit(-1)
- Datei: getStackTrace(e)* und nach dem Schreiben in die Datei System.exit(-1)
2. Error
- Logger.error(String class, String additionalMessage, Exception e) (Exception e als variables Argument)
- führt nie zum Absturz
- Konsole: nur e.printStrackTrace()
- Datei: nur getStackTrace(e)*
3. Warn
- Logger.warn(String class, String message)
4. Info
- Logger.info(String class, String message)
5. Debug - Low
- Logger.debugLow(String class, String message)
6. Debug - Medium
- Logger.debugMedium(String class, String message)
7. Debug - High
- Logger.debugHigh(String class, String message)
Falls ihr findet, dass die Funktionsnamen zu lang sind, würde ja auch f, e, w, i, dL, dM und dH als Abkürzung reichen, aber das wäre eure Sache. Würde aber eh erstmal eure Meinung zum Vorschlag hören^^.
*Diese Funktion ist cool, weil damit der StackTrace eines Fehlers als String abfangbar ist. So kann man ihn auch ganz leicht in die Log-Dateien schreiben:
```java
public static String getStackTrace(final Throwable throwable) {
final StringWriter sw = new StringWriter();
final PrintWriter pw = new PrintWriter(sw, true);
throwable.printStackTrace(pw);
return sw.getBuffer().toString();
}
(...)
String s = getStackTrace(e);
(...)
```
|
non_defect
|
neues log level hinzufügen oder vielleicht noch mehr im moment haben wir folgende log levels error warning debug test würde mir noch ein log level wünschen was eine höhere priorität als debug hat aber eine niedrigere als warning also so error warning info debug test als ich nach einem namen für das neue log level gesucht habe bin im internet auch auf standards für log systeme gestoßen die von firmen genutzt werden und bin da auf gestoßen mir persönlich würde das system von ibm deutlich mehr gefallen da es mächtiger ist man kann die logs semantisch mehr nach ihrer wichtigkeit ordnen auch gefällt mir die unterscheidung zwischen fatal und error denn manchmal treten eben fehler auf die man abfangen kann manchmal führen fehler aber auch zwanghaft zum programmabsturz auch die einteilung der debug stufen ist top denn viele logs sind nur fürs debuggen da manche davon jedoch nur für die detaillprogrammierung manche für das grobe ganze wichtig so kann man wenn man eine funktion zum filtern der logs hat siehe den log viel besser nachvollziehen fatal logger fatal string class string additionalmessage exception e exception e als variables argument führt immer zum absturz konsole e printstracktrace und dann system exit datei getstacktrace e und nach dem schreiben in die datei system exit error logger error string class string additionalmessage exception e exception e als variables argument führt nie zum absturz konsole nur e printstracktrace datei nur getstacktrace e warn logger warn string class string message info logger info string class string message debug low logger debuglow string class string message debug medium logger debugmedium string class string message debug high logger debughigh string class string message falls ihr findet dass die funktionsnamen zu lang sind würde ja auch f e w i dl dm und dh als abkürzung reichen aber das wäre eure sache würde aber eh erstmal eure meinung zum vorschlag hören diese funktion ist cool weil damit der stacktrace eines fehlers als string abfangbar ist so kann man ihn auch ganz leicht in die log dateien schreiben java public static string getstacktrace final throwable throwable final stringwriter sw new stringwriter final printwriter pw new printwriter sw true throwable printstacktrace pw return sw getbuffer tostring string s getstacktrace e
| 0
|
183,328
| 14,938,617,861
|
IssuesEvent
|
2021-01-25 15:58:52
|
skypyproject/skypy
|
https://api.github.com/repos/skypyproject/skypy
|
closed
|
Contributor guidelines to documentation
|
documentation enhancement
|
## Description
Move contributor guidelines to documentation under Developer section.
|
1.0
|
Contributor guidelines to documentation - ## Description
Move contributor guidelines to documentation under Developer section.
|
non_defect
|
contributor guidelines to documentation description move contributor guidelines to documentation under developer section
| 0
|
16,608
| 11,136,498,562
|
IssuesEvent
|
2019-12-20 16:44:31
|
RHEAGROUP/CDP4-IME-Community-Edition
|
https://api.github.com/repos/RHEAGROUP/CDP4-IME-Community-Edition
|
closed
|
The scale of the simpleParameterType should be displayed in the requirements browser, in brackets next to the shortName.
|
minor trivial usability
|
The scale is currently not displayed in the browser.
|
True
|
The scale of the simpleParameterType should be displayed in the requirements browser, in brackets next to the shortName. - The scale is currently not displayed in the browser.
|
non_defect
|
the scale of the simpleparametertype should be displayed in the requirements browser in brackets next to the shortname the scale is currently not displayed in the browser
| 0
|
754,249
| 26,378,543,262
|
IssuesEvent
|
2023-01-12 06:12:11
|
idom-team/idom
|
https://api.github.com/repos/idom-team/idom
|
closed
|
Document `should_render`
|
type: docs priority: 3 (low)
|
### Current Situation
Currently we do not explain how the user can utilize `should_render` to conditionally render a component.
Related issue: #738
### Proposed Actions
Either document the usage of `should_render`, or include a `ConditionalRender` helper utility that automatically does this.
|
1.0
|
Document `should_render` - ### Current Situation
Currently we do not explain how the user can utilize `should_render` to conditionally render a component.
Related issue: #738
### Proposed Actions
Either document the usage of `should_render`, or include a `ConditionalRender` helper utility that automatically does this.
|
non_defect
|
document should render current situation currently we do not explain how the user can utilize should render to conditionally render a component related issue proposed actions either document the usage of should render or include a conditionalrender helper utility that automatically does this
| 0
|
121,776
| 26,031,490,533
|
IssuesEvent
|
2022-12-21 21:50:05
|
Clueless-Community/seamless-ui
|
https://api.github.com/repos/Clueless-Community/seamless-ui
|
closed
|
Create a contact-us-map.html
|
codepeak 22 issue:3
|
One need to make this component using `HTML` and `Tailwind CSS`. I would suggest to use [Tailwind Playgrounds](https://play.tailwindcss.com/) to make things faster and quicker.
Here is a reference to the component.

After building the component please raise a PR with a screenshot of the component and add the component in `form-group/src/contact-us-map.html`.
If you need to use any icon please use it from [Hero Icons](https://heroicons.com/)
Good luck.
|
1.0
|
Create a contact-us-map.html - One need to make this component using `HTML` and `Tailwind CSS`. I would suggest to use [Tailwind Playgrounds](https://play.tailwindcss.com/) to make things faster and quicker.
Here is a reference to the component.

After building the component please raise a PR with a screenshot of the component and add the component in `form-group/src/contact-us-map.html`.
If you need to use any icon please use it from [Hero Icons](https://heroicons.com/)
Good luck.
|
non_defect
|
create a contact us map html one need to make this component using html and tailwind css i would suggest to use to make things faster and quicker here is a reference to the component after building the component please raise a pr with a screenshot of the component and add the component in form group src contact us map html if you need to use any icon please use it from good luck
| 0
|
692,028
| 23,720,622,947
|
IssuesEvent
|
2022-08-30 15:04:13
|
chaotic-aur/packages
|
https://api.github.com/repos/chaotic-aur/packages
|
closed
|
[Request] micropad
|
request:new-pkg priority:low
|
### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/micropad
### Utility this package has for you
Quite a unique notes taking app. You can even paste videos, audio files and such into it.
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a great amount.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_
|
1.0
|
[Request] micropad - ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/micropad
### Utility this package has for you
Quite a unique notes taking app. You can even paste videos, audio files and such into it.
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a great amount.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_
|
non_defect
|
micropad link to the package s in the aur utility this package has for you quite a unique notes taking app you can even paste videos audio files and such into it do you consider the package s to be useful for every chaotic aur user no but for a great amount do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response
| 0
|
22,136
| 3,602,976,379
|
IssuesEvent
|
2016-02-03 17:22:33
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
RequestContext#update validation should skip unrendered components
|
defect enhancement
|
If you update a compoent, it must be rendered. So skipping unrendered prevents error and improves performance.
|
1.0
|
RequestContext#update validation should skip unrendered components - If you update a compoent, it must be rendered. So skipping unrendered prevents error and improves performance.
|
defect
|
requestcontext update validation should skip unrendered components if you update a compoent it must be rendered so skipping unrendered prevents error and improves performance
| 1
|
113,473
| 9,647,843,373
|
IssuesEvent
|
2019-05-17 14:50:50
|
magnumripper/JohnTheRipper
|
https://api.github.com/repos/magnumripper/JohnTheRipper
|
opened
|
More CI tests
|
RFC / discussion testing
|
Brainstorming. We now have a free container for the "more-tests".
We could add an MPI build (although only testing locally much like --fork).
Do we currently test a legacy build anywhere?
Anything else we should test? Perhaps running the Test Suite in `-internal` mode (disabling the few formats that give false positives)?
|
1.0
|
More CI tests - Brainstorming. We now have a free container for the "more-tests".
We could add an MPI build (although only testing locally much like --fork).
Do we currently test a legacy build anywhere?
Anything else we should test? Perhaps running the Test Suite in `-internal` mode (disabling the few formats that give false positives)?
|
non_defect
|
more ci tests brainstorming we now have a free container for the more tests we could add an mpi build although only testing locally much like fork do we currently test a legacy build anywhere anything else we should test perhaps running the test suite in internal mode disabling the few formats that give false positives
| 0
|
19,013
| 3,122,792,651
|
IssuesEvent
|
2015-09-06 21:24:17
|
pemsley/coot
|
https://api.github.com/repos/pemsley/coot
|
closed
|
the configure script do not fail if swig is not available
|
auto-migrated Priority-Medium Type-Defect
|
```
Hello,
without swig the configure process run without error
but make gives this error
make[1]: entrant dans le répertoire « /home/picca/Projets/coot/src »
swig -o coot_wrap_guile_pre_gtk2.cc -DCOOT_USE_GTK2_INTERFACE
-DHAVE_SYS_STDTYPES_H=0 -DUSE_LIBCURL -I../src -guile -c++ ../src/coot.i
/bin/bash: swig : commande introuvable
so it seems that swig is mendatory.
Can you teach configure.in to fail if swig is not available or do not compile
the swig code if it is not available.
thanks
Frederic
```
Original issue reported on code.google.com by `pi...@synchrotron-soleil.fr` on 13 Jan 2013 at 7:05
|
1.0
|
the configure script do not fail if swig is not available - ```
Hello,
without swig the configure process run without error
but make gives this error
make[1]: entrant dans le répertoire « /home/picca/Projets/coot/src »
swig -o coot_wrap_guile_pre_gtk2.cc -DCOOT_USE_GTK2_INTERFACE
-DHAVE_SYS_STDTYPES_H=0 -DUSE_LIBCURL -I../src -guile -c++ ../src/coot.i
/bin/bash: swig : commande introuvable
so it seems that swig is mendatory.
Can you teach configure.in to fail if swig is not available or do not compile
the swig code if it is not available.
thanks
Frederic
```
Original issue reported on code.google.com by `pi...@synchrotron-soleil.fr` on 13 Jan 2013 at 7:05
|
defect
|
the configure script do not fail if swig is not available hello without swig the configure process run without error but make gives this error make entrant dans le répertoire « home picca projets coot src » swig o coot wrap guile pre cc dcoot use interface dhave sys stdtypes h duse libcurl i src guile c src coot i bin bash swig commande introuvable so it seems that swig is mendatory can you teach configure in to fail if swig is not available or do not compile the swig code if it is not available thanks frederic original issue reported on code google com by pi synchrotron soleil fr on jan at
| 1
|
293,900
| 22,097,856,545
|
IssuesEvent
|
2022-06-01 11:36:12
|
dev-heeseok/web-react
|
https://api.github.com/repos/dev-heeseok/web-react
|
closed
|
feat: React-Bootstrap Navbar Tutorial 추가 (Bootstrap 따라하기 샘플)
|
documentation enhancement
|
React-Bootstrap 을 사용하기 위해 샘플을 추가하는 방법을 공유하려고 한다.
- [ ] Bootstrap Example 사용방법 작성하기
- [ ] Navbar 를 이용하여 Web Pages 전환하기
|
1.0
|
feat: React-Bootstrap Navbar Tutorial 추가 (Bootstrap 따라하기 샘플) - React-Bootstrap 을 사용하기 위해 샘플을 추가하는 방법을 공유하려고 한다.
- [ ] Bootstrap Example 사용방법 작성하기
- [ ] Navbar 를 이용하여 Web Pages 전환하기
|
non_defect
|
feat react bootstrap navbar tutorial 추가 bootstrap 따라하기 샘플 react bootstrap 을 사용하기 위해 샘플을 추가하는 방법을 공유하려고 한다 bootstrap example 사용방법 작성하기 navbar 를 이용하여 web pages 전환하기
| 0
|
27,169
| 4,894,105,847
|
IssuesEvent
|
2016-11-19 03:55:10
|
prettydiff/prettydiff
|
https://api.github.com/repos/prettydiff/prettydiff
|
opened
|
Breaking on JavaScript argument list
|
Beautification Defect Underway
|
fs
.writeFile(dataA.finalpath + ending, "", function pdNodeLocal__fileWrite_writing_writeFileEmpty(err) {
if (err !== null) {
console.log(lf + "Error writing empty output." + lf);
console.log(err);
} else if (method === "file" && options.endquietly !== "quiet") {
console.log(lf + "Empty file successfully written to file.");
}
total[1] += 1;
if (total[1] === total[0]) {
ender();
}
});
|
1.0
|
Breaking on JavaScript argument list - fs
.writeFile(dataA.finalpath + ending, "", function pdNodeLocal__fileWrite_writing_writeFileEmpty(err) {
if (err !== null) {
console.log(lf + "Error writing empty output." + lf);
console.log(err);
} else if (method === "file" && options.endquietly !== "quiet") {
console.log(lf + "Empty file successfully written to file.");
}
total[1] += 1;
if (total[1] === total[0]) {
ender();
}
});
|
defect
|
breaking on javascript argument list fs writefile dataa finalpath ending function pdnodelocal filewrite writing writefileempty err if err null console log lf error writing empty output lf console log err else if method file options endquietly quiet console log lf empty file successfully written to file total if total total ender
| 1
|
489,714
| 14,111,606,999
|
IssuesEvent
|
2020-11-07 00:53:23
|
chingu-voyages/v25-geckos-team-01
|
https://api.github.com/repos/chingu-voyages/v25-geckos-team-01
|
opened
|
Delete an existing Nonprofit task
|
UserStory priority:must_have
|
**User Story Description**
As a Nonprofit user
I want to delete an existing task previously created by my organization
So myself and potential volunteers won't waste time on it
**Steps to Follow (optional)**
- [ ] TBD
- [ ] Additional steps as necessary
**Additional Considerations**
Any supplemental information including unresolved questions, links to external resources, screenshots, etc.
|
1.0
|
Delete an existing Nonprofit task - **User Story Description**
As a Nonprofit user
I want to delete an existing task previously created by my organization
So myself and potential volunteers won't waste time on it
**Steps to Follow (optional)**
- [ ] TBD
- [ ] Additional steps as necessary
**Additional Considerations**
Any supplemental information including unresolved questions, links to external resources, screenshots, etc.
|
non_defect
|
delete an existing nonprofit task user story description as a nonprofit user i want to delete an existing task previously created by my organization so myself and potential volunteers won t waste time on it steps to follow optional tbd additional steps as necessary additional considerations any supplemental information including unresolved questions links to external resources screenshots etc
| 0
|
76,337
| 21,367,725,677
|
IssuesEvent
|
2022-04-20 04:52:11
|
PDP-10/its
|
https://api.github.com/repos/PDP-10/its
|
opened
|
Update SIMH configuration
|
build process
|
- Replace `at tty` with `attach dz` for simh, see #2099.
- `set mta enabled` for pdp10-kl.
|
1.0
|
Update SIMH configuration - - Replace `at tty` with `attach dz` for simh, see #2099.
- `set mta enabled` for pdp10-kl.
|
non_defect
|
update simh configuration replace at tty with attach dz for simh see set mta enabled for kl
| 0
|
27,216
| 4,932,559,551
|
IssuesEvent
|
2016-11-28 14:04:12
|
Guake/guake
|
https://api.github.com/repos/Guake/guake
|
closed
|
Custom tab names lost after opening new tab
|
Priority:High Type: Defect
|
If I have some tabs, and then I rename them (double click on the tab name, in the bottom, and then type new name), and then I open a new tab (CTRL+SHIFT+T) all tab names get reset to the default name.
The correct behavior should be: the tab names are always kept unchanged if they have been customized
|
1.0
|
Custom tab names lost after opening new tab - If I have some tabs, and then I rename them (double click on the tab name, in the bottom, and then type new name), and then I open a new tab (CTRL+SHIFT+T) all tab names get reset to the default name.
The correct behavior should be: the tab names are always kept unchanged if they have been customized
|
defect
|
custom tab names lost after opening new tab if i have some tabs and then i rename them double click on the tab name in the bottom and then type new name and then i open a new tab ctrl shift t all tab names get reset to the default name the correct behavior should be the tab names are always kept unchanged if they have been customized
| 1
|
532,242
| 15,531,925,997
|
IssuesEvent
|
2021-03-14 02:28:19
|
eidan66/Saikai
|
https://api.github.com/repos/eidan66/Saikai
|
closed
|
Feature - add date to position
|
enhancement priority 2
|
When i add position i want to see the date i added it (or initial with custom date).
|
1.0
|
Feature - add date to position - When i add position i want to see the date i added it (or initial with custom date).
|
non_defect
|
feature add date to position when i add position i want to see the date i added it or initial with custom date
| 0
|
134,558
| 19,270,577,520
|
IssuesEvent
|
2021-12-10 04:28:53
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Bug]: Visible toggle on page properties animates from off to on when pages are set to visible
|
Bug Design System UX Improvement Low Release Platform Pod New Developers Pod
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Visible toggle animates from off to on position when settings button is clicked. Ideally, it should be in the on position when the page is set to visible and not toggle when user navigates to page settings.
[](https://www.loom.com/share/5a82a3280a1f4fc1a2422f023313fd67)
### Steps To Reproduce
1. Go to page properties and click on settings icon and observe the toggle move from off state to on.
### Environment
Release
### Version
Cloud
|
1.0
|
[Bug]: Visible toggle on page properties animates from off to on when pages are set to visible - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Visible toggle animates from off to on position when settings button is clicked. Ideally, it should be in the on position when the page is set to visible and not toggle when user navigates to page settings.
[](https://www.loom.com/share/5a82a3280a1f4fc1a2422f023313fd67)
### Steps To Reproduce
1. Go to page properties and click on settings icon and observe the toggle move from off state to on.
### Environment
Release
### Version
Cloud
|
non_defect
|
visible toggle on page properties animates from off to on when pages are set to visible is there an existing issue for this i have searched the existing issues current behavior visible toggle animates from off to on position when settings button is clicked ideally it should be in the on position when the page is set to visible and not toggle when user navigates to page settings steps to reproduce go to page properties and click on settings icon and observe the toggle move from off state to on environment release version cloud
| 0
|
13,020
| 2,732,875,394
|
IssuesEvent
|
2015-04-17 09:54:47
|
tiku01/oryx-editor
|
https://api.github.com/repos/tiku01/oryx-editor
|
closed
|
shape menu disabled for elements having no morphing rules
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Create a stencilset with connection rules
2. add morphing rules for some (not all) the elements having connection rules
What is the expected output?
All elements having the connection rules should show the shape menu
What do you see instead?
The elements having connection rules *and no morphing rules* do not show the
shape menu when the user select them.
Please provide any additional information below.
A hack that fixes this problem is provided here :
https://gist.github.com/1078057
This bug is due to the wrong use of
ORYX.Core.StencilSet.Rules.prototype.containsMorphingRules in
ORYX.Plugins.ShapeMenuPlugin.showStencilButtons.
Indeed, containsMorphingRules returns the existance of morphing rules for any
elements in the stencilsets. But, what we want in showStencilButtons is a
boolean (isMorphing) that is true when a morphing rule exists for the
*selected* elements.
```
Original issue reported on code.google.com by `florent....@gmail.com` on 12 Jul 2011 at 2:23
|
1.0
|
shape menu disabled for elements having no morphing rules - ```
What steps will reproduce the problem?
1. Create a stencilset with connection rules
2. add morphing rules for some (not all) the elements having connection rules
What is the expected output?
All elements having the connection rules should show the shape menu
What do you see instead?
The elements having connection rules *and no morphing rules* do not show the
shape menu when the user select them.
Please provide any additional information below.
A hack that fixes this problem is provided here :
https://gist.github.com/1078057
This bug is due to the wrong use of
ORYX.Core.StencilSet.Rules.prototype.containsMorphingRules in
ORYX.Plugins.ShapeMenuPlugin.showStencilButtons.
Indeed, containsMorphingRules returns the existance of morphing rules for any
elements in the stencilsets. But, what we want in showStencilButtons is a
boolean (isMorphing) that is true when a morphing rule exists for the
*selected* elements.
```
Original issue reported on code.google.com by `florent....@gmail.com` on 12 Jul 2011 at 2:23
|
defect
|
shape menu disabled for elements having no morphing rules what steps will reproduce the problem create a stencilset with connection rules add morphing rules for some not all the elements having connection rules what is the expected output all elements having the connection rules should show the shape menu what do you see instead the elements having connection rules and no morphing rules do not show the shape menu when the user select them please provide any additional information below a hack that fixes this problem is provided here this bug is due to the wrong use of oryx core stencilset rules prototype containsmorphingrules in oryx plugins shapemenuplugin showstencilbuttons indeed containsmorphingrules returns the existance of morphing rules for any elements in the stencilsets but what we want in showstencilbuttons is a boolean ismorphing that is true when a morphing rule exists for the selected elements original issue reported on code google com by florent gmail com on jul at
| 1
|
120,016
| 15,690,181,962
|
IssuesEvent
|
2021-03-25 16:26:47
|
carbon-design-system/carbon-for-ibm-dotcom-website
|
https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom-website
|
closed
|
⭐️ Template // Website content crafting and publishing: (Guidelines/ Name of the new guideline name)
|
content design icebox website: Carbon for ibm.com
|
## Objective
Provide Carbon for IBM.com adopter the best practices and guidance they need to adopt Carbon for IBM.com.
## General publishing process
Please make sure you always start the content crafting process by viewing the **[2020 website content publishing process box document](https://ibm.box.com/s/6oiqsvhxtchi1lfozyuw1soje67unbmj)** first.
## Publishing for Guidelines section website page
- Content crafting starts as soon as Initial research and publishing agreements are accepted by the DDS leadership team.
- The content can be at the MVP level. MVP means the content meets the basic needs of an adopter for learning. It does not mean the content crafting will or should stop at this stage. Future enhancement should be working on with a separate task issue.
- MVP content publishing template and requirement can be found in the Publishing template and best practices/[_General page folder](https://ibm.box.com/s/ts8m1a3bb5z5vwexrmh0c1ztthdm8nej).
- Content publishing to the live website can be code or markdowns. If you need guidance on how to start with markdowns, please see the step by step instruction in the [Editing and publishing in Github Box document](https://ibm.box.com/s/as8fo8gcecvizngxgq2dpuk4po2alq7t).
- When the page is published, please also make sure the guideline table list on the [Overview page](https://www.ibm.com/standards/web/carbon-for-ibm-dotcom/guidelines) is updated as part of the publishing task.
## File naming and storage
- Name the Sketch file with the following convention: Name of the category + components/pattern/guideline + version. ex: Components-Feature card block v1.0, Guidelines-Expressive theme v1.0, Help-Overview v1.0 and etc.
- Please save all content publishing work files (box, markdowns link, or Sketch file) in the proper sub-folder under the[ ✍️ Content + Design folder](https://ibm.box.com/s/tty3jhshkt6l18jkn2v5ac2uznyjexns).
## Publishing acceptance criteria
- [ ] Content crafting is done.
- [ ] Asset image creation is done.
- [ ] Page design is done (as needed).
- [ ] Publishing with markdowns is done.
- [ ] Update the [Overview page component table list](https://www.ibm.com/standards/web/carbon-for-ibm-dotcom/components/overview/) with the guideline name, link, description and status when the markdown is done.
- [ ] QA'd by Jenny.
- [ ] Web page preview is approved by Linda, Jeff, and Wonil.
- [ ] PR merged by Jenny.
- [ ] Web page is published and live on the website.
|
1.0
|
⭐️ Template // Website content crafting and publishing: (Guidelines/ Name of the new guideline name) - ## Objective
Provide Carbon for IBM.com adopter the best practices and guidance they need to adopt Carbon for IBM.com.
## General publishing process
Please make sure you always start the content crafting process by viewing the **[2020 website content publishing process box document](https://ibm.box.com/s/6oiqsvhxtchi1lfozyuw1soje67unbmj)** first.
## Publishing for Guidelines section website page
- Content crafting starts as soon as Initial research and publishing agreements are accepted by the DDS leadership team.
- The content can be at the MVP level. MVP means the content meets the basic needs of an adopter for learning. It does not mean the content crafting will or should stop at this stage. Future enhancement should be working on with a separate task issue.
- MVP content publishing template and requirement can be found in the Publishing template and best practices/[_General page folder](https://ibm.box.com/s/ts8m1a3bb5z5vwexrmh0c1ztthdm8nej).
- Content publishing to the live website can be code or markdowns. If you need guidance on how to start with markdowns, please see the step by step instruction in the [Editing and publishing in Github Box document](https://ibm.box.com/s/as8fo8gcecvizngxgq2dpuk4po2alq7t).
- When the page is published, please also make sure the guideline table list on the [Overview page](https://www.ibm.com/standards/web/carbon-for-ibm-dotcom/guidelines) is updated as part of the publishing task.
## File naming and storage
- Name the Sketch file with the following convention: Name of the category + components/pattern/guideline + version. ex: Components-Feature card block v1.0, Guidelines-Expressive theme v1.0, Help-Overview v1.0 and etc.
- Please save all content publishing work files (box, markdowns link, or Sketch file) in the proper sub-folder under the[ ✍️ Content + Design folder](https://ibm.box.com/s/tty3jhshkt6l18jkn2v5ac2uznyjexns).
## Publishing acceptance criteria
- [ ] Content crafting is done.
- [ ] Asset image creation is done.
- [ ] Page design is done (as needed).
- [ ] Publishing with markdowns is done.
- [ ] Update the [Overview page component table list](https://www.ibm.com/standards/web/carbon-for-ibm-dotcom/components/overview/) with the guideline name, link, description and status when the markdown is done.
- [ ] QA'd by Jenny.
- [ ] Web page preview is approved by Linda, Jeff, and Wonil.
- [ ] PR merged by Jenny.
- [ ] Web page is published and live on the website.
|
non_defect
|
⭐️ template website content crafting and publishing guidelines name of the new guideline name objective provide carbon for ibm com adopter the best practices and guidance they need to adopt carbon for ibm com general publishing process please make sure you always start the content crafting process by viewing the first publishing for guidelines section website page content crafting starts as soon as initial research and publishing agreements are accepted by the dds leadership team the content can be at the mvp level mvp means the content meets the basic needs of an adopter for learning it does not mean the content crafting will or should stop at this stage future enhancement should be working on with a separate task issue mvp content publishing template and requirement can be found in the publishing template and best practices content publishing to the live website can be code or markdowns if you need guidance on how to start with markdowns please see the step by step instruction in the when the page is published please also make sure the guideline table list on the is updated as part of the publishing task file naming and storage name the sketch file with the following convention name of the category components pattern guideline version ex components feature card block guidelines expressive theme help overview and etc please save all content publishing work files box markdowns link or sketch file in the proper sub folder under the publishing acceptance criteria content crafting is done asset image creation is done page design is done as needed publishing with markdowns is done update the with the guideline name link description and status when the markdown is done qa d by jenny web page preview is approved by linda jeff and wonil pr merged by jenny web page is published and live on the website
| 0
|
5,717
| 2,610,214,001
|
IssuesEvent
|
2015-02-26 19:08:19
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
хлебопечка clatronic 3365 bba инструкция.rar
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Борислав Борисов'''
День добрый никак не могу найти .хлебопечка
clatronic 3365 bba инструкция.rar. как то выкладывали
уже
'''Адонис Матвеев'''
Качай тут http://bit.ly/1818vMI
'''Геронтий Попов'''
Просит ввести номер мобилы!Не опасно ли это?
'''Вольфрам Вишняков'''
Не это не влияет на баланс
'''Арнольд Гордеев'''
Не это не влияет на баланс
Информация о файле: хлебопечка clatronic 3365 bba
инструкция.rar
Загружен: В этом месяце
Скачан раз: 167
Рейтинг: 160
Средняя скорость скачивания: 261
Похожих файлов: 19
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 12:23
|
1.0
|
хлебопечка clatronic 3365 bba инструкция.rar - ```
'''Борислав Борисов'''
День добрый никак не могу найти .хлебопечка
clatronic 3365 bba инструкция.rar. как то выкладывали
уже
'''Адонис Матвеев'''
Качай тут http://bit.ly/1818vMI
'''Геронтий Попов'''
Просит ввести номер мобилы!Не опасно ли это?
'''Вольфрам Вишняков'''
Не это не влияет на баланс
'''Арнольд Гордеев'''
Не это не влияет на баланс
Информация о файле: хлебопечка clatronic 3365 bba
инструкция.rar
Загружен: В этом месяце
Скачан раз: 167
Рейтинг: 160
Средняя скорость скачивания: 261
Похожих файлов: 19
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 12:23
|
defect
|
хлебопечка clatronic bba инструкция rar борислав борисов день добрый никак не могу найти хлебопечка clatronic bba инструкция rar как то выкладывали уже адонис матвеев качай тут геронтий попов просит ввести номер мобилы не опасно ли это вольфрам вишняков не это не влияет на баланс арнольд гордеев не это не влияет на баланс информация о файле хлебопечка clatronic bba инструкция rar загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
| 1
|
206,779
| 16,056,312,427
|
IssuesEvent
|
2021-04-23 05:55:59
|
JoshClose/CsvHelper
|
https://api.github.com/repos/JoshClose/CsvHelper
|
opened
|
How to add options for a build in converter
|
documentation
|
Hi, it would be good to have documentation on how to add options for a build converter.
ByteArrayConverter has options, but I don't know where to specify them
|
1.0
|
How to add options for a build in converter - Hi, it would be good to have documentation on how to add options for a build converter.
ByteArrayConverter has options, but I don't know where to specify them
|
non_defect
|
how to add options for a build in converter hi it would be good to have documentation on how to add options for a build converter bytearrayconverter has options but i don t know where to specify them
| 0
|
82,623
| 3,617,425,286
|
IssuesEvent
|
2016-02-08 03:28:09
|
yocheah/Fluxnet-Ameriflux
|
https://api.github.com/repos/yocheah/Fluxnet-Ameriflux
|
closed
|
Ability to see, on web site, downloads for any site. Possibility to download the list of users that downloaded single sites (for PIs)
|
FLUXNET April Release High Priority Website (Release)
|
Tasked to Megha
|
1.0
|
Ability to see, on web site, downloads for any site. Possibility to download the list of users that downloaded single sites (for PIs) - Tasked to Megha
|
non_defect
|
ability to see on web site downloads for any site possibility to download the list of users that downloaded single sites for pis tasked to megha
| 0
|
51,882
| 13,211,333,351
|
IssuesEvent
|
2020-08-15 22:22:50
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[clast] no sphinx documentation (Trac #1210)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1210">https://code.icecube.wisc.edu/projects/icecube/ticket/1210</a>, reported by david.schultzand owned by markw04</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"_ts": "1550067295757382",
"description": "Add something to `resources/docs/index.rst`. Even a link to the doxygen would be useful.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2015-08-19T19:28:52",
"component": "combo reconstruction",
"summary": "[clast] no sphinx documentation",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "markw04",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[clast] no sphinx documentation (Trac #1210) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1210">https://code.icecube.wisc.edu/projects/icecube/ticket/1210</a>, reported by david.schultzand owned by markw04</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"_ts": "1550067295757382",
"description": "Add something to `resources/docs/index.rst`. Even a link to the doxygen would be useful.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2015-08-19T19:28:52",
"component": "combo reconstruction",
"summary": "[clast] no sphinx documentation",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "markw04",
"type": "defect"
}
```
</p>
</details>
|
defect
|
no sphinx documentation trac migrated from json status closed changetime ts description add something to resources docs index rst even a link to the doxygen would be useful reporter david schultz cc resolution fixed time component combo reconstruction summary no sphinx documentation priority blocker keywords milestone owner type defect
| 1
|
56,404
| 15,079,303,101
|
IssuesEvent
|
2021-02-05 09:58:16
|
martinrotter/rssguard
|
https://api.github.com/repos/martinrotter/rssguard
|
closed
|
[BUG]: Sometimes thick feed entry lines
|
Component-Message-List Status-Not-Enough-Data Type-Defect
|
#### Brief description of the issue.
Some feed entry lines become "thick", i.e. they consume multiple lines (see the video attached).
#### How to reproduce the bug?
I don't exactly know its cause so I cannot give an exact reproducibility guide. It happens when new feeds are downloaded, but not all the time.
#### What was the expected result?
I would expect that the entries do not get thicker. As you can see in the video attached, for some feed accounts it works as expected, while for others the feed entry lines get thicker. I don't think it has to do with the specific feeds because most of the time normal feed entry line widths are displayed for all of my incoming feeds. Restarting RSS Guard does not solve this issue.
#### Other information
* OS: Linux/x86_64
* Desktop Environment: Gnome 3
* RSS Guard version: 3.8.4, revision 7ab95a75
* Qt version: 5.14.2
https://user-images.githubusercontent.com/12742890/106880766-0dc8c880-66dd-11eb-8c0b-20dd1a7eff4e.mp4
|
1.0
|
[BUG]: Sometimes thick feed entry lines - #### Brief description of the issue.
Some feed entry lines become "thick", i.e. they consume multiple lines (see the video attached).
#### How to reproduce the bug?
I don't exactly know its cause so I cannot give an exact reproducibility guide. It happens when new feeds are downloaded, but not all the time.
#### What was the expected result?
I would expect that the entries do not get thicker. As you can see in the video attached, for some feed accounts it works as expected, while for others the feed entry lines get thicker. I don't think it has to do with the specific feeds because most of the time normal feed entry line widths are displayed for all of my incoming feeds. Restarting RSS Guard does not solve this issue.
#### Other information
* OS: Linux/x86_64
* Desktop Environment: Gnome 3
* RSS Guard version: 3.8.4, revision 7ab95a75
* Qt version: 5.14.2
https://user-images.githubusercontent.com/12742890/106880766-0dc8c880-66dd-11eb-8c0b-20dd1a7eff4e.mp4
|
defect
|
sometimes thick feed entry lines brief description of the issue some feed entry lines become thick i e they consume multiple lines see the video attached how to reproduce the bug i don t exactly know its cause so i cannot give an exact reproducibility guide it happens when new feeds are downloaded but not all the time what was the expected result i would expect that the entries do not get thicker as you can see in the video attached for some feed accounts it works as expected while for others the feed entry lines get thicker i don t think it has to do with the specific feeds because most of the time normal feed entry line widths are displayed for all of my incoming feeds restarting rss guard does not solve this issue other information os linux desktop environment gnome rss guard version revision qt version
| 1
|
35,414
| 7,736,554,843
|
IssuesEvent
|
2018-05-28 03:06:58
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
[2.x] afterDispatch does not work when using testAction()
|
Defect Need more information
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: EXACT RELEASE VERSION OR COMMIT HASH, HERE.
CakePHP Version: 2.10.7
* Platform and Target: YOUR WEB-SERVER, DATABASE AND OTHER RELEVANT INFO AND HOW THE REQUEST IS BEING MADE, HERE.
### What you did
Creating a class that extends DispatchFilter and running the test with the testAction method will not execute the afterDispatch method.
However, because the default option of testAction is set with 'return' => 'result', I took time to figure out why this problem occurred.
If the request parameter contains return, dispatcher immediately returns the response.
https://github.com/cakephp/cakephp/blob/2.x/lib/Cake/Routing/Dispatcher.php#L168
Since afterDispatch is executed only when it is not return, the default option of testAction prevents afterDispatch from being executed.
This problem occurs because the return parameter is attached to the request parameter only when the return option is result.
https://github.com/cakephp/cakephp/blob/2.x/lib/Cake/TestSuite/ControllerTestCase.php#L289
### What happened
The workaround for this problem is to specify 'return' => 'vars' in the argument option of testAction,
### What you expected to happen
afterDispatch should be executed correctly
|
1.0
|
[2.x] afterDispatch does not work when using testAction() - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: EXACT RELEASE VERSION OR COMMIT HASH, HERE.
CakePHP Version: 2.10.7
* Platform and Target: YOUR WEB-SERVER, DATABASE AND OTHER RELEVANT INFO AND HOW THE REQUEST IS BEING MADE, HERE.
### What you did
Creating a class that extends DispatchFilter and running the test with the testAction method will not execute the afterDispatch method.
However, because the default option of testAction is set with 'return' => 'result', I took time to figure out why this problem occurred.
If the request parameter contains return, dispatcher immediately returns the response.
https://github.com/cakephp/cakephp/blob/2.x/lib/Cake/Routing/Dispatcher.php#L168
Since afterDispatch is executed only when it is not return, the default option of testAction prevents afterDispatch from being executed.
This problem occurs because the return parameter is attached to the request parameter only when the return option is result.
https://github.com/cakephp/cakephp/blob/2.x/lib/Cake/TestSuite/ControllerTestCase.php#L289
### What happened
The workaround for this problem is to specify 'return' => 'vars' in the argument option of testAction,
### What you expected to happen
afterDispatch should be executed correctly
|
defect
|
afterdispatch does not work when using testaction this is a multiple allowed bug enhancement feature discussion rfc cakephp version exact release version or commit hash here cakephp version platform and target your web server database and other relevant info and how the request is being made here what you did creating a class that extends dispatchfilter and running the test with the testaction method will not execute the afterdispatch method however because the default option of testaction is set with return result i took time to figure out why this problem occurred if the request parameter contains return dispatcher immediately returns the response since afterdispatch is executed only when it is not return the default option of testaction prevents afterdispatch from being executed this problem occurs because the return parameter is attached to the request parameter only when the return option is result what happened the workaround for this problem is to specify return vars in the argument option of testaction what you expected to happen afterdispatch should be executed correctly
| 1
|
51,598
| 12,749,729,506
|
IssuesEvent
|
2020-06-27 00:03:18
|
NVIDIA/TRTorch
|
https://api.github.com/repos/NVIDIA/TRTorch
|
closed
|
How to use local pytorch instead of installing again.
|
No Activity component: api [Python] component: build system question
|
Hi Naren,
glad to see that you check-in py binding and test.
TRTorch needs to install pytorch and torchvision again and I know it is easy to build trt from scratch.
But as a developer, I always build and set pytorch env locally and do need to install it again. Could you help provide options to call local pytorch instead of installing again. @narendasan
Thanks,
Alan
|
1.0
|
How to use local pytorch instead of installing again. - Hi Naren,
glad to see that you check-in py binding and test.
TRTorch needs to install pytorch and torchvision again and I know it is easy to build trt from scratch.
But as a developer, I always build and set pytorch env locally and do need to install it again. Could you help provide options to call local pytorch instead of installing again. @narendasan
Thanks,
Alan
|
non_defect
|
how to use local pytorch instead of installing again hi naren glad to see that you check in py binding and test trtorch needs to install pytorch and torchvision again and i know it is easy to build trt from scratch but as a developer i always build and set pytorch env locally and do need to install it again could you help provide options to call local pytorch instead of installing again narendasan thanks alan
| 0
|
56,050
| 14,913,522,562
|
IssuesEvent
|
2021-01-22 14:15:22
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Parser parses PRIMARYKEY as PRIMARY KEY
|
C: Parser E: All Editions P: High T: Defect
|
Our parsing utilities seem to parse `PRIMARYKEY` and `PRIMARY KEY` as the same thing. This seems to apply to other compound keywords as well, e.g. `GROUPBY` is parsed as `GROUP BY`.
~This is probably a recent regression from a refactoring. I don't think this was the case when the parser was first created.~
|
1.0
|
Parser parses PRIMARYKEY as PRIMARY KEY - Our parsing utilities seem to parse `PRIMARYKEY` and `PRIMARY KEY` as the same thing. This seems to apply to other compound keywords as well, e.g. `GROUPBY` is parsed as `GROUP BY`.
~This is probably a recent regression from a refactoring. I don't think this was the case when the parser was first created.~
|
defect
|
parser parses primarykey as primary key our parsing utilities seem to parse primarykey and primary key as the same thing this seems to apply to other compound keywords as well e g groupby is parsed as group by this is probably a recent regression from a refactoring i don t think this was the case when the parser was first created
| 1
|
22,633
| 3,670,966,529
|
IssuesEvent
|
2016-02-22 03:02:20
|
gperftools/gperftools
|
https://api.github.com/repos/gperftools/gperftools
|
closed
|
object name conflicts in archive in Cygwin on windows.
|
Priority-Medium Status-Started Type-Defect
|
Originally reported on Google Code with ID 469
```
What steps will reproduce the problem?
1.Downloaded gperftools-2.0.zip file.
2.Went to the shell in Cygwin.
3.Executed the commands "./configure" and then tried "make".
What is the expected output? What do you see instead?
Expected is to generate a shared object file.
But compilation is failing.
What version of the product are you using? On what operating system?
Cygwin on Windows7 - x64
Please provide any additional information below.
Attached the files
```
Reported by `suman.aluvala` on 2012-09-26 09:41:34
<hr>
* *Attachment: [ConfigLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/ConfigLog.txt)*
* *Attachment: [MakeLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/MakeLog.txt)*
|
1.0
|
object name conflicts in archive in Cygwin on windows. - Originally reported on Google Code with ID 469
```
What steps will reproduce the problem?
1.Downloaded gperftools-2.0.zip file.
2.Went to the shell in Cygwin.
3.Executed the commands "./configure" and then tried "make".
What is the expected output? What do you see instead?
Expected is to generate a shared object file.
But compilation is failing.
What version of the product are you using? On what operating system?
Cygwin on Windows7 - x64
Please provide any additional information below.
Attached the files
```
Reported by `suman.aluvala` on 2012-09-26 09:41:34
<hr>
* *Attachment: [ConfigLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/ConfigLog.txt)*
* *Attachment: [MakeLog.txt](https://storage.googleapis.com/google-code-attachments/gperftools/issue-469/comment-0/MakeLog.txt)*
|
defect
|
object name conflicts in archive in cygwin on windows originally reported on google code with id what steps will reproduce the problem downloaded gperftools zip file went to the shell in cygwin executed the commands configure and then tried make what is the expected output what do you see instead expected is to generate a shared object file but compilation is failing what version of the product are you using on what operating system cygwin on please provide any additional information below attached the files reported by suman aluvala on attachment attachment
| 1
|
3,922
| 3,606,417,325
|
IssuesEvent
|
2016-02-04 11:11:02
|
d-ronin/dRonin
|
https://api.github.com/repos/d-ronin/dRonin
|
closed
|
Rename Revomini to Revolution
|
difficulty/low enhancement flight gcs status/work-in-progress targets/open-hardware usability
|
The OP Revolution target had been named Revomini, which is confusing to users as this name is largely unknown. I suggest we rename it to Revolution or maybe OP-Revolution.
|
True
|
Rename Revomini to Revolution - The OP Revolution target had been named Revomini, which is confusing to users as this name is largely unknown. I suggest we rename it to Revolution or maybe OP-Revolution.
|
non_defect
|
rename revomini to revolution the op revolution target had been named revomini which is confusing to users as this name is largely unknown i suggest we rename it to revolution or maybe op revolution
| 0
|
55,392
| 14,437,082,382
|
IssuesEvent
|
2020-12-07 11:01:33
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
Component: fileUpload
|
defect
|
**Describe the defect**
Listener of fileUpload is not invoked in advanced mode with common uploader.
**Reproducer**
In the source of CommonsFileUploadDecoder, it looks like be possible to handle advanced mode, but simply not working.
With native uploader, same code runs without any problem. But when switch to common mode, listener is not invoked.
**Environment:**
- PF Version: _8.0_
- JSF + version: _e.g. Mojarra 2.2.20
- Affected browsers: _e.g. Chrome, IE, Edge ALL_
**To Reproduce**
Steps to reproduce the behavior:
1.
<h:form>
<p:fileUpload listener="#{fileUploadView.handleFileUpload}" mode="advanced" dragDropSupport="false"
update="messages" sizeLimit="100000" fileLimit="3" allowTypes="/(\.|\/)(gif|jpe?g|png)$/" />
<p:growl id="messages" showDetail="true" />
</h:form>
Based on the source from the following link.
https://www.primefaces.org/showcase/ui/file/upload/single.xhtml
2. Set upload mode to common.
<context-param>
<param-name>primefaces.UPLOADER</param-name>
<param-value>commons</param-value><!-- Allowed values: auto, native and commons. -->
</context-param>
<filter>
<filter-name>primeFacesFileUploadFilter</filter-name>
<filter-class>org.primefaces.webapp.filter.FileUploadFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>primeFacesFileUploadFilter</filter-name>
<servlet-name>facesServlet</servlet-name>
<init-param>
<param-name>thresholdSize</param-name>
<param-value>10000000000</param-value>
</init-param>
<init-param>
<param-name>uploadDirectory</param-name>
<param-value>/tmp/upload/</param-value>
</init-param>
</filter-mapping>
3. Listener not invoked.
**Expected behavior**
It should work in common file upload mode as same as in native file upload mode.
|
1.0
|
Component: fileUpload - **Describe the defect**
Listener of fileUpload is not invoked in advanced mode with common uploader.
**Reproducer**
In the source of CommonsFileUploadDecoder, it looks like be possible to handle advanced mode, but simply not working.
With native uploader, same code runs without any problem. But when switch to common mode, listener is not invoked.
**Environment:**
- PF Version: _8.0_
- JSF + version: _e.g. Mojarra 2.2.20
- Affected browsers: _e.g. Chrome, IE, Edge ALL_
**To Reproduce**
Steps to reproduce the behavior:
1.
<h:form>
<p:fileUpload listener="#{fileUploadView.handleFileUpload}" mode="advanced" dragDropSupport="false"
update="messages" sizeLimit="100000" fileLimit="3" allowTypes="/(\.|\/)(gif|jpe?g|png)$/" />
<p:growl id="messages" showDetail="true" />
</h:form>
Based on the source from the following link.
https://www.primefaces.org/showcase/ui/file/upload/single.xhtml
2. Set upload mode to common.
<context-param>
<param-name>primefaces.UPLOADER</param-name>
<param-value>commons</param-value><!-- Allowed values: auto, native and commons. -->
</context-param>
<filter>
<filter-name>primeFacesFileUploadFilter</filter-name>
<filter-class>org.primefaces.webapp.filter.FileUploadFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>primeFacesFileUploadFilter</filter-name>
<servlet-name>facesServlet</servlet-name>
<init-param>
<param-name>thresholdSize</param-name>
<param-value>10000000000</param-value>
</init-param>
<init-param>
<param-name>uploadDirectory</param-name>
<param-value>/tmp/upload/</param-value>
</init-param>
</filter-mapping>
3. Listener not invoked.
**Expected behavior**
It should work in common file upload mode as same as in native file upload mode.
|
defect
|
component fileupload describe the defect listener of fileupload is not invoked in advanced mode with common uploader reproducer in the source of commonsfileuploaddecoder it looks like be possible to handle advanced mode but simply not working with native uploader same code runs without any problem but when switch to common mode listener is not invoked environment pf version jsf version e g mojarra affected browsers e g chrome ie edge all to reproduce steps to reproduce the behavior p fileupload listener fileuploadview handlefileupload mode advanced dragdropsupport false update messages sizelimit filelimit allowtypes gif jpe g png based on the source from the following link set upload mode to common primefaces uploader commons primefacesfileuploadfilter org primefaces webapp filter fileuploadfilter primefacesfileuploadfilter facesservlet thresholdsize uploaddirectory tmp upload listener not invoked expected behavior it should work in common file upload mode as same as in native file upload mode
| 1
|
90,831
| 15,856,286,371
|
IssuesEvent
|
2021-04-08 01:59:08
|
DashboardHub/Website
|
https://api.github.com/repos/DashboardHub/Website
|
opened
|
CVE-2019-18797 (Medium) detected in node-sass-4.8.3.tgz, node-sassv4.12.0
|
security vulnerability
|
## CVE-2019-18797 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.8.3.tgz</b>, <b>node-sassv4.12.0</b></p></summary>
<p>
<details><summary><b>node-sass-4.8.3.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.8.3.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.8.3.tgz</a></p>
<p>Path to dependency file: /Website/package.json</p>
<p>Path to vulnerable library: Website/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.1.tgz (Root Library)
- :x: **node-sass-4.8.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.6.1 has uncontrolled recursion in Sass::Eval::operator()(Sass::Binary_Expression*) in eval.cpp.
<p>Publish Date: 2019-11-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-18797>CVE-2019-18797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797</a></p>
<p>Release Date: 2019-11-06</p>
<p>Fix Resolution: LibSass - 3.6.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-18797 (Medium) detected in node-sass-4.8.3.tgz, node-sassv4.12.0 - ## CVE-2019-18797 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.8.3.tgz</b>, <b>node-sassv4.12.0</b></p></summary>
<p>
<details><summary><b>node-sass-4.8.3.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.8.3.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.8.3.tgz</a></p>
<p>Path to dependency file: /Website/package.json</p>
<p>Path to vulnerable library: Website/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.1.tgz (Root Library)
- :x: **node-sass-4.8.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.6.1 has uncontrolled recursion in Sass::Eval::operator()(Sass::Binary_Expression*) in eval.cpp.
<p>Publish Date: 2019-11-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-18797>CVE-2019-18797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797</a></p>
<p>Release Date: 2019-11-06</p>
<p>Fix Resolution: LibSass - 3.6.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in node sass tgz node cve medium severity vulnerability vulnerable libraries node sass tgz node node sass tgz wrapper around libsass library home page a href path to dependency file website package json path to vulnerable library website node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library vulnerability details libsass has uncontrolled recursion in sass eval operator sass binary expression in eval cpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
24,050
| 3,901,947,639
|
IssuesEvent
|
2016-04-18 13:03:11
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
Potential lifecycle error in autocomplete
|
defect
|
If suggestions are updated before viewinit, then a js error will occur. More info at;
http://forum.primefaces.org/viewtopic.php?f=35&t=45174
|
1.0
|
Potential lifecycle error in autocomplete - If suggestions are updated before viewinit, then a js error will occur. More info at;
http://forum.primefaces.org/viewtopic.php?f=35&t=45174
|
defect
|
potential lifecycle error in autocomplete if suggestions are updated before viewinit then a js error will occur more info at
| 1
|
344,024
| 30,707,577,390
|
IssuesEvent
|
2023-07-27 07:30:45
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
ccl/streamingccl/streamingest: TestTenantStatusWithFutureCutoverTime failed
|
C-test-failure O-robot T-disaster-recovery branch-release-23.1
|
ccl/streamingccl/streamingest.TestTenantStatusWithFutureCutoverTime [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11078962?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11078962?buildTab=artifacts#/) on release-23.1 @ [aaedb3dd0d97097cff7afeb2532c6265d3d62818](https://github.com/cockroachdb/cockroach/commits/aaedb3dd0d97097cff7afeb2532c6265d3d62818):
```
=== RUN TestTenantStatusWithFutureCutoverTime
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestTenantStatusWithFutureCutoverTime2874947416
test_log_scope.go:79: use -show-logs to present logs inline
test_server_shim.go:439: server stop before successful start
(1) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/server.(*channelOrchestrator).startControlledServer.func5.1
| github.com/cockroachdb/cockroach/pkg/server/server_controller_channel_orchestrator.go:307
| github.com/cockroachdb/cockroach/pkg/server.(*channelOrchestrator).startControlledServer.func5
| github.com/cockroachdb/cockroach/pkg/server/server_controller_channel_orchestrator.go:401
| github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2
| github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:470
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (2) server stop before successful start
Error types: (1) *withstack.withStack (2) *errutil.leafError
panic.go:522: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestTenantStatusWithFutureCutoverTime2874947416
--- FAIL: TestTenantStatusWithFutureCutoverTime (27.86s)
```
<p>Parameters: <code>TAGS=bazel,gss,deadlock</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestTenantStatusWithFutureCutoverTime.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
ccl/streamingccl/streamingest: TestTenantStatusWithFutureCutoverTime failed - ccl/streamingccl/streamingest.TestTenantStatusWithFutureCutoverTime [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11078962?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11078962?buildTab=artifacts#/) on release-23.1 @ [aaedb3dd0d97097cff7afeb2532c6265d3d62818](https://github.com/cockroachdb/cockroach/commits/aaedb3dd0d97097cff7afeb2532c6265d3d62818):
```
=== RUN TestTenantStatusWithFutureCutoverTime
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestTenantStatusWithFutureCutoverTime2874947416
test_log_scope.go:79: use -show-logs to present logs inline
test_server_shim.go:439: server stop before successful start
(1) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/server.(*channelOrchestrator).startControlledServer.func5.1
| github.com/cockroachdb/cockroach/pkg/server/server_controller_channel_orchestrator.go:307
| github.com/cockroachdb/cockroach/pkg/server.(*channelOrchestrator).startControlledServer.func5
| github.com/cockroachdb/cockroach/pkg/server/server_controller_channel_orchestrator.go:401
| github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2
| github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:470
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (2) server stop before successful start
Error types: (1) *withstack.withStack (2) *errutil.leafError
panic.go:522: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/90c1b75d835f45b8488807abb5b1092d/logTestTenantStatusWithFutureCutoverTime2874947416
--- FAIL: TestTenantStatusWithFutureCutoverTime (27.86s)
```
<p>Parameters: <code>TAGS=bazel,gss,deadlock</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestTenantStatusWithFutureCutoverTime.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_defect
|
ccl streamingccl streamingest testtenantstatuswithfuturecutovertime failed ccl streamingccl streamingest testtenantstatuswithfuturecutovertime with on release run testtenantstatuswithfuturecutovertime test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline test server shim go server stop before successful start attached stack trace stack trace github com cockroachdb cockroach pkg server channelorchestrator startcontrolledserver github com cockroachdb cockroach pkg server server controller channel orchestrator go github com cockroachdb cockroach pkg server channelorchestrator startcontrolledserver github com cockroachdb cockroach pkg server server controller channel orchestrator go github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go runtime goexit goroot src runtime asm s wraps server stop before successful start error types withstack withstack errutil leaferror panic go test log scope end test logs left over in artifacts tmp tmp fail testtenantstatuswithfuturecutovertime parameters tags bazel gss deadlock help see also cc cockroachdb disaster recovery
| 0
|
79,947
| 29,737,751,613
|
IssuesEvent
|
2023-06-14 03:21:09
|
fieldenms/tg
|
https://api.github.com/repos/fieldenms/tg
|
closed
|
Inline EGI Master: continuation dialog doesn't get closed after it was accepted
|
Defect Entity centre P1 EGI Pull request
|
### Description
Inline EGI master does not handle continuations properly. For example, attempting to save an entity with warning opens up a continuation dialog, but accepting warnings, which leads to saving of the entity, does not close the Warning Acknowledgement (continuation) dialog (i.e., it remains open). Tapping button ACCEPT in the Warning Acknowledgement dialog again, causes an exceptional situation.
This behavior happens because unlike Entity Masters, which have their own dialog, the inline EGI masters are missing UUIDs, which are required for the dialog closing process.
- [x] 1. Adding UUID generation for inline EGI masters should resolve this situation. Also, need to make sure that accepting warnings saves the inline EGI master and continues editing next/previous row if it is needed (Enter/Alt+arrow/tab key were pressed).
- [x] 2. Ensure that the use of continuations other than the Warning Acknowledgement continuation works correctly with the Inline EGI Masters. This includes both providing new data as well as cancelling such continuations.
### Expected outcome
Correct handling of continuations (including the Warning Acknowledgement continuation), without unexpected exceptions as part of the Inline EGI Master saving logic.
### Actual outcome
Tapping button ACCEPT saves the underlying entity, but the Warning dialog does not get closed. Tapping button ACCEPT again throws exception.
|
1.0
|
Inline EGI Master: continuation dialog doesn't get closed after it was accepted - ### Description
Inline EGI master does not handle continuations properly. For example, attempting to save an entity with warning opens up a continuation dialog, but accepting warnings, which leads to saving of the entity, does not close the Warning Acknowledgement (continuation) dialog (i.e., it remains open). Tapping button ACCEPT in the Warning Acknowledgement dialog again, causes an exceptional situation.
This behavior happens because unlike Entity Masters, which have their own dialog, the inline EGI masters are missing UUIDs, which are required for the dialog closing process.
- [x] 1. Adding UUID generation for inline EGI masters should resolve this situation. Also, need to make sure that accepting warnings saves the inline EGI master and continues editing next/previous row if it is needed (Enter/Alt+arrow/tab key were pressed).
- [x] 2. Ensure that the use of continuations other than the Warning Acknowledgement continuation works correctly with the Inline EGI Masters. This includes both providing new data as well as cancelling such continuations.
### Expected outcome
Correct handling of continuations (including the Warning Acknowledgement continuation), without unexpected exceptions as part of the Inline EGI Master saving logic.
### Actual outcome
Tapping button ACCEPT saves the underlying entity, but the Warning dialog does not get closed. Tapping button ACCEPT again throws exception.
|
defect
|
inline egi master continuation dialog doesn t get closed after it was accepted description inline egi master does not handle continuations properly for example attempting to save an entity with warning opens up a continuation dialog but accepting warnings which leads to saving of the entity does not close the warning acknowledgement continuation dialog i e it remains open tapping button accept in the warning acknowledgement dialog again causes an exceptional situation this behavior happens because unlike entity masters which have their own dialog the inline egi masters are missing uuids which are required for the dialog closing process adding uuid generation for inline egi masters should resolve this situation also need to make sure that accepting warnings saves the inline egi master and continues editing next previous row if it is needed enter alt arrow tab key were pressed ensure that the use of continuations other than the warning acknowledgement continuation works correctly with the inline egi masters this includes both providing new data as well as cancelling such continuations expected outcome correct handling of continuations including the warning acknowledgement continuation without unexpected exceptions as part of the inline egi master saving logic actual outcome tapping button accept saves the underlying entity but the warning dialog does not get closed tapping button accept again throws exception
| 1
|
44,766
| 12,374,614,592
|
IssuesEvent
|
2020-05-19 02:07:21
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
[ZOOM]: Section 103 - Modal windows MUST be usable at 200% to 400% zoom
|
508-defect-2 508-issue-mobile-design 508/Accessibility bah-section103
|
# [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
<!--
Enter an issue title using the format [ERROR TYPE]: Brief description of the problem
---
[SCREENREADER]: Edit buttons need aria-label for context
[KEYBOARD]: Add another user link will not receive keyboard focus
[AXE-CORE]: Heading levels should increase by one
[COGNITION]: Error messages should be more specific
[COLOR]: Blue button on blue background does not have sufficient contrast ratio
---
-->
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
Several of the modal windows are expanding out of the viewport at 200-250% zoom. I saw this on a few of the longer modals like Yellow Ribbon and the college housing. Screenshot of Yellow Ribbon attached below.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Trevor_
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
- [ ] The modals do not break out of the viewport at any zoom between 200% and 400%
- [ ] No other zoom levels break or change their layout
## Environment
* Any browser, zoomed to 200% or more at 1280px width
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->
<img width="1274" alt="Screen Shot 2020-05-18 at 7 54 54 PM" src="https://user-images.githubusercontent.com/934879/82276657-7bfd0d80-994b-11ea-8085-32695592ce23.png">
|
1.0
|
[ZOOM]: Section 103 - Modal windows MUST be usable at 200% to 400% zoom - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
<!--
Enter an issue title using the format [ERROR TYPE]: Brief description of the problem
---
[SCREENREADER]: Edit buttons need aria-label for context
[KEYBOARD]: Add another user link will not receive keyboard focus
[AXE-CORE]: Heading levels should increase by one
[COGNITION]: Error messages should be more specific
[COLOR]: Blue button on blue background does not have sufficient contrast ratio
---
-->
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
Several of the modal windows are expanding out of the viewport at 200-250% zoom. I saw this on a few of the longer modals like Yellow Ribbon and the college housing. Screenshot of Yellow Ribbon attached below.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Trevor_
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
- [ ] The modals do not break out of the viewport at any zoom between 200% and 400%
- [ ] No other zoom levels break or change their layout
## Environment
* Any browser, zoomed to 200% or more at 1280px width
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->
<img width="1274" alt="Screen Shot 2020-05-18 at 7 54 54 PM" src="https://user-images.githubusercontent.com/934879/82276657-7bfd0d80-994b-11ea-8085-32695592ce23.png">
|
defect
|
section modal windows must be usable at to zoom enter an issue title using the format brief description of the problem edit buttons need aria label for context add another user link will not receive keyboard focus heading levels should increase by one error messages should be more specific blue button on blue background does not have sufficient contrast ratio feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements description several of the modal windows are expanding out of the viewport at zoom i saw this on a few of the longer modals like yellow ribbon and the college housing screenshot of yellow ribbon attached below point of contact if this issue is being opened by a vfs team member please add a point of contact usually this is the same person who enters the issue ticket vfs point of contact trevor acceptance criteria the modals do not break out of the viewport at any zoom between and no other zoom levels break or change their layout environment any browser zoomed to or more at width screenshots or trace logs img width alt screen shot at pm src
| 1
|
404
| 2,700,976,801
|
IssuesEvent
|
2015-04-04 19:39:44
|
pnl8zp/CS-4750-Group-Project
|
https://api.github.com/repos/pnl8zp/CS-4750-Group-Project
|
closed
|
Store repo on CS servers
|
Requirement
|
the data for this project needs to be saved on the servers for plato.cs.virgina.edu/~computing_id/
|
1.0
|
Store repo on CS servers - the data for this project needs to be saved on the servers for plato.cs.virgina.edu/~computing_id/
|
non_defect
|
store repo on cs servers the data for this project needs to be saved on the servers for plato cs virgina edu computing id
| 0
|
189,316
| 14,497,797,846
|
IssuesEvent
|
2020-12-11 14:42:18
|
ImagingDataCommons/IDC-WebApp
|
https://api.github.com/repos/ImagingDataCommons/IDC-WebApp
|
closed
|
Debug and fix, or explain the behavior of the portal
|
bug explore page merged:dev testing needed testing passed
|
Why do I get empty cohort list if I set the Volume attribute to [27872-31113]? As a user, my naive assumption is that if the upper limit of the range is set to X, then there must be something that hits that max limit. What am I missing?

It's a bug in production, until someone proves otherwise.
|
2.0
|
Debug and fix, or explain the behavior of the portal - Why do I get empty cohort list if I set the Volume attribute to [27872-31113]? As a user, my naive assumption is that if the upper limit of the range is set to X, then there must be something that hits that max limit. What am I missing?

It's a bug in production, until someone proves otherwise.
|
non_defect
|
debug and fix or explain the behavior of the portal why do i get empty cohort list if i set the volume attribute to as a user my naive assumption is that if the upper limit of the range is set to x then there must be something that hits that max limit what am i missing it s a bug in production until someone proves otherwise
| 0
|
10,554
| 7,228,170,236
|
IssuesEvent
|
2018-02-11 05:54:46
|
PaddlePaddle/Paddle
|
https://api.github.com/repos/PaddlePaddle/Paddle
|
closed
|
`concat_op` is extremely slow in GPU
|
performance tuning
|
`concat_op` invoke `cudaMemcpyAsync` many times. Even though `cudaMemcpyAsync` is async, the kernel launch time is huge.
|
True
|
`concat_op` is extremely slow in GPU - `concat_op` invoke `cudaMemcpyAsync` many times. Even though `cudaMemcpyAsync` is async, the kernel launch time is huge.
|
non_defect
|
concat op is extremely slow in gpu concat op invoke cudamemcpyasync many times even though cudamemcpyasync is async the kernel launch time is huge
| 0
|
205,452
| 15,615,217,069
|
IssuesEvent
|
2021-03-19 18:50:48
|
ampproject/amp-github-apps
|
https://api.github.com/repos/ampproject/amp-github-apps
|
closed
|
Update test case reporting internals to work on CircleCI
|
Category: Test Cases P2: Soon Type: Feature Request
|
There is a bunch of test case reporting code that relies on Travis: https://github.com/ampproject/amp-github-apps/search?q=travis+f%3Atest-case-reporting
There don't appear to be any major changes needed to `build-system/tasks/test-report-upload.js`, but it's possible that things like default SHA lengths, etc. could change on CircleCI.
This is a tracking bug and can be closed when we're sure report uploading works.
|
1.0
|
Update test case reporting internals to work on CircleCI - There is a bunch of test case reporting code that relies on Travis: https://github.com/ampproject/amp-github-apps/search?q=travis+f%3Atest-case-reporting
There don't appear to be any major changes needed to `build-system/tasks/test-report-upload.js`, but it's possible that things like default SHA lengths, etc. could change on CircleCI.
This is a tracking bug and can be closed when we're sure report uploading works.
|
non_defect
|
update test case reporting internals to work on circleci there is a bunch of test case reporting code that relies on travis there don t appear to be any major changes needed to build system tasks test report upload js but it s possible that things like default sha lengths etc could change on circleci this is a tracking bug and can be closed when we re sure report uploading works
| 0
|
122,656
| 17,762,059,160
|
IssuesEvent
|
2021-08-29 21:57:08
|
ghc-dev/Bryan-Brown
|
https://api.github.com/repos/ghc-dev/Bryan-Brown
|
opened
|
CVE-2020-10744 (Medium) detected in ansible-2.9.9.tar.gz
|
security vulnerability
|
## CVE-2020-10744 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Bryan-Brown/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Bryan-Brown/commit/45110085c1c88f87ad86e317c0a890fcf4888688">45110085c1c88f87ad86e317c0a890fcf4888688</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An incomplete fix was found for the fix of the flaw CVE-2020-1733 ansible: insecure temporary directory when running become_user from become directive. The provided fix is insufficient to prevent the race condition on systems using ACLs and FUSE filesystems. Ansible Engine 2.7.18, 2.8.12, and 2.9.9 as well as previous versions are affected and Ansible Tower 3.4.5, 3.5.6 and 3.6.4 as well as previous versions are affected.
<p>Publish Date: 2020-05-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10744>CVE-2020-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-10744","vulnerabilityDetails":"An incomplete fix was found for the fix of the flaw CVE-2020-1733 ansible: insecure temporary directory when running become_user from become directive. The provided fix is insufficient to prevent the race condition on systems using ACLs and FUSE filesystems. Ansible Engine 2.7.18, 2.8.12, and 2.9.9 as well as previous versions are affected and Ansible Tower 3.4.5, 3.5.6 and 3.6.4 as well as previous versions are affected.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10744","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Changed","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-10744 (Medium) detected in ansible-2.9.9.tar.gz - ## CVE-2020-10744 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Bryan-Brown/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Bryan-Brown/commit/45110085c1c88f87ad86e317c0a890fcf4888688">45110085c1c88f87ad86e317c0a890fcf4888688</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An incomplete fix was found for the fix of the flaw CVE-2020-1733 ansible: insecure temporary directory when running become_user from become directive. The provided fix is insufficient to prevent the race condition on systems using ACLs and FUSE filesystems. Ansible Engine 2.7.18, 2.8.12, and 2.9.9 as well as previous versions are affected and Ansible Tower 3.4.5, 3.5.6 and 3.6.4 as well as previous versions are affected.
<p>Publish Date: 2020-05-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10744>CVE-2020-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-10744","vulnerabilityDetails":"An incomplete fix was found for the fix of the flaw CVE-2020-1733 ansible: insecure temporary directory when running become_user from become directive. The provided fix is insufficient to prevent the race condition on systems using ACLs and FUSE filesystems. Ansible Engine 2.7.18, 2.8.12, and 2.9.9 as well as previous versions are affected and Ansible Tower 3.4.5, 3.5.6 and 3.6.4 as well as previous versions are affected.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10744","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Changed","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in ansible tar gz cve medium severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file bryan brown requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details an incomplete fix was found for the fix of the flaw cve ansible insecure temporary directory when running become user from become directive the provided fix is insufficient to prevent the race condition on systems using acls and fuse filesystems ansible engine and as well as previous versions are affected and ansible tower and as well as previous versions are affected publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails an incomplete fix was found for the fix of the flaw cve ansible insecure temporary directory when running become user from become directive the provided fix is insufficient to prevent the race condition on systems using acls and fuse filesystems ansible engine and as well as previous versions are affected and ansible tower and as well as previous versions are affected vulnerabilityurl
| 0
|
230,780
| 17,645,992,802
|
IssuesEvent
|
2021-08-20 06:09:44
|
amzn/selling-partner-api-docs
|
https://api.github.com/repos/amzn/selling-partner-api-docs
|
opened
|
How can I get the image URL of product?
|
documentation enhancement request
|
I am using GetReport API of Amazon MWS with the report type "_GET_MERCHANT_LISTINGS_ALL_DATA_", Actually it's not returning images URL? Is there a way to get image URL?
|
1.0
|
How can I get the image URL of product? - I am using GetReport API of Amazon MWS with the report type "_GET_MERCHANT_LISTINGS_ALL_DATA_", Actually it's not returning images URL? Is there a way to get image URL?
|
non_defect
|
how can i get the image url of product i am using getreport api of amazon mws with the report type get merchant listings all data actually it s not returning images url is there a way to get image url
| 0
|
26,803
| 4,789,121,732
|
IssuesEvent
|
2016-10-30 22:18:54
|
belangeo/pyo
|
https://api.github.com/repos/belangeo/pyo
|
closed
|
SyntaxError: invalid syntax
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. python2
2. from pyo import *
3. example(Harmonizer)
What is the expected output? What do you see instead?
Instead of any sounds I get:
File "/tmp/tmpP1crQz", line 13
"""
^
SyntaxError: invalid syntax
What version of the product are you using? On what operating system?
python2-pyo 0.6.1
python2 2.7.3
64-bit Arch Linux
Please provide any additional information below.
```
Original issue reported on code.google.com by `rods...@gmail.com` on 9 Dec 2012 at 12:55
|
1.0
|
SyntaxError: invalid syntax - ```
What steps will reproduce the problem?
1. python2
2. from pyo import *
3. example(Harmonizer)
What is the expected output? What do you see instead?
Instead of any sounds I get:
File "/tmp/tmpP1crQz", line 13
"""
^
SyntaxError: invalid syntax
What version of the product are you using? On what operating system?
python2-pyo 0.6.1
python2 2.7.3
64-bit Arch Linux
Please provide any additional information below.
```
Original issue reported on code.google.com by `rods...@gmail.com` on 9 Dec 2012 at 12:55
|
defect
|
syntaxerror invalid syntax what steps will reproduce the problem from pyo import example harmonizer what is the expected output what do you see instead instead of any sounds i get file tmp line syntaxerror invalid syntax what version of the product are you using on what operating system pyo bit arch linux please provide any additional information below original issue reported on code google com by rods gmail com on dec at
| 1
|
11,878
| 2,668,306,562
|
IssuesEvent
|
2015-03-23 07:23:00
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
FOR UPDATE .. OF doesn't allow qualified column names
|
C: Functionality P: Medium T: Defect
|
The current implementation of `FOR UPDATE .. OF` seems to allow only for unqualified column names - probably for some historic reason in a given SQL dialect that doesn't allow for qualification here.
Some SQL dialects do allow for qualification, though. jOOQ should get this right.
---
See also:
https://groups.google.com/forum/#!topic/jooq-user/WDAsX3ySbeQ
|
1.0
|
FOR UPDATE .. OF doesn't allow qualified column names - The current implementation of `FOR UPDATE .. OF` seems to allow only for unqualified column names - probably for some historic reason in a given SQL dialect that doesn't allow for qualification here.
Some SQL dialects do allow for qualification, though. jOOQ should get this right.
---
See also:
https://groups.google.com/forum/#!topic/jooq-user/WDAsX3ySbeQ
|
defect
|
for update of doesn t allow qualified column names the current implementation of for update of seems to allow only for unqualified column names probably for some historic reason in a given sql dialect that doesn t allow for qualification here some sql dialects do allow for qualification though jooq should get this right see also
| 1
|
30,286
| 6,085,332,182
|
IssuesEvent
|
2017-06-17 13:50:48
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Inheriting from an external class (no constructors case)
|
defect in progress
|
A class-successor that does not define a constructor fails to init if its parent class is marked with `[External]` attribute.
There was a similar issue (#2189), but in that sample the derived class had a constructor. So the current issue was not reproducible there.
### Steps To Reproduce
https://deck.net/8aa83e493065a91f8766abad01d5d460
```c#
public class Program
{
[Init(InitPosition.Top)]
public static void CreatePersonDefinition()
{
/*@
var Person = (function () {
function Person() {
}
return Person;
}());
*/
}
public static void Main()
{
var employee = Employee.Create("John Doe", 100);
Console.WriteLine(employee.Name);
Console.WriteLine(employee.Salary);
}
public class Employee : Person
{
public int Salary;
// Uncomment to fix the issue:
//public Employee()
//{
//}
public static Employee Create(string name, int salary)
{
var employee = new Employee();
employee.Name = name;
employee.Salary = salary;
return employee;
}
}
[External]
[Name("Person")]
public class Person
{
public string Name;
public Person()
{
}
}
}
```
### Expected Result
> John Doe
> 100
### Actual Result
> System.Exception: TypeError: Cannot read property 'call' of undefined
at new Demo.Program.Employee.ctor (https://deck.net/resources/js/bridge/bridge.js?16.0.0-beta:2524:45)
at Function.Demo.Program.Employee.Create (https://deck.net/RunHandler.ashx?h=1451275011:27:36)
at Function.Main (https://deck.net/RunHandler.ashx?h=1451275011:15:50)
at https://deck.net/resources/js/bridge/bridge.js?16.0.0-beta:3136:39
### See Also
- [#2189] Inheriting from an external class
|
1.0
|
Inheriting from an external class (no constructors case) - A class-successor that does not define a constructor fails to init if its parent class is marked with `[External]` attribute.
There was a similar issue (#2189), but in that sample the derived class had a constructor. So the current issue was not reproducible there.
### Steps To Reproduce
https://deck.net/8aa83e493065a91f8766abad01d5d460
```c#
public class Program
{
[Init(InitPosition.Top)]
public static void CreatePersonDefinition()
{
/*@
var Person = (function () {
function Person() {
}
return Person;
}());
*/
}
public static void Main()
{
var employee = Employee.Create("John Doe", 100);
Console.WriteLine(employee.Name);
Console.WriteLine(employee.Salary);
}
public class Employee : Person
{
public int Salary;
// Uncomment to fix the issue:
//public Employee()
//{
//}
public static Employee Create(string name, int salary)
{
var employee = new Employee();
employee.Name = name;
employee.Salary = salary;
return employee;
}
}
[External]
[Name("Person")]
public class Person
{
public string Name;
public Person()
{
}
}
}
```
### Expected Result
> John Doe
> 100
### Actual Result
> System.Exception: TypeError: Cannot read property 'call' of undefined
at new Demo.Program.Employee.ctor (https://deck.net/resources/js/bridge/bridge.js?16.0.0-beta:2524:45)
at Function.Demo.Program.Employee.Create (https://deck.net/RunHandler.ashx?h=1451275011:27:36)
at Function.Main (https://deck.net/RunHandler.ashx?h=1451275011:15:50)
at https://deck.net/resources/js/bridge/bridge.js?16.0.0-beta:3136:39
### See Also
- [#2189] Inheriting from an external class
|
defect
|
inheriting from an external class no constructors case a class successor that does not define a constructor fails to init if its parent class is marked with attribute there was a similar issue but in that sample the derived class had a constructor so the current issue was not reproducible there steps to reproduce c public class program public static void createpersondefinition var person function function person return person public static void main var employee employee create john doe console writeline employee name console writeline employee salary public class employee person public int salary uncomment to fix the issue public employee public static employee create string name int salary var employee new employee employee name name employee salary salary return employee public class person public string name public person expected result john doe actual result system exception typeerror cannot read property call of undefined at new demo program employee ctor at function demo program employee create at function main at see also inheriting from an external class
| 1
|
27,948
| 5,141,754,568
|
IssuesEvent
|
2017-01-12 10:54:14
|
Schematron/schematron
|
https://api.github.com/repos/Schematron/schematron
|
closed
|
Issue with iso_svrl_for_xslt2.xsl producing XSL where rules don't fire.
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.Generate XSL for the following schematron:
https://github.com/rackspace/wadl-tools/blob/master/xsd/wadl.sch
2.When run against a WADL such as:
https://github.com/openstack/compute-api/blob/master/openstack-compute-api-2/src
/os-compute-2.wadl none of the none of the rules fire.
What is the expected output? What do you see instead?
Rules should fire.
What version of the product are you using? On what operating system?
Latest trunk, on OS X with latest Saxon-EE
Please provide any additional information below.
The following customization fixes the issue:
<?xml version="1.0" ?>
<xsl:stylesheet
version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:axsl="http://www.w3.org/1999/XSL/TransformAlias"
>
<xsl:import href="iso_svrl_for_xslt2.xsl"/>
<xsl:namespace-alias stylesheet-prefix="axsl" result-prefix="xsl"/>
<xsl:template name="process-prolog">
<axsl:template match="@*|node()" mode="#all">
<axsl:apply-templates select="@*|node()" mode="#current"/>
</axsl:template>
</xsl:template>
</xsl:stylesheet>
```
Original issue reported on code.google.com by `jorge.lu...@gmail.com` on 8 Oct 2012 at 9:19
|
1.0
|
Issue with iso_svrl_for_xslt2.xsl producing XSL where rules don't fire. - ```
What steps will reproduce the problem?
1.Generate XSL for the following schematron:
https://github.com/rackspace/wadl-tools/blob/master/xsd/wadl.sch
2.When run against a WADL such as:
https://github.com/openstack/compute-api/blob/master/openstack-compute-api-2/src
/os-compute-2.wadl none of the none of the rules fire.
What is the expected output? What do you see instead?
Rules should fire.
What version of the product are you using? On what operating system?
Latest trunk, on OS X with latest Saxon-EE
Please provide any additional information below.
The following customization fixes the issue:
<?xml version="1.0" ?>
<xsl:stylesheet
version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:axsl="http://www.w3.org/1999/XSL/TransformAlias"
>
<xsl:import href="iso_svrl_for_xslt2.xsl"/>
<xsl:namespace-alias stylesheet-prefix="axsl" result-prefix="xsl"/>
<xsl:template name="process-prolog">
<axsl:template match="@*|node()" mode="#all">
<axsl:apply-templates select="@*|node()" mode="#current"/>
</axsl:template>
</xsl:template>
</xsl:stylesheet>
```
Original issue reported on code.google.com by `jorge.lu...@gmail.com` on 8 Oct 2012 at 9:19
|
defect
|
issue with iso svrl for xsl producing xsl where rules don t fire what steps will reproduce the problem generate xsl for the following schematron when run against a wadl such as os compute wadl none of the none of the rules fire what is the expected output what do you see instead rules should fire what version of the product are you using on what operating system latest trunk on os x with latest saxon ee please provide any additional information below the following customization fixes the issue xsl stylesheet version xmlns xsl xmlns axsl original issue reported on code google com by jorge lu gmail com on oct at
| 1
|
292,935
| 8,970,900,805
|
IssuesEvent
|
2019-01-29 14:45:47
|
Taxmannen/In-Heaven
|
https://api.github.com/repos/Taxmannen/In-Heaven
|
closed
|
Sound assets
|
Normal Priority
|
Player
- [x] Weapon Fire (Placeholder)
- [x] Jump (Placeholder)
- [x] Double Jump (Placeholder)
- [x] Dash (Placeholder)
Enemy
- [x] Take Damage (Placeholder)
- [x] Die (Placeholder)
|
1.0
|
Sound assets - Player
- [x] Weapon Fire (Placeholder)
- [x] Jump (Placeholder)
- [x] Double Jump (Placeholder)
- [x] Dash (Placeholder)
Enemy
- [x] Take Damage (Placeholder)
- [x] Die (Placeholder)
|
non_defect
|
sound assets player weapon fire placeholder jump placeholder double jump placeholder dash placeholder enemy take damage placeholder die placeholder
| 0
|
32,931
| 6,970,826,437
|
IssuesEvent
|
2017-12-11 11:45:35
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Ensure thread safety for Config.get and Config.find methods
|
Team: Core Type: Defect
|
These methods can be called from different partition threads but they do not establish any happens-before relation. Different threads may then publish different instances. Even though the config might be properly constructed (all values are assigned in the constructor), the reference is not properly published. If the newly created config is mutated, this problem is exacerbated even further.
|
1.0
|
Ensure thread safety for Config.get and Config.find methods - These methods can be called from different partition threads but they do not establish any happens-before relation. Different threads may then publish different instances. Even though the config might be properly constructed (all values are assigned in the constructor), the reference is not properly published. If the newly created config is mutated, this problem is exacerbated even further.
|
defect
|
ensure thread safety for config get and config find methods these methods can be called from different partition threads but they do not establish any happens before relation different threads may then publish different instances even though the config might be properly constructed all values are assigned in the constructor the reference is not properly published if the newly created config is mutated this problem is exacerbated even further
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.